Self-Driving Cars


Whether you think you can, or you think you can’t, you’re right.*

Many companies currently build technology for autonomous cars, and others are just entering the field. The three most transformative players in the space are Tesla, Google's Waymo, and George Hotz's Comma.ai, and each tackles the problem with a very different approach. In some ways, self-driving cars are robots that require solving both hardware and software problems. A self-driving car needs to identify its surrounding environment with cameras, radar, or other instruments. Its software needs to understand what is around the car, know its physical location, and plan the next steps it needs to take to reach its destination.

Tesla

Tesla, founded by Martin Eberhard and Marc Tarpenning in 2003, is known as the Apple of cars because of its revolutionary car design and outside-the-box thinking when creating its vehicles.* Tesla develops its cars based on first principles, from the air conditioning system that uses perpendicular vents to how it forms the chassis and suspension. With this innovation and work, the Tesla Model 3 is the safest car in the world,* followed by the Tesla Model S and Model X.* But Tesla is not only innovative with its hardware; it also invests heavily in its Autopilot technology.

In 2014, Tesla quietly installed several pieces of hardware to increase the safety of its vehicles—12 ultrasonic sensors, a forward-facing camera, a front radar, a GPS, and digitally controlled brakes.* A few months later, it released a technology package for an additional $4,250 that enabled the use of these sensors. Tesla launched features in rapid succession over the following months, and a year later rolled out the first version of Autopilot—known as Tesla Version 7.0—to 60,000 cars.

Autopilot gave drivers features like steering within a lane, changing lanes, and automatic parking. Other companies, including Mercedes, BMW, and GM, already offered some of these capabilities. But self-steering was a giant leap toward autonomy, released suddenly, overnight, as a software update. Tesla customers were delighted, posting videos on the internet of the software "driving" their Teslas hands-free.

Tesla makes not only the software but also the hardware for its cars, enabling it to release new features and update its software over the air (OTA). Because it has shipped cars with the hardware components necessary for self-driving since 2014, Tesla has a widely distributed test fleet. Other players, like Google and GM, have only a small fleet of cars with the required hardware for self-driving.

From the introduction of the Tesla hardware package until November 2018,* a total of 50 months, Tesla accrued around 1 billion miles driven with the newest hardware.* Not only that, but the Tesla servers store the data these cars accumulate so that the Autopilot team can make changes to its software based on what it learns. At the time of this writing, Tesla had collected around 5.5 million miles of data per day for its newest system, taking only around four hours to gather 1 million miles. For comparison, Waymo has the next most data with about 10 million miles driven in its lifetime. In two days, Tesla acquires more data from its cars than Waymo has in its lifetime.
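These figures are easy to sanity-check. The short calculation below, using only the numbers quoted above, confirms the four-hour and two-day claims:

```python
# Back-of-the-envelope check of the fleet-data figures quoted in the text.
miles_per_day = 5_500_000                    # Tesla's reported collection rate
hours_to_1m = 1_000_000 / (miles_per_day / 24)
days_to_match_waymo = 10_000_000 / miles_per_day
print(f"{hours_to_1m:.1f} hours to log 1M miles")        # ~4.4 hours
print(f"{days_to_match_waymo:.1f} days to match Waymo")  # ~1.8 days
```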

This data collection rate increases with more cars on the streets, and Tesla has been speeding up its production pace. Even though Tesla has accumulated more miles than its competitors,* when it tested its self-driving capability with the California Department of Motor Vehicles (DMV)—the state government organization that regulates vehicle registration—Tesla had a much higher count of disengagements than its competitors.*


Disengagements are a metric the average person can use to compare autonomous systems.* The count provides a rough measure of how often a car's system fails so badly that the test driver must take over. It is only a proxy for performance because it does not take into account variables that may affect the vehicle, like weather, or how and where the problems occurred. An increase in disengagements could mean that a major problem exists or that the company is testing its software in more challenging conditions, such as a city.

At the end of 2015, Tesla's numbers showed that it was far behind its competitors. Normalized by miles per disengagement, Tesla's software performed roughly 1,000 times worse than Waymo's. But Tesla continues to hone its system, year after year. And Tesla has an advantage over other carmakers: it can update the system over the air, improving it without having to sell new cars or service existing ones.

Figure: Comparing miles per disengagement.*

Waymo's self-driving fleet has the lowest number of disengagements per mile, but even this metric does not yet approach human performance. Waymo has 1 disengagement per 1,000 miles. If we count a human "disengagement" as an accident that occurs while a human is driving, then, theoretically, humans have around 100 times fewer disengagements than Waymo's self-driving software.
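Put as arithmetic, using only the figures in the text, the gap looks like this:

```python
# Miles per disengagement, using the figures given in the text.
waymo_miles_per_disengagement = 1_000
human_advantage = 100                     # humans ~100x better, per the text
human_miles_per_accident = waymo_miles_per_disengagement * human_advantage
print(human_miles_per_accident)           # 100,000 miles per "disengagement"
```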

But Tesla has another advantage: a large fleet of cars capable of testing its newest self-driving software. This fleet lets Tesla develop software in-house and run it in shadow mode for millions of miles before releasing it to the public.* Shadow mode allows Tesla to silently test its algorithms in customers' cars, providing the company with an abundant testbed of real-world data.

Figure: Image courtesy of Velodyne LiDAR.*

LIDAR, or light detection and ranging, is a sensor similar to radar—its name is a portmanteau of light and radar.* LIDAR maps physical space by bouncing laser beams off objects. Radar cannot see much detail, and cameras do not perform as well in low light or glare.* LIDAR lets a car "see" its surroundings with much more detail than other sensors. The problem with LIDAR is that it does not work well in certain weather conditions, including fog, rain, and snow.*
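At its core, LIDAR ranging is a time-of-flight measurement: the sensor times a laser pulse's round trip and converts it to distance. A minimal sketch of that conversion:

```python
# LIDAR ranging in one line of physics: distance = c * round_trip_time / 2.
C = 299_792_458.0                      # speed of light, m/s

def lidar_distance(round_trip_seconds):
    """Distance in meters to the object that reflected the pulse."""
    return C * round_trip_seconds / 2

# A pulse that returns after 200 nanoseconds hit something ~30 m away.
print(f"{lidar_distance(200e-9):.1f} m")   # 30.0 m
```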

Unlike other companies, Tesla bets that it can build a self-driving car that performs better than a human without a LIDAR device.

Another problem is that LIDAR is expensive, originally starting at around $75K, although the cost is now considerably less,* and the hardware is bulky, resembling KFC buckets.* LIDAR helps autonomous cars build a 3D model of the world around them through a process called simultaneous localization and mapping (SLAM). Cost and bulk are among the reasons Tesla bet against the device, and it continues to improve its software and lower its disengagement rate without one. The argument runs: to perform as well as humans, cars need the same type of hardware humans use. Humans drive with only their eyes, so it makes sense that self-driving cars could perform as well with cameras alone.

A Tesla vehicle running the Autopilot software ran into a tractor-trailer in June 2016 after its software could not detect the trailer against the bright sky, resulting in the death of its driver. According to some, LIDAR could have prevented that accident. Since then, Tesla has relied more heavily on radar for these situations. One of the providers of the base software, Mobileye, parted ways with Tesla because of the fatality. It thought Tesla was too bullish when introducing its software to the masses and that the software needed more testing to ensure safety for all. Unfortunately, fatalities with self-driving software will always occur, just as with human drivers. Over time, the technology will improve and disengagement rates will decrease. I predict a time when cars drive better than humans and become the safer drivers. But deaths will inevitably occur.

Before that fatality, Tesla used Mobileye software to detect cars, people, and other objects in the street. Because of the split, Tesla had to develop the Autopilot 2 package from scratch, meaning it built new software to recognize objects and act on them. It took Tesla two years to reach the same state as before the breakup. But once it caught up with the old system, it quickly moved past its initial features.

For example, the newest Tesla Autopilot software, version 9.0, has the largest vision neural network ever trained.* Tesla based the neural network on Google's famous vision architecture, Inception. Tesla's version, however, is ten times larger than Inception and has five times the number of parameters (weights). I expect that Tesla will continue to push the envelope.
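To give a flavor of what "based on Inception" means: Inception-style networks are built from modules that run several convolutions of different sizes in parallel and concatenate the results. The sketch below shows one such module in PyTorch; the channel sizes are illustrative, taken from the original GoogLeNet paper rather than Tesla's actual configuration.

```python
# A minimal Inception-style module in PyTorch. Branch widths are illustrative.
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    def __init__(self, in_channels):
        super().__init__()
        # Parallel branches with different receptive fields.
        self.branch1x1 = nn.Conv2d(in_channels, 64, kernel_size=1)
        self.branch3x3 = nn.Sequential(
            nn.Conv2d(in_channels, 96, kernel_size=1),   # 1x1 bottleneck
            nn.Conv2d(96, 128, kernel_size=3, padding=1),
        )
        self.branch5x5 = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=1),   # 1x1 bottleneck
            nn.Conv2d(16, 32, kernel_size=5, padding=2),
        )
        self.branch_pool = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_channels, 32, kernel_size=1),
        )

    def forward(self, x):
        # Concatenate the branch outputs along the channel dimension.
        return torch.cat(
            [self.branch1x1(x), self.branch3x3(x),
             self.branch5x5(x), self.branch_pool(x)], dim=1)

module = InceptionModule(in_channels=192)
out = module(torch.randn(1, 192, 56, 56))
print(out.shape)  # torch.Size([1, 256, 56, 56])
```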

Waymo

Tesla is not the only self-driving company at the forefront of technology. In fact, Google's Waymo was one of the first companies to develop software for autonomous cars. Waymo is a continuation of a project started in a laboratory at Stanford 10 years before the first release of Tesla's Autopilot. The project won the DARPA Grand Challenge for self-driving cars, and because of that renown, Google acquired it five years later; it eventually became Waymo. Waymo's cars perform much better than any other self-driving system, but what is surprising is that they have many fewer real-world miles driven than Tesla and other self-driving car makers.*

The DARPA Grand Challenge began in 2004 with a 150-mile course through the desert to spur development of self-driving cars. In the first year, no vehicle finished: the best entrant completed only about seven of the miles, and every car crashed, failed, or caught fire.* The technology required for these first-generation cars was sophisticated, expensive, bulky, and not visually attractive. But over time, the cars improved and needed less hardware. While the initial challenge was limited to a single location in the desert, it expanded to city courses in later years.

With the team behind Waymo winning the competition, it became the leader of the autonomous car sector. Having the lowest disengagement rate per mile of any self-driving system means it has the best software by that measure. Some argue that the primary reason Waymo performs better than the competition is that it tests its software in a simulated world. Waymo, located in a corner of Alphabet's campus, developed a simulated virtual world called Carcraft—a play on words referring to the popular game World of Warcraft.* Originally, this simulated world was developed to replay scenes that the car experienced on public roads, including the times when the car disengaged. Eventually, Carcraft took an even larger role in Waymo's software development, simulating thousands of scenarios to probe the car's capabilities.

Waymo used this virtual world to test its software before releasing it to the real-world test cars. In the simulation, Waymo created fully modeled versions of cities like Austin, Mountain View, and Phoenix, as well as other test track simulations. It tested different scenarios in many simulated cars—around 25,000 of them at any one time. Collectively, these cars drove about 8 million miles per day in the virtual world. In 2016 alone, the virtual autonomous cars logged approximately 2.5 billion virtual miles, far more than the 3 million miles Waymo's cars drove on public roads. Its simulated world has logged roughly 1,000 times more miles than its actual cars have.
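Those figures are worth checking against each other; the short calculation below uses only the numbers in the text:

```python
# Checking the Carcraft figures as arithmetic.
cars = 25_000
virtual_miles_per_day = 8_000_000
virtual_miles_2016 = 2_500_000_000
real_miles_2016 = 3_000_000

print(virtual_miles_per_day / cars)           # 320 virtual miles per car per day
print(virtual_miles_2016 / real_miles_2016)   # ~833x, i.e., on the order of 1,000x
```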

The power of these simulations is that they train and test the models on interesting and difficult interactions instead of the car simply putting in miles. For example, Carcraft simulates traffic circles that have many lanes and are hard to navigate. It mimics other vehicles cutting off the simulated car or a pedestrian unexpectedly crossing the street. These situations rarely happen in the real world, but when they do, they can be fatal. This is why Waymo has a leg up on its competitors: it trains and tests its software in situations other competitors cannot reproduce without a simulated world, regardless of how many miles they log. Personally, I believe testing in a simulated world is essential for making a safe system that can perform better than humans.

The simulation makes the software development cycle much, much faster. For developers, the iteration cycle is extremely important. Instead of taking weeks, as in the early days of Waymo's software development, the cycle shrank to a matter of minutes after Carcraft was built, meaning engineers can tweak their code and test it quickly rather than waiting long periods for testing results.

Carcraft helps engineers tweak the software and make it better, but the problem is that a simulation cannot test situations such as oil slicks on the road, sinkhole-sized potholes, or other odd anomalies that exist in the real world but not in the virtual one. To test those, Waymo created an actual test track that recreates the diverse scenarios these cars can encounter.

As the software improves, Waymo downloads it to its cars and tests it on the test track before uploading it to the cars in the real world. To put this into perspective, Waymo reduced its disengagement rate per mile by 75% from 2015 to 2016.* Even though Waymo had a head start in creating a simulated world for testing its software, many other automakers now have programs to create their own simulations and testbeds.

Some report that Waymo's strategy is to build the operating system for self-driving cars. Google had the same strategy when building Android, the operating system for smartphones: it built the software stack and let other companies, like Samsung and Motorola, build the hardware. For self-driving cars, Waymo is building the software stack and wants carmakers to build the hardware. It reportedly tried to sell its software stack to automakers but was unsuccessful. Auto companies want to build their own self-driving systems. So, Waymo took matters into its own hands, developing an Early Rider taxi service and ordering about 62,000 minivans for it.* In December 2018, Waymo One launched a 24-hour service in the Phoenix area that opened the ride-sharing service to a few hundred preselected people, expanding the private taxi service. These vans, however, still have a Waymo employee in the driver's seat. This may be the way to run self-driving cars in the real world at first, but it is hard to see that solution scaling up.

Comma.ai

One of the other most important players in the self-driving ecosystem is Comma.ai, started by a hacker in his mid-twenties, George Hotz, in 2015.* In 2007, at the age of 17, he became famous for being the first person to hack the iPhone to use on networks other than AT&T. He was also the first person to hack the Sony PlayStation 3 in 2010. Before building a self-driving car, Hotz lived in Silicon Valley and worked for a few companies including Google, Facebook, and an AI startup called Vicarious.

Figure: George Hotz and his first self-driving car, an Acura.

Hotz started hacking self-driving cars by retrofitting a white 2016 Acura ILX with a LIDAR on the roof and a camera mounted near the rearview mirror. He added a large monitor where the dashboard sits and a wooden box with a joystick, where you typically find the gearshift, that enables the self-driving software to take over the car. It took him about a month to retrofit his Acura and develop the software needed for the car to drive itself. Hotz spent most of his time adding sensors, the computer, and electronics. Once the systems were up and running, he drove the car for two and a half hours to let the computer observe him driving. He returned home and downloaded the data so that the algorithm could analyze his driving patterns.

The software learned that Hotz tended to stay in the middle lane and maintained a safe distance from the car ahead. Two weeks later, he went for a second drive to provide more hours of training and to test the software. The car drove itself for long stretches while remaining within the lanes. The lines on the dash screen—one showing the car's actual path and the other where the computer wanted to go—overlapped almost perfectly. Sometimes, the Acura seemed to lock onto the car in front of it or take cues from a nearby car. After automating the car's steering as well as the gas and brake pedals, Hotz took the car for a third drive. It stayed in the center of the lane perfectly for miles and miles, and when a car in front of it slowed, so did the Acura.
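What Hotz describes is often called behavioral cloning: fit a model that maps sensed features to the steering commands a human chose. The sketch below is a deliberately tiny version with made-up features and plain least squares; real systems train deep networks on raw camera frames.

```python
# A toy behavioral-cloning sketch: learn steering from logged human driving.
import numpy as np

# Hypothetical logged features per frame: [lane_offset_m, road_curvature]
X = np.array([[ 0.30,  0.00],
              [-0.20,  0.01],
              [ 0.10, -0.02],
              [ 0.00,  0.00]])
y = np.array([-0.15, 0.10, -0.03, 0.00])    # human steering angles (radians)

# Least-squares fit: steering ~ X @ w
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w)        # learned weights
print(X @ w)    # model's predicted steering for the logged frames
```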

Figure: George Hotz’s self-driving car.

The technology he built as an entrepreneur represents a fundamental shift from the expensive systems designed by Google to much cheaper systems that depend on software more than hardware. His work impressed many technology companies, including Tesla. Elon Musk, who joined Tesla after its Series A funding round and is its current CEO, met Hotz at Tesla's Fremont, California, factory, where they discussed artificial intelligence. The two settled on a deal in which Hotz would create software better than Mobileye's and Musk would compensate him with a contract worth about $1M per year. Unfortunately, Hotz walked away after Musk continually changed the terms of the deal. "Frankly, I think you should just work at Tesla," Musk wrote to Hotz in an email. "I'm happy to work out a multimillion-dollar bonus with a longer time horizon that pays out as soon as we discontinue Mobileye." "I appreciate the offer," Hotz replied, "but like I've said, I'm not looking for a job. I'll ping you when I crush Mobileye." Musk simply answered, "OK."*

Since then, Hotz has been working on what he calls the Android of self-driving cars, comparing Tesla to the iPhone of autonomous vehicles. He launched a smartphone-like device that sells for $699 with the software installed. The dash cam simply plugs into the most popular cars made in the United States after 2012 and provides the equivalent of Tesla Autopilot capability, meaning cars can drive themselves on highways from Mountain View to San Francisco with no one touching the wheel.*

Figure: EON dash cam running chffrplus.*

But soon after the product launched, the National Highway Traffic Safety Administration (NHTSA) sent an inquiry and threatened penalties if Hotz did not submit to regulatory oversight. In response, Hotz pulled the product from sale and pursued another path, deciding to market a hardware-only version instead.

Then, in 2016, he open-sourced the software so that anyone could install it on the appropriate hardware. With that, Comma.ai sidestepped responsibility for running its software in cars, but consumers still had access to the technology, allowing their cars to drive themselves. Comma.ai continues to develop its software, and drivers can buy the hardware and install the software in their cars. Some people estimate that around 1,000 of these modified cars are on the streets now.

Recently, Comma.ai announced that it has become profitable.*

The Brain of the Self-Driving Car

Figure: The parts of the autonomous car’s brain.

Three main parts form the brain of an autonomous car: localization, perception, and planning. But even before tackling these three items, the software must integrate the data from different sensors, such as cameras, radar, LIDAR, and GPS. Techniques known collectively as sensor fusion merge the data from these different sensors and ensure that when data from a given sensor is noisy, meaning it contains unwanted or unclear readings, the other sensors help out with their information.
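As one minimal sketch of the idea (not any particular carmaker's method), two noisy estimates of the same quantity can be fused by weighting each one inversely to its variance; real stacks use Kalman filters over many state variables, and the numbers below are invented:

```python
# Sensor fusion by inverse-variance weighting: trust the less noisy sensor more.
def fuse(est_a, var_a, est_b, var_b):
    """Combine two noisy estimates of the same quantity."""
    w_a = 1.0 / var_a              # weight = inverse of the sensor's variance
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)  # fused estimate is less uncertain than either
    return fused, fused_var

# Radar says the lead car is 40.2 m ahead (low noise); the camera says 43.0 m
# (higher noise). The fused estimate lands close to the radar reading.
distance, variance = fuse(40.2, 0.5, 43.0, 4.0)
print(f"fused distance: {distance:.1f} m, variance: {variance:.2f}")
# fused distance: 40.5 m, variance: 0.44
```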

Once data has been acquired, the next step for the software is to know where the car is. This process includes finding the physical location of the vehicle and determining which direction it needs to head, for example, which exits it needs to take to deliver the passenger to their destination. One potential solution is to use LIDAR with background subtraction to match the sensor data to a high-definition map.
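A toy illustration of map matching in the same spirit: score a handful of candidate positions by how many LIDAR returns line up with obstacles stored in the map, and keep the best. Real systems do this in continuous space with particle filters or scan matching; everything below is invented for illustration.

```python
# Toy grid-based localization: pick the pose whose LIDAR hits best match the map.
OCCUPIED = {(3, 4), (3, 5), (3, 6), (7, 2)}   # obstacle cells in the stored map

def score(candidate_xy, lidar_points):
    """Count LIDAR hits that, shifted by the candidate pose, land on the map."""
    cx, cy = candidate_xy
    return sum((cx + dx, cy + dy) in OCCUPIED for dx, dy in lidar_points)

# LIDAR sees obstacles at these offsets relative to the (unknown) car position.
hits = [(1, 2), (1, 3), (1, 4)]
candidates = [(0, 0), (2, 2), (5, 5)]
best = max(candidates, key=lambda c: score(c, hits))
print(best)  # (2, 2): the offsets map onto the wall at x=3, y=4..6
```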

Figure: Long tail.

The next part of the software stack is harder. Perception involves answering the question of what is around the vehicle. A car needs to find traffic lights and determine which color they are showing. It needs to see where the lane markings are and where the cars, trucks, and buses are. Perception includes lane detection, traffic light detection, object detection and tracking, and free space detection.
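As a taste of one perception subtask, the sketch below shows a classical lane-detection baseline using OpenCV: edge detection followed by a Hough transform. This is a textbook approach, not the deep-learning pipeline production systems use, and the input file name is a placeholder.

```python
# Classical lane detection: Canny edges + probabilistic Hough transform.
import cv2
import numpy as np

frame = cv2.imread("road.jpg")                       # hypothetical dashcam frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)          # suppress noise
edges = cv2.Canny(blurred, 50, 150)                  # edge map

# Keep only a trapezoid in front of the car where lanes normally appear.
h, w = edges.shape
mask = np.zeros_like(edges)
roi = np.array([[(0, h), (w // 2 - 50, h // 2), (w // 2 + 50, h // 2), (w, h)]])
cv2.fillPoly(mask, roi, 255)
edges = cv2.bitwise_and(edges, mask)

# Extract candidate lane segments and draw them on the frame.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=100)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 3)
cv2.imwrite("lanes.png", frame)
```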

The hardest part of this problem is in the long tail, which describes the diverse scenarios that show up only occasionally. When driving, that means situations like traffic lights with different colors from the standard red, yellow, and green or roundabouts with multiple lanes. These scenarios happen infrequently, but because there are so many different possibilities, it is essential to have a dataset large enough to cover them all.

The last step, path planning, is by far the hardest. Given the car's location, its surroundings, and its passengers' destination, how does the car get there? The software must calculate the next steps to take to reach the desired place, including route planning, prediction, behavior planning, and trajectory planning. Ideally, the solution mimics human behavior based on actual data from people driving.
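Route planning, the first of those pieces, is classically done with graph search. The sketch below runs A* on a toy grid; real planners search road-network graphs and layer prediction, behavior, and trajectory planning on top.

```python
# A* route planning on a toy occupancy grid.
import heapq

def astar(grid, start, goal):
    """Shortest path on a 2D grid where 0 = free cell and 1 = blocked."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start)]                  # (priority, cell)
    came_from = {start: None}
    cost = {start: 0}
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:
            break
        x, y = current
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= nx < rows and 0 <= ny < cols and grid[nx][ny] == 0:
                new_cost = cost[current] + 1
                if new_cost < cost.get(nxt, float("inf")):
                    cost[nxt] = new_cost
                    # Manhattan distance steers the search toward the goal.
                    heuristic = abs(goal[0] - nx) + abs(goal[1] - ny)
                    heapq.heappush(frontier, (new_cost + heuristic, nxt))
                    came_from[nxt] = current
    if goal not in came_from:
        return []                            # goal unreachable
    path, node = [], goal
    while node is not None:                  # walk back from goal to start
        path.append(node)
        node = came_from[node]
    return path[::-1]

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
# [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```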

These three steps combine to determine the actions the car needs to take based on the information given. The system decides whether the vehicle needs to turn left, brake, or accelerate. The resulting instructions are fed to a control system that ensures the car does not do anything unacceptable. Together, these systems make cars drive themselves through the streets and form the "magic" behind cars driven by Tesla, Waymo, Comma.ai, and many others.
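The control layer is typically built from feedback controllers. A minimal sketch, assuming a simple PID speed controller with invented gains and a clamp that keeps commands within acceptable limits:

```python
# A PID speed controller: nudge the throttle toward a target speed.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target, measured, dt):
        error = target - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        command = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(-1.0, min(1.0, command))   # clamp: full brake .. full throttle

pid = PID(kp=0.3, ki=0.05, kd=0.02)
speed = 20.0                                   # current speed, m/s
for _ in range(5):                             # planner asked for 25 m/s
    throttle = pid.step(target=25.0, measured=speed, dt=0.1)
    speed += throttle * 2.0 * 0.1              # toy vehicle response
    print(f"throttle={throttle:+.2f}  speed={speed:.2f} m/s")
```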

Ethics and Self-Driving Cars

As stated earlier, traffic fatalities are inevitable, and, therefore, these companies must address the ethical concerns associated with the technology. Software algorithms determine what actions autonomous vehicles perform. When a collision is unavoidable, how should the car decide which outcome to prefer?

This is a version of the thought experiment known as the trolley problem. For example, it is a straightforward decision to have the car run into a fire hydrant instead of hitting a pedestrian. And while some may disagree, it is more humane to hit a dog in a crosswalk than a mother pushing a baby in a stroller. But that, I believe, is where the easy decisions end. What about hitting an older adult as opposed to two young adults? Or, in the most extreme case, is it better to run the car off a cliff, killing the driver and all passengers, instead of plowing into a group of kindergarten students?*

Society sometimes focuses too much on the technology instead of looking at the complete picture. In my opinion, we must encourage the ethical use of science, and, as such, we need to invest proper resources in delving into this topic. It is by no means easy to resolve, but allocating the appropriate means for discussing it only betters our society.

The Future of Self-Driving Cars

But the worries about operatorless elevators were quite similar to the concerns we hear today about driverless cars.
Garry Kasparov*

There is a lot of talk about self-driving cars one day replacing truck drivers, and some say that the transition will happen all of a sudden. In fact, the change will happen in steps, starting in a few locations and then expanding rapidly. For example, Tesla releases software updates that make its cars more and more autonomous: it first released software that let its cars drive on highways, and with a later update, its cars could merge into traffic and change lanes. Waymo is now testing its self-driving cars in downtown Phoenix. It would not be surprising if Waymo rolled out its service in other areas.

The industry talks about six levels of autonomy, Level 0 through Level 5, to compare different cars' systems and their capabilities. Level 0 is when the driver is completely in control, and Level 5 is when the car drives itself and does not need driver assistance. The other levels range between these two. I am not going to delve into the details of each level because the boundaries are blurry at best, and I prefer other ways to compare systems, such as disengagements per mile. However they are measured, as the systems improve, autonomous cars can prevent humans from making mistakes and help avoid accidents caused by other drivers.
