Why are ‘driverless’ cars still hitting things?

Late last month, a Tesla owner shared shocking dashcam footage of his Model 3 appearing to collide with and drive through a deer at high speed. The car, which the driver says was engaged in Tesla’s driver-assist Full Self-Driving (FSD) mode, never detected the deer standing in the middle of the road and didn’t hit the brakes or maneuver to avoid it. That case came just a few months after a vehicle from Waymo, a leading self-driving company, reportedly ran over and killed a pet dog in a collision the company says was “unavoidable.” According to reports detailing the incidents, neither vehicle spotted the animal in the road fast enough to avoid it.

High-profile “edge cases” like these quickly gain attention and play on deep underlying anxieties around autonomous vehicle safety. Fewer than one in four US adults surveyed by Pew Research in 2022 said they would be very comfortable sharing a road with a driverless car. So far, these examples remain rare, but they could become more common as more cities around the country allow self-driving vehicles to fill public roads. As that happens, it’s important to understand what these cars can and can’t “see.” AV manufacturers are improving the detection of potential hazards in several different ways. Currently, most of the industry is coalescing around an approach that blends a diverse array of sensors and cameras with predictive AI models. Together, these systems create 3D maps of the vehicle’s surroundings that supporters of the technology say can detect potential hazards with “superhuman” abilities. These models, while potentially better at detecting hazards than humans, still aren’t perfect.

Cameras, Radar, and LiDAR: The eyes and ears of driverless cars 

The terms “driverless” and “self-driving” are often more descriptive than scientific; engineers and researchers in the space prefer the term “autonomous vehicles.” The Society of Automotive Engineers (SAE) lays out multiple levels of autonomy, ranging from 0 to 5. Tesla, which confusingly offers “Autopilot” and “Full Self-Driving” features that automate some aspects of driving like braking and lane control, still technically requires human drivers to keep their hands on the steering wheel and eyes facing the road. University of San Francisco professor and autonomous vehicle expert William Riggs told Popular Science this falls somewhere between levels 2 and 3 and should really be called “advanced driver assist.” More advanced autonomous systems like those offered by Waymo or Amazon-owned Zoox are in a different league. Riggs described the gap between Waymo’s and Tesla’s systems as “night and day.” These technical distinctions play a key role in determining what certain vehicles can see and how much they can be trusted.

Driverless vehicles need to be able to identify roads and objects in the world around them with a level of accuracy approaching or surpassing that of an ordinary human driver. To do that, most major manufacturers rely on a variety of sensors, usually cameras, radar, and LiDAR placed around the vehicle and working in tandem, a concept Riggs refers to as “sensor fusion.” This array of sensors is used to detect everything around the car and straight ahead of it. They are, in other words, the car’s eyes and ears.

“The sophistication really is in connecting the numerous sensors to the central computer or what is the general processing unit,” Riggs noted. 
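
To make that idea concrete, here is a minimal Python sketch of what “sensor fusion” can look like in principle: detections from different sensors are merged into a single list of tracks for the central computer to reason over. Every name, threshold, and data structure here is illustrative, not any manufacturer’s actual code.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # "camera", "radar", or "lidar"
    position_m: tuple  # (x, y) in the vehicle's frame, meters
    speed_mps: float   # speed along the line of sight, m/s
    confidence: float  # 0.0 to 1.0

def fuse(detections: list[Detection], radius_m: float = 1.0) -> list[Detection]:
    """Merge detections that land within radius_m of each other, keeping
    the highest-confidence one, so an object seen by three sensors
    becomes one fused track instead of three."""
    fused: list[Detection] = []
    for det in sorted(detections, key=lambda d: -d.confidence):
        for track in fused:
            dx = det.position_m[0] - track.position_m[0]
            dy = det.position_m[1] - track.position_m[1]
            if (dx * dx + dy * dy) ** 0.5 <= radius_m:
                break  # already represented by a higher-confidence track
        else:
            fused.append(det)
    return fused

# Camera and LiDAR both see the same pedestrian ~10 m ahead; radar sees
# a separate car roughly 48 m out. Fusion yields two tracks, not three.
tracks = fuse([
    Detection("camera", (10.2, 0.1), 1.4, 0.70),
    Detection("lidar", (10.0, 0.0), 1.5, 0.95),
    Detection("radar", (48.0, 3.0), 25.0, 0.80),
])
print([t.sensor for t in tracks])  # ['lidar', 'radar']
```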

LiDAR sends out millions of laser pulses around the vehicle to create a 3D map of its surroundings. Credit: Waymo.

For more advanced driverless car systems, this process actually begins long before an AV ever travels down a road without a human behind the wheel. Waymo and Zoox, for example, have human drivers collect real-world data and map out roads where they are planning to deploy driverless vehicles. This process produces rich, detailed 3D digital maps filled with important markers like lane dividers, stop signs, and crosswalks. (If you’ve ever seen a Waymo or Zoox vehicle weaving through neighborhoods with a human behind the wheel, there’s a good chance it was mapping out the area.) The job isn’t ever totally finished. Cars constantly remap routes to look for changes that may have occurred due to construction or other environmental factors.
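
As a rough illustration of the kind of record those pre-built maps might contain, here is a hypothetical sketch: each surveyed feature gets a type and a position, and a simple check flags stored features that a fresh mapping drive no longer sees, such as a stop sign removed during construction. The format is an assumption for illustration, not any company’s actual map schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MapFeature:
    kind: str          # e.g. "stop_sign", "lane_divider", "crosswalk"
    position_m: tuple  # (x, y) in a local map frame, meters
    last_surveyed: str # ISO date of the mapping drive

def changed_features(stored: list[MapFeature],
                     observed: list[MapFeature],
                     tolerance_m: float = 0.5) -> list[MapFeature]:
    """Return stored features no longer seen near their mapped position,
    flagging spots where the map needs a refresh."""
    def near(a: MapFeature, b: MapFeature) -> bool:
        dx = a.position_m[0] - b.position_m[0]
        dy = a.position_m[1] - b.position_m[1]
        return a.kind == b.kind and (dx * dx + dy * dy) ** 0.5 <= tolerance_m

    return [f for f in stored if not any(near(f, o) for o in observed)]
```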

[ Related: Tesla seeks human ‘remote operators’ to help ‘autonomous’ robotaxi service ]

But mapping only goes so far. Once the vehicles are ready to hit the road, the “eyes” come by way of various RGB cameras spread out around the vehicle. A single Waymo vehicle, for context, has 29 cameras. Combined, all these digital eyes create a 360-degree view of the world around the car. There are downsides. Camera vision can struggle to judge distance, sometimes making objects appear closer or farther away than they really are, and cameras can perform poorly in inclement weather.

That’s where radar comes in. In a nutshell, radar works by sending out pulses of radio waves toward other objects. When the pulses hit an object, they bounce back to the sensors and reveal useful information about it, most notably its speed and distance from the vehicle. Many driverless car systems use radar to help vehicles safely judge their distance from, and navigate around, other cars in motion. But while radar can help determine speed and location, it isn’t precise enough to determine whether an object on a road is an old tire or a living animal.
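
The underlying arithmetic is straightforward. A sketch of the two basic radar calculations, distance from a pulse’s round-trip time and relative speed from the Doppler shift of the returned wave, might look like this (the 77 GHz carrier is a common automotive radar band, used here as an assumption):

```python
C = 299_792_458.0  # speed of light, m/s

def radar_range_m(round_trip_s: float) -> float:
    """Distance from the time a radio pulse takes to bounce back:
    the pulse covers the gap twice, so halve the round trip."""
    return C * round_trip_s / 2

def radar_speed_mps(doppler_shift_hz: float, carrier_hz: float = 77e9) -> float:
    """Relative (radial) speed from the Doppler shift of the returned wave."""
    return doppler_shift_hz * C / (2 * carrier_hz)

# A pulse returning after 0.4 microseconds puts the object ~60 m away,
print(round(radar_range_m(0.4e-6), 1))   # ~60.0
# and a ~5.1 kHz Doppler shift implies it is closing at ~10 m/s.
print(round(radar_speed_mps(5.1e3), 1))  # ~9.9
```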

If you’ve ever seen a driverless vehicle with an odd-looking spinning top adorning its roof, that’s a LiDAR, or Light Detection and Ranging, sensor. LiDAR systems send out millions of laser pulses in all directions around the vehicle, measure how quickly those pulses bounce back, and use that information to create an impressively accurate 3D map of the car’s surroundings. This digital image of light pulses can detect the presence of pedestrians, cyclists, and other vehicles. It can also detect variations in topography, which can be useful for navigating around potholes or other hazards. All of this happens nearly instantaneously. LiDAR was once prohibitively expensive for some tech companies to implement at scale, but those costs have trended down in recent years.
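
Each of those laser pulses reduces to the same time-of-flight math: halve the round trip to get range, then project along the beam’s firing angles to get a 3D point. A simplified sketch:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lidar_point(round_trip_s: float, azimuth_rad: float, elevation_rad: float):
    """Turn one laser pulse's round-trip time and firing angles into an
    (x, y, z) point in meters; millions of these per second form the 3D map."""
    r = C * round_trip_s / 2  # out and back, so halve the trip
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return (x, y, z)

# A pulse returning after ~133 nanoseconds hit something ~20 m away.
print(lidar_point(133e-9, azimuth_rad=0.0, elevation_rad=0.0))
# roughly (19.94, 0.0, 0.0)
```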

University of Illinois Urbana-Champaign electrical and computer engineering professor and autonomous safety expert Sayan Mitra told Popular Science that AVs then use their assortment of sensors to create a “digital representation” of the environment around them. This software, which Mitra and other engineers call a “perception module,” includes the position, orientation, and speed of the car in its own lane as well as the vehicles in surrounding lanes. These modules also use deep neural networks (DNNs) to try to identify what exactly any object is, be it a pedestrian or a fallen tree, in real time.
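
A loose sketch of what such a “digital representation” might contain, with a placeholder standing in for the real DNN classifier (the `dnn.classify` interface is invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Track:
    label: str        # e.g. "pedestrian", "cyclist", "vehicle", "debris"
    confidence: float # classifier's belief in the label, 0.0 to 1.0
    position_m: tuple # (x, y) in the ego vehicle's frame
    speed_mps: float

@dataclass
class WorldState:
    ego_lane: int
    ego_speed_mps: float
    ego_heading_rad: float
    tracks: list      # every labeled object near the vehicle

def perceive(ego_lane, ego_speed_mps, ego_heading_rad, fused_tracks, dnn):
    """Attach a class label to each fused sensor track, then package
    the whole scene for the planner."""
    labeled = [
        # dnn.classify is a hypothetical stand-in returning (label, confidence)
        Track(*dnn.classify(t), t.position_m, t.speed_mps)
        for t in fused_tracks
    ]
    return WorldState(ego_lane, ego_speed_mps, ego_heading_rad, labeled)
```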

Waymo vehicles equipped with LiDAR sensors can already be seen mapping out roads in several US cities. Credit: Waymo.

This combination of cameras, radar, and LiDAR, though increasingly common, isn’t the only approach being considered. Tesla famously abandoned radar years ago in its FSD stack and now only uses camera vision. CEO Elon Musk has criticized LiDAR as a “crutch” and a “fool’s errand.” Though both Riggs and Mitra said it’s possible Tesla or another automaker could one day figure out a way to reach full autonomy using only camera vision, that approach currently lacks the level of precision achievable by using LiDAR.

“It [LiDAR] is going to tell you how quickly that object is moving from space,” Riggs said. “And it’s not going to estimate it like a camera would do when a Tesla is using FSD.”

What happens when things go wrong?

That’s how all these driverless systems are supposed to work, but in reality, they aren’t perfect. In the recent case of the Tesla plowing through the deer, Mitra says the mistake may have stemmed from the vehicle’s perception module failing to reliably detect the deer in the camera image. The relatively small gray deer, lined up against similarly gray pavement and aligned with lines on the road, likely led to an image that was “feature-poor.” Both Mitra and Riggs said it’s possible Tesla’s deep neural networks may not have been adequately trained on images of deer from that angle or position.

“If the software had never encountered a deer and didn’t know what a deer was, but also didn’t actually know the precise distance or the precise speed that the deer was running in through, then I’m not surprised that [the car] would plow through it,” Riggs said. “It’s a product of the type of information that the system can ingest.” 

Engineers and researchers refer to potentially unexpected or undertrained scenarios like these as “edge cases.” These can range from the rather mundane (Riggs told of a Level 4 vehicle failing to recognize a trailer hitched behind a truck) to the life-threatening. The latter occurred in San Francisco last year, when a pedestrian was struck by a car and flung underneath a Cruise robotaxi operating in the adjacent lane. Several technical errors reportedly occurred, resulting in the car failing to see the woman. She was then dragged 20 feet underneath the car. In this case, Riggs said, AV makers simply had not thought to put in place cameras or sensors to look for pedestrians underneath the vehicle.

“There wasn’t a camera underneath the vehicle; the engineers couldn’t see somebody was there,” Riggs said. “It was truly something that no one had ever thought of.”

How driverless cars deal with tricky choices 

Seeing and detecting obstacles in the road is only half the battle. Once an obstacle is detected, the vehicle needs to know how to respond. In most cases, that will mean hitting the brakes or steering out of the way to avoid a collision. But that’s not always the best course of action. A driverless car wouldn’t make it far if it had to stop or make an evasive maneuver every time it detected small branches, brush, or a snowbank in its path. The onboard AI models need to ensure the objects in front of them are indeed branches and not, say, a small dog.
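
A toy version of that judgment call might gate the braking decision on the classifier’s label and confidence, only waving an object through when the system is confident it’s harmless. The labels and the 0.9 threshold here are invented for the sketch:

```python
# Object classes the sketch treats as safe to drive over or through.
HARMLESS = {"branch", "brush", "snowbank", "plastic_bag"}

def should_brake(label: str, confidence: float, in_path: bool) -> bool:
    if not in_path:
        return False
    if label in HARMLESS and confidence > 0.9:
        return False  # confidently harmless: drive on
    return True       # anything else in the path gets the brakes

print(should_brake("branch", 0.95, in_path=True))  # False: keep driving
print(should_brake("dog", 0.60, in_path=True))     # True: brake when unsure
```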

There are other cases where suddenly braking to avoid a collision may cause greater harm. Mitra offered the example of a small foam cooler falling off a truck on a busy highway, with an autonomous vehicle behind it and another vehicle tailgating the AV. If the driverless car were to brake hard to avoid the cooler, Mitra noted, it might be rear-ended by the tailgater, causing a potential pile-up.

“This is not just about avoiding obstacles,” Mitra said. “This [sic] type of trade-offs between safety of passengers, safety of others, speed, damage, and comfort come up in many other scenarios.”

Mitra went on to say he believes there’s an “urgent need” for more transparency and public conversations around what driverless cars’ high-level objectives should be. 

In the past, journalists and some researchers have compared these tradeoffs to the famous “trolley problem” in philosophy. That utilitarian thought experiment, first posed in 1967, centers on whether a trolley operator should actively choose to kill one person in order to prevent greater harm to a larger group of people. Though it’s tempting to apply that same line of thinking to how an AV reacts in dangerous situations, Riggs said the comparison misses the mark. AVs, taking in massive amounts of data and acting on it in real time, are really working with a “series of probabilistic choice sets.” That’s fundamentally different from a programming decision made by any single engineer.

[ Related: GM brings hands free driving to rural America ]

“The vehicle isn’t making an ethical decision in any of these cases,” Riggs said. “Self-driving cars are going to be designed and are designed to basically evade collision and do so in a way that’s probabilistically the best pathway for the vehicle.”   
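
One simple way to picture a “probabilistic choice set” is as an expected-cost calculation: each candidate maneuver is scored by the probability of each outcome times the harm of that outcome, and the cheapest option wins. The probabilities and costs below are made up to mirror Mitra’s cooler example, not real tuning values:

```python
def expected_cost(outcomes: list[tuple[float, float]]) -> float:
    """outcomes: (probability, cost) pairs for one maneuver."""
    return sum(p * cost for p, cost in outcomes)

choices = {
    # Hard braking avoids the cooler but risks the tailgater rear-ending us.
    "brake_hard": [(0.30, 80.0),   # rear-end collision
                   (0.70, 0.0)],   # clean stop
    # Swerving risks a side collision in the next lane.
    "swerve":     [(0.15, 100.0),  # hit vehicle in adjacent lane
                   (0.85, 0.0)],
    # Easing off and straddling the cooler risks minor underbody damage.
    "coast_over": [(0.90, 5.0),
                   (0.10, 0.0)],
}

best = min(choices, key=lambda c: expected_cost(choices[c]))
print(best)  # "coast_over" under these made-up numbers
```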

Even with those edge cases in mind, Riggs says he’s still bullish on a future with more driverless cars on the road. Unlike humans, AVs won’t be tempted to speed, roll through stop signs, or send text messages while driving. These automated drivers also aren’t distracted and shouldn’t violate traffic laws. All of those factors combined, he argues, mean AVs could be safer than humans. Early research out of the University of Central Florida comparing accident rates between AVs and human drivers appears to show driverless vehicles drove more safely in routine circumstances. Mitra said more peer-reviewed research on self-driving software safety will be needed to maintain public trust as the technology rolls out more broadly.

“The more we can increase things that take humans out of the driving decision, the closer we’re going to get to zero collisions on our road,” Riggs said. “Keeping people from dying is a good thing.”

This story is part of Popular Science’s Ask Us Anything series, where we answer your most outlandish, mind-burning questions, from the ordinary to the off-the-wall. Have something you’ve always wanted to know? Ask us.


Mack DeGeurin

Contributor

Mack DeGeurin is a tech reporter who’s spent years investigating where technology and politics collide. His work has previously appeared in Gizmodo, Insider, New York Magazine, and Vice.