Note: The opinions expressed in this column are those of the author alone and do not necessarily reflect the views of PitchBook or the editorial staff. For another side to this debate, click here for Adam Putz's column: "We should expect better from Tesla as it develops driverless tech."
"Tesla driver’s death using car’s ‘Autopilot’ probed by NHTSA" — The Washington Post
"Tesla Draws Scrutiny After Autopilot Feature Linked to a Death" — The Wall Street Journal
These are some of the headlines that followed the first fatal crash involving a car driven under Tesla’s Autopilot, a system designed to shift some (not all) of the car’s control from the driver to an autonomous technology that uses radar, cameras and GPS.
The crash is tragic. The death is tragic.
Many technological advances are met with skeptical eyes, especially those propelling humans far into a future that seems close to "The Jetsons."
Tesla’s Autopilot is that technology at the moment, and the fatal crash will only cause doubters of its potential to push back even harder against its introduction into the auto systems of today and tomorrow.
It’s easy to blame the death on the technology, as Tesla has admitted that Autopilot was engaged. But the reason the technology is so easy to blame isn’t its safety—it’s that the technology is so unfamiliar to drivers. The first fatal car crash using Autopilot didn’t happen until 130 million miles had been driven with it turned on; fatal crashes happen in the U.S. every 93 million miles without the technology.
Too often we hear of drivers killed by another vehicle, a sad reminder that a fatality need not be caused by victim error. Whether the other driver was impaired, using a phone, distracted by a passenger or simply made a wrong split-second decision, it doesn’t change the fact that driving a two-ton car is inherently dangerous.
Autonomous driving technologies, such as Tesla’s Autopilot, are designed to make driving a semi-hands-free endeavor (mitigating poor decision-making by humans) and to create safer roads for everyone. Autopilot arms the car—in this case the Tesla Model S—with forward-looking radar and a camera; long-range ultrasonic sensors that detect objects around the car at any speed; and a digitally controlled, electric-assisted braking system to help position the car, manage its speed and avoid collisions. Theoretically, real-time feedback and machine learning capabilities should help the Tesla fleet build upon itself and become safer the more Autopilot miles are driven.
In April, the U.S. Federal Highway Administration estimated that Americans drove 267.3 billion miles during the month (roughly 3.21 trillion extended over 12 months). If one person were killed every 130 million of those miles, as with Autopilot, roughly 24,692 people would die in crashes during the year. If the ratio instead holds at one death every 93 million miles, 34,516 people would die in traffic collisions—9,824 more fatalities.
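The projection above is simple arithmetic: divide annual miles driven by the miles-per-fatality rate under each scenario. A minimal sketch of the calculation, using the column's own figures (the variable names are illustrative, not from any official source):

```python
# Back-of-the-envelope fatality projection using the column's figures.
ANNUAL_MILES = 3.21e12            # ~267.3 billion miles/month (FHWA, April) x 12

AUTOPILOT_MILES_PER_DEATH = 130e6  # one fatality per 130 million Autopilot miles
OVERALL_MILES_PER_DEATH = 93e6     # one fatality per 93 million miles overall (U.S.)

# Projected annual fatalities under each rate.
deaths_autopilot = ANNUAL_MILES / AUTOPILOT_MILES_PER_DEATH
deaths_overall = ANNUAL_MILES / OVERALL_MILES_PER_DEATH

print(round(deaths_autopilot))                    # ~24,692
print(round(deaths_overall))                      # ~34,516
print(round(deaths_overall - deaths_autopilot))   # ~9,824 more fatalities
```

The gap of roughly 9,824 lives per year is the column's core quantitative claim: even a modest improvement in the miles-per-fatality rate compounds into thousands of lives at the scale of U.S. driving.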
I agree that it’s a horrible way to think about technology and safety. Ideally there would be zero traffic-related deaths (or unnatural deaths of any kind, for that matter), but at some point that hope is unreasonable. Safety features will never make driving 100% safe, especially given the human element involved in maneuvering and decision-making. In fact, estimates put the percentage of U.S. car crashes caused by human error at 94%.
Comparing autonomous driving technology to other car safety advancements is tough. No feature created so far has taken as much control away from the driver. Seatbelts and front-impact airbags are now mandatory features of all new cars in the U.S. An estimated 15,198 lives were saved by these two features in 2014: 12,806 by seatbelts and 2,392 by airbags. Each of these features costs auto manufacturers billions to design, develop, implement and (in many cases) recall, and each causes major, minor and fatal injuries every year—yet we never question their safety or use because, in the end, lives are saved. If autonomous technologies likewise become a common part of our lives, we won’t question their safety anymore either, because the technology will save lives.
Autopilot and similar technologies will undoubtedly contribute to future deaths, but if their fatality rate remains lower than that of human error, it’s foolish to believe we are better off without them. Attributing a death to a machine causes fear because of the perceived lack of control, but as we have seen in many traffic accidents, the entire event is often out of the victim’s control anyway. Cars and trucks aren’t going away, and we can’t expect humans to become better drivers. It’s time to embrace Autopilot and other autonomous driving technologies for the future, and not judge them unsafe when the data says otherwise. Maybe we can move the statistics to one death every 150 million miles, or 175 million. Maybe someday that goal of zero traffic deaths will be within reach.