According to a recent report by McKinsey & Co., the adoption of self-driving vehicles is projected to prevent 90 percent of all car accidents in the United States. However, a major question remains regarding how autonomous vehicles should make decisions when faced with a potential accident: should the self-driving car prioritize the safety of the car's passengers or that of pedestrians and other drivers on the road?
Simply put, imagine a scenario in which a self-driving car is about to hit a pedestrian; the car must choose whether to hit the pedestrian or swerve, potentially harming the car's passengers. A study published in Science found that Americans hold conflicting preferences about which choice they would have the self-driving car make. Americans want other people's autonomous cars to prioritize the safety of everyone else on the road, but they want their own self-driving car to protect them above all.
This dichotomy presents an interesting dilemma for self-driving cars. These vehicles have the potential to greatly reduce accidents, but only if Americans adopt them in large numbers. If Americans don't trust that their self-driving vehicle will protect them when faced with an accident, rates of autonomous car adoption may not be great enough to realize the promise these cars hold for making American roads safer.
In short, self-driving cars have the greatest potential to drastically reduce the rate of car accidents in this country in the absence of programming (and regulations) that would force an autonomous car to prioritize the safety of others over that of its passengers.
Elon Musk projects that the first completely autonomous car will be ready by 2020. As this date draws closer, car industry regulators need to make sure that the rules governing self-driving cars' actions don't prematurely kill their potential.