A Driving Dilemma

Should your car make life-or-death decisions?
Self-driving Google Car at the Googleplex

Self-driving cars are coming. They’re expected to dramatically improve traffic flow and eliminate as much as 90 percent of accidents.

There’s just one problem: if you’re in one and a crowd of people suddenly appears in your path, the car might decide that the moral thing to do is swerve into a wall, saving many and sacrificing one—namely, you.

That’s a scenario that Azim Shariff, an assistant professor of psychology, used to test the willingness of people to buy “autonomous vehicles,” which use radar, global positioning systems, and the Internet to sense their surroundings and maneuver down roads without driver assistance. The Google Car, for example, has already covered thousands of miles of test driving.

Working with colleagues in France and Massachusetts, Shariff (below) found that people generally support programming self-driving cars to minimize an accident’s death toll. But the picture changes when the car owner’s life is at stake: people generally want to see other people buy such cars rather than purchase one themselves.

Azim Shariff

“What we observe here is the classic signature of a social dilemma,” the researchers reported. “People mostly agree on what should be done for the greater good of everyone, but it is in everybody’s self-interest not to do it themselves.”

It’s for this reason that respondents felt manufacturers wouldn’t even make cars that might one day choose to claim the life of the person behind the wheel; such a feature is not exactly a selling point when you’re kicking tires on the lot. One maker, Tesla Motors, requires the driver to activate the “automated overtaking” function, a step toward reducing the company’s liability for choices the car makes.

The decision-making that guides self-driving cars will be dictated by “moral algorithms”—formulas that will be based on society’s expectations for what should occur in any given situation. Shariff’s team argues that psychologists need to be at the table, defining the algorithms and basing them on tests in experimental ethics involving self-driving cars and unavoidable harm.

“It is a formidable challenge to define the algorithms that will guide self-driving cars confronted with moral dilemmas,” the team reported. “These algorithms must accomplish three potentially incompatible objectives: being consistent, not causing public outrage, and not discouraging buyers.”
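To make the idea of a “moral algorithm” concrete, here is a purely illustrative sketch—not code from Shariff’s study or from any real vehicle—of a “minimize the death toll” rule. The maneuver names, casualty estimates, and the occupant_weight parameter are all hypothetical; the weight simply shows how a self-protective bias changes the outcome, which is the heart of the social dilemma the researchers describe.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_pedestrian_deaths: float  # hypothetical estimates from the car's perception system
    expected_occupant_deaths: float

def choose_maneuver(options, occupant_weight=1.0):
    """Toy 'moral algorithm': pick the maneuver with the lowest weighted expected death toll.

    occupant_weight > 1 biases the car toward protecting its own passenger,
    illustrating the self-interested side of the dilemma.
    """
    return min(
        options,
        key=lambda m: m.expected_pedestrian_deaths + occupant_weight * m.expected_occupant_deaths,
    )

# Hypothetical scenario: swerving into a wall saves the crowd but sacrifices the occupant.
options = [
    Maneuver("stay on course", expected_pedestrian_deaths=3.0, expected_occupant_deaths=0.0),
    Maneuver("swerve into wall", expected_pedestrian_deaths=0.0, expected_occupant_deaths=1.0),
]

print(choose_maneuver(options).name)                       # utilitarian weighting -> "swerve into wall"
print(choose_maneuver(options, occupant_weight=5.0).name)  # self-protective weighting -> "stay on course"
```

Even this toy version makes the team’s point: the same formula produces very different behavior depending on how the occupant’s life is weighted, and society, buyers, and regulators may each prefer a different setting.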

The work has generated considerable interest, not just among academics and researchers but also among a lay audience; the team even heard from carmaker Renault.

“There has been a lot of media attention and interview requests,” Shariff said. “We also heard from a philosopher who works on the issue.”

—Matt Cooper

Credit for Google car photo: Google