Self-Driving Cars and Moral Dilemmas with No Solutions

If you do a Google search on the ethics of self-driving cars, you’ll find an abundance of essays and news articles, all very similar. You’ll be asked to imagine scenarios in which your brakes fail and you must decide which group of pedestrians to crash into and kill. You’ll learn about the “trolley problem,” a classic dilemma in moral philosophy that exposes tensions between our deeply held moral intuitions.

And you’ll likely come across the Moral Machine platform, a website designed by researchers at MIT to collect data on how people decide in trolley problem-like cases that could arise with self-driving cars — crash scenarios in which a destructive outcome is inevitable. In one scenario, for example, you must decide whether a self-driving car with sudden brake failure should plow through a man who’s illegally crossing the street or swerve into the other lane and take out a male athlete who is crossing legally.

Another scenario involves an elderly woman and a young girl crossing the road legally. You’re asked whether a self-driving car with failed brakes should continue on its path and kill the elderly woman and the girl or swerve into a concrete barrier, killing the passengers — a different elderly woman and a pregnant woman. And here’s one more. A young boy and a young girl are crossing legally over one lane of the road, and an elderly man and an elderly woman are crossing legally over the other lane of the road. Should the self-driving car keep straight and kill the two kids, or should it swerve and kill the elderly couple?

You get the sense from the Moral Machine project and popular press articles that although these inevitable-harm scenarios present very difficult design questions, hardworking engineers, academics, and policymakers will eventually come up with satisfactory solutions. They have to, right? Self-driving cars are just around the corner.

There Are No Solutions

The problem with this optimism is that some possible scenarios, as rare as they may be, have no fully satisfactory solutions. They’re true moral dilemmas, meaning that no matter what one does, one has failed to meet some moral obligation.

A driver on a four-lane highway who swerves to miss four young children standing in one lane only to run over and kill an adult tying his shoe in the other could justify his actions according to the utilitarian calculation that he minimized harm (assuming those were the only options available to him). But he still actively turned his steering wheel and sentenced an uninvolved party to death.

The driver had no good options. But this scenario is more of a tragedy than a moral dilemma. He acted spontaneously, almost instinctively, making no moral calculation about who lives and who dies. Having had no time to deliberate, he may feel some guilt for what happened, but he's unlikely to feel moral distress. There's no moral dilemma here because there was no deliberate decision, only a reflex.

But what if, as his car approached the children on the highway, he was somehow able to slow everything down and consciously decide what to do? He may well do exactly the same thing, and for defensible reasons (according to some perspectives), but in saving the lives of the four children, he could be taking a father away from other children. And he knows and appreciates this reality when he decides to spare the children. This is a moral dilemma.

Like the slowed-down scenario above, the prospect of self-driving cars introduces conscious deliberation into inevitable-harm crash scenarios. The dreadful circumstances that have traditionally demanded traumatic snap decisions become moral dilemmas that must be confronted, only now as design problems to be worked out in advance. Whatever decisions are made will be programmed into self-driving cars’ software, codifying “solutions” to unsolvable moral problems.
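To make the point concrete, here is a minimal, purely hypothetical sketch of what “codifying” such a decision could look like. The harm-minimization rule and the stay-on-course tie-breaker are illustrative assumptions, not how any real vehicle is programmed.

```python
# Hypothetical sketch only: the option names, the harm-minimization rule,
# and the swerve tie-breaker are illustrative assumptions, not any
# manufacturer's actual crash logic.

from dataclasses import dataclass


@dataclass
class Option:
    """One available maneuver and its predicted outcome."""
    name: str
    expected_fatalities: int
    requires_swerve: bool  # does the car actively change course?


def choose_maneuver(options: list[Option]) -> Option:
    """Pick the option with the fewest expected fatalities.

    Ties are broken by preferring to stay on course, i.e. not to
    actively redirect harm -- one possible rule among many. The choice
    of tie-breaker is itself a moral judgment frozen into code.
    """
    return min(options, key=lambda o: (o.expected_fatalities, o.requires_swerve))


if __name__ == "__main__":
    scenario = [
        Option("stay in lane", expected_fatalities=4, requires_swerve=False),
        Option("swerve to adjacent lane", expected_fatalities=1, requires_swerve=True),
    ]
    decision = choose_maneuver(scenario)
    print(f"Policy selects: {decision.name}")  # -> swerve to adjacent lane
```

The details don’t matter; what matters is that every weight and tie-breaker in such a policy is a moral choice made in advance, on behalf of people who will never know it was made for them.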

Of course, if we want self-driving cars, then we have to accept that such decisions must be made. But whatever decisions are made — and integrated into our legal structure and permanently coded into the cars’ software — will not be true solutions. They will be faits accomplis. Over time, most people will accept them, and they will even appeal to existing moral principles to justify them. The decisions will have effectively killed the moral dilemmas, but by no means will they have solved them.

The Broader Moral Justification for Self-Driving Cars

The primary motive for developing self-driving cars is likely financial, and the primary driver of consumer demand is probably novelty and convenience, but there is a moral justification for favoring them over human-driven cars: overall, they will be far less deadly.

There will indeed be winners and losers in the rare moral dilemmas that self-driving cars face. And it is indeed a bit dizzying that they will be picked beforehand by decision makers who won't even be present for the misfortune.

But there are already winners and losers in the inevitable-harm crash scenarios with humans at the wheel. Who the winners and losers turn out to be is a result more of the driver’s reflex than his conscious forethought, but somebody still dies and somebody still lives. So, the question seems to be whether this quasi-random selection is worth preserving at the cost of the lives that self-driving cars will save.

Probably not.
