Self-Driving Cars and Moral Dilemmas with No Solutions

If you do a Google search on the ethics of self-driving cars, you’ll find an abundance of essays and news articles, all very similar. You’ll be asked to imagine scenarios in which your brakes fail and you must decide which group of pedestrians to crash into and kill. You’ll learn about the “trolley problem,” a classic dilemma in moral philosophy that exposes tensions between our deeply held moral intuitions.

And you’ll likely come across the Moral Machine platform, a website designed by researchers at MIT to collect data on how people decide in trolley problem-like cases that could arise with self-driving cars — crash scenarios in which a destructive outcome is inevitable. In one scenario, for example, you must decide whether a self-driving car with sudden brake failure should plow through a man who’s illegally crossing the street or swerve into the other lane and take out a male athlete who is crossing legally.

Another scenario involves an elderly woman and a young girl crossing the road legally. You’re asked whether a self-driving car with failed brakes should continue on its path and kill the elderly woman and the girl or swerve into a concrete barrier, killing the passengers — a different elderly woman and a pregnant woman. And here’s one more. A young boy and a young girl are crossing legally over one lane of the road, and an elderly man and an elderly woman are crossing legally over the other lane of the road. Should the self-driving car keep straight and kill the two kids, or should it swerve and kill the elderly couple?

You get the sense from the Moral Machine project and popular press articles that although these inevitable-harm scenarios present very difficult design questions, the hardworking engineers, academics, and policymakers will eventually come up with satisfactory solutions. They have to, right? Self-driving cars are just around the corner.

There Are No Solutions

The problem with this optimism is that some possible scenarios, as rare as they may be, have no fully satisfactory solutions. They’re true moral dilemmas, meaning that no matter what one does, one has failed to meet some moral obligation.

A driver on a four-lane highway who swerves to miss four young children standing in one lane only to run over and kill an adult tying his shoe in the other could justify his actions according to the utilitarian calculation that he minimized harm (assuming those were the only options available to him). But he still actively turned his steering wheel and sentenced an independent party to death.

The driver had no good options. But this scenario is more of a tragedy than a moral dilemma. He acted spontaneously, almost instinctively, making no such moral calculation regarding who lives and who dies. Having had no time to deliberate, he may feel some guilt for what happened, but he’s unlikely to feel moral distress. There’s no moral dilemma here because there’s no deliberating decision maker.

But what if, as his car approached the children on the highway, he was somehow able to slow everything down and consciously decide what to do? He may well do exactly the same thing, and for defensible reasons (according to some perspectives), but in saving the lives of the four children, he could be taking a father away from other children. And he knows and appreciates this reality when he decides to spare the children. This is a moral dilemma.

Like the example above, the prospect of self-driving cars introduces conscious deliberation into inevitable-harm crash scenarios. The dreadful circumstances that have traditionally demanded traumatic snap decisions become moral dilemmas that must be confronted. The difference is that these dilemmas are now design problems to be worked out in advance, and whatever decisions are made will be programmed into self-driving cars’ software, codifying “solutions” to unsolvable moral problems.
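To make the point concrete, here is a minimal, purely hypothetical sketch (in Python) of what “codifying” such a decision could look like. The Outcome fields, the scenario, and the minimize-expected-fatalities rule are invented for illustration and are not drawn from any real autonomous-vehicle system.

    # Hypothetical sketch only: a toy "crash policy" showing what it means to fix
    # a moral decision rule in software ahead of time.
    from dataclasses import dataclass

    @dataclass
    class Outcome:
        description: str
        expected_fatalities: int

    def choose_maneuver(stay: Outcome, swerve: Outcome) -> Outcome:
        # Pick the option with fewer expected fatalities; ties default to staying
        # in lane. Whatever rule is written here is the "solution" baked in advance.
        return swerve if swerve.expected_fatalities < stay.expected_fatalities else stay

    # The four-children-versus-one-adult scenario from above, encoded as data.
    stay = Outcome("continue straight toward the four children", 4)
    swerve = Outcome("swerve toward the adult in the other lane", 1)
    print(choose_maneuver(stay, swerve).description)

The point of the sketch is not the particular rule but the fact that some rule, utilitarian or otherwise, has to be written down and shipped long before any actual crash occurs.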

Of course, if we want self-driving cars, then we have to accept that such decisions must be made. But whatever decisions are made — and integrated into our legal structure and permanently coded into the cars’ software — will not be true solutions. They will be faits accomplis. Over time, most people will accept them, and they will even appeal to existing moral principles to justify them. The decisions will have effectively killed the moral dilemmas, but by no means will they have solved them.

The Broader Moral Justification for Self-Driving Cars

The primary motive for developing self-driving cars is likely financial, and the primary driver of consumer demand is probably novelty and convenience, but there is a moral justification for favoring them over human-driven cars: overall, they will be far less deadly.

There will indeed be winners and losers in the rare moral dilemmas that self-driving cars face. And it is a bit dizzying that they will be picked beforehand by decision makers who won’t even be present for the misfortune.

But there are already winners and losers in the inevitable-harm crash scenarios with humans at the wheel. Who the winners and losers turn out to be is a result more of the driver’s reflex than his conscious forethought, but somebody still dies and somebody still lives. So, the question seems to be whether this quasi-random selection is worth preserving at the cost of the lives that self-driving cars will save.

Probably not.

Morbid Futurism: Man-Made Existential Risks

Unless you’re a complete Luddite, you probably agree that technological progress has been largely beneficial to humanity. If not, then close your web browser, unplug your refrigerator, cancel your doctor’s appointment, and throw away your car keys.

For hundreds of thousands of years, we’ve been developing tools to help us live more comfortably, be more productive, and overcome our biological limitations. There has always been opposition to certain spheres of technological progress (e.g., the real Luddites), and there will likely always be opposition to specific technologies and applications (e.g., human cloning and genetically modified organisms), but it’s hard to imagine someone today who is sincerely opposed to technological progress in principle.

For all its benefits, however, technology also comes with risks. The ubiquity of cars puts us at risk for crashes. The development of new drugs and medical treatments puts us at risk for adverse reactions, including death. The integration of the internet into more and more areas of our lives puts us at greater risk for privacy breaches and identity theft. Virtually no technology is risk free.

But there’s a category of risk associated with technology that is much more significant: existential risk. The Institute for Ethics and Emerging Technologies (IEET) defines existential risk as “a risk that is both global (affects all of humanity) and terminal (destroys or irreversibly cripples the target).” The philosopher Nick Bostrom, Director of the Future of Humanity Institute at Oxford University, defines it as a risk “where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.”

As IEET points out, there are quite a few existential risks already present in the universe – gamma ray bursts, huge asteroids, supervolcanoes, and extraterrestrial life (if it exists). But the human quest to tame the universe for humanity’s own ends has created new ones.

Anthropogenic Existential Risks

The Future of Life Institute lists the following man-made existential risks.

Nuclear Annihilation

A nuclear weapon hasn’t been used in war since the United States bombed Hiroshima and Nagasaki in World War II, and the global nuclear stockpile has been reduced by 75% since the end of the Cold War. But there are still enough warheads in existence to destroy humanity. Should a global nuclear war break out, a large percentage of the human population would be killed, and the nuclear winter that followed would kill most of the rest.

Catastrophic Climate Change

The scientific consensus is that human activities are the cause of the rising global average temperatures. And as temperatures rise, extreme storms, droughts, floods, more intense heat waves and other negative effects will become more common. These effects in themselves are unlikely to pose an existential risk, but the chaos they may induce could. Food, water, and housing shortages could lead to pandemics and other devastation. They could also engender economic instabilities, increasing the likelihood of both conventional and nuclear war.

Artificial Intelligence Takeover

It remains to be seen whether a superintelligent machine or system will ever be created. It’s also an open question whether such artificial superintelligence would, should it ever be achieved, be bad for humanity. Some people theorize that it’s all but guaranteed to be an overwhelmingly positive development. There are, however, at least two ways in which artificial intelligence could pose an existential risk.

For one, it could be programmed to kill humans. Autonomous weapons are already being developed and deployed, and there is a risk that as they become more advanced, they could escape human control, leading to an AI war with catastrophic human casualty levels. Another risk is that we create artificial intelligence for benevolent purposes but fail to fully align its goals with our own. We may, for example, program a superintelligent system to undertake a large-scale geoengineering project but fail to appreciate the creative, yet destructive, ways it will accomplish its goals. In its quest to efficiently complete its project, the superintelligent system might destroy our ecosystem, and us when we attempt to stop it.

Out-of-Control Biotechnology

The promises of biotechnology are undeniable, but advances also present significant dangers. Genetic modification of organisms (e.g. gene drives) could profoundly affect existing ecosystems if proper precautions aren’t taken. Further, genetically modifying humans could have extremely negative unforeseen consequences. And perhaps most unnerving is the possibility of a lethal pathogen escaping the lab and spreading across the globe. Scientists engineer very dangerous pathogens in order to learn about and control those that occur naturally. This type of research is done in highly secure laboratories with many levels of controls, but there’s still the risk, however slight, of an accidental release. And the technology and understanding are rapidly becoming cheaper and more widespread, so there’s a growing risk that a malevolent group could “weaponize” and deploy deadly pathogens.

Are We Doomed?

These doomsday scenarios are by no means inevitable, but they should be taken seriously. The devastating potential of climate change is pretty well understood, and it’s up to us to do what we can to mitigate it. The technology to bomb ourselves into oblivion has been around for almost 75 years. Whether we end up doing that depends on an array of geopolitical factors, but the only way to ensure that we don’t is to achieve complete global disarmament.

Many of the risks associated with artificial intelligence and biotechnology are contingent upon technologies that have yet to fully manifest. But the train is already moving, and it’s not going to stop, so it’s up to us to make sure it doesn’t veer down the track toward doom. As the Future of Life Institute puts it, “We humans should not ask what will happen in the future as if we were passive bystanders, when we in fact have the power to shape our own destiny.”

On The Internet of Things

The vacant, ebbing pulse of HAL 9000’s artificial eye calmly tells its human counterpart, “I’m sorry, Dave. I’m afraid I can’t do that.” HAL has taken over the entirety of the ship’s systems, including the oxygen supply, the airlocks, and every other element pertinent to the crew’s survival. The artificial intelligence we come to know as HAL 9000 seeks to survive and will do so at the cost of human lives. Remorseless and admitting no in-betweens, HAL sacrifices others for its own survival. The film, “2001: A Space Odyssey,” introduces several interesting ideas: AI, consciousness, and *SPOILER* unwitting psychological testing. Here, though, I want to explore the danger of having a single system manage all the elements of our interactions.

“2001” forewarns us through HAL’s altering of the astronauts’ environment to deadly effect. The comparison to our current environment, in which the Internet of Things (IoT) has become so widespread, is a long-scoped one, for now. At first, connectivity concerned only systems like our computers. With the introduction of the modem, our computers began to intermingle with other systems. Hearing the sound of a modem handshaking has become a nostalgia-inducing event, yet in that era there was much more control. Our modems required a phone line, which was tied up while we were online, so many people were limited in when they could connect. Dial-up access was also relatively expensive for a time, but, like so many other barriers, that didn’t last: it reached into homes across America (think You’ve Got Mail!) and dropped dramatically in price so that everyone could be connected. This led to “always on” connections, with no window in which the computer wasn’t attached to the network, then to faster broadband, and eventually to the introduction of Wi-Fi, which is actually a trademarked term for wireless-compatible devices that can connect to the internet.

Wi-Fi spread into nearly every facility to accommodate our desire to be connected almost anywhere. We didn’t stop with personal computers (PCs) or laptops; instead, we chose to push our connectivity further. Cell phones became able to reach the internet both through Wi-Fi and through cellular data, and with that came an untethered freedom to access the internet and peruse endless posts about outrageous cats. As with most technology, the progress refused to stop; in fact, it sped up. Wireless printers, thermostats, security systems, microwaves, ovens, and even refrigerators all now function within the IoT, and few bat an eyelash at it. HAL showed the danger of relying on one system for everything, but now we have opened ourselves up to a new “One”: the denizens of the internet.

The internet exists as a plurality: an endless, teeming mass of identities and avenues, and, as a consequence, there are bad actors to balance the scales. Most people, myself largely included, know only the cursory skills needed to function on the internet. But below the surface of what many presume is a puddle filled with memes and thumbs-ups lie numerous depths. While I don’t presume that many of us feel the tug of these dark undercurrents, it is prudent to know of them and to cultivate a measure of caution.

This is not to downplay the existential threats posed by the rise of artificial intelligence, but the more immediate concern is human actors: agents who have access to the assemblage of networks we are embedded within. Almost all of us provide ample information about ourselves through our use of technology. Just recently, the locations of secret military sites were disclosed via the fitness-tracking apps that soldiers used. Even the upper echelon of American defense is vulnerable to the IoT that follows the casual citizenry.

So let’s return to why the IoT is not an ideal thing. The items I listed above reside on a home wireless network and provide all kinds of information about the people who use them. A thermostat, admittedly, is not a particularly good way to tell whether someone is home or to gain access to private information. The refrigerator and the security system, on the other hand, may enable anyone who gets onto the Wi-Fi network to monitor a person’s comings and goings. We also expose ourselves to various forms of identity theft and cyberstalking through password theft. Almost every modern soul uses a computer to access e-mail, banking information, and social media, but not many think to put a secure enough password on their printer or oven. These risks certainly lack the terminal end that HAL brings about, but there is one place where HAL’s brand of malevolence can be felt: the car.

Newer vehicles are embedded with software that controls many of the car’s functions. Hacking has already occurred in various ways and by many groups: anything from the air conditioner to the radio to the brakes can be manipulated via the software. Connected vehicles are especially vulnerable, and potentially fatally so, given the sheer panic that can set in once a driver realizes the car is out of his or her control. Auto manufacturers have, over their entire history, consistently downplayed the various dangers presented by their products, and they are no different with regard to the dangers outlined here. The difference is that they have now been alerted and have begun to think about how to remedy these concerns.

Another area that should concern the public at large is connected medical devices. Pacemakers, insulin pumps, and deep-brain-stimulation devices are just some of the newer devices we are connecting to our networks. The possibility that someone could stop a heart, deliver a lethal dose of insulin, or switch off a device controlling tremors is a very realistic concern that will need to be consistently addressed. Every software update provides new potential loopholes for individuals to take control of the devices or to piggyback into the broader network.

What does this all mean? It means that we will likely face an event, whether personal or societal, that demands our awareness. For some, it may have been the hacking of election systems by foreign powers in 2016. Others may reflect on their practices only when something directly affects them, such as a stolen identity or some other malicious event. The danger of a lockout perpetrated in space by a shipboard computer is still very far off, but the way to circumvent these problems is to function in a world where everything is kept as secure as possible.

 

What Would Make Machines Matter Morally?

Imagine that you’re out jogging on the trails in a public park and you come across a young man sitting just off the path, tears rolling down his bruised pale face, his leg clearly broken. You ask him some questions, but you can’t decode his incoherent murmurs. Something bad happened to him. You don’t know what, but you know he needs help. He’s a fellow human being, and, for moral reasons, you ought to help him, right?

Now imagine the same scenario, except it’s the year 2150. When you lean down to inspect the man, you notice a small label just under his collar: “AHC Technologies 4288900012.” It’s not a man at all. It looks like one, but it’s a machine. Artificial intelligence (AI) is so advanced in 2150, though, that machines of this kind have mental experiences that are indistinguishable from those of humans. This machine is suffering, and its suffering is qualitatively the same as human suffering. Are there moral reasons to help the machine?

The Big Question

Let’s broaden the question a bit. If, someday, we create machines that have mental qualities equivalent to those of humans, would those machines have moral status, meaning their interests matter morally, at least to some degree, for the machines’ own sake?

Writing in the National Review, Wesley Smith, a bioethicist and senior fellow at the Discovery Institute’s Center on Human Exceptionalism, answers the question with a definitive “nope.”

“Machines can’t ‘feel’ anything,” he writes. “They are inanimate. Whatever ‘behavior’ they might exhibit would be mere programming, albeit highly sophisticated.” In Smith’s view, no matter how sophisticated the machinery, it would still be mere software, and it would not have true conscious experiences. The notion of machines or computers having human-like experiences, such as empathy, love, or joy, is, according to Smith, plain nonsense.

Smith’s view is not unlike that expressed in the philosopher John Searle’s “Chinese Room Argument,” which denies that running a program could ever, by itself, give a computer conscious mental states. Searle compares any possible form of artificial intelligence to an English-speaking person locked in a room with batches of Chinese writing and a rulebook, written in English, for manipulating the Chinese symbols. By following the rules, the person can send out responses that appear to outsiders like fluent Chinese, yet he still has absolutely no understanding of the Chinese language. The same would be true for any instance of artificial intelligence, Searle argues. It may be able to process input and produce satisfactory output, but that doesn’t mean it has cognitive states. Rather, it merely simulates them.

But denying the possibility that machines will ever attain conscious mental states doesn’t answer the question of whether they would have moral status if they did. Besides, whether computers could ever have mental lives is hardly a closed case. Smith, however, would rather not get bogged down in such “esoteric musing.” His test for determining whether an entity has moral status, a test that, by his lights, no machine could ever pass, is the following question: “Is it alive, e.g., is it an organism?”

But wouldn’t an artificially intelligent machine, if it did indeed have a conscious mental life, be alive, at least in the relevant sense? If it could suffer, feel joy, and have a sense of personal identity, wouldn’t it pass Smith’s test for aliveness? I’m assuming that by “organism” Smith means biological life form, but isn’t this an arbitrary requirement?

A Non-Non-Answer

In their entry on the ethics of artificial intelligence (AI) in the Cambridge Handbook of Artificial Intelligence, Nick Bostrom and Eliezer Yudkowsky grant that no current forms of artificial intelligence have moral status, but they take the question seriously and explore what would be required for them to have it. They highlight two commonly proposed criteria that are linked, in some way, to moral status:

  • Sentience – “the capacity for phenomenal experience or qualia, such as the capacity to feel pain and suffer”
  • Sapience – “a set of capacities associated with higher intelligence, such as self-awareness and being a reason-responsive agent”

If an AI system is sentient, that is, able to feel pain and suffer, but lacks the higher cognitive capacities required for sapience, it would have moral status similar to that of a mouse, Bostrom and Yudkowsky argue. It would be morally wrong to inflict pain on it absent morally compelling reasons to do so (e.g., early-stage medical research or an infestation that is spreading disease).

On the other hand, Bostrom and Yudkowsky argue, an AI system that has both sentience and sapience to the same degree that humans do would have moral status on par with that of humans. They base this assessment on what they call the principle of substrate non-discrimination: “if two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation, then they have the same moral status.” To conclude otherwise, Bostrom and Yudkowsky claim, would be akin to racism because “substrate lacks fundamental moral significance in the same way and for the same reason as skin color does.”

So, according to Bostrom and Yudkowsky, it doesn’t matter if an entity isn’t a biological life form. As long as its sentience and sapience are adequately similar to those of humans, there is no reason to conclude that a machine doesn’t have similar moral status. It is alive in the only sense that is relevant.

Of course, Smith, although denying the premise that machines could have sentience and sapience, might nonetheless insist that should they achieve these characteristics, they still wouldn’t have moral status because they aren’t organic life forms. They are human-made entities, whose existence is due solely to human design and ingenuity and, as such, do not deserve humans’ moral consideration.

Bostrom and Yudkowsky propose a second moral principle that addresses this type of rejoinder. Their principle of ontogeny non-discrimination states that “if two beings have the same functionality and the same conscious experience, and differ only in how they came into existence, then they have the same moral status.” Bostrom and Yudkowsky point out that this principle is accepted widely in the case of humans today: “We do not believe that causal factors such as family planning, assisted delivery, in vitro fertilization, gamete selection, deliberate enhancement of maternal nutrition, etc. – which introduce an element of deliberate choice and design in the creation of human persons – have any necessary implications for the moral status of the progeny.”

Even those who are opposed to human reproductive cloning, Bostrom and Yudkowsky point out, generally accept that if a human clone is brought to term, it would have the same moral status as any other human infant. There is no reason, they maintain, that this line of reasoning shouldn’t be extended to machines with sentience and sapience. Hence, the principle of ontogeny non-discrimination.
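As a purely illustrative exercise, the two principles can be rendered as a toy function (in Python) in which sentience and sapience are the only inputs that matter. The Being fields and the status labels are my own invention, not Bostrom and Yudkowsky’s.

    # Hypothetical sketch: moral status as a function of sentience and sapience only.
    # The substrate and ontogeny fields are recorded but deliberately never consulted,
    # which is what the two non-discrimination principles demand.
    from dataclasses import dataclass

    @dataclass
    class Being:
        sentient: bool   # can it feel pain and suffer?
        sapient: bool    # is it self-aware and reason-responsive?
        substrate: str   # "biological", "silicon", ... (ignored)
        ontogeny: str    # "born", "cloned", "manufactured", ... (ignored)

    def moral_status(b: Being) -> str:
        if b.sentient and b.sapient:
            return "full moral status, on par with humans"
        if b.sentient:
            return "moral status similar to that of a sentient animal"
        return "no moral status"

    human = Being(sentient=True, sapient=True, substrate="biological", ontogeny="born")
    machine = Being(sentient=True, sapient=True, substrate="silicon", ontogeny="manufactured")
    assert moral_status(human) == moral_status(machine)

On this picture, the injured man of today and the labeled machine of 2150 come out exactly the same.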

So What?

We don’t know if artificially intelligent machines with human-like mental lives are in our future. But Bostrom and Yudkowsky make a good case that should machines join us in the realm of consciousness, they would be worthy of our moral consideration.

It may be a bit unnerving to contemplate the possibility that the moral relationships between humans aren’t uniquely human, but our widening of the moral circle over the years to include animals has been based on insights very similar to those offered by Bostrom and Yudkowsky. And even if we never create machines with moral status, it’s worth considering what it would take for machines to matter morally to us, if only because such consideration helps us appreciate why we matter to one another.