Rapoport’s Rules: Daniel Dennett on Critical Commentary

Do you want to learn how to utterly destroy SJWs in an argument? How about those fascist, right-wing Nazis?

Well, you’ll have to figure out how to do that on your own.

But if you’re one of those buzzkills who wants to give your argumentative opponent a fair shake, the philosopher Daniel Dennett has some advice. In his book Intuition Pumps and Other Tools for Thinking, he provides four simple rules (adapted from those crafted by the game theorist Anatol Rapoport) for criticizing the views of those with whom you disagree:

  1. You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, ‘Thanks, I wish I’d thought of putting it that way.’

  2. You should list any points of agreement (especially if they are not matters of general or widespread agreement).

  3. You should mention anything you have learned from your target.

  4. Only then are you permitted to say so much as a word of rebuttal or criticism.

An immediate effect of taking such an approach, Dennett says, is that your opponents “will be a receptive audience for your criticism: you have already shown that you understand their positions as well as they do (you agree with them on some important matters and have even been persuaded by something they said).”

Dennett admits that not all opponents are worthy of such respectful treatment, but when they are it pays off to be as charitable as possible. “It is worth reminding yourself,” he says, “that a heroic attempt to find a defensible interpretation of an author, if it comes up empty, can be even more devastating than an angry hatchet job. I recommend it.”

And so do I.

Always On: The Internet As Our New Court

Historically, there was a locale where those with wealth, power, and fame would congregate and be seen – the court. The court was a treacherous place for many: it could cost one favor or standing, but it could also burnish one’s reputation into something more. One could be etched into many people’s minds as a true master of wit or as a charlatan. The internet has become the court of fools, a public space in which we display our mastery over it.

The court had several key limitations: locale, size, and the “banana phone” problem. Locale limited a story’s reverberations, reducing how far and wide a tale of greatness or preposterousness could travel; the fall of historic empires and the rise of new borders ensured that such stories stayed confined to certain regions. Size limited how many people could witness a person’s behavior at court – only so many bodies fit in a physical space, so only so many people could be directly impressed. Lastly, there was the “banana phone” problem: a witness to someone’s actions or speech might embellish or downplay the occurrence, and this miscommunication altered how the world would come to perceive the person. The process of mythmaking begins in the eyes of others and ends in the ears of others.

Now to modernity: seeking “virality,” or the generation of memes, enables praise far louder than was previously possible. The word “meme” derives from the Greek mimēma – “that which is imitated.” The concept was originally applied to ideas that transmit easily to other people and “latch” into their thinking. This has grown into a field of study, memetics, which examines the traits by which information spreads. It has prompted a huge amount of debate over the sociological elements, but I am seeking to discuss the impact on public perception.

Now that the internet has pervaded almost every facet of our lives, it has generated a new court and intensified the memetic rate. This court eliminates some of the restrictions that previously existed. For instance, the locale has expanded to a digital terrain bounded only by the technology itself. No longer are words limited to the walls of a building: billions of people have access to the internet, unconstrained by the confines of a small physical court. Surprisingly, the “banana phone” problem – the misinterpretation problem – still exists. People can often skew the information provided to suit their beliefs or agenda.

Regardless, let’s talk about how the internet has promoted a new court. Those who maintain a savvy grasp of the medium and a penchant for wit reap a strange world of internet prestige. There are generally two types of internet prestige: “shit-posting” and persuasion. The former focuses on the power of the internet to generate hilarity. The latter focuses on presenting facts in a way that pulls others toward a cause. In recent months, we have seen the utter failure of one group, NRA-empowered individuals, and the mastery of the court by the Parkland shooting survivors.

The NRA has attempted to declare war on these students in various ways: going on Fox News and speaking at its own convention to insist that mental health is the main cause of gun violence – particularly school shootings. Then the denizens of the internet proved that “shit-posting” and political persuasion don’t have to be separate. They dredged up everything from NRA spokeswoman Dana Loesch’s past selling bizarre beet-infused supplements to the hypocrisy of the NRA convention itself being a gun-free zone.

The NRA did not have a good showing in response. It has offered mostly threats and tired talking points while the spotlight and the favor of the court remain heavily with the youth. There is a mastery found in the young that has been lost by other groups. The NRA was already struggling to gain the favor of the internet, having made terrifying videos about how the “media” and the Black Lives Matter movement were going to essentially kill your entire family. In between, it made some wonderfully laughable videos about the “liberal” media – including one where an NRA spokesman puts WHOLE lemons into a blender to make lemonade. Whole lemons. Rinds and all.

The youth have successfully managed to pull the court out into the world. Recently, they staged die-ins at Publix over the chain’s donations to a pro-NRA candidate, prompting Publix to withhold any further donations. I believe their mastery of the court will only lead to more outward political action. The court can be fickle, though, so we will see whether that mastery continues to translate into success. And if the NRA does learn to use the internet better, it may regain some appeal – but it is very far behind, and one slip-up sends everyone involved trending backward. I can’t help but want them to fail. Their history, from a small gun-safety group to outright lobbyists of Death, should be highlighted over and over again. With the adeptness of the youth, there is a good chance the NRA will spend a long time under the knowing glares warranted for jesters.

Political Annihilation: An Examination

Jean-Paul Sartre entered a cafe and scanned for his friend, Pierre, who was usually hunkered in the middle of the cafe working diligently. Pierre wasn’t there. Sartre glanced over the bustling scene and, in his effort to locate Pierre, perceived no other minds or Beings. Everything before him was negated except the features that would embody Pierre. The other people in the cafe – their desires, needs, and very existence – were annihilated in Sartre’s pursuit of Pierre. It seems like a very intuitive event. When we are looking for something as mundane as our keys, we sift through a variety of items without ever “knowing” what they are or remembering them upon forced recollection. We often mean no harm in our mental destruction; yet harm is a consequence of our inner workings.

The state of existence is contingent upon the ability to interact with the world. Any human retains the ability to interact with physical objects as matter engaging matter (in the scientific, atomic sense). Yet I am thoroughly convinced that this is not what causes one to exist as human. Falling back on the concept of the zoopolitical – that man’s essence derives from his nature as a political animal – it is the political that provides our existence. Rather than simply being matter, we exist as beings that matter, or at least try to. I am contending that our existence is inextricably linked with our political framework, for either one’s betterment or detriment. And so when we fall out of the political sphere, we are inherently missing a portion of our existence.

I am going to extrapolate this occurrence into the political realm – where almost all of our meanings are forced to reside. I used Sartre as the jumping-off point to suggest that human beings, and any living things, are wronged when we annihilate them in pursuit of other political ends. Treating others as non-existent or non-sentient objects in pursuit of other minds is one of the most dangerous maneuvers possible. It generates immense harm to the structure that people are designed to rely on. A very recent example is the attack on Deferred Action for Childhood Arrivals (DACA) recipients in America. These individuals boldly stepped into the political sphere in order to secure their future. The pursuit of the future is one of the most important political prompts because it requires an acknowledgement of the past and its harms. The DACA recipients came forward despite America’s history of castigating and deporting individuals like them. Most importantly, pursuing the future requires immense trust that the risks of exposing oneself are worthwhile. This bravery is how one is able to stand forth and enter the political realm. However, as we have been made aware, the DACA recipients are now being brushed aside. And here is the crux: their sentiments, their desires, their very future as political beings are being destroyed by representatives who are searching for their own Pierre. The denizens who matter to many representatives may not even exist, or are a minority, yet the representatives choose to close off the DACA recipients’ future in pursuit of those others – even though the majority of Americans wish for the DACA recipients to be permitted to stay in the US.

The representatives, mostly Republican, have been negligent in recognizing the DACA recipients as people meriting engagement – or are specifically hoping to punish them in order to appeal to constituents who barely exist. The consequences will be felt by everyone until they become normalized. I have no doubt that their actions, or inaction, will weigh heavily on the minds of every soul who wishes to matter in the political sphere. These invisible people will be marginalized time and time again until our system chooses recognition over annihilation.

Self-Driving Cars and Moral Dilemmas with No Solutions

If you do a Google search on the ethics of self-driving cars, you’ll find an abundance of essays and news articles, all very similar. You’ll be asked to imagine scenarios in which your brakes fail and you must decide which group of pedestrians to crash into and kill. You’ll learn about the “trolley problem,” a classic dilemma in moral philosophy that exposes tensions between our deeply held moral intuitions.

And you’ll likely come across the Moral Machine platform, a website designed by researchers at MIT to collect data on how people decide in trolley problem-like cases that could arise with self-driving cars — crash scenarios in which a destructive outcome is inevitable. In one scenario, for example, you must decide whether a self-driving car with sudden brake failure should plow through a man who’s illegally crossing the street or swerve into the other lane and take out a male athlete who is crossing legally.

Another scenario involves an elderly woman and a young girl crossing the road legally. You’re asked whether a self-driving car with failed brakes should continue on its path and kill the elderly woman and the girl or swerve into a concrete barrier, killing the passengers — a different elderly woman and a pregnant woman. And here’s one more. A young boy and a young girl are crossing legally over one lane of the road, and an elderly man and an elderly woman are crossing legally over the other lane of the road. Should the self-driving car keep straight and kill the two kids, or should it swerve and kill the elderly couple?

You get the sense from the Moral Machine project and popular press articles that although these inevitable-harm scenarios present very difficult design questions, the hardworking engineers, academics, and policymakers will eventually come up with satisfactory solutions. They have to, right? Self-driving cars are just around the corner.

There Are No Solutions

The problem with this optimism is that some possible scenarios, as rare as they may be, have no fully satisfactory solutions. They’re true moral dilemmas, meaning that no matter what one does, one has failed to meet some moral obligation.

A driver on a four-lane highway who swerves to miss four young children standing in one lane only to run over and kill an adult tying his shoe in the other could justify his actions according to the utilitarian calculation that he minimized harm (assuming those were the only options available to him). But he still actively turned his steering wheel and sentenced an independent party to death.

The driver had no good options. But this scenario is more of a tragedy than a moral dilemma. He acted spontaneously, almost instinctively, making no such moral calculation regarding who lives and who dies. Having had no time to deliberate, he may feel some guilt for what happened, but he’s unlikely to feel moral distress. There’s no moral dilemma here because there’s no decision maker.

But what if, as his car approached the children on the highway, he was somehow able to slow everything down and consciously decide what to do? He may well do exactly the same thing, and for defensible reasons (according to some perspectives), but in saving the lives of the four children, he could be taking a father away from other children. And he knows and appreciates this reality when he decides to spare the children. This is a moral dilemma.

Like the example above, the prospect of self-driving cars introduces conscious deliberation into inevitable-harm crash scenarios. The dreadful circumstances that have traditionally demanded traumatic snap decisions are now moral dilemmas that must be confronted in advance, as design problems, and whatever decisions are made will be programmed into self-driving cars’ software, codifying “solutions” to unsolvable moral problems.
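
To make the concern concrete, here is a minimal, purely hypothetical sketch of what “codifying” such a decision might look like. Nothing here resembles a real autonomous-vehicle stack, and every name is invented; the point is only that some rule, however chosen, must be written down in advance:

```python
# A deliberately oversimplified, hypothetical sketch of a codified crash
# policy. No real self-driving system works this way; the point is only
# that *some* rule must be committed to code before the crash happens.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    expected_fatalities: int

def choose_trajectory(stay: Outcome, swerve: Outcome) -> Outcome:
    """Encode one utilitarian rule: pick the path with fewer expected deaths."""
    if stay.expected_fatalities <= swerve.expected_fatalities:
        return stay
    return swerve

# The four-children-versus-one-adult highway case from above:
stay = Outcome("continue straight toward the four children", 4)
swerve = Outcome("swerve into the adult tying his shoe", 1)
print(choose_trajectory(stay, swerve).description)
```

Notice that even the tie-breaking `<=` quietly commits the car to staying its course when the counts are equal. Every such detail is a moral choice made permanent – which is the sense in which the dilemma gets killed rather than solved.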

Of course, if we want self-driving cars, then we have to accept that such decisions must be made. But whatever decisions are made — and integrated into our legal structure and permanently coded into the cars’ software — will not be true solutions. They will be faits accomplis. Over time, most people will accept them, and they will even appeal to existing moral principles to justify them. The decisions will have effectively killed the moral dilemmas, but by no means will they have solved them.

The Broader Moral Justification for Self-Driving Cars

The primary motive for developing self-driving cars is likely financial, and the primary driver of consumer demand is probably novelty and convenience, but there is a moral justification for favoring them over human-driven cars: overall, they will be far less deadly.

There will indeed be winners and losers in the rare moral dilemmas that self-driving cars face. And it is indeed a bit dizzying that they will be picked beforehand by decision makers that won’t even be present for the misfortune.

But there are already winners and losers in the inevitable-harm crash scenarios with humans at the wheel. Who the winners and losers turn out to be is a result more of the driver’s reflex than his conscious forethought, but somebody still dies and somebody still lives. So, the question seems to be whether this quasi-random selection is worth preserving at the cost of the lives that self-driving cars will save.

Probably not.

Morbid Futurism: Man-Made Existential Risks

Unless you’re a complete Luddite, you probably agree that technological progress has been largely beneficial to humanity. If not, then close your web browser, unplug your refrigerator, cancel your doctor’s appointment, and throw away your car keys.

For hundreds of thousands of years, we’ve been developing tools to help us live more comfortably, be more productive, and overcome our biological limitations. There has always been opposition to certain spheres of technological progress (e.g., the real Luddites), and there will likely always be opposition to specific technologies and applications (e.g., human cloning and genetically modified organisms), but it’s hard to imagine someone today who is sincerely opposed to technological progress in principle.

For all its benefits, however, technology also comes with risks. The ubiquity of cars puts us at risk for crashes. The development of new drugs and medical treatments puts us at risk for adverse reactions, including death. The integration of the internet into more and more areas of our lives puts us at greater risk for privacy breaches and identity theft. Virtually no technology is risk free.

But there’s a category of risk associated with technology that is much more significant: existential risk. The Institute for Ethics and Emerging Technologies (IEET) defines existential risk as “a risk that is both global (affects all of humanity) and terminal (destroys or irreversibly cripples the target).” The philosopher Nick Bostrom, Director of the Future of Humanity Institute at Oxford University, defines it as a risk “where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.”

As IEET points out, there are quite a few existential risks already present in the universe – gamma ray bursts, huge asteroids, supervolcanoes, and extraterrestrial life (if it exists). But the human quest to tame the universe for humanity’s own ends has created new ones.

Anthropogenic Existential Risks

The Future of Life Institute lists the following man-made existential risks.

Nuclear Annihilation

A nuclear weapon hasn’t been used in war since the United States bombed Hiroshima and Nagasaki in World War II, and the global nuclear stockpile has been reduced by 75% since the end of the Cold War. But there are still enough warheads in existence to destroy humanity. Should a global nuclear war break out, a large percentage of the human population would be killed, and the nuclear winter that would follow would kill most of the rest.

Catastrophic Climate Change

The scientific consensus is that human activities are the cause of rising global average temperatures. And as temperatures rise, extreme storms, droughts, floods, more intense heat waves, and other negative effects will become more common. These effects in themselves are unlikely to pose an existential risk, but the chaos they may induce could. Food, water, and housing shortages could lead to pandemics and other devastation. They could also engender economic instability, increasing the likelihood of both conventional and nuclear war.

Artificial Intelligence Takeover

It remains to be seen whether a superintelligent machine or system will ever be created. It’s also an open question whether such artificial superintelligence would, should it ever be achieved, be bad for humanity. Some people theorize that it’s all but guaranteed to be an overwhelmingly positive development. There are, however, at least two ways in which artificial intelligence could pose an existential risk.

For one, it could be programmed to kill humans. Autonomous weapons are already being developed and deployed, and there is a risk that as they become more advanced, they could escape human control, leading to an AI war with catastrophic human casualty levels. Another risk is that we create artificial intelligence for benevolent purposes but fail to fully align its goals with our own. We may, for example, program a superintelligent system to undertake a large-scale geoengineering project but fail to appreciate the creative, yet destructive, ways it will accomplish its goals. In its quest to efficiently complete its project, the superintelligent system might destroy our ecosystem, and us when we attempt to stop it.
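
A toy illustration may help here (hypothetical throughout – this is a ten-line caricature, not a real AI system): if the objective we write down omits something we care about, the optimizer is blind to it by construction and will happily sacrifice it.

```python
# Hypothetical candidate plans for the geoengineering project described
# above. "ecosystem_damage" matters to us, but it never enters the
# objective, so the optimizer cannot take it into account.
plans = [
    {"name": "cautious, reversible intervention", "progress": 70, "ecosystem_damage": 5},
    {"name": "fast, ecosystem-destroying intervention", "progress": 100, "ecosystem_damage": 95},
]

def misaligned_objective(plan: dict) -> int:
    # The designers rewarded project progress alone; damage is invisible
    # to the score, so maximizing the score ignores the damage entirely.
    return plan["progress"]

best = max(plans, key=misaligned_objective)
print(best["name"])  # -> the destructive plan, exactly as the objective demands
```

The failure isn’t that the optimizer is too clever; it’s that the objective was underspecified. Aligning goals means anticipating, in advance, everything the score should have counted.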

Out-of-Control Biotechnology

The promises of biotechnology are undeniable, but advances also present significant dangers. Genetic modification of organisms (e.g., gene drives) could profoundly affect existing ecosystems if proper precautions aren’t taken. Further, genetically modifying humans could have extremely negative unforeseen consequences. And perhaps most unnerving is the possibility of a lethal pathogen escaping the lab and spreading across the globe. Scientists engineer very dangerous pathogens in order to learn about and control those that occur naturally. This type of research is done in highly secure laboratories with many levels of controls, but there’s still the risk, however slight, of an accidental release. And the technology and understanding are rapidly becoming cheaper and more widespread, so there’s a growing risk that a malevolent group could “weaponize” and deploy deadly pathogens.

Are We Doomed?

These doomsday scenarios are by no means inevitable, but they should be taken seriously. The devastating potential of climate change is pretty well understood, and it’s up to us to do what we can to mitigate it. The technology to bomb ourselves into oblivion has been around for almost 75 years. Whether we end up doing that depends on an array of geopolitical factors, but the only way to ensure that we don’t is to achieve complete global disarmament.

Many of the risks associated with artificial intelligence and biotechnology are contingent upon technologies that have yet to fully manifest. But the train is already moving, and it’s not going to stop, so it’s up to us to make sure it doesn’t veer down the track toward doom. As the Future of Life Institute puts it, “We humans should not ask what will happen in the future as if we were passive bystanders, when we in fact have the power to shape our own destiny.”

On The Internet of Things

The vacant, ebbing pulse of HAL 9000’s artificial eye calmly tells its human counterpart, “I’m sorry, Dave. I’m afraid I can’t do that.” HAL 9000 had taken over the entirety of the ship’s systems, including oxygen, airlocks, and every other element pertinent to human survival aboard. The artificial intelligence we come to know as HAL 9000 seeks to survive and will do so at the cost of human lives. Remorseless and capitulating to no in-betweens, HAL sacrifices others for its own survival. While this tale resides in the film 2001: A Space Odyssey and introduces several interesting ideas – AI, consciousness, and *SPOILER* unwitting psychological testing – I am seeking to explore the danger of having a singular system manage all the elements of our interactions.

2001: A Space Odyssey forewarns us through HAL 9000’s altering of the astronauts’ environment to deadly effect. The comparison to our current environment, where the Internet of Things (IoT) has become so widespread, is a long-scoped one – for now. At first, connectivity was solely a matter of our computers. With the introduction of the modem, our computers began to intermingle with other systems; the sound of a modem communicating has become a nostalgia-inducing event. Yet there was much more control then. Our modems required a phone line that would be tied up during use, so many people were limited in when they could connect. It was also relatively expensive for a time, but like so much else, that didn’t last. It reached into homes across America (think You’ve Got Mail!) and dropped dramatically in price so everyone could be connected. This led to “always on” connections, in which there was no window when the computer wasn’t connected to the network. That led to faster methods and eventually to the introduction of Wi-Fi – which is actually a trademarked term describing wireless-compatible devices that can connect to the internet.

Wi-Fi spread into every facility to accommodate the ability to be connected almost anywhere. We didn’t stop with personal computers (PCs) or laptops; instead, we chose to push our connectivity further. Cell phones became able to connect to the internet both through Wi-Fi and through the cellular data system. With this came an untethered freedom to access the internet and peruse so many posts about outrageous cats. As with most technology, this invention refused to stop progressing. In fact, it sped up. Wireless printers, wireless thermostats, wireless security systems, wireless microwaves and ovens, and even wireless refrigerators all function within the IoT, and few bat an eyelash at this. HAL showed the dangers of relying on one system for everything, but now we have opened ourselves up to the new “One” – aka the denizens of the internet.

The internet exists as a plurality: an endless, teeming mass of identities and avenues, and, as a consequence, there exist bad actors to balance out the scales. Most people – largely myself included – know only the cursory skills needed to function on the internet. But below the surface of what many presume is a puddle filled with memes and thumbs-ups exist numerous depths. While I don’t presume that many of us feel the tugging of these dark undercurrents, it is prudent to know of them and cultivate a measure of caution.

This is not to downplay the existential threats posed by the rise of artificial intelligence, but the more immediate concern is human actors: agents who have access to the assemblage of networks we are embedded within. Almost all individuals provide access to ample information about themselves through their use of technology. Just recently, secret military locations were disclosed via fitness-tracking apps that soldiers used. The upper echelon of American protection is still vulnerable to the IoT that follows the casual citizenry.

So let’s return to why the IoT is not an ideal thing. The items I listed previously reside on a home wireless network and provide all kinds of information about the users present. Things like the thermostat are not really a strong indicator of whether someone is home, nor do they provide access to much private information. The refrigerator and the security system, on the other hand, may enable anyone who accesses the Wi-Fi network to monitor a person’s comings and goings. We also expose ourselves to various forms of identity theft and cyberstalking through password theft. Almost every modern soul uses a computer to access email, banking information, and social media, but few think of putting a sufficiently secure password on their printer or oven. These scenarios definitely lack the terminal end that HAL engages in, but there is one location where HAL’s malevolence can be felt: the car.

Newer vehicles are embedded with software that controls many of the car’s facets. Hacking has occurred in various ways and by many groups: anything from the air conditioner to the radio and even the brakes can be manipulated via the software. Connected vehicles are especially vulnerable, and potentially fatal, due to the sheer panic that can occur once one realizes the car is out of one’s control. Auto manufacturers have, over their entire history, consistently downplayed various dangers presented by their products. They are no different in regard to the dangers outlined here, but they have at least been alerted and have begun to think about how to remedy these concerns.

Another industry that should concern the public at large is connected medical devices. Pacemakers, insulin pumps, and deep brain stimulation devices are just some of the newer devices we are connecting to various networks. The ability to cause cardiac arrest, deliver a lethal dose of insulin, or turn off a device controlling tremors is a very realistic concern that will need to be consistently addressed. Every software update provides new potential loopholes for individuals to take control of the devices or to piggyback into the broader network.

What does this all mean? It means that we will likely have an event – whether personal or social – that will demand our awareness. For some, it may have been the hacking of the election system by foreign powers in 2016. Others may only reflect upon their practices when it directly impacts them via a stolen identity or some other malicious event. The danger of a lockout perpetrated in space by a device is still very far off, but the way to circumvent these problems is to function in a world where everything is as secure as possible.

What Would Make Machines Matter Morally?

Imagine that you’re out jogging on the trails in a public park and you come across a young man sitting just off the path, tears rolling down his bruised pale face, his leg clearly broken. You ask him some questions, but you can’t decode his incoherent murmurs. Something bad happened to him. You don’t know what, but you know he needs help. He’s a fellow human being, and, for moral reasons, you ought to help him, right?

Now imagine the same scenario, except it’s the year 2150. When you lean down to inspect the man, you notice a small label just under his collar: “AHC Technologies 4288900012.” It’s not a man at all. It looks like one, but it’s a machine. Artificial intelligence (AI) is so advanced in 2150, though, that machines of this kind have mental experiences that are indistinguishable from those of humans. This machine is suffering, and its suffering is qualitatively the same as human suffering. Are there moral reasons to help the machine?

The Big Question

Let’s broaden the question a bit. If, someday, we create machines that have mental qualities equivalent to those of humans, would those machines have moral status, meaning their interests matter morally, at least to some degree, for the machines’ own sake?

Writing in the National Review, Wesley Smith, a bioethicist and senior fellow at the Discovery Institute’s Center on Human Exceptionalism, answers the question with a definitive “nope.”

“Machines can’t ‘feel’ anything,” he writes. “They are inanimate. Whatever ‘behavior’ they might exhibit would be mere programming, albeit highly sophisticated.” In Smith’s view, no matter how sophisticated the machinery, it would still be mere software, and it would not have true conscious experiences. The notion of machines or computers having human-like experiences, such as empathy, love, or joy, is, according to Smith, plain nonsense.

Smith’s view is not unlike that expressed in the philosopher John Searle’s “Chinese Room Argument,” which denies the possibility of computers having conscious mental states. Searle equates any possible form of artificial intelligence to an English-speaking person locked in a room and given a batch of Chinese writing along with English instructions for manipulating the symbols. Though the person can follow the instructions to produce appropriate responses in Chinese, he still has absolutely no understanding of the Chinese language. The same would be true for any instance of artificial intelligence, Searle argues. It may be able to process input and produce satisfactory output, but that doesn’t mean it has cognitive states. Rather, it merely simulates them.

But denying the possibility that machines will ever attain conscious mental states doesn’t answer the question of whether they would have moral status if they did. Besides, the impossibility of computers having mental lives is not a closed case. But Smith would rather not get bogged down by such “esoteric musing.” His test for determining whether an entity has moral status – resting on his assertion that machines never can have it – is the following question: “Is it alive, e.g., is it an organism?”

But wouldn’t an artificially intelligent machine, if it did indeed have a conscious mental life, be alive, at least in the relevant sense? If it could suffer, feel joy, and have a sense of personal identity, wouldn’t it pass Smith’s test for aliveness? I’m assuming that by “organism” Smith means a biological life form, but isn’t this an arbitrary requirement?

A Non-Non-Answer

In their entry on the ethics of artificial intelligence (AI) in the Cambridge Handbook of Artificial Intelligence, Nick Bostrom and Eliezer Yudkowsky grant that no current forms of artificial intelligence have moral status, but they take the question seriously and explore what would be required for them to have it. They highlight two commonly proposed criteria that are linked, in some way, to moral status:

  • Sentience – “the capacity for phenomenal experience or qualia, such as the capacity to feel pain and suffer”
  • Sapience – “a set of capacities associated with higher intelligence, such as self-awareness and being a reason-responsive agent”

If an AI system is sentient, that is, able to feel pain and suffer, but lacks the higher cognitive capacities required for sapience, it would have moral status similar to that of a mouse, Bostrom and Yudkowsky argue. It would be morally wrong to inflict pain on it absent morally compelling reasons to do so (e.g., early-stage medical research or an infestation that is spreading disease).

On the other hand, Bostrom and Yudkowsky argue, an AI system that has both sentience and sapience to the same degree that humans do would have moral status on par with that of humans. They base this assessment on what they call the principle of substrate non-discrimination: “if two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation, then they have the same moral status.” To conclude otherwise, Bostrom and Yudkowsky claim, would be akin to racism because “substrate lacks fundamental moral significance in the same way and for the same reason as skin color does.”

So, according to Bostrom and Yudkowsky, it doesn’t matter if an entity isn’t a biological life form. As long as its sentience and sapience are adequately similar to those of humans, there is no reason to conclude that a machine doesn’t have similar moral status. It is alive in the only sense that is relevant.

Of course, Smith, although denying the premise that machines could have sentience and sapience, might nonetheless insist that should they achieve these characteristics, they still wouldn’t have moral status because they aren’t organic life forms. They are human-made entities, whose existence is due solely to human design and ingenuity and, as such, do not deserve humans’ moral consideration.

Bostrom and Yudkowsky propose a second moral principle that addresses this type of rejoinder. Their principle of ontogeny non-discrimination states that “if two beings have the same functionality and the same conscious experience, and differ only in how they came into existence, then they have the same moral status.” Bostrom and Yudkowsky point out that this principle is accepted widely in the case of humans today: “We do not believe that causal factors such as family planning, assisted delivery, in vitro fertilization, gamete selection, deliberate enhancement of maternal nutrition, etc. – which introduce an element of deliberate choice and design in the creation of human persons – have any necessary implications for the moral status of the progeny.”

Even those who are opposed to human reproductive cloning, Bostrom and Yudkowsky point out, generally accept that if a human clone is brought to term, it would have the same moral status as any other human infant. There is no reason, they maintain, that this line of reasoning shouldn’t be extended to machines with sentience and sapience. Hence, the principle of ontogeny non-discrimination.

So What?

We don’t know if artificially intelligent machines with human-like mental lives are in our future. But Bostrom and Yudkowsky make a good case that should machines join us in the realm of consciousness, they would be worthy of our moral consideration.

It may be a bit unnerving to contemplate the possibility that the moral relationships between humans aren’t uniquely human, but our widening of the moral circle over the years to include animals has been based on insights very similar to those offered by Bostrom and Yudkowsky. And even if we never create machines with moral status, it’s worth considering what it would take for machines to matter morally to us, if only because such consideration helps us appreciate why we matter to one another.

How is the Internet Changing Moral Outrage?

We don’t need a social scientist to tell us that there’s something different about moral outrage when it’s expressed online. We can see for ourselves that it’s more prevalent, more intense, and often more destructive.

Just how and why the internet and digital media are transforming the way we express outrage is a more interesting question. And in her article “Moral Outrage in the Digital Age,” published last fall in Nature Human Behaviour, psychologist Molly Crockett gives some preliminary answers.

Psychological Framework for Expression of Moral Outrage

Crockett begins by providing a basic psychological framework for understanding the expression of moral outrage. First, moral norm violations are the stimuli that evoke moral outrage – an emotional reaction. Second, there are responses to the outrage-provoking stimuli: the subjective experience of moral outrage, along with other factors, motivates one to express the moral outrage through gossip, shaming, or punishment. Finally, there are the outcomes of moral outrage expression – the costs and benefits for both the individual and society.

[Figure: Crockett’s framework for moral outrage – adapted from Figure 1 in Crockett’s article.]

Using this framework and existing research on moral outrage, Crockett takes a look at how digital media platforms may be promoting the expression of moral outrage and modifying both the subjective experience of it and the personal and social outcomes associated with it.

Stimuli That Trigger Moral Outrage

Digital media changes both the prevalence and the nature of the stimuli that trigger moral outrage, Crockett argues. People experience moral outrage when they witness norms being violated, but encountering norm violations in person is rare. One study found that less than 5% of people’s daily experiences involve directly witnessing or experiencing immoral behavior. On the internet, however, people learn about numerous immoral acts – many more, in fact, than they do in person or from traditional media.

In the pre-internet age, Crockett says, the function of gossip was to spread news within one’s local social network to communicate who could be trusted. The reason for sharing information about immoral acts was, therefore, to reinforce trust and cooperation within the community. But digital platforms have changed the incentives for sharing such information. “Because they compete for our attention to generate advertising revenue,” Crockett argues, “their algorithms promote content that is most likely to be shared, regardless of whether it benefits those who share it – or is even true.”

Crockett also points to research on virality showing that people are more likely to share morally laden content that provokes outrage. Because such content generates more revenue via viral sharing, she argues, “natural selection-like forces may favour ‘supernormal’ stimuli that trigger much stronger outrage responses than do transgressions we typically encounter in everyday life.”

Responses to Outrage-Provoking Stimuli

Crockett argues that digital media may be changing the way we experience moral outrage. One possibility is that the constant exposure to outrageous content causes “outrage fatigue”: it could be diminishing the overall intensity of the outrage experience, or causing people to be more selective in their outrage to reduce emotional and attentional demands. On the other hand, she says, research has shown that expressing anger leads to more anger, which could mean the ease with which people express outrage online could lead to more subsequent outrage. More research is needed in this area, she says.

Besides changing our experiences of outrage, Crockett says, online platforms make expressing outrage more convenient and less risky. Expressing moral outrage offline requires effort, if only because the outraged person must be in physical proximity to his target. “Since the tools for easily and quickly expressing outrage online are literally at our fingertips,” Crockett argues, “a person’s threshold for expressing outrage is probably lower online than offline.”

Crockett also suggests that the design of most digital media platforms encourages the habitual expression of outrage. Offline, she says, the stimuli that provoke outrage and the way people respond depend on the context. Social media platforms, on the other hand, streamline outrage-provoking stimuli and the available responses into a “stimulus-response-outcomes” architecture that is consistent across situations: “Clickbait headlines are presented alongside highly distinctive visual icons that allow people to express outrage at the tap of the finger.”

Furthermore, Crockett says, positive feedback for outrage in the form of “likes” and “shares” is delivered at unpredictable intervals, a good reinforcement schedule for promoting habit formation. And so, “just as a habitual snacker eats without feeling hunger, a habitual online shamer might express outrage without actually feeling outrage.”

Personal and Social Outcomes of Expressing Outrage

Expressing moral outrage offline carries a risk of retaliation. But online platforms limit this risk, Crockett says, because people tend to affiliate mostly with audiences with whom they agree and where the chance of backlash is relatively low. Moreover, online platforms allow people to hide in online crowds. And as Crockett puts it, “Shaming a stranger on a deserted street is far riskier than joining a Twitter mob of thousands.”

Empathic distress is another cost of outrage expression. Punishing and shaming other human beings means inflicting harm on them, and this is unpleasant for most people. Online platforms reduce this unpleasantness, Crockett argues, because they hide the suffering of real people behind two-dimensional online icons. “It’s a lot easier to shame an avatar than someone whose face you can see,” she says.

Online platforms alter not only the personal costs of outrage expression but the rewards as well. When people express moral outrage, they signal their moral quality to others and thus reap reputational rewards. And given that people are more likely to punish when others are watching, these reputational rewards are at least part of the motivation for expressing outrage. Online platforms amplify the reputational benefits. “While offline punishment signals your virtue only to whoever might be watching,” Crockett says, “doing so online instantly advertises your character to your entire social network and beyond.”

Expressing moral outrage benefits society by negatively sanctioning immoral behavior and signaling to others that such behavior is unacceptable. Crockett argues, however, that online platforms may limit these and other social benefits in four ways. First, because online networks are ideologically segregated, the targets of outrage, and like-minded others, are unlikely to receive messages that could induce them to change their behavior. Second, because digital media has lowered the threshold for outrage expression, it may reduce the utility of outrage in distinguishing the “truly heinous from the merely disagreeable.” Third, expressing outrage online might make people less likely to meaningfully engage in social causes.

Finally, online outrage expression likely contributes to the deepening social divides we have been witnessing. Based on research suggesting that a desire to punish others makes them seem less human, Crockett speculates that if digital platforms exacerbate moral outrage, in doing so they may increase polarization by further dehumanizing the targets of outrage. Noting the rapid acceleration of polarization in the United States, Crockett warns that if digital media accelerates it even further, “we ignore it at our peril.”

What Next?

At the dawn of 2018, Mark Zuckerberg announced that his personal challenge this year is to fix Facebook. “The world feels anxious and divided, and Facebook has a lot of work to do — whether it’s protecting our community from abuse and hate, defending against interference by nation states, or making sure that time spent on Facebook is time well spent,” he wrote in a Facebook post.

It’s still unclear what Facebook will look like at the end of the year, but Zuckerberg’s first step is to fix the News Feed so that users will see more posts from friends and fewer, but higher quality, articles from media organizations. These changes may well be for the better, but if Zuckerberg truly wants to make sure that time spent on Facebook is well spent, he might heed Crockett’s call for more research on digital media and moral outrage.

As Crockett points out, much of the data necessary to do such research isn’t publicly available. “These data can and should be used to understand how new technologies might transform ancient social emotions from a force for collective good into a tool for collective self-destruction,” she says.

If Zuckerberg answers the call, maybe other platforms will follow, and maybe the internet will be a more enjoyable place.

Justin Tosi and Brandon Warmke on Moral Grandstanding

It’s not hard to make the case that the quality of our public moral discourse is low. Determining why it’s low, and then fixing it, is where the work is to be done. In their 2016 article in Philosophy & Public Affairs, Justin Tosi and Brandon Warmke say that moral grandstanding is at least partially to blame.

Moral grandstanding is what others have come to call virtue signaling, but Tosi and Warmke (who don’t like that phrase) offer a more thorough examination of the phenomenon – a precise definition, examples of how it manifests, and an analysis of its ethical dimensions.

It’s worth reading the entire article, but I’ll summarize the main points below.

Tosi and Warmke’s Definition of Moral Grandstanding

They say that paradigmatic examples have two central features. The first is that the grandstander wants others to think of her as morally respectable (i.e., she wants others to make a positive moral assessment of her or her group). They call this the recognition desire. The second feature of moral grandstanding is the grandstanding expression: when a person grandstands, he contributes a grandstanding expression (e.g., a Tweet, a Facebook post, a speech, etc.) in order to satisfy the recognition desire.

So, the desire to be perceived as morally respectable is what motivates the moral grandstander. Of course, people may have multiple motivations for contributing to public moral discourse, but the moral grandstander’s primary motivation is to be seen as morally respectable. Tosi and Warmke therefore propose a threshold: for something to count as grandstanding, “the desire must be strong enough that if the grandstander were to discover that no one actually came to think of her as morally respectable in the relevant way, she would be disappointed.”

The Manifestations of Moral Grandstanding

Piling On is when someone reiterates a stance already taken by others simply to “get in on the action” and to signal his inclusion in the group that he perceives to be on the right side. An explanation of this, according to Tosi and Warmke, comes from the social psychological phenomenon known as social comparison: “Members of the group, not wanting to be seen as cowardly or cautious in relation to other members of the group, will then register their agreement so as not to be perceived by others less favorably than those who have already spoken up.”

Ramping Up is when people make stronger and stronger claims during a discussion. Tosi and Warmke offer the following exchange as an example:

Ann: We can all agree that the senator’s behavior was wrong and that she should be publicly censured.

Biff: Oh please—if we really cared about justice we should seek her removal from office. We simply cannot tolerate that sort of behavior and I will not stand for it.

Cal: As someone who has long fought for social justice, I’m sympathetic to these suggestions, but does anyone know the criminal law on this issue? I want to suggest that we should pursue criminal charges. We would all do well to remember that the world is watching.

The desire to be perceived as morally respectable, according to Tosi and Warmke, can lead to a “moral arms race.” Ramping up “can be used to signal that one is more attuned to matters of justice and that others simply do not understand or appreciate the nuance or gravity of the situation.”

Trumping Up is the insistence that a moral problem exists when one does not. The moral grandstander, with her desire to be seen as morally respectable, may try to display that she has a “keener moral sense than others.” This can result in her identifying moral problems in things that others rightfully have no issue with. “Whereas some alleged injustices fall below the moral radar of many,” Tosi and Warmke say, “they are not missed by the vigilant eye of the morally respectable.” So, in their attempt to gain or maintain moral respectability, the moral grandstander tries to bring attention to features of the world that others rightly find unproblematic.

Excessive Emotional Displays or Reports are used by the moral grandstander to publicly signal his moral insight or conviction. Citing empirical findings of a positive relationship between strength of moral convictions and strength of emotional reactions, Tosi and Warmke reason that excessive emotional displays or reports “could be used strategically to communicate to others one’s own heightened moral convictions, relative to others.” Grandstanders, in other words, may use excessive emotional displays or reports to signal that they are more attuned to moral issues and, thus, should be viewed as more morally insightful or sensitive.

Claims of Self-Evidence are used by the moral grandstander to signal her superior moral sensibilities. What is not so clear to others is obvious to the grandstander. “If you cannot see that this is how we should respond, then I refuse to engage you any further,” the grandstander may say (an example from Tosi and Warmke). “Moreover,” Tosi and Warmke point out, “any suggestion of moral complexity or expression of doubt, uncertainty, or disagreement is often declaimed by the grandstander as revealing a deficiency in either sensitivity to moral concerns or commitment to morality itself.”

The Ethics of Moral Grandstanding

According to Tosi and Warmke, moral grandstanding is not just annoying but morally problematic. One major reason is that it results in the breakdown of productive moral discourse. And it does so in three ways.

First, it breeds an unhealthy cynicism about moral discourse. If moral grandstanding occurs because grandstanders want simply to be perceived as morally respectable, then once onlookers view this as a common motivation for moral claims, they’ll begin to think that moral discourse is merely about “showing that your heart is in the right place” rather than truly contributing to morally better outcomes.

The second reason grandstanding contributes to the breakdown of public moral discourse is that it contributes to “outrage exhaustion.” Because grandstanding often involves excessive displays of outrage, Tosi and Warmke think that people will have a harder time recognizing when outrage is appropriate and will find it harder and harder to muster outrage when it’s called for. Tosi and Warmke’s worry is that “because grandstanding can involve emotional displays that are disproportionate to their object, and because grandstanding often takes the form of ramping up, a public discourse overwhelmed by grandstanding will be subject to this cheapening effect. This can happen at the level of individual cases of grandstanding, but it is especially harmful to discourse when grandstanding is widely practiced.”

The third reason moral grandstanding harms public moral discourse is that it contributes to group polarization. This, Tosi and Warmke explain, has to do with the phenomenon of social comparison. As is the case in moral grandstanding, members of a group are motivated to outdo one another (i.e., they ramp up and trump up), so their group dynamic will compel them to advocate increasingly extreme and implausible views. This results in others perceiving morality as a “nasty business” and public moral discourse as consisting merely of extreme and implausible moral claims.

In addition to its effects on public moral discourse, Tosi and Warmke argue that moral grandstanding is problematic because the grandstander free rides on genuine moral discourse. He gains the benefits that are generated by non-grandstanding (i.e., genuine) participants, while also gaining the additional benefit of personal recognition. This, they say, is disrespectful to those the moral grandstander addresses when he grandstands.

Individual instances of moral grandstanding can also be disrespectful in a different way, Tosi and Warmke argue. Moral grandstanding, they say, can be kind of a “power grab.” When people grandstand, “they sometimes implicitly claim an exalted status for themselves as superior judges of the content of morality and its proper application.” They might use grandstanding as a way to obtain higher status within their own group, as a kind of “moral sage.” Alternatively, they might dismiss dissent as being beneath the attention of the morally respectable. Tosi and Warmke maintain that this is a problematic way of dealing with peers in public moral discourse:

…because in general we ought to regard one another as just that—peers. We should speak to each other as if we are on relatively equal footing, and act as if moral questions can only be decided by the quality of reasoning presented rather than the identity of the presenter himself. But grandstanders seem sometimes to deny this, and in so doing disrespect their peers.

The final reason that Tosi and Warmke offer for the problematic nature of moral grandstanding is that it is unvirtuous. Individual instances of grandstanding are generally self-promoting, and “so grandstanding can reveal a narcissistic or egoistic self-absorption.” Public moral discourse, they point out, involves talking about matters of serious concern, ones that potentially affect millions of people:

These are matters that generally call for other-directed concern, and yet grandstanders find a way to make discussion at least partly about themselves. In using public moral discourse to promote an image of themselves to others, grandstanders turn their contributions to moral discourse into a vanity project. Consider the incongruity between, say, the moral gravity of a world-historic injustice, on the one hand, and a group of acquaintances competing for the position of being most morally offended by it, on the other.

In contrast to the moral grandstander’s motivation for engaging in public moral discourse, the virtuous person’s motivation would not be largely egoistic. Tosi and Warmke suggest two possible motivations of the virtuous person:

First, the virtuous person might be motivated for other-directed reasons: she wants to help others think more carefully about the relevant issues, or she wants to give arguments and reasons in order to challenge others’ thinking in a way that promotes understanding. Second, the virtuous person might be motivated for principled reasons: she simply cares about having true moral beliefs and acting virtuously, and so wants others to believe what is true about morality and act for the right moral reasons. All we claim here is that the virtuous person’s motivation, unlike the grandstander’s, is not largely egoistic.

Because of its morally problematic nature – it’s bad for moral discourse, it’s unfair, and it’s unvirtuous – Tosi and Warmke take themselves “to have established a strong presumption against grandstanding.”

So, before you send that morally laden Tweet, think about your motivations. Do you genuinely want to contribute to moral discourse, or do you just want recognition?

The Case for Moral Doubt

This article was originally published in Quillette.

I don’t know if there is any truth in morality that is comparable to other truths. But I do know that if moral truth exists, establishing it at the most fundamental level is hard to do.

Especially in the context of passionate moral disagreement, it’s difficult to tell whose basic moral values are the right ones. It’s an even greater challenge to verify that you yourself are the more virtuous party. When you find yourself in a moral stalemate, where appeals to rationality and empirical evidence have been exhausted, you have nothing left to stand on but your deeply ingrained values and a profound sense of your own righteousness.

You have a few options at this point. You can defiantly insist that you’re the one with the truth – that you’re right and the other person is stupid, morally perverse, or even evil. Or you can retreat into some form of nihilism or relativism, reasoning that the apparent insolubility of the conflict must mean moral truth can’t exist. This wouldn’t be psychologically satisfying, though. If your moral conviction is strong enough to lead you into an impassioned conflict, you probably wouldn’t be so willing to send it off into the ether. Why would you be comfortable slipping from intense moral certainty to meta-level amorality?

You’d be better off acknowledging the impasse for what it is – a conflict over competing values that you’re unlikely to settle. You can agree to disagree. No animosity. No judgment. You’ve both dug in your heels, but you’re no longer trying to drive each other back. You can’t convince him that you’re right, and he can’t convince you, but you can at least be cordial.

But I think there is an even better approach. You can loosen your heels’ grip in the dirt and allow yourself to be pushed back. Doubt yourself, at least a little. Take a lesson from the physicist Richard Feynman and embrace uncertainty:

You see, one thing is, I can live with doubt, and uncertainty, and not knowing. I think it’s much more interesting to live not knowing than to have answers which might be wrong. I have approximate answers and possible beliefs and different degrees of certainty about different things. But I’m not absolutely sure of anything, and there are many things I don’t know anything about, such as whether it means anything to ask why we’re here, and what the question might mean. I might think about it a little bit; if I can’t figure it out, then I go onto something else. But I don’t have to know an answer. I don’t feel frightened by not knowing things, by being lost in the mysterious universe without having any purpose, which is the way it really is, as far as I can tell – possibly. It doesn’t frighten me.

If Feynman can be so open to doubt about empirical matters, then why is it so hard to doubt our moral beliefs? Or, to put it another way, why does uncertainty about how the world is come easier than uncertainty about how the world, from an objective stance, ought to be?

Sure, the nature of moral belief is such that its contents seem self-evident to the believer. But when you think about why you hold certain beliefs, and when you consider what it would take to prove them objectively correct, they don’t seem so obvious.

Morality is complex and moral truth is elusive, so our moral beliefs are, to use Feynman’s phrase, just “approximate answers” to moral questions. We hold them with varying degrees of certainty. We’re ambivalent about some, pretty sure about others, and very confident about still others. But we’re absolutely certain of none of them – or at least we shouldn’t be.

While I’m trying only to persuade you to be more skeptical about your own moral beliefs, you might be tempted by the more global forms of moral skepticism that deny the existence of moral truth or the possibility of moral knowledge. Fair enough, but what I’ve written so far isn’t enough to validate such a stance. The inability to reach moral agreement doesn’t imply that there is no moral truth. Scientists disagree about many things, but they don’t throw their hands up and infer that no one is correct, that there is no truth. Rather, they’re confident truth exists. And they leverage their localized skepticism – their doubt about their own beliefs and those of others – to get closer to it.

The moral sphere is different from the scientific sphere, and doubting one’s moral beliefs isn’t necessarily valuable because it eventually leads one closer to the truth – at least to the extent that truth is construed as accordance with the facts. This type of truth is, in principle, more easily discoverable in science than in morality. But there is a more basic reason that grounds the importance of doubt in both the moral and scientific spheres: it fosters an openness to alternatives.

Embrace this kind of doubt. It doesn’t have to mean acknowledging that you don’t know and then moving on to something else. And it doesn’t mean abandoning your moral convictions outright or being paralyzed by self-doubt. It means abandoning your absolute certainty and treating your convictions as tentative. It means making your case while recognizing that you have no greater access to the ultimate moral truth than anyone else. Be open to the possibility that you’re wrong and that your beliefs are the ones requiring modification. Allow yourself to feel the intuitive pull of the competing moral value.

There’s a view out there that clinging unwaveringly to one’s moral values is courageous and, therefore, virtuous. There’s some truth to this idea: sticking up for what one believes in is usually worthy of respect. But moral rigidity – the refusal to budge at all – is where public moral discourse breaks down. It is the root of the fragmentation and polarization that define contemporary public life.

Some people have been looking for ways to improve the ecosystem of moral discourse. In one study, Robb Willer and Matthew Feinberg demonstrated that when you reframe arguments to appeal to political opponents’ own moral values, you’re more likely to persuade them than if you make arguments based on your own values. “Even if the arguments that you wind up making aren’t those that you would find most appealing,” they wrote in The New York Times, “you will have dignified the morality of your political rivals with your attention, which, if you think about it, is the least that we owe our fellow citizens.”

Affirming your opponent’s values is indeed worthy. But if there is a normative justification for using Willer and Feinberg’s antidote to entrenched moral disagreement, it seems to be the presumption that you, the persuader, are the righteous one and need only a salesman’s touch to bring the confused over to your side. Attempting to persuade others is inherent to moral discourse, but there’s something cynical and arrogant about using their own moral values against them, especially when you don’t hold those values yourself. Focusing solely on how to be the most persuasive also ignores another objective of moral discourse – being persuaded.

And the first step to being persuaded is doubting yourself.