Justin Tosi and Brandon Warmke on Moral Grandstanding

It’s not hard to make the case that the quality of our public moral discourse is low. Determining why it’s low, and then fixing it, is where the work is to be done. In their 2016 article in Philosophy and Public Affairs, Justin Tosi and Brandon Warmke say that moral grandstanding is at least partially to blame.

Moral grandstanding is what others have come to call virtue signaling, but Tosi and Warmke (who don’t like that phrase) offer a more thorough examination of the phenomenon – a precise definition, examples of how it manifests, and an analysis of its ethical dimensions.

It’s worth reading the entire article, but I’ll summarize the main points below.

Tosi and Warmke’s Definition of Moral Grandstanding

They say that paradigmatic examples have two central features. The first is that the grandstander wants others to think of her as morally respectable (i.e., she wants others to make a positive moral assessment of her or her group). They call this the recognition desire. The second feature of moral grandstanding is the grandstanding expression: when a person grandstands, he contributes a grandstanding expression (e.g., a Tweet, a Facebook post, a speech, etc.) in order to satisfy the recognition desire.

So, the desire to be perceived as morally respectable is what motivates the moral grandstander. Of course, people may have multiple motivations for contributing to public moral discourse, but the moral grandstander’s primary motivation is to be seen as morally respectable. Tosi and Warmke therefore propose a threshold: for something to count as grandstanding, “the desire must be strong enough that if the grandstander were to discover that no one actually came to think of her as morally respectable in the relevant way, she would be disappointed.”

The Manifestations of Moral Grandstanding

Piling On is when someone reiterates a stance already taken by others simply to “get in on the action” and to signal his inclusion in the group that he perceives to be on the right side. An explanation of this, according to Tosi and Warmke, comes from the social psychological phenomenon known as social comparison: “Members of the group, not wanting to be seen as cowardly or cautious in relation to other members of the group, will then register their agreement so as not to be perceived by others less favorably than those who have already spoken up.”

Ramping Up is when people make stronger and stronger claims during a discussion. Tosi and Warmke offer the following exchange as an example:

Ann: We can all agree that the senator’s behavior was wrong and that she should be publicly censured.

Biff: Oh please—if we really cared about justice we should seek her removal from office. We simply cannot tolerate that sort of behavior and I will not stand for it.

Cal: As someone who has long fought for social justice, I’m sympathetic to these suggestions, but does anyone know the criminal law on this issue? I want to suggest that we should pursue criminal charges. We would all do well to remember that the world is watching.

The desire to be perceived as morally respectable, according to Tosi and Warmke, can lead to a “moral arms race.” Ramping up “can be used to signal that one is more attuned to matters of justice and that others simply do not understand or appreciate the nuance or gravity of the situation.”

Trumping Up is the insistence that a moral problem exists when one does not. The moral grandstander, with her desire to be seen as morally respectable, may try to display that she has a “keener moral sense than others.” This can result in her identifying moral problems in things that others rightfully have no issue with. “Whereas some alleged injustices fall below the moral radar of many,” Tosi and Warmke say, “they are not missed by the vigilant eye of the morally respectable.” So, in their attempt to gain or maintain moral respectability, the moral grandstander tries to bring attention to features of the world that others rightly find unproblematic.

Excessive Emotional Displays or Reports are used by the moral grandstander to publicly signal his moral insight or conviction. Citing empirical findings of a positive relationship between strength of moral convictions and strength of emotional reactions, Tosi and Warmke reason that excessive emotional displays or reports “could be used strategically to communicate to others one’s own heightened moral convictions, relative to others.” Grandstanders, in other words, may use excessive emotional displays or reports to signal that they are more attuned to moral issues and, thus, should be viewed as more morally insightful or sensitive.

Claims of Self-Evidence are used by the moral grandstander to signal her superior moral sensibilities. What is not so clear to others is obvious to the grandstander. “If you cannot see that this is how we should respond, then I refuse to engage you any further,” the grandstander may say (example by Tosi and Warmke). “Moreover,” Tosi and Warmke point out, “any suggestion of moral complexity or expression of doubt, uncertainty, or disagreement is often declaimed by the grandstander as revealing a deficiency in either sensitivity to moral concerns or commitment to morality itself.”

The Ethics of Moral Grandstanding

According to Tosi and Warmke, moral grandstanding is not just annoying but morally problematic. One major reason is that it results in the breakdown of productive moral discourse. And it does so in three ways.

First, it breeds an unhealthy cynicism about moral discourse. If moral grandstanding occurs because grandstanders want simply to be perceived as morally respectable, then once onlookers view this as a common motivation for moral claims, they’ll begin to think that moral discourse is merely about “showing that your heart is in the right place” rather than truly contributing to morally better outcomes.

The second reason grandstanding contributes to the breakdown of public moral discourse is that it contributes to “outrage exhaustion.” Because grandstanding often involves excessive displays of outrage, Tosi and Warmke think that people will have a harder time recognizing when outrage is appropriate and will find it harder and harder to muster outrage when it’s called for. Tosi and Warmke’s worry is that “because grandstanding can involve emotional displays that are disproportionate to their object, and because grandstanding often takes the form of ramping up, a public discourse overwhelmed by grandstanding will be subject to this cheapening effect. This can happen at the level of individual cases of grandstanding, but it is especially harmful to discourse when grandstanding is widely practiced.”

The third reason moral grandstanding harms public moral discourse is that it contributes to group polarization. This, Tosi and Warmke explain, has to do with the phenomenon of social comparison. As is the case in moral grandstanding, members of a group are motivated to outdo one another (i.e., they ramp up and trump up), so their group dynamic will compel them to advocate increasingly extreme and implausible views. This results in others perceiving morality as a “nasty business” and public moral discourse as consisting merely of extreme and implausible moral claims.

In addition to its effects on public moral discourse, Tosi and Warmke argue that moral grandstanding is problematic because the grandstander free rides on genuine moral discourse. He gains the benefits that are generated by non-grandstanding (i.e., genuine) participants, while also gaining the additional benefit of personal recognition. This, they say, is disrespectful to those the moral grandstander addresses when he grandstands.

Individual instances of moral grandstanding can also be disrespectful in a different way, Tosi and Warmke argue. Moral grandstanding, they say, can be kind of a “power grab.” When people grandstand, “they sometimes implicitly claim an exalted status for themselves as superior judges of the content of morality and its proper application.” They might use grandstanding as a way to obtain higher status within their own group, as a kind of “moral sage.” Alternatively, they might dismiss dissent as being beneath the attention of the morally respectable. Tosi and Warmke maintain that this is a problematic way of dealing with peers in public moral discourse:

…because in general we ought to regard one another as just that—peers. We should speak to each other as if we are on relatively equal footing, and act as if moral questions can only be decided by the quality of reasoning presented rather than the identity of the presenter himself. But grandstanders seem sometimes to deny this, and in so doing disrespect their peers.

The final reason that Tosi and Warmke offer for the problematic nature of moral grandstanding is that it is unvirtuous. Individual instances of grandstanding are generally self-promoting, and “so grandstanding can reveal a narcissistic or egoistic self-absorption.” Public moral discourse, they point out, involves talking about matters of serious concern, ones that potentially affect millions of people:

These are matters that generally call for other-directed concern, and yet grandstanders find a way to make discussion at least partly about themselves. In using public moral discourse to promote an image of themselves to others, grandstanders turn their contributions to moral discourse into a vanity project. Consider the incongruity between, say, the moral gravity of a world-historic injustice, on the one hand, and a group of acquaintances competing for the position of being most morally offended by it, on the other.

In contrast to the moral grandstander’s motivation for engaging in public moral discourse, the virtuous person’s motivation would not be largely egoistic. Tosi and Warmke suggest two possible motivations of the virtuous person:

First, the virtuous person might be motivated for other-directed reasons: she wants to help others think more carefully about the relevant issues, or she wants to give arguments and reasons in order to challenge others’ thinking in a way that promotes understanding. Second, the virtuous person might be motivated for principled reasons: she simply cares about having true moral beliefs and acting virtuously, and so wants others to believe what is true about morality and act for the right moral reasons. All we claim here is that the virtuous person’s motivation, unlike the grandstander’s, is not largely egoistic.

Because of its morally problematic nature – it’s bad for moral discourse, it’s unfair, and it’s unvirtuous – Tosi and Warmke take themselves “to have established a strong presumption against grandstanding.”

So, before you send that morally laden Tweet, think about your motivations. Do you genuinely want to contribute to moral discourse, or do you just want recognition?

The Case for Moral Doubt

This article was originally published in Quillette.

I don’t know if there is any truth in morality that is comparable to other truths. But I do know that if moral truth exists, establishing it at the most fundamental level is hard to do.

Especially in the context of passionate moral disagreement, it’s difficult to tell whose basic moral values are the right ones. It’s an even greater challenge to verify that you yourself are the more virtuous one. When you find yourself in a moral stalemate, where appeals to rationality and empirical reality have been exhausted, you have nothing left to stand on but your deeply ingrained values and a profound sense of your own righteousness.

You have a few options at this point. You can defiantly insist that you’re the one with the truth – that you’re right and the other person is stupid, morally perverse, or even evil. Or you can retreat to some form of nihilism or relativism because the apparent insolubility of the conflict must mean moral truth can’t exist. This wouldn’t be psychologically satisfying, though. If your moral conviction is strong enough to lead you into an impassioned conflict, you probably wouldn’t be so willing to send it off into the ether. Why would you be comfortable slipping from intense moral certainty to meta-level amorality?

You’d be better off acknowledging the impasse for what it is – a conflict over competing values that you’re not likely to settle. You can agree to disagree. No animosity. No judgment. You both have dug your heels in, but you’re no longer trying to drive each other back. You can’t convince him that you’re right, and he can’t convince you, but you can at least be cordial.

But I think there is an even better approach. You can loosen your heels’ grip in the dirt and allow yourself to be pushed back. Doubt yourself, at least a little. Take a lesson from the physicist Richard Feynman and embrace uncertainty:

You see, one thing is, I can live with doubt, and uncertainty, and not knowing. I think it’s much more interesting to live not knowing than to have answers which might be wrong. I have approximate answers and possible beliefs and different degrees of certainty about different things. But I’m not absolutely sure of anything, and there are many things I don’t know anything about, such as whether it means anything to ask why we’re here, and what the question might mean. I might think about it a little bit; if I can’t figure it out, then I go on to something else. But I don’t have to know an answer. I don’t feel frightened by not knowing things, by being lost in the mysterious universe without having any purpose, which is the way it really is, as far as I can tell – possibly. It doesn’t frighten me.

If Feynman can be so open to doubt about empirical matters, then why is it so hard to doubt our moral beliefs? Or, to put it another way, why does uncertainty about how the world is come easier than uncertainty about how the world, from an objective stance, ought to be?

Sure, the nature of moral belief is such that its contents seem self-evident to the believer. But when you think about why you hold certain beliefs, and when you consider what it would take to prove them objectively correct, they don’t seem so obvious.

Morality is complex and moral truth is elusive, so our moral beliefs are, to use Feynman’s phrase, just the “approximate answers” to moral questions. We hold them with varying degrees of certainty. We’re ambivalent about some, pretty sure about others, and very confident about still others. For none of them are we absolutely certain – or at least we shouldn’t be.

While I’m trying only to persuade you to be more skeptical about your own moral beliefs, you might be tempted by the more global forms of moral skepticism that deny the existence of moral truth or the possibility of moral knowledge. Fair enough, but what I’ve written so far isn’t enough to validate such a stance. The inability to reach moral agreement doesn’t imply that there is no moral truth. Scientists disagree about many things, but they don’t throw their hands up and infer that no one is correct, that there is no truth. Rather, they’re confident truth exists. And they leverage their localized skepticism – their doubt about their own beliefs and those of others – to get closer to it.

The moral sphere is different from the scientific sphere, and doubting one’s moral beliefs isn’t necessarily valuable because it eventually leads one closer to the truth – at least to the extent that truth is construed as being in accordance with facts. This type of truth is, in principle, more easily discoverable in science than it is in morality. But there is a more basic reason that grounds the importance of doubt in both the moral and scientific spheres: it fosters an openness to alternatives.

Embrace this notion. It doesn’t have to mean acknowledging that you don’t know and then moving on to something else. And it doesn’t mean abandoning your moral convictions outright or being paralyzed by self-doubt. It means abandoning your absolute certainty and treating your convictions as tentative. It means making your case but recognizing that you have no greater access to the ultimate moral truth than anyone else. Be open to the possibility that you’re wrong and that your beliefs are the ones requiring modification. Allow yourself to feel the intuitive pull of the competing moral value.

There’s a view out there that clinging unwaveringly to one’s moral values is courageous and, therefore, virtuous. There’s some truth to this idea. Sticking up for what one believes in is usually worthy of respect. But moral rigidity – the refusal to budge at all – is where public moral discourse breaks down. It is the root of the fragmentation and polarization that define contemporary public life.

Some people have been looking for ways to improve the ecosystem of moral discourse. In one study, Robb Willer and Matthew Feinberg demonstrated that when you reframe arguments to appeal to political opponents’ own moral values, you’re more likely to persuade them than if you make arguments based on your own values. “Even if the arguments that you wind up making aren’t those that you would find most appealing,” they wrote in The New York Times, “you will have dignified the morality of your political rivals with your attention, which, if you think about it, is the least that we owe our fellow citizens.”

Affirming your opponent’s values is indeed worthy. But if there is a normative justification for utilizing Willer and Feinberg’s antidote to entrenched moral disagreement, it seems to be the presumption that you, the persuader, are the righteous one and just need a salesman’s touch to bring the confused over to your side. Attempting to persuade others is inherent to moral discourse, but there’s something cynical and arrogant about using their own moral values against them, especially when you don’t hold those values yourself. Focusing solely on how to be the most persuasive also ignores another objective of moral discourse – being persuaded.

And the first step to being persuaded is doubting yourself.

Thomas Nagel on Moral Luck

The philosopher Thomas Nagel points out that for people to find a moral judgment fitting, whatever it is for which the person is judged must be under his control. If it turns out that he didn’t have control over the action, then we ordinarily think that morally judging him is inappropriate. “So a clear absence of control,” he says, “produced by involuntary movement, physical force, or ignorance of the circumstances, excuses what is done from moral judgment.”

This seems pretty uncontroversial, doesn’t it? You can’t be judged for what is not your fault.

The problem, according to Nagel, is that the degree to which we lack control is greater than we ordinarily recognize. It’s in this broad range of influences, over which we have no control, that Nagel identifies what he calls moral luck:

Where a significant aspect of what someone does depends on factors beyond his control, yet we continue to treat him in that respect as an object of moral judgment, it can be called moral luck.

If we apply consistently the idea that moral judgment is appropriate only for things people control, Nagel argues, it leaves few moral judgments intact. “Ultimately,” he says, “nothing or almost nothing about what a person does seems to be under his control.”

Four Different Types of Moral Luck

Nagel distinguishes between four different types of moral luck. They’re all influences that, because they’re beyond people’s control, should be morally irrelevant.

Luck in How Things Turn Out

These are cases in which moral judgments are affected by an action’s results when those results were beyond the person’s control. Nagel uses as an example a truck driver who accidentally runs over a child. If the driver is without fault, he will regret his role in the child’s death, but he won’t morally blame himself. But if he is the slightest bit negligent – by, for example, failing to have his brakes checked – and his negligence is a factor in the child’s death, then he will morally reproach himself for the death.

This is a case of moral luck because the driver would have to blame himself only slightly if there weren’t a situation in which he had to brake suddenly to avoid hitting a child. The driver’s degree of negligence is the same in both cases, but the moral assessment differs based on factors outside of the driver’s control – i.e., whether a child runs out in front of him.

Another example of luck in how things turn out is attempted murder. The intentions and motives behind an attempted murder can be exactly the same as those behind a successful one, but the penalty is, nonetheless, less severe. And the degree of culpability can depend on things outside of the would-be murderer’s control, such as whether the victim was wearing a bullet-proof vest or whether a bird flew into the bullet’s path.

From the commonsense perspective that moral responsibility depends upon control, Nagel says, this seems absurd. “How is it possible to be more or less culpable depending on whether a child gets into the path of one’s car, or a bird into the path of one’s bullet?”

Constitutive Luck

One’s character traits are often the object of moral assessment. We blame people for being greedy, envious, cowardly, cold, or ungenerous, and we praise them for having the opposite traits. To some extent, Nagel allows, such character traits may be the product of earlier choices. And to some extent, the person may be able to modify his character traits.

But character traits are largely a product of genetic predispositions and environmental circumstances, both of which are beyond people’s control. “Yet,” Nagel says, “people are morally condemned for such qualities, and esteemed for others equally beyond the control of the will: they are assessed for what they are like.”

Luck in One’s Circumstances

How people are morally assessed often depends on the circumstances in which they find themselves, even though those circumstances are largely beyond their control. “It may be true of someone that in a dangerous situation he would behave in a cowardly or heroic fashion,” Nagel says, “but if the situation never arises, he will never have the chance to distinguish or disgrace himself in this way, and his moral record will be different.”

Nagel also gives a more provocative example. In Nazi Germany, he says, ordinary citizens had the opportunity to either be heroes and oppose the regime or to behave badly and support it (and even participate in its atrocities). Many of the citizens failed this moral test. But the test, Nagel argues, was one “to which the citizens of other countries were not subjected, with the result that even if they, or some of them, would have behaved as badly as the Germans in like circumstances, they simply did not and therefore are not similarly culpable.”

Those people who would have behaved as badly as the Germans who supported the Nazi regime but didn’t find themselves in such circumstances were morally lucky. What they did or didn’t do was due to circumstances beyond their control.

Luck in How One is Determined by Antecedent Circumstances

This is essentially the problem of free will and moral responsibility. The extent to which one’s actions and choices are determined by the laws of nature and other preceding conditions (i.e., antecedent circumstances) seems to shrink the area in which people are responsible for their actions. Nagel doesn’t expound upon this problem, but he does point out its connection to the other kinds of moral luck:

If one cannot be responsible for consequences of one’s acts due to factors beyond one’s control, or for antecedents of one’s acts that are properties of temperament not subject to one’s will, or for the circumstances that pose one’s moral choices, then how can one be responsible even for the stripped-down acts of the will itself, if they are the product of antecedent circumstances outside of the will’s control?

Nagel’s Solution

Nagel doesn’t think there is a solution to the problem moral luck poses for moral responsibility and moral judgment. The problem arises because the influence of things not under our control causes our responsible selves to vanish, making those acts that are ordinarily the objects of moral judgment mere events. “Eventually nothing remains which can be ascribed to the responsible self,” Nagel says, “and we are left with nothing but a portion of the larger sequence of events, which can be deplored or celebrated, but not blamed or praised.”

The Three Levels of Ethics

It’s hard to come up with a definition of ethics that is both precise and satisfactory to everyone. But it helps to think about the levels at which ethical discussion and analysis take place.

Most concrete ethical issues involve questions about what we ought to do in a given situation. Underlying these questions are more abstract ones about right and wrong and good and bad more generally. And some discourse in moral philosophy is even more abstract.

Philosophers divide ethics into three different levels, which range from the very abstract to the concrete: metaethics, normative ethics, and applied ethics. Understanding these levels is a good step toward grasping the breadth of the subject.

Metaethics

Metaethics is the most abstract and philosophical level of ethics. Where normative and applied ethics seek to determine what is moral, metaethics concerns itself with the nature of morality itself. It deals with the following types of questions:

  • What does it mean when someone says something is “good” or “right”?
  • What is moral value, and where does it come from?
  • Is morality objective and universal, or is it relative to specific individuals or cultures?
  • Do moral facts exist?

These and other metaethical questions are important, but if you’re trying to figure out if a particular action is right or wrong, you might never get there pondering them. On the other hand, questions like Why be ethical? or Why do the right thing? are metaethical questions that are important for anyone interested in ethics. And they’re not so easy to answer.

Normative Ethics

Normative Ethics is concerned with the appropriate standards for right and wrong behavior. Normative ethical theories establish prescriptions – whether by foundational principles or good character traits – for how one ought to act or live. The following are prominent normative ethical approaches:

  • Virtue Ethics focuses on a person’s moral character. Virtue ethicists say we ought to develop virtuous characteristics – such as generosity, courage, and compassion – and exhibit virtuous behavior. This is different from other normative theories that propose more precise principles and rules for conduct.
  • Deontological theories emphasize one’s moral duties and obligations. They focus on the act itself, as either intrinsically good or bad, regardless of its consequences.
  • Consequentialist theories determine whether something is right or wrong by looking at its consequences. The ethical thing to do is that which has the best consequences (i.e., results in the most benefit, happiness, good, etc.) among the alternatives.

Applied Ethics

Applied ethics consists of the analysis of specific moral issues that arise in public or private life. Whereas normative ethics attempts to develop general standards for morality, applied ethics is concerned with specific moral controversies. Abortion, stem cell research, environmental concerns, and the appropriate treatment of animals are all applied ethics issues.

Applied ethics can use normative ethical theories, principles or rules derived from such theories, or analogical reasoning (which analyzes moral issues by drawing analogies between alike cases). Context-specific norms or expectations, such as those characterizing a particular profession (e.g., medicine or journalism), arrangement (e.g., an agreement between two parties), or relationship (e.g., the parent-child relationship) are also relevant to applied ethical analysis.

Bioethics, business ethics, legal ethics, environmental ethics, and media ethics are all applied ethics fields.

The different levels of ethics can overlap and inform one another. Normative theories, for instance, are based on metaethical assumptions (or even explicit metaethical propositions), such as the existence or non-existence of objective and universal notions of right and wrong. And, as noted above, applied ethics can draw on normative theories to resolve moral disputes. Metaethical perspectives can also seep into applied ethical analysis. A moral relativist, for example, may contend that a practice deemed egregious by his own culture’s standards is truly morally permissible, or even obligatory, in the culture in which it occurs.

Despite the overlap between the three levels, distinguishing between them is useful for clarifying one’s own views and analyzing those of others.


Three of the Best Podcasts About Ethics

Anyone can start a podcast, so you can run into everything from the conspiracist muttering into his cousin’s microphone to Barack Obama’s speechwriter talking about politics. The topics are endless, and the quality is variable, but there’s something for everyone.

If you’re looking for podcasts that seriously grapple with ideas in ethics and morality, then here are three of the best ones out there.

  • Very Bad Wizards

    This one is brought to you by Tamler Sommers, a philosopher at the University of Houston, and Dave Pizarro, a social psychologist at Cornell who studies morality. Of the three podcasts listed here, this one is the most entertaining, and maybe even the most informative. Tamler and Dave are funny, and irreverent at times, but they also have nuanced conversations. Each episode revolves around an article or two in moral philosophy or moral psychology. There’s an occasional guest, but typically it’s just the two of them disagreeing with each other.

  • Waking Up

    If you like longform interviews with scholarly people, then you’ll like Waking Up, a podcast by the neuroscientist and independent scholar Sam Harris. While the topics aren’t confined to ethics, there is almost always an ethical component to the discussion. It’s clear that Sam likes examining ethical issues. He even wrote a book called The Moral Landscape, where he argues that the is-ought gap and the naturalistic fallacy need not worry anyone because science can answer questions about morality. I’m not convinced, but Sam’s book is still worth reading, and his podcast is worth listening to. He’s thoughtful and systematic when approaching ideas, and you can learn a lot from him and his guests.

  • The Partially Examined Life

    The Partially Examined Life is a podcast “by some guys who were at one point set on doing philosophy for a living but then thought better of it.” Mark Linsenmayer, Seth Paskin, and Wes Alwan all went to graduate school in philosophy but called it quits before finishing their PhDs. Now they, along with Mark’s brother-in-law Dylan Casey, philosophize on their podcast. The show covers philosophy broadly, but a good number of the episodes are devoted to moral philosophy. In each episode, the guys have an informal but serious discussion of a philosophical text, sometimes with a guest. You can follow along and learn without reading the text, but you’ll get much more out of the episodes if you do the reading, which they link to in the show notes, beforehand.

These podcasters aren’t unknown to each other. Tamler appeared on The Partially Examined Life to discuss free will and moral responsibility, and he frequently jokes about needing to surpass them in the iTunes rankings. Dave announced that he’ll be joining them soon. Sam has thrice been on Very Bad Wizards, and Tamler and Dave were very recently Sam’s guests on Waking Up.

All three of the podcasts have been running for several years, so there are plenty of places to jump in. Take a break from Radiolab and This American Life and check them out.

The Unseemliness of Saintliness

There’s a psychological process called ethical fading that Max Bazerman and Ann Tenbrunsel talk about in their business ethics book Blind Spots: Why We Fail to Do What’s Right and What to Do About It. Basically, when we make decisions, the ethical implications can fade from the decision criteria, allowing us to violate our own moral convictions without even realizing it. No one is immune to this phenomenon, not even you.

The good news is there are simple strategies – of the behavioral economics stripe – that can keep the moral dimensions of decisions from disappearing into the dark. For instance, publicly pre-committing to an ethical action, or pre-committing to one and sharing that commitment with an unbiased and ethical person, makes people more likely to follow through in the future. There are also ways to ensure that abstract ethical values are salient during the decision process. These include imagining whether you’d be comfortable telling your mom about the decision, or thinking about your eulogy and what you’d want to be written about the values and principles you lived by.

We can, in effect, nudge ourselves toward morality.

It’s not hard to argue that battling meta-moral defects like ethical fading is a good thing. For Bazerman and Tenbrunsel’s target – the business and organizational context, where misconduct tends to flourish – this seems self-evident. What is difficult, however, is figuring out how hard and how often we ought to battle our default psychology in the name of morality. Should we inject ethics into every decision we make and every action we take, or is there a limit to how morally good we want to be?

Bazerman and Tenbrunsel don’t advocate taking on our psycho-ethical inadequacies at every turn, nor do they suggest that it’s even possible, but it’s still an interesting question whether as much moral goodness as possible is a good thing – whether we should, if we could, hack our psychology all the way to moral perfection. Whether we would want to be moral saints.

Moral Sainthood

Benjamin Franklin embarked on a systematic quest for moral perfection, which he described in his autobiography. He failed but wrote that he became a “better and happier man” than he otherwise would have been. His desire to be morally perfect, he said, was like “those who aim at perfect writing by imitating the engraved copies, tho’ they never reach the wished-for excellence of those copies, their hand is mended by the endeavor, and tolerable, while it continues fair and legible.” In other words, moral perfection is probably unattainable, but it’s nonetheless worth striving for.

George Orwell disagreed. In his essay on Gandhi, he wrote of his “aesthetic distaste” for the man’s saintliness, and he argued that sainthood is incompatible with what it means to be human:

The essence of being human is that one does not seek perfection, that one is sometimes willing to commit sins for the sake of loyalty, that one does not push asceticism to the point where it makes friendly intercourse impossible, and that one is prepared in the end to be defeated and broken up by life, which is the inevitable price of fastening one’s love upon other human individuals. No doubt alcohol, tobacco, and so forth, are things that a saint must avoid, but sainthood is also a thing that human beings must avoid.

It isn’t just that the average human is simply a failed saint who snubs the ideal of sainthood because it’s too hard to achieve. “Many people genuinely do not wish to be saints,” Orwell said, “and it is probable that some who achieve or aspire to sainthood have never felt much temptation to be human beings.”

The philosopher Susan Wolf was equally unimpressed with the moral saint. In her 1982 article “Moral Saints,” she argued that moral perfection isn’t “a model of personal well-being toward which it would be particularly rational or good or desirable for a human being to strive.”

There are two possible models of the moral saint, Wolf said. The Loving Saint is motivated solely by the wellbeing of others. Where most people derive a significant portion of their wellbeing from the ordinary joys of life – material comforts, fulfilling activities, love, companionship, etc. – the moral saint’s happiness genuinely lies “in the happiness of others, and so he would devote himself to others gladly, and with a whole and open heart.” The Rational Saint, on the other hand, is as tempted as the non-saint by the ordinary constituents of happiness, but she denies herself life’s pleasures out of moral duty. Broader moral concerns make her sacrifice her own interests to the interests of others.

Though the motivations of these types of saints differ, the difference would have little effect on the saints’ ultimate commitment – being as morally good and treating others as justly and kindly as possible. The moral saint, no matter which type, “will have the standard moral virtues to a non-standard degree.” The problem with moral perfection, according to Wolf, is that it conflicts with our ideals of personal excellence and wellbeing:

For the moral virtues, given that they are, by hypothesis, all present in the same individual, and to an extreme degree, are apt to crowd out the nonmoral virtues, as well as many of the interests and personal characteristics that we generally think contribute to a healthy, well-rounded, richly developed character.

If the moral saint spends his days pursuing nothing but moral goals, Wolf argued, then he has no time to pursue other worthwhile goals and interests. He’s not reading novels, playing music, or participating in athletics. There are also nonmoral characteristics that people value – a cynical or sarcastic wit, or a sense of humor that appreciates one – that would be incongruent with sainthood. A moral saint, Wolf said, would oppose such characteristics because they require a mindset of resignation and cynicism about the darker aspects of the world. The morally perfect person “should try to look for the best in people, give them the benefit of the doubt as long as possible, try to improve regrettable situations as long as there is any hope of success.”

Nor could the moral saint take an interest in things like gourmet cooking, high fashion, or interior design. If there is a justification for such activities, Wolf said, “it is one which rests on the decision not to justify every activity against morally beneficial alternatives, and this is a decision a moral saint will never make.”

Our ideals of excellence, Wolf said, contain a mixture of moral and nonmoral virtues. We want our models to be morally good – but “not just morally good, but talented or accomplished or attractive in nonmoral ways as well”:

We may make ideals out of athletes, scholars, artists – more frivolously out of cowboys, private eyes, and rock stars. We may strive for Katharine Hepburn’s grace, Paul Newman’s “cool”; we are attracted to the high-spirited passionate nature of Natasha Rostov; we admire the keen perceptiveness of Lambert Strether. Though there is certainly nothing immoral about the ideal characters or traits I have in mind, they cannot be superimposed upon the ideal of the moral saint. For although it is a part of many of these ideals that the characters set high, and not merely acceptable, moral standards for themselves, it is also essential to their power and attractiveness that the moral strengths go, so to speak, alongside of specific, independently admirable, nonmoral ground projects and dominant personal traits.

According to Wolf, although we include moral virtues in our ideals of personal excellence, we look in our models of moral excellence for people whose moral virtues occur alongside interests or traits of lower moral salience – “there seems to be a limit to how much morality we can stand.”

And, to be sure, it’s not just that we value well-roundedness and can’t stand saints’ singular commitment to morality. We don’t usually object to those who are passionately committed, above all else, to becoming, say, Olympic athletes or accomplished musicians. Such people might decide that their commitment to these goals is strong enough to be worth sacrificing other things that life might have to offer. Desiring to be a moral saint is different, however:

The desire to be as morally good as possible is apt to have the character not just of a stronger, but of a higher desire, which does not merely successfully compete with one’s other desires but which rather subsumes or demotes them. The sacrifice of other interests for the interest in morality, then, will have the character, not of a choice, but of an imperative.

There is something odd, Wolf continued, about morality or moral goodness being the object of a dominant passion. When the Loving Saint happily gives up life’s pleasures in the name of morality, it’s striking not because of how much he loves morality, but because of how little he seems to love life’s nonmoral pleasures. “One thinks that, if he can give these up so easily, he does not know what it is to truly love them,” Wolf wrote. The Rational Saint might desire what life offers in a way that the Loving Saint cannot, but in denying himself these pleasures out of moral duty, his position is equally disturbing – one reckons that he has “a pathological fear of damnation, perhaps, or an extreme form of self-hatred that interferes with his ability to enjoy the enjoyable life.”

Like Orwell, Wolf confronted the possibility that we are put off by models of moral saints because they highlight our own weaknesses and because sainthood would require us to sacrifice things we enjoy. She granted that our being unattracted to the requirements of sainthood is not, in itself, sufficient for condemning the ideal, but some of the qualities that the moral saint lacks are good qualities, ones that we find desirable, ones that we ought to like. And this, she said, provides us with reasons to discourage moral sainthood as an ideal:

In advocating the development of these varieties of excellence, we advocate nonmoral reasons for acting, and in thinking that it is good for a person to strive for an ideal that gives a substantial role to the interests and values that correspond to these virtues, we implicitly acknowledge the goodness of ideals incompatible with that of the moral saint.

So, Wolf agreed with Orwell that people don’t, and shouldn’t, strive to be moral saints – not because sainthood is incompatible with being human, however, but because it is incompatible with being an excellent one. And although Ben Franklin endorsed the pursuit of moral perfection, his life story seems perfectly harmonious with Wolf’s view: he may have failed to become morally perfect, but he succeeded in achieving personal excellence.

If moral sainthood is not a model of personal excellence and well-being toward which we should aspire, then maybe the psychological constraints on our morality aren’t defects at all. Maybe we need them to attain and enjoy the nonmoral goods in life. Sometimes it might be good to just let the ethics fade. The big question is, when?

The Knobe Effect and the Intentionality of Side Effects

Imagine that the chairman of a company decides to implement an initiative that will reap profits for his company but will also have a particular side effect. The chairman knows the side effect will occur, but he couldn’t care less. Making money is his only reason for making his decision.

The chairman goes forward with the initiative, his company makes a lot of money, and the side effect occurs as anticipated.

Did the chairman bring about the side effect intentionally? Don’t answer yet.

In a famous 2003 experiment, the philosopher Joshua Knobe showed that people’s judgments about whether a side effect is intentional or not depend on what the side effect is. He randomly assigned subjects to read one of two scenarios, which were the same as the one above, except the actual side effects of the chairman’s decision were presented. In the first scenario, the initiative harms the environment. In the second, it helps the environment.

Of the subjects who read the scenario in which the initiative harmed the environment, 82% said the chairman intentionally brought about the harm. Subjects made the opposite judgment in the other condition. Asked whether the chairman intentionally helped the environment by undertaking the initiative, 77% said that he didn’t.

So, there is an asymmetry in the way we ascribe intentionality to side effects – now known as the “Knobe Effect” or the “Side Effect Effect” – and this study suggests that it stems from our moral evaluations of those side effects. As Knobe concluded in the 2003 study, people “seem considerably more willing to say that a side-effect was brought about intentionally when they regard that side-effect as bad than when they regard it as good.”

But subsequent research by Knobe and others has shown that it’s not that simple. Sometimes people judge good side effects, such as when an action violates an unjust Nazi law, as intentional and bad side effects, like complying with the Nazi law, as not intentional. Richard Holton argues that it is whether a norm is knowingly violated, and not necessarily whether a side effect is morally good or bad, that influences people’s judgments of intentionality.

You probably would’ve struggled to say whether the chairman intended the “particular side effect” because there was nothing to guide your intuitions. If there were, though, you likely would’ve exemplified the Knobe Effect.

The “Forgotten” Bioethicist

In the bioethics field, praise is heaped upon Beauchamp and Childress (B and C) for their guiding text, “Principles of Biomedical Ethics.” They assuredly reap rewards by adding revisions to this book – however minor.

Before I furthered my bioethics training, I encountered another ethicist: W.D. Ross. I thoroughly enjoyed his book “The Right and the Good” for its insights and its attempt to generate a complete ethical theory. It was the most robust work I had encountered in my readings.

Ross was a Scottish philosopher who died in 1971. He is well regarded in academic institutions, which makes it sensible that B and C encountered his seminal text. He attempted to construct a sound ethical theory – that is, one that could assist with almost every ethical problem encountered.

“Fidelity; reparation; gratitude; non-maleficence; justice; beneficence; and self-improvement”

These are the derived “principles” that Ross used to create his ethical guidance.

B and C clearly looked to these principles for guidance, even directly pulling “justice” and “beneficence” from him for their book.

He even addresses the critical element called “moral residue.” This is an instance in which the principles have taken you to their limit, and you leave having done the best you can. The action you took was necessary, but it leaves you dissatisfied. Ross openly admits that life functions like this.

Leaning too heavily on the “perfect” action would cripple many people’s decision making. Prescriptive ethics can be pleasant and neat at times, yet there is an absence in them too. Are they “true” moral dilemmas if they can be resolved with a quick formula? Life can certainly function as a series of simple events. But when it gets difficult, that is when the more robust systems work. That is where true ethics lives.

The most significant feature of Ross’s text is its limits and humility. He acknowledges our innate capacity to fail morally, and he doesn’t seek to coddle our damaged egos after we fail. He acknowledges life’s messy and imprecise nature. Compared to the approach of B and C, which is used to inform bioethics and the entire field of human subject research ethics, his work clarifies the difficulty of doing our moral duties – even that we may struggle morally and that ethical resolution may never happen.

I am convinced that these ethical complications make for our most stimulating works of fiction, since they don’t provide a simple solution. A classic example is “Sophie’s Choice,” in which Sophie is forced to choose which of her two children will be spared; the other is fated to certain death. She chooses, but she never recovers from knowing the choice she made. Her conscience is torn asunder for the remainder of her life.

The only real takeaway is that ethics shines best when it digs into the nuance. When it says: we can’t provide a panacea. You will struggle with your choices… and that is expected.

Law and Morality

It’s hard to pin down how exactly the law relates to morality.

Some people believe that acting ethically means simply following the law. If it’s legal, then it’s ethical, they say. Of course, a moment’s reflection reveals this view to be preposterous. Lying and breaking promises are legal in many contexts, but they’re nearly universally regarded as unethical. Paying workers the legal minimum wage is legal, but failing to pay workers a living wage is seen by some as immoral. Abortion has been legal since the early 1970s, but many people still think it’s immoral. And discrimination based on race used to be legal, but laws outlawing it were passed because it was deemed immoral.

Law and morality do not always coincide. Sometimes the legal action isn’t the ethical one. People who realize this might have a counter-mantra: Just because it’s legal doesn’t mean that it’s ethical. This is a more sophisticated perspective than the one that simply conflates law and ethics.

According to this perspective, acting legally is not necessarily acting ethically, but it is necessary for acting ethically. The law is the minimum requirement, but morality may require people to go above and beyond their basic legal obligations. From this perspective, paying employees at least the minimum wage is necessary for acting ethically, but morality may require that they be paid enough to support themselves and their families. Similarly, non-discrimination may be the minimum requirement, but one might think that actively recruiting and integrating minorities into the workplace is a moral imperative.

The notion that legal behavior is a necessary condition for ethical behavior seems to be a good general rule. Most illegal acts are indeed unethical. But what about the old laws prohibiting the education of slaves? Or anti-miscegenation laws criminalizing interracial marriage, which were on the books in some US states until the 1960s? It’s hard to argue that people who broke these laws necessarily acted unethically.

You could say that these laws are themselves immoral and that this places them in a different category than generally accepted laws. This is probably true. These legal obligations do indeed create dubious moral commitments. But how can you say that the moral commitments are dubious if law and morality are intertwined to the extent that one can’t act ethically without acting legally?

And aren’t there some conditions under which breaking even a generally accepted law might be the right thing to do?

What about breaking a generally accepted law to save a life? What if a man, after exhausting all other options, stole a cancer treatment to save his wife’s life? The legal bases for property rights and the prohibition against theft are generally accepted, and in most other contexts, the man would be condemned as both unethical and a criminal. But stealing the treatment to save his wife’s life seems, at the very least, morally acceptable. This type of situation suggests a counter-mantra to those who believe legality is a prerequisite for ethicality: Just because it’s illegal doesn’t mean it’s unethical.

This counter-mantra doesn’t suggest that the law is irrelevant to ethics. Most of the time, it’s completely relevant. Good laws are, generally speaking, or perhaps ideally speaking, a codification of our morality.

But the connection between law and morality is complex, and there may be no general rule that captures how the two are related.

Sometimes, actions that are perfectly legal are nonetheless unethical. Other times, morality requires that we not only follow the law but that we go above and beyond our positive legal obligations. Yet, there are also those times when breaking the law is at least morally permissible.

There are also cases in which we are morally obligated to follow immoral laws, such as when defiance would be considerably more harmful than compliance. We live in a pluralistic society where laws are created democratically, so we can’t just flout all the laws we think are immoral – morality is hardly ever that black and white anyway. And respect for the rule of law is necessary for the stability of our society, so there should be a pretty high threshold for determining that breaking a law is morally obligatory.

If there is a mantra that adequately describes the relationship between law and morality, it goes something like this: It depends on the circumstances. 


You’re Probably Not as Ethical as You Think You Are

Bounded Ethicality and Ethical Fading

What if someone made you an offer that would benefit you personally but would require you to violate your ethical standards? What if you thought you could get away with a fraudulent act that would help you in your career?

Most of us think we would do the right thing. We tend to think of ourselves as honest and ethical people. And we tend to think that, when confronted with a morally dubious situation, we would stand up for our convictions and do the right thing.

But research in the field of behavioral ethics says otherwise. Contrary to our delusions of impenetrable virtue, we are no saints.

We’re all capable of acting unethically, and we often do so without even realizing it.

In their book Blind Spots: Why We Fail to Do What’s Right and What to Do About It, Max Bazerman and Ann Tenbrunsel highlight the unintentional, but predictable, cognitive processes that result in people acting unethically. They make no claims about what is or is not ethical. Rather, they explore the ethical “blind spots,” rooted in human psychology, that prevent people from acting according to their own ethical standards. The authors are business ethicists, and they emphasize the organizational setting, but their insights certainly apply to ethical decision making more generally.

The two most important concepts they introduce in Blind Spots are “bounded ethicality” and “ethical fading.”

Bounded Ethicality is derived from the political scientist Herbert Simon’s theory of bounded rationality – the idea that when people make decisions, they aren’t perfectly rational benefit maximizers, as classical economics suggests. Instead of choosing a course of action that maximizes their benefit, people accept a less than optimal but still good enough solution. They “satisfice” (a combination of “satisfy” and “suffice”), to use Simon’s term.

They do this because they don’t have access to all the relevant information, and even if they did, their minds wouldn’t have the capacity to adequately process it all. Thus, human rationality is bounded by informational and cognitive constraints.
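
To make the contrast concrete, here’s a minimal sketch in Python of the two decision rules. The options, benefit scores, and “good enough” threshold are invented purely for illustration – none of this comes from Simon or from Blind Spots:

    # Maximizing vs. satisficing as decision rules.
    # The options, scores, and threshold below are hypothetical.
    options = {"plan_a": 75, "plan_b": 85, "plan_c": 74}  # option -> estimated benefit

    def maximize(opts):
        """Classical rational agent: evaluate every option, pick the best."""
        return max(opts, key=opts.get)

    def satisfice(opts, threshold=70):
        """Simon's bounded agent: accept the first option that clears the bar."""
        for name, benefit in opts.items():
            if benefit >= threshold:
                return name
        return None  # no option was good enough

    print(maximize(options))   # "plan_b" -- the true optimum
    print(satisfice(options))  # "plan_a" -- good enough, chosen without checking the rest

The satisficer settles on plan_a and never learns that plan_b was better.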

Similarly, bounded ethicality refers to the cognitive constraints that limit people’s ability to think and act ethically in certain situations. These constraints blind individuals to the moral implications of their decisions, and they allow them to act in ways that violate the ethical standards that they endorse upon deeper reflection.

So, just as people aren’t rational benefit maximizers, they’re not saintly moral maximizers either.

Check out this video about bounded ethicality from the Ethics Unwrapped program at the University of Texas at Austin:

Ethical Fading is a process that contributes to bounded ethicality. It happens when the ethical implications of a decision are unintentionally disregarded during the decision-making process. When ethical considerations are absent from the decision criteria, it’s easier for people to violate their ethical convictions because they don’t even realize they’re doing so.

For example, a CEO might frame something as just a “business decision” and decide based on what will lead to the highest profit margin. Obviously, the most profitable decision might not be the most ethically defensible one. It may endanger employees, harm the environment, or even be illegal. But these considerations probably won’t come to mind if he’s only looking at the bottom line. And if they’re absent from the decision process, he could make an ethically suspect decision without even realizing it.

Check out this video about ethical fading from the Ethics Unwrapped program at the University of Texas at Austin.

You’re Not a Saint, So What Should You Do?

Nudge yourself toward morality.

Bazerman and Tenbrunsel recommend preparing for decisions in advance. Consider the motivations that are likely to influence you at the time of the decision and develop proactive strategies to reduce their influence. Pre-commitment strategies are highly effective. If someone publicly pre-commits to an ethical action, he’s more likely to follow through than if he doesn’t. Likewise, pre-committing to an intended ethical decision and sharing it with an unbiased and ethical person makes someone more likely to make the ethical decision in the future.

During actual decision making, it is crucial to elevate your abstract ethical values to the forefront of the decision-making process. Bazerman and Tenbrunsel point out that “rather than thinking about the immediate payoff of an unethical choice, thinking about the values and principles that you believe should guide the decision may give the ‘should’ self a fighting chance.” One strategy for inducing this type of reflection, they say, is to think about your eulogy and what you’d want to be written about the values and principles you lived by.

There’s also the “mom litmus test.” When tempted by a potentially unethical choice, ask yourself whether you’d be comfortable telling your mom (or dad or anyone else you truly respect) about the decision. Imagining your mom’s reaction is likely to bring abstract principles to mind, they contend.

Yet another strategy for evoking ethical values is to change the structure of the decision. According to Bazerman and Tenbrunsel, people are more likely to make the ethical choice if they have the chance to evaluate more than one option at a time. In one study, “individuals who evaluated two options at a time – an improvement in air quality (the ‘should’ choice) and a commodity such as a printer (the ‘want’ choice) – were more likely to choose the option that maximized the public good.” When participants evaluated these options independently, however, they were more likely to choose the printer.

In another study, people decided between two political candidates, one of higher integrity and one who promised more jobs. The people who evaluated the candidates side by side were more likely to pick the higher integrity candidate. Those who evaluated them independently were more likely to pick the one who would provide the jobs.

Bazerman and Tenbrunsel say this evidence suggests that reformulating an ethical quandary into a choice between two options, the ethical one and the unethical one, is helpful because it highlights “the fact that by choosing the unethical action, you are not choosing the ethical action.”

What Implications Do Bounded Ethicality and Ethical Fading Have for Moral Responsibility?

Bazerman and Tenbrunsel don’t address this question directly. But the notion that our default mode of ethical decision making in some circumstances is bounded by psychological and situational constraints – influences we’re not consciously aware of that affect our ethical decision-making abilities – seems to be in tension with the idea that we are fully morally responsible for all our actions.

The profit-maximizing CEO, for example, might be seen by his friends and peers as virtuous, caring, and thoughtful. He might care about his community and the environment, and he might genuinely believe that it’s unethical to endanger them. Still, he might unintentionally disregard the moral implications of illegally dumping toxic waste in the town river, harming the environment and putting citizens’ health at risk.

This would be unethical, for sure, but how blameworthy is he if he had yet to read Blind Spots and instead relied on his default psychology to make the decision? If ethical blind spots are constitutive elements of the human psyche, are the unethical actions caused by those blind spots as blameworthy as those that aren’t?

Either way, we can’t be certain that we’d have acted any differently in the same circumstances.

We’ll all fail the saint test at some point, but that doesn’t make us devils.

Learn More About Behavioral Ethics

Blind Spots: Why We Fail to Do What’s Right and What to Do About It

Ethicalsystems.org (Decision Making)

Ethics Unwrapped (Behavioral Ethics)