Law and Morality

It’s hard to pin down how exactly the law relates to morality.

Some people believe that acting ethically means simply following the law. If it’s legal, then it’s ethical, they say. Of course, a moment’s reflection reveals this view to be preposterous. Lying and breaking promises are legal in many contexts, but they’re nearly universally regarded as unethical. Paying workers the minimum wage is, by definition, legal, but failing to pay them a living wage is seen by some as immoral. Abortion has been legal since the early 1970s, but many people still think it’s immoral. And discrimination based on race used to be legal, but laws outlawing it were passed because it was deemed immoral.

Law and morality do not always coincide. Sometimes the legal action isn’t the ethical one. People who realize this might have a counter-mantra: Just because it’s legal doesn’t mean that it’s ethical. This is a more sophisticated perspective than one that simply conflates law and ethics.

According to this perspective, acting legally is not necessarily acting ethically, but it is necessary for acting ethically. The law is the minimum requirement, but morality may require people to go above and beyond their basic legal obligations. From this perspective, paying employees at least the minimum wage is necessary for acting ethically, but morality may require that they be paid enough to support themselves and their families. Similarly, non-discrimination may be the minimum requirement, but one might think that actively recruiting and integrating minorities into the workplace is a moral imperative.

The notion that legal behavior is a necessary condition for ethical behavior seems to be a good general rule. Most illegal acts are indeed unethical. But what about the old laws prohibiting the education of slaves? Or anti-miscegenation laws criminalizing interracial marriage, which were on the books in some US states until the 1960s? It’s hard to argue that people who broke these laws necessarily acted unethically.

You could say that these laws are themselves immoral and that this places them in a different category than generally accepted laws. This is probably true. These legal obligations do indeed create dubious moral commitments. But how can you say that the moral commitments are dubious if law and morality are intertwined to the extent that one can’t act ethically without acting legally?

And aren’t there some conditions under which breaking even a generally accepted law might be the right thing to do?

What about breaking a generally accepted law to save a life? What if a man, after exhausting all other options, stole a cancer treatment to save his wife’s life? The legal bases for property rights and the prohibition against theft are generally accepted, and in most other contexts, the man would be condemned as both unethical and a criminal. But stealing the treatment to save his wife’s life seems, at the very least, morally acceptable. This type of situation suggests a counter-mantra to those who believe legality is a prerequisite for ethicality: Just because it’s illegal doesn’t mean it’s unethical.

This counter-mantra doesn’t suggest that the law is irrelevant to ethics. Most of the time, it’s completely relevant. Good laws are, generally speaking, or perhaps ideally speaking, a codification of our morality.

But the connection between law and morality is complex, and there may be no general rule that captures how the two are related.

Sometimes, actions that are perfectly legal are nonetheless unethical. Other times, morality requires that we not only follow the law but that we go above and beyond our positive legal obligations. Yet, there are also those times when breaking the law is at least morally permissible.

There are also cases in which we are morally obligated to follow immoral laws, such as when defiance would be considerably more harmful than compliance. We live in a pluralistic society where laws are created democratically, so we can’t just flout all the laws we think are immoral – morality is hardly ever that black and white anyway. And respect for the rule of law is necessary for the stability of our society, so there should be a pretty high threshold for determining that breaking a law is morally obligatory.

If there is a mantra that adequately describes the relationship between law and morality, it goes something like this: It depends on the circumstances. 


Mother Night and the Call for Sincerity

Howard Campbell is the protagonist of Kurt Vonnegut’s novel Mother Night, and he first appears to be a reprehensible soul: an American turned Nazi propagandist who, we later learn, is working as an American double agent. His charisma-laden speeches inspire das Volk while carrying hidden messages to the American forces. When the story begins, Campbell is in an Israeli holding cell, awaiting trial for his crimes as a Nazi. We learn the truth through his story.

There are so many great Kurt Vonnegut books. Why does this one mean so much to me? One quote resonates: “We are who we pretend to be, so we must be careful what we pretend to be.”

Even when I first read the book, I drew parallels to the demagoguery on cable news. Almost a decade later, that quote has come home to roost. This story, much like Vonnegut’s, carries some humor, though.

Infowars grew to popularity in the early 2000s, pushing conspiracy theories of all sorts. At its helm was a man named Alex Jones. He appeared to be a zany personality who professed insane amounts of virility and a deep understanding of the forces managing the world.

He is a character straight out of a wrestling promo. Until recently, it was hard to glean anything else about this foolish being. His machismo runs rampant in the supplements he endorses: a brandished torso followed by explanations of how one can boost one’s manliness, muscles, and, most importantly, attraction from the opposite sex.

His abrasive nature has helped push some joyful food for conspiracists: that 9/11 was an inside job, that Justice Antonin Scalia was murdered, and that the Newtown shooting was a false flag. The last one has enabled constant harassment of the victims’ families – an immeasurable agony inflicted by a talking head. Emboldened souls even take to calling the parents and accusing them of faking everything – even their mourning.

The strongest of men. The most insightful person in media. Dodging any and all provocateurs.

Then he got divorced.

We bore witness to the travesty of his life and his struggle to contain his act. In the courtroom, he was unable to answer simple questions and blamed a bowl of chili – the southwestern comfort food. His wife accused him of a general foolishness that could only be permitted by a caricature of a man. Twelve years of marriage to someone as braggadocious as Alex Jones would garner some “interesting” tales. I am certain more will come.

One of the most telling moments came from Jones’ lawyer, who conceded that Alex Jones’ radio and video personality was just that: a personality. What does that actually mean for our hero? He immediately released a video saying the lawyer was wrong! That it was all just kabuki theater, and that we should disregard the lawyer’s statement.

Should we get deep insights into the woes of such personalities? Yes. Absolutely. The unmitigated power granted to charlatans can only be checked by the light of their flaws. Jones may be a family man when he is not behind the microphone, but his listeners and viewers can never be certain. They can picture him as the champion of their ideology, while he sees it as just something he does.

Vonnegut’s character struggles. He loses his love, his freedom, and his integrity. Unlike with a fictional character, I don’t have the privilege of knowing what resides in the hearts of men. So I am not sure which grew first, or who is real anymore: Alex Jones or “Alex Jones.” Regardless, it doesn’t seem that he was sufficiently careful in his pretending.


Keep An Eye on the Rearview

“History does not repeat, but it does instruct.”

That’s Yale historian Timothy Snyder’s message in his new book On Tyranny: Twenty Lessons from the Twentieth Century. If we can learn anything from the past, it’s that democracies can collapse. It happened in Europe in the 1920s and 1930s, and then again after the Soviets began spreading authoritarian communism in the 1940s.

American democracy is no less vulnerable to tyrannical forces, Snyder warns. “Americans today are no wiser than the Europeans who saw democracy yield to fascism, Nazism, or communism in the twentieth century. Our one advantage is that we might learn from their experience.”

The short 126-page treatise – written in reaction to Donald Trump’s ascension to the Oval Office – looks at failed European democracies to highlight the “deep sources of tyranny” and offer ways to resist them:

Do not obey in advance. Defend institutions. Believe in truth. Contribute to good causes. Listen for dangerous words. Be as courageous as you can.

Inspired as it may be by Trumpian politics, On Tyranny is a useful guide for defending democratic governance in any political era. As Snyder notes, looking to past democracies’ failures is an American tradition. As they built our country, the Founding Fathers considered the ways ancient democracies and republics declined. “The good news is that we can draw upon more recent and relevant examples than ancient Greece and Rome,” Snyder writes. “The bad news is that the history of modern democracy is also one of decline and fall.”

The actionable steps for defending our democratic institutions make up the bulk of On Tyranny, but Snyder’s big critique is of Americans’ relationship to history. It’s not just that we don’t know any; it’s that we’re dangerously anti-historical.

In the epilogue, Snyder observes that until recently the politics of inevitability – “the sense that history could move in only one direction: toward liberal democracy” – dominated American political thinking. After communism in eastern Europe ended, and the memory of its destruction – and of fascism’s and Nazism’s – faded, the myth of the “end of history” took hold. This misguided belief in the march toward ever-greater progress made us vulnerable, Snyder writes, because it “opened the way for precisely the kinds of regimes we told ourselves could never return.”

Snyder isn’t being particularly shrewd here. Commentators have explicitly endorsed versions of the politics of inevitability. After the Cold War, the political theorist Francis Fukuyama wrote the aptly titled article “The End of History?” (which was later expanded into a book, The End of History and the Last Man) arguing that history may have indeed reached its final chapter. In the “End of History?” he writes the following:

What we may be witnessing is not just the end of the Cold War, or the passing of a particular period of post-war history, but the end of history as such: that is, the end point of mankind’s ideological evolution and the universalization of Western liberal democracy as the final form of human government.

This is a teleological conception of the world – it views history as unfolding according to some preordained end or purpose. Marxism had its own teleology, the inevitable rise of a socialist utopia, based on Karl Marx’s materialist rewriting of Hegelian dialectics. After Marxism took a geopolitical blow in the early 1990s, Fukuyama reclaimed Hegelianism from Marx and argued that mankind’s ideological evolution had reached its true end: Liberal democracy had triumphed, and no alternatives could possibly replace it.

But Snyder isn’t referring just to the prophecies of erudite theorists like Fukuyama. He’s pointing the finger at everyone. When the Soviet Union collapsed, he writes, Americans and other liberals “drew the wrong conclusion: Rather than rejecting teleologies, we imagined that our own story was true.” We fell into a “self-induced intellectual coma” that constrained our imaginations, that barred from consideration a future with anything but liberal democracy.

A more recent way of considering the past is the politics of eternity. It’s historically oriented, but it has a suspect relationship with historical facts, Snyder writes. It yearns for a nonexistent past; it exalts periods that were dreadful in reality. And it views everything through a lens of victimization. “Eternity politicians bring us the past as a vast misty courtyard of illegible monuments to national victimhood, all of them equally distant from the present, all of them equally accessible for manipulation. Every reference to the past seems to involve an attack by some external enemy upon the purity of the nation.”

National populists endorse the politics of eternity. They revere the era in which democracies seemed most threatened and their rivals, the Nazis and the Soviets, seemed unstoppable. Brexit advocates, the National Front in France, the leaders of Russia, Poland, and Hungary, and the current American president, Snyder points out, all want to go back to some past epoch they imagine to have been great.

“In the politics of eternity,” Snyder writes, “the seduction by a mythicized past prevents us from thinking about possible futures.” And the emphasis on national victimhood dampens any urge to self-correct:

Since the nation is defined by its inherent virtue rather than by its future potential, politics becomes a discussion of good and evil rather than a discussion of possible solutions to real problems. Since the crisis is permanent, the sense of emergency is always present; planning for the future seems impossible or even disloyal. How can we even think of reform when the enemy is always at the gate?

If the politics of inevitability is like an intellectual coma, the politics of eternity is like hypnosis: We stare at the spinning vortex of cyclical myth until we fall into a trance – and then we do something shocking at someone else’s orders.

The risk of shifting from the politics of inevitability to the politics of eternity is real, Snyder writes. We’re in danger of passing “from a naïve and flawed sort of democratic republic to a confused and cynical sort of fascist oligarchy.” When the myth of inevitable progress is shattered, people will look for another way of making sense of the world, and the smoothest path is from inevitability to eternity. “If you once believed that everything turns out well in the end, you can be persuaded that nothing turns out well in the end.”

The only thing that stands in the way of these anti-historical orientations, Snyder says, is history itself. “To understand one moment is to see the possibility of being the cocreator of another. History permits us to be responsible: not for everything, but for something.”


You’re Probably Not as Ethical as You Think You Are

Bounded Ethicality and Ethical Fading

What if someone made you an offer that would benefit you personally but would require you to violate your ethical standards? What if you thought you could get away with a fraudulent act that would help you in your career?

Most of us think we would do the right thing. We tend to think of ourselves as honest and ethical people. And we tend to think that, when confronted with a morally dubious situation, we would stand up for our convictions and do the right thing.

But research in the field of behavioral ethics says otherwise. Contrary to our delusions of impenetrable virtue, we are no saints.

We’re all capable of acting unethically, and we often do so without even realizing it.

In their book Blind Spots: Why We Fail to Do What’s Right and What to Do About It, Max Bazerman and Ann Tenbrunsel highlight the unintentional, but predictable, cognitive processes that lead people to act unethically. They make no claims about what is or is not ethical. Rather, they explore the ethical “blind spots,” rooted in human psychology, that prevent people from acting according to their own ethical standards. The authors are business ethicists and they emphasize the organizational setting, but their insights certainly apply to ethical decision making more generally.

The two most important concepts they introduce in Blind Spots are “bounded ethicality” and “ethical fading.”

Bounded Ethicality is derived from the political scientist Herbert Simon’s theory of bounded rationality – the idea that when people make decisions, they aren’t perfectly rational benefit maximizers, as classical economics suggests. Instead of choosing a course of action that maximizes their benefit, people accept a less than optimal but still good enough solution. They “satisfice” (a combination of “satisfy” and “suffice”), to use Simon’s term.

They do this because they don’t have access to all the relevant information, and even if they did, their minds wouldn’t have the capacity to adequately process it all. Thus, human rationality is bounded by informational and cognitive constraints.
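To make the contrast concrete, here is a minimal sketch in Python of a maximizer, which must inspect every option, versus a satisficer, which stops at the first option that is good enough. The job-offer scores and the threshold are hypothetical, purely for illustration:

    def maximize(options, utility):
        # Classical rationality: examine every option and pick the single best.
        return max(options, key=utility)

    def satisfice(options, utility, threshold):
        # Simon's satisficing: accept the first option that clears the bar.
        for option in options:
            if utility(option) >= threshold:
                return option  # good enough -- stop searching
        return None  # no option met the aspiration level

    # Hypothetical example: job offers scored by salary (in thousands).
    offers = [55, 48, 70, 62]
    print(maximize(offers, utility=lambda o: o))         # 70 -- required scanning everything
    print(satisfice(offers, lambda o: o, threshold=50))  # 55 -- the first acceptable offer

The satisficer may settle for a worse option than the maximizer, but it needs less information and less computation, which is exactly Simon’s point.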

Similarly, bounded ethicality refers to the cognitive constraints that limit people’s ability to think and act ethically in certain situations. These constraints blind individuals to the moral implications of their decisions, and they allow them to act in ways that violate the ethical standards that they endorse upon deeper reflection.

So, just as people aren’t rational benefit maximizers, they’re not saintly moral maximizers either.

Check out this video about bounded ethicality from the Ethics Unwrapped program at the University of Texas at Austin:

Ethical Fading is a process that contributes to bounded ethicality. It happens when the ethical implications of a decision are unintentionally disregarded during the decision-making process. When ethical considerations are absent from the decision criteria, it’s easier for people to violate their ethical convictions because they don’t even realize they’re doing so.

For example, a CEO might frame something as just a “business decision” and decide based on what will lead to the highest profit margin. Obviously, the most profitable decision might not be the most ethically defensible one. It may endanger employees, harm the environment, or even be illegal. But these considerations probably won’t come to mind if he’s only looking at the bottom line. And if they’re absent from the decision process, he could make an ethically suspect decision without even realizing it.

Check out this video about ethical fading from the Ethics Unwrapped program at the University of Texas at Austin.

You’re Not a Saint, So What Should You Do?

Nudge yourself toward morality.

Bazerman and Tenbrunsel recommend preparing for decisions in advance. Consider the motivations that are likely to influence you at the time of the decision and develop proactive strategies to reduce their influence. Pre-commitment strategies are highly effective. If someone publicly pre-commits to an ethical action, he’s more likely to follow through than if he doesn’t. Likewise, pre-committing to an intended ethical decision and sharing it with an unbiased and ethical person makes someone more likely to make the ethical decision in the future.

During actual decision making, it is crucial to elevate your abstract ethical values to the forefront of the decision-making process. Bazerman and Tenbrunsel point out that “rather than thinking about the immediate payoff of an unethical choice, thinking about the values and principles that you believe should guide the decision may give the ‘should’ self a fighting chance.” One strategy for inducing this type of reflection, they say, is to think about your eulogy and what you’d want it to say about the values and principles you lived by.

There’s also the “mom litmus test.” When tempted by a potentially unethical choice, ask yourself whether you’d be comfortable telling your mom (or dad or anyone else you truly respect) about the decision. Imagining your mom’s reaction is likely to bring abstract principles to mind, they contend.

Yet another strategy for evoking ethical values is to change the structure of the decision. According to Bazerman and Tenbrunsel, people are more likely to make the ethical choice if they have the chance to evaluate more than one option at a time. In one study, “individuals who evaluated two options at a time – an improvement in air quality (the ‘should’ choice) and a commodity such as a printer (the ‘want’ choice) – were more likely to choose the option that maximized the public good.” When participants evaluated these options independently, however, they were more likely to choose the printer.

In another study, people decided between two political candidates, one of higher integrity and one who promised more jobs. The people who evaluated the candidates side by side were more likely to pick the higher integrity candidate. Those who evaluated them independently were more likely to pick the one who would provide the jobs.

Bazerman and Tenbrunsel say this evidence suggests that reformulating an ethical quandary into a choice between two options, the ethical one and the unethical one, is helpful because it highlights “the fact that by choosing the unethical action, you are not choosing the ethical action.”

What Implications Do Bounded Ethicality and Ethical Fading Have for Moral Responsibility?

Bazerman and Tenbrunsel don’t address this question directly. But the notion that our default mode of ethical decision making is, in some circumstances, bounded by psychological and situational constraints – influences we’re not consciously aware of that affect our ethical decision-making abilities – seems to be in tension with the idea that we are fully morally responsible for all our actions.

The profit-maximizing CEO, for example, might be seen by his friends and peers as virtuous, caring, and thoughtful. He might care about his community and the environment, and he might genuinely believe that it’s unethical to endanger them. Still, he might unintentionally disregard the moral implications of illegally dumping toxic waste in the town river, harming the environment and putting citizens’ health at risk.

This would be unethical, for sure, but how blameworthy is he if he had yet to read Blind Spots and instead relied on his default psychology to make the decision? If ethical blind spots are constitutive elements of the human psyche, are the unethical actions caused by those blind spots as blameworthy as those that aren’t?

Either way, we can’t be certain that we’d have acted any differently in the same circumstances.

We’ll all fail the saint test at some point, but that doesn’t make us devils.

Learn More About Behavioral Ethics

Blind Spots: Why We Fail to Do What’s Right and What to Do About It

Ethicalsystems.org (Decision Making)

Ethics Unwrapped (Behavioral Ethics)

Reflecting Politics: Image Making and Falsities

Hannah Arendt was a mid-century German thinker who witnessed humanity at its worst. As a consequence, her writings carry a profundity that I have rarely found in the many authors I have read. I could lay out several prophetic examples encountered in her texts. Given the political climate, I will pull from her seminal essay “Lying in Politics,” found in the collection Crises of the Republic.

Arendt laments the opening for “image-makers” to inject themselves into politics. Lobbyists and advertising men share a disinterest in the substance of actual politics, focusing instead on its “image.” The result is a politician whose image is refined to reflect a pious family man even as he votes against his constituents’ interests on the regular. The subterfuge of the Mad Men image consultants has driven us either to accept this political farce at face value or to harbor deep doubt about the merits of ANY politician.

She anticipated one of the crises of modern politics: the destruction of a shared and knowable world. The following quote speaks directly to it:

“The point is reached when the audience to which the lies are addressed is forced to disregard altogether the distinguishing line between truth and falsehood in order to be able to survive.” (Crises of the Republic 7)

When image-making melds with disbelief, there is only so much mental capacity left to mount a challenge. Our perceptions of reality, our “truths,” can’t be easily parsed. We either accept an image-maker’s tale or we distrust the entire world.

Yet modern political discourse has generated another framework for survival. The tribalism of right-wing conservatism lives within this dichotomous reality. The espousal of lies from these sources protects their audiences from acknowledging the shifts in modern living: shifting demographics and waning labor prospects have been successfully hidden by political conservatives. The viewers, listeners, and constituents are no longer the majority, and they are most certainly being fleeced by the media and politicians – the industries generated by their disregard of truth. We see now that there are no coal jobs to bring back; robots aren’t going to resign and give you a factory job again. The preened politician weaves this lie into every stump speech, empowers the people who will ensure his (re)election, and hops away in an overly polished SUV. Not a fleck of dirt.


David Hume and Deriving an “Ought” from an “Is”

It seems easy to make an ethical argument against punching someone in the face. If you do it, you will physically harm the person. Therefore, you shouldn’t.

But the 18th century philosopher David Hume famously argued that inferences of this type – in which what we ought morally to do (not punch someone) is derived from non-moral states of affairs (punching him will hurt him) – are logically flawed. You cannot, according to Hume, derive an “ought” from an “is,” at least without a supporting “ought” premise. So, deciding that you ought not punch someone because it would harm him presupposes that causing harm is bad or immoral. This presupposition is good enough for most people.

But for Hume and those who subscribe to what is now commonly referred to as the “is-ought gap” or “Hume’s guillotine,” it is not enough.

Hume put the heads of preceding moral philosophers in his proverbial guillotine in Book III, Part I, Section I of his A Treatise of Human Nature. He wrote that every work of moral philosophy he had encountered proceeded from factual, non-moral observations about the world to moral conclusions – those that express what we ought or ought not do. The shift is imperceptible, but it is a significant blunder. “For as this ought, or ought not, expresses some new relation or affirmation, it is necessary that it should be observed and explained; and at the same time that a reason should be given, for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it.”

The blunder, according to Hume, is one of logic. Factual statements are logically different from moral statements, so no factual statements can, by themselves, entail what people morally ought to do. The “ought” statement expresses a new relation, to use Hume’s phrase, that isn’t supported by its purely factual premises. So, a moral judgment that is arrived at by way of facts alone is suspect.
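Schematically, Hume’s point might be rendered like this (a rough formalization of the punching example in LaTeX notation, not Hume’s own):

\[
\underbrace{\text{``Punching him will harm him''}}_{\text{is}}
\;\not\Rightarrow\;
\underbrace{\text{``You ought not punch him''}}_{\text{ought}}
\]
\[
\underbrace{\text{``Punching him will harm him''}}_{\text{is}}
\;\wedge\;
\underbrace{\text{``One ought not cause harm''}}_{\text{ought}}
\;\Rightarrow\;
\underbrace{\text{``You ought not punch him''}}_{\text{ought}}
\]

The first inference asserts the new “ought” relation with no support; the second supplies the bridging “ought” premise, which is exactly what Hume demands be “observed and explained.”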

The new, unexplained relation between moral judgments and solely factual premises is characteristic of the broader distinction between facts and values. Moral judgments are value judgments – not factual ones.

In the same way, judgments about the tastiness of particular foods are value judgments. And positive and negative assessments of foods are not logically entailed by just the facts about the foods.

If a cheese enthusiast describes all the facts he knows about cheese – like that it’s made from milk, cultured bacteria, and rennet – that wouldn’t be enough to convince someone that it’s delicious. Nor would a cheese hater’s listing of all the same facts prove cheese is disgusting. Both the cheese lover and the cheese hater make an evaluation that isn’t governed strictly by the logical relations between the facts.

Despite the logical gap between “is” and “ought” statements, and the broader distinction between facts and values, Hume didn’t think moral judgments are hogwash. He just thought they come from sentiments or feelings rather than logical deductions. In Book III, Part I, Section II of the Treatise, he wrote, “Morality … is more properly felt than judged of; though this feeling or sentiment is commonly so soft and gentle, that we are apt to confound it with an idea, according to our common custom of taking all things for the same, which have any near resemblance to each other.”

So, Hume would most likely agree that punching someone in the face is wrong. But he’d say an argument against it is unnecessary, even mistaken. People feel the wrongness. They feel that one ought not punch another in the face – just like a punched person feels the pain.

Mitch McConnell is Ugly

Any philosophical answer you seek can be found in the writings of Friedrich Nietzsche. Many accuse him of deep antisemitism, blatant misogyny, or of just being a syphilitic madman, based on his writings. His panache invites varied interpretations. I interpret him as a broker for crisis – the crisis of being human and all its comorbidities. His fascination with what it is to be alive and human, and with how we are to maneuver in this world, provides some philosophical answers for me.

I also find solace in his writings about how power becomes a transactional event that is often built upon anti-ethical exchanges. Nietzsche blamed an early event in Western Civilization for the way power came to be in its anti-ethical state: the rise of Socrates.

Nietzsche says the following: “By birth, Socrates belonged to the lowest class: Socrates was plebeian. We are told, and can see in sculptures of him, how ugly he was. But ugliness, in itself an objection, is among the Greeks almost a refutation.” (Twilight of the Idols 3)

The weight of this statement is that Socrates had no power within “just” his existence. Being born into the lower classes was traditionally enough to condemn an individual to that class for the entirety of his life. Coupled with his “imperfect” appearance, he was doomed to linger as just another Grecian.

Nietzsche thought this combination moved Socrates to respond in a way that gave him an entryway to power: subversion. Socrates moved the goalposts for what was good, and we still reside in the territory – particularly in the academic and democratic worlds – where logic and rationality are the currency of power. Outthink, outspeak, and out-moralize all your opponents, and you have “begot that Socratic idea that reason and virtue equal happiness” (Twilight of the Idols 5).

The previous paragraph could lead to a profound analysis – entire theses and books have been written on the subject – but I am here for a more applicable purpose.

Mitch McConnell is ugly. He was born a sickly child (polio) and is still a sickly man – he was honorably discharged from the Army Reserve due to optic neuritis. Yet he is a man who has relished the subversion of power. The sickly child is now, for all intents and purposes, the most powerful man in Washington. Looming over the Senate with his baritone voice and drooping face, he has supplied a cynicism and Sophistry unmatched even by Paul Ryan (who deserves his own critiques, as do most politicians).

This bending of logic, this appeal to rationality in service of one’s own interests, is not a new trick for politicians. Yet for some reason Mitch does it so well.

Which brings us to the question: why does Mitch McConnell’s ugliness matter? It is in the same vein as Socrates. There is a shrewdness to Mitch that appears to mirror the Socratic approach. I have no doubt that behind closed doors, Mitch knows and declares what he is. This exchange between Socrates and a “foreigner” tells all:

This foreigner told Socrates to his face that he was a monstrum – that he harbored in himself all the worst vices and appetites. And Socrates merely answered: “You know me, sir!” (Twilight of the Idols 3)

The awareness carried by Socrates is carried by almost all those who subvert systems. This is not a man who lights up the room with his charisma; his charms are found only in his ability to use a false-footed argument to stop any real political movement that doesn’t suit his desires. If something directly benefits his agenda, he can get it done with a shameless approach.

McConnell has advocated for the unrestricted flow of “dark money” into politics his entire career. He has even helped drive two Supreme Court decisions that assist his goals: McConnell v. Federal Election Commission and Citizens United v. Federal Election Commission. Framing corporate buy-in as a free-speech issue is an absurdist argument that is peak Sophistry. We can deride McConnell’s own logic on this very point by enlisting this quote: “The Constitution of this country was not a rough draft. It was not a rough draft and we should not treat it as such.” This quote alone – given that corporations and lobbyists as we know them didn’t exist at the time of the Constitution’s drafting – would deal a logical blow to his advocacy. But the joy of Sophistry is its fluidity. He will not be held accountable. How can he be? He has helped destroy the idea of meaning and shame in America.

Once America bottoms out – Lord knows when that will happen – there will have to be a reckoning. I doubt the hemlock will be Mitch’s way out. He will likely perish rich, powerful, and persistently ugly. Our true mission will be to prevent future historians from generating reverence toward any of McConnell’s actions. Once he is gone, we should condemn that era as one in which we lost so much. The damage that McConnell’s ugliness has wreaked upon our system can only be “cured” if we sentence him. The parallels between Socrates and McConnell show again in Nietzsche: Socrates forced Athens to sentence him. “Socrates is no physician,” he said softly to himself, “here death alone is the physician. Socrates himself has only been sick a long time.” (Twilight of the Idols 12)

Perhaps, unlike Greece, which fell deep into the decadent Night that Nietzsche accused Socrates of exemplifying not long after Socrates’ rise, we are past our Nightfall. The Dawn, hopefully, fast approaches.


Do Moral Facts Exist?

Virtually all non-psychopaths think murder is morally wrong. But what makes it so? Is the wrongness an objective fact, one that would exist no matter how people felt about it? Or does the wrongness of murder reside only in people’s minds, with no footing in objective reality?

The question falls in the branch of moral philosophy called metaethics. Instead of pondering topics that come up during everyday moral debate – such as whether a given action is right or wrong – metaethics is more abstract. It is concerned with the nature of morality itself. What do people mean when they say something is right or wrong? Where do moral values come from? When people make moral judgments, are they talking about objective facts or are they merely expressing their preferences?

So, the objectivity of murder’s wrongness depends on whether objective moral facts exist at all. And not all moral philosophers agree that they do.

On one side are the moral realists, who say there are moral facts and that these facts make people’s moral judgments either true or false. If it is a fact that murder is wrong, then a statement that it’s wrong would be true in the same way that saying the earth revolves around the sun would be true.

Moral antirealists hold the opposite. They say there are no moral facts and that moral judgments can’t be true or false like other judgments can.

Some argue that when people express moral judgments, they aren’t even intending to make a true statement about an action. They are simply expressing their disapproval. The philosopher A.J. Ayer popularized this perspective in his 1936 book Language, Truth, and Logic. He argued that when someone says, “Stealing money is wrong,” the predicate “is wrong” adds no factual content to the statement. Rather, it’s as if the person said, “Stealing money!!” with a tone of voice expressing disapproval.

Because moral statements are simply expressions of condemnation, Ayer said, there is no way to resolve moral disputes. “For in saying that a certain type of action is right or wrong, I am not making any factual statement . . . I am merely expressing moral sentiments. And the man who is ostensibly contradicting me is merely expressing his moral sentiments. So that there is plainly no sense in asking which of us is right.”

Other antirealists are on the realists’ side in thinking that moral discourse makes sense only if it assumes there actually are moral facts. But these antirealists – called “error theorists” – say the assumption is false. People do judge actions to be right or wrong in light of supposed moral facts, but they are mistaken – no moral facts exist. Thinking and acting as if they do is an error.

[Moral Realism graphic]

“The strongest argument for antirealism,” says Geoffrey Sayre-McCord, a philosopher at the University of North Carolina at Chapel Hill, “is to point out the difficulty of making good sense of what the moral facts would be like and how we would go about learning them.”

Scientists can peer through microscopes to learn facts about amoebas. Journalists can observe press conferences and report what was said. Toddlers can tell you that the animal on the sofa is a brown dog. The job of the moral realist is to show that there are moral facts on par with these readily accepted types of non-moral facts.

Sayre-McCord, who considers himself a moral realist, says this is done best by thinking about what would have to be true for our moral thoughts to be true. This results in some sophisticated philosophical accounts, he says.

Justin McBrayer, a philosopher at Fort Lewis College, says the truth of moral claims can be evaluated by analogy to the ways non-moral truths are established. The same “epistemic norms” apply whether a moral claim or a non-moral claim is being defended. “Some arguments are good, and some are bad,” he says.

Most philosophers are moral realists, but there is a sizeable minority in the antirealist camp. In a 2009 survey of professional, PhD-level philosophers, 56% said they accepted or leaned toward moral realism, while 28% said they accepted or leaned toward moral anti-realism. Sixteen percent said they held some other position.

McBrayer and Sayre-McCord point out the lack of data on the general population’s views, but they both sense that the default position among non-philosophers is moral realism. People think, act, and speak as if there are objective moral facts. But since most have never considered the alternative, many have trouble when pressed to defend their views. “They have to stop and think about it,” McBrayer says.

Sayre-McCord says most people tend to back away from their commitment to moral realism when they’re challenged. “There is a tendency for people to be antirealists metaethically, but realists in practice.”

There is no doubt that how people think about morality affects their behavior, McBrayer says. Psychological research backs this up. In one study, researchers found that participants “primed” to think in realist terms were twice as likely to donate to a charity as participants primed to think in antirealist terms. In another study, researchers found that participants who read an antirealist argument were more likely to cheat in a raffle than those who read a realist argument.

Given these findings, even if murder’s wrongness is just a fiction, it’s hard to argue that it’s not a useful one.