Ars Philosopha — October 9, 2014, 8:00 am

Are Humans Good or Evil?

A brief philosophical debate.

In this story: philosophers, the ethics of rhesus monkey testing, Friedrich Nietzsche, selfish altruists, animal concerns, sadists, Immanuel Kant, and Ponzi schemers.

Philosophers Clancy Martin and Alan Strudler debate whether humans are, as Martin argues, inherently good. “Clancy Martin sees people as ‘mostly good,’” Strudler counters. “His most persuasive reason seems quantitative: people much more often act rightly than wrongly. … The idea that people are good because they do mostly good things makes sense only on a desperately low standard of goodness.”

Clancy Martin: The Confucian philosopher Mencius (382-303 BCE) tells the story of how an ordinary person feels if he sees a child fall into a well. Any person, Mencius argues, will feel alarm and distress — not because he hoped for anything from the parents, nor because he feared the anger of anyone because he failed to save the child, nor again because he wanted to enhance his own reputation by this act of modest heroism. It’s simply that, Mencius says, “All men have a mind which cannot bear to see the sufferings of others.”

Mencius’s claim is too strong. Whether we think of jihadists cutting off the heads of innocent journalists or soldiers waterboarding helpless prisoners, everywhere we look we see examples of humans not only bearing the sufferings of others, but causing them, even taking pleasure in them. Surely we are both good and evil: it’s hard to imagine an argument or an experiment that would prove that we are wholly one or the other. But that we are both good and evil doesn’t mean we are an equal mix of the two. What Mencius intends is that we are mostly good — that good is our normal state of being and evil is an exceptional one, in much the way health is our normal state of being and sickness the exception.

What is our goodness? It seems to be nothing more than the natural tendency towards sympathy, and it looks like it’s not restricted to human beings. A study found that rhesus monkeys would decline to pull a chain that delivered a heaping meal once they figured out that this action simultaneously administered a shock to another rhesus monkey. “These monkeys were literally starving themselves to prevent the shock to the conspecific,” noted one of the researchers. There is also overwhelming evidence that primates experience sympathy for animals outside their own species: Kuni, a bonobo female at Twycross Zoo in England, cared for a wounded starling that fell into her enclosure until it could be removed by the zookeeper.

To be good is fundamentally to have other people’s interests in mind, and not—as Mencius is concerned to point out (and after him the philosopher Immanuel Kant)—just because of the good they can do you in return. To be good is to have some genuinely selfless motivations. Of course, circumstances often encourage us to act in selfish and cruel ways: the world is a competitive, rough-and-tumble place, with more demand than supply. Yet most of us, when allowed, are naturally inclined toward sympathy and fellow-feeling. As the philosopher R.J. Hankinson pointed out to me when we were discussing the problem: “It’s not just that we do act with sympathy; we are happier when we act that way — and we don’t do it just because it makes us happy.” There’s an interesting sort of self-defeating quality about trying to be altruistic for selfish reasons. But what is rare — and looks pathological — is to act purely selfishly.

The very oddness, the counter-intuitiveness of the good, suggests a primal source of its existence in us, which in turn explains our perennial insistence upon it. Something in us rebels against the idea of abandoning goodness. Indeed there are many, whether theists or no, who would be deeply offended by the mere suggestion. Here again we see that our goodness wells up from a deep, natural spring.

To make a claim that will strike some as naïve, and others as a dangerous self-deception: we must believe man is good. We must believe so even when evidence to the contrary threatens to overwhelm us — perhaps especially when evidence to the contrary begins to overwhelm us. Bruce Bubacz, another philosopher friend of mine, recently argued with me (this is the kind of thing that happens if you befriend too many philosophers) that every man can be either good or evil: it all depends, to seize on a photographic metaphor, “on the developer and the fixer.” He reminded me of the Cherokee myth that both a good wolf and a bad wolf live in the human heart, always at battle, and the winner is “the one you feed.”

Bubacz’s way of looking at it does make sense of cases like the complicity of ordinary Germans in the Holocaust, or the escalations of evil in very recent genocides in places such as Bosnia and Sudan. But these arguments ultimately strike me as both dangerous and too easy. The reason we morally accuse evil when we see it — and are right to do so, are required to do so — is that we know Lincoln’s “better angels of our nature” are there to guide us. Rainer Maria Rilke advised his young poet, when seeking moral guidance, always to do what is most difficult. Rilke was not suggesting man is naturally evil and must overcome his uglier impulses through the struggle for goodness, but that our goodness is there to be found, with difficulty, in a world that may well encourage evil.

We should remember our heroes. Even a cynic is obliged to admit that our greatest heroes are moral exemplars. The most successful billionaires are not admired in the same way (except when, like Carnegie, Gates, and Buffett, they give back). Our most brilliant scientists and artists don’t enjoy quite the same level of admiration as figures like Buddha, Socrates, Jesus Christ, Gandhi, Mother Teresa, and Martin Luther King Jr., who represent the finest degree of moral superiority.

But what convinces me, more than anything else, of our innate goodness is the existence of moral progress. In a world of moral relativists, the idea of moral improvement is a non-starter. Changes in moral codes are merely changes in social convention, goes the argument, and the study of morality is really a subgenre of anthropology. (One of my own favorite philosophers, Friedrich Nietzsche, held exactly this view.) But even as fine a mind as Aristotle’s once believed that slavery, which we now universally condemn, was necessary for a good society. The fact that human rights are spreading in the world, slowly but surely, also strikes me as an undeniable improvement.

Gandhi’s oft-repeated words on the subject have that hit-your-head-with-a-stick effect that drives out my occasional skeptical melancholy about the human condition: “When I despair, I remember that all through history the ways of truth and love have always won. There have been tyrants, and murderers, and for a time they can seem invincible, but in the end they always fall.” Yes, they keep rising up again, too. But when I look at the long, often vicious fight that is human history, it seems to me that the good guys are winning—and that, perhaps more importantly, we really want them to win.

Alan Strudler: Human horror should surprise nobody. Jeffrey Dahmer kills and rapes teenagers, then eats their body parts; professional hit men coolly assassinate strangers in exchange for a little money; leaders from Hitler to Pol Pot to Assad orchestrate colossal rituals of cruelty. The list goes on. Each horror involves its own unique assault on human dignity, but they share something more striking than their differences: an appetite for inflicting suffering that is unique to us. Maybe there is something wrong with our species; maybe there is something wrong with each of us.

It is natural to dismiss Dahmer and the others as differing fundamentally from ordinary people. They are rapists, torturers, sadists, and we are not. It is natural to conclude that even though we have our flaws, their evil cannot be ours.

But grave moral problems inhere in us all. The wrongs for which we are generally responsible are not as sensational as Dahmer’s, but they are profound. Establishing this point requires reflection on arguments from the Princeton philosopher Peter Singer and a bit of social psychology.

Imagine that I take a stroll and see a child drowning in a shallow pond (a variation, of course, on Mencius’s child falling into the well). Nobody else is around. I can easily save the child, but don’t. I am responsible for his death. I have committed a grave wrong, perhaps the moral equivalent of murder. When I can save a life without sacrificing anything of moral significance, I must do so, Singer argues.

Across the world people are dying whom you and I can save by contributing a few dollars to Oxfam, and whom we don’t save. (If you are skeptical about this argument because you doubt Oxfam’s efficacy, ask yourself whether people would behave differently if they knew that their contribution to Oxfam would save lives. It would hardly make a difference.) Perhaps we are in some ways like murderers, without their cruelty or cold-bloodedness. I have been discussing Singer’s argument with students in business and the humanities for twenty years. Most of them recognize its force, acknowledge responsibility for death around the world, shrug their shoulders, and move on. Death at a distance leaves us unfazed.

In the right circumstances, we are capable of doing the right thing. Yet we are fickle about goodness, as one famous psychology experiment shows. Subjects coming out of a phone booth encounter a woman who has just dropped some papers, which lie scattered across the ground. Some subjects help pick up the papers; others do not. The difference between the two groups is largely determined by whether a subject finds a coin that the experimenter has planted in the phone booth. Those who find the coin are happy and tend to help; those who don’t tend not to. Other experiments get similar results even when the stakes are much higher. In some situations, most people will knowingly and voluntarily expose others to death for no good reason. People cannot be trusted.

The Georgetown philosopher Judith Lichtenberg surveys these experiments, along with Peter Singer’s arguments, and finds hope. Maybe investigations of this sort will reveal enough about our psychology that we can arrange the world in ways that make people behave better and limit human misery. Even if she is right, her hope should trouble us. If we need a crutch — a coin in a phone booth — to make us behave well, our commitment to morality seems shallow.

If further proof is needed of the unsteady relationship between humans and morality, think about Bernard Madoff. A beloved family man, a friend and trusted advisor to leaders across the political and business world, this Ponzi schemer vaporized billions of dollars in personal life savings and institutional endowments. How can a man so apparently decent do such wrong? One answer is that the decency was a well-crafted illusion: Madoff was a monster, a cold-blooded financial predator from the start, feigning friendships in a ploy to seduce clients so that he could steal their money. A different answer is that Madoff would have led a morally unexceptional life if only he hadn’t found himself in situations that brought out the worst in him.

In his remarks above, Clancy Martin sees people as “mostly good.” His most persuasive reason seems quantitative: people much more often act rightly than wrongly. Does Martin correctly tabulate our right and wrong acts? I don’t know, and neither does anybody else. Much about humans can be measured, including our body temperatures, white blood cell counts, even the extent to which we hold one peculiar belief or another — but no technology exists for measuring the comparative frequency with which we engage in wrongful conduct, and there is no reason to think that our unaided skills of observation might yield reliable judgments on the issue. It hardly matters. Even if we concede, for the sake of argument, that good acts narrowly nose out the bad ones, it does not make Martin’s case. The idea that people are good because they do mostly good things makes sense only on a desperately low standard of goodness.

Suppose that you have a morally wonderful year, which involves endless acts of showing love to your family and kindness to strangers; as a member of the volunteer fire department, you even save a few lives. Then, one day, you see a chance to get rich by killing your wealthy uncle and accelerating your inheritance, and you seize the opportunity. Your one bad act easily outweighs a year’s good acts. Your claim to decency fails. This shows that there is a basic asymmetry between being good and being bad. One must do much good to be good, but doing just a little bad trumps that good.

Indeed, talk about good and bad seems strangely untethered to empirical facts. Perhaps the point of such talk, Martin tells us, is pragmatic rather than descriptive; it aims to motivate and exhort us, not to deliver any truths. If we think of ourselves as good, it will improve our prospects for becoming so. But again, recent history has not been kind to the pragmatic argument.

Remember Liar’s Poker, Michael Lewis’s recounting of the fall of Salomon Brothers. At the center of the tale is Salomon CEO John Gutfreund, who passively watches his firm get destroyed by fraud. One of Gutfreund’s most salient traits was his perception of himself as morally superior, a man who would never tolerate wrong. Because he thought he was too good to do wrong, Gutfreund refused to recognize the depth of the wrong in which he was involved, and disaster followed. When not rooted in facts, moral confidence dulls our senses. We should try to do our best, but that requires taking our evil seriously.
