There’s a fascinating article on morality by Steven Pinker in the Sunday New York Times magazine. Research on brains and behavior is revealing that morality has psychological and neurobiological foundations. Here’s a snip:
The starting point for appreciating that there is a distinctive part of our psychology for morality is seeing how moral judgments differ from other kinds of opinions we have on how people ought to behave. Moralization is a psychological state that can be turned on and off like a switch, and when it is on, a distinctive mind-set commandeers our thinking. This is the mind-set that makes us deem actions immoral (“killing is wrong”), rather than merely disagreeable (“I hate brussels sprouts”), unfashionable (“bell-bottoms are out”) or imprudent (“don’t scratch mosquito bites”).
The first hallmark of moralization is that the rules it invokes are felt to be universal. Prohibitions of rape and murder, for example, are felt not to be matters of local custom but to be universally and objectively warranted. One can easily say, “I don’t like brussels sprouts, but I don’t care if you eat them,” but no one would say, “I don’t like killing, but I don’t care if you murder someone.”
The other hallmark is that people feel that those who commit immoral acts deserve to be punished. Not only is it allowable to inflict pain on a person who has broken a moral rule; it is wrong not to, to “let them get away with it.” People are thus untroubled in inviting divine retribution or the power of the state to harm other people they deem immoral. Bertrand Russell wrote, “The infliction of cruelty with a good conscience is a delight to moralists — that is why they invented hell.”
This was particularly fascinating to me:
We all know what it feels like when the moralization switch flips inside us — the righteous glow, the burning dudgeon, the drive to recruit others to the cause. The psychologist Paul Rozin has studied the toggle switch by comparing two kinds of people who engage in the same behavior but with different switch settings. Health vegetarians avoid meat for practical reasons, like lowering cholesterol and avoiding toxins. Moral vegetarians avoid meat for ethical reasons: to avoid complicity in the suffering of animals. By investigating their feelings about meat-eating, Rozin showed that the moral motive sets off a cascade of opinions. Moral vegetarians are more likely to treat meat as a contaminant — they refuse, for example, to eat a bowl of soup into which a drop of beef broth has fallen. They are more likely to think that other people ought to be vegetarians, and are more likely to imbue their dietary habits with other virtues, like believing that meat avoidance makes people less aggressive and bestial.
I suggest that the “moralizers” here have formed an ego-attachment to their vegetarianism. It isn’t just something they do; it’s something that defines who they are. And from there they set up the ol’ Us-Them dichotomy and designate all meat eaters as the Other.
It reminds me of a wise woman I met years ago at a Zen center. Her diet was mostly vegetarian, she said, but she ate meat now and then just so she couldn’t call herself a vegetarian. Way Zen.
Anyway, Pinker goes on to explain how we as a culture moralize and un-moralize various activities. Smoking has been moralized, for example. Divorce has lost its stigma and has been un-moralized. But the list of things we get sanctimonious about seems very arbitrary.
I’ve noticed that as a culture we often will fixate on one activity and blow it up into a big bleeping deal disproportionate to the actual harm it does. Disposable diapers come to mind. When they first came out they were met with outrage by baby butt purists. They were bad for babies and taking up too much space in landfills, the purists said. But they weren’t bad for babies, and there are all sorts of other non-biodegradable items taking up even more space in landfills that no one gets outraged about. (And, anyway, washing cloth diapers puts phosphates into lakes and rivers!)
Dozens of things that past generations treated as practical matters are now ethical battlegrounds, including disposable diapers, I.Q. tests, poultry farms, Barbie dolls and research on breast cancer. Food alone has become a minefield, with critics sermonizing about the size of sodas, the chemistry of fat, the freedom of chickens, the price of coffee beans, the species of fish and now the distance the food has traveled from farm to plate.
… But whether an activity flips our mental switches to the “moral” setting isn’t just a matter of how much harm it does. We don’t show contempt to the man who fails to change the batteries in his smoke alarms or takes his family on a driving vacation, both of which multiply the risk they will die in an accident. Driving a gas-guzzling Hummer is reprehensible, but driving a gas-guzzling old Volvo is not; eating a Big Mac is unconscionable, but not imported cheese or crème brûlée. The reason for these double standards is obvious: people tend to align their moralization with their own lifestyles.
By means of thought experiments that Pinker explains in detail, psychologists have shown that moralization often is irrational.
People don’t generally engage in moral reasoning, Haidt argues, but moral rationalization: they begin with the conclusion, coughed up by an unconscious emotion, and then work backward to a plausible justification.
Yep, ain’t it the truth?
Researchers have found a few themes or “spheres” universal to human cultures that determine whether something is “moral” or not. These are whether an act causes harm; whether it is fair (although cultural ideas about “fairness” vary widely, I suspect); whether it shows loyalty or disloyalty to one’s designated group; whether it respects authority; and whether the act is “pure” — “they exalt purity, cleanliness and sanctity while loathing defilement, contamination and carnality.”
There’s all manner of evolutionary biology figuring into this, of course.
The ranking and placement of moral spheres also divides the cultures of liberals and conservatives in the United States. Many bones of contention, like homosexuality, atheism and one-parent families from the right, or racial imbalances, sweatshops and executive pay from the left, reflect different weightings of the spheres. In a large Web survey, Haidt found that liberals put a lopsided moral weight on harm and fairness while playing down group loyalty, authority and purity. Conservatives instead place a moderately high weight on all five. It’s not surprising that each side thinks it is driven by lofty ethical values and that the other side is base and unprincipled.
Pretty much what George Lakoff has been saying for a while.
Reassigning an activity to a different sphere, or taking it out of the moral spheres altogether, isn’t easy. People think that a behavior belongs in its sphere as a matter of sacred necessity and that the very act of questioning an assignment is a moral outrage. The psychologist Philip Tetlock has shown that the mentality of taboo — a conviction that some thoughts are sinful to think — is not just a superstition of Polynesians but a mind-set that can easily be triggered in college-educated Americans. Just ask them to think about applying the sphere of reciprocity to relationships customarily governed by community or authority. When Tetlock asked subjects for their opinions on whether adoption agencies should place children with the couples willing to pay the most, whether people should have the right to sell their organs and whether they should be able to buy their way out of jury duty, the subjects not only disagreed but felt personally insulted and were outraged that anyone would raise the question.
I’m skipping big chunks of this; you really ought to read the whole thing, if you have time. I thought this paragraph fascinating:
The scientific outlook has taught us that some parts of our subjective experience are products of our biological makeup and have no objective counterpart in the world. The qualitative difference between red and green, the tastiness of fruit and foulness of carrion, the scariness of heights and prettiness of flowers are design features of our common nervous system, and if our species had evolved in a different ecosystem or if we were missing a few genes, our reactions could go the other way. Now, if the distinction between right and wrong is also a product of brain wiring, why should we believe it is any more real than the distinction between red and green? And if it is just a collective hallucination, how could we argue that evils like genocide and slavery are wrong for everyone, rather than just distasteful to us?
Maybe it’s because I think like a Buddhist, but I don’t understand how something that’s a product of brain wiring is less “real” than something that’s not a product of brain wiring. And I think pretty much all aspects of human culture are a kind of collective hallucination. Economies, for example, are created by our thoughts, are they not? Money only has value because we all agree it does.
Putting God in charge of morality is one way to solve the problem, of course, but Plato made short work of it 2,400 years ago. Does God have a good reason for designating certain acts as moral and others as immoral? If not — if his dictates are divine whims — why should we take them seriously? Suppose that God commanded us to torture a child. Would that make it all right, or would some other standard give us reasons to resist? And if, on the other hand, God was forced by moral reasons to issue some dictates and not others — if a command to torture a child was never an option — then why not appeal to those reasons directly?
This throws us back to wondering where those reasons could come from, if they are more than just figments of our brains. They certainly aren’t in the physical world like wavelength or mass. The only other option is that moral truths exist in some abstract Platonic realm, there for us to discover, perhaps in the same way that mathematical truths (according to most mathematicians) are there for us to discover. On this analogy, we are born with a rudimentary concept of number, but as soon as we build on it with formal mathematical reasoning, the nature of mathematical reality forces us to discover some truths and not others. (No one who understands the concept of two, the concept of four and the concept of addition can come to any conclusion but that 2 + 2 = 4.) Perhaps we are born with a rudimentary moral sense, and as soon as we build on it with moral reasoning, the nature of moral reality forces us to some conclusions but not others.
Moral reasoning can be rational:
Two features of reality point any rational, self-preserving social agent in a moral direction. And they could provide a benchmark for determining when the judgments of our moral sense are aligned with morality itself.
One is the prevalence of nonzero-sum games. In many arenas of life, two parties are objectively better off if they both act in a nonselfish way than if each of them acts selfishly. You and I are both better off if we share our surpluses, rescue each other’s children in danger and refrain from shooting at each other, compared with hoarding our surpluses while they rot, letting the other’s child drown while we file our nails or feuding like the Hatfields and McCoys. Granted, I might be a bit better off if I acted selfishly at your expense and you played the sucker, but the same is true for you with me, so if each of us tried for these advantages, we’d both end up worse off. Any neutral observer, and you and I if we could talk it over rationally, would have to conclude that the state we should aim for is the one in which we both are unselfish. These spreadsheet projections are not quirks of brain wiring, nor are they dictated by a supernatural power; they are in the nature of things.
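Pinker’s “spreadsheet projection” is essentially the logic of the prisoner’s dilemma, and it can be sketched in a few lines of code. The payoff numbers below are my own illustrative assumptions, not figures from the article; the point is only the shape of the game: each player is tempted to hoard, yet mutual sharing leaves both better off than mutual hoarding.

```python
# Toy two-player payoff matrix for Pinker's nonzero-sum point.
# Payoffs are hypothetical: (my payoff, your payoff) for each pair of moves.
PAYOFFS = {
    ("share", "share"): (3, 3),  # both unselfish: both do well
    ("share", "hoard"): (0, 5),  # I play the sucker, you exploit me
    ("hoard", "share"): (5, 0),  # I exploit you
    ("hoard", "hoard"): (1, 1),  # both selfish: both worse off
}

def total_welfare(my_move, your_move):
    """Sum of both players' payoffs for a pair of choices."""
    mine, yours = PAYOFFS[(my_move, your_move)]
    return mine + yours

# Each of us is individually tempted to hoard (5 > 3 against a sharer),
# but if we both give in to that temptation, we each end up with 1 instead of 3.
assert PAYOFFS[("hoard", "share")][0] > PAYOFFS[("share", "share")][0]
assert PAYOFFS[("share", "share")][0] > PAYOFFS[("hoard", "hoard")][0]
assert total_welfare("share", "share") > total_welfare("hoard", "hoard")
```

The assertions make the “neutral observer” conclusion explicit: mutual unselfishness dominates mutual selfishness even though defection pays against a cooperator, which is exactly why each player alone is tempted away from the jointly best outcome.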
The other external support for morality is a feature of rationality itself: that it cannot depend on the egocentric vantage point of the reasoner. If I appeal to you to do anything that affects me — to get off my foot, or tell me the time or not run me over with your car — then I can’t do it in a way that privileges my interests over yours (say, retaining my right to run you over with my car) if I want you to take me seriously. Unless I am Galactic Overlord, I have to state my case in a way that would force me to treat you in kind. I can’t act as if my interests are special just because I’m me and you’re not, any more than I can persuade you that the spot I am standing on is a special place in the universe just because I happen to be standing on it.
Not coincidentally, the core of this idea — the interchangeability of perspectives — keeps reappearing in history’s best-thought-through moral philosophies, including the Golden Rule (itself discovered many times); Spinoza’s Viewpoint of Eternity; the Social Contract of Hobbes, Rousseau and Locke; Kant’s Categorical Imperative; and Rawls’s Veil of Ignorance. It also underlies Peter Singer’s theory of the Expanding Circle — the optimistic proposal that our moral sense, though shaped by evolution to overvalue self, kin and clan, can propel us on a path of moral progress, as our reasoning forces us to generalize it to larger and larger circles of sentient beings.
This resonates nicely with the Buddhist view of morality, which basically is that true morality is based on compassion, and true compassion comes from the wisdom that dividing the world into self-and-other is delusional. Morality that is based on an external set of rules is, to me, a crude and flawed kind of morality.
The moral sense, we are learning, is as vulnerable to illusions as the other senses. It is apt to confuse morality per se with purity, status and conformity. It tends to reframe practical problems as moral crusades and thus see their solution in punitive aggression. It imposes taboos that make certain ideas indiscussible. And it has the nasty habit of always putting the self on the side of the angels.
Craving and ego-attachment are the source of all evil and suffering, the Buddha said.
Our habit of moralizing problems, merging them with intuitions of purity and contamination, and resting content when we feel the right feelings, can get in the way of doing the right thing.
Far from debunking morality, then, the science of the moral sense can advance it, by allowing us to see through the illusions that evolution and culture have saddled us with and to focus on goals we can share and defend. As Anton Chekhov wrote, “Man will become better when you show him what he is like.”
I have a lot of thoughts about this, but I think I will save them for tomorrow.