The power of common ground
Surprisingly tiny factors can warp our judgments of other people. What can we do about it?

By David DeSteno, 9/18/2011

Are you a fair person? Most of us like to believe that the answer to this question is yes – that when we consider what is right and wrong, who deserves forgiveness and who deserves punishment, our judgments will be guided by objective, stable ethical values. Our justice system is built on the principle that everyone is equal before the law: We trust that juries will judge two people who have committed the same transgression equally. Likewise, if two people are victims of the same tragedy, we tend to think that each should evoke similar compassion. This ideal of blind justice is one of the few principles that spans most religious, legal, and political ideologies – a moral universalism trumpeted by thinkers from Pope John Paul II to Noam Chomsky.

There’s just one problem with this view of ourselves as fair and objective – our minds don’t work that way. The past decade of psychological research has revealed that much of our moral decision-making, and indeed our deepest moral instincts, are guided less by conscious principles than by feelings and intuitions that can be influenced remarkably easily. And one key factor that shapes our judgment is a surprisingly simple one: how much we see the person we’re judging as similar to us. New findings suggest that this similarity doesn’t have to involve anything as obvious as being part of the same group or family. It can be something as subtle as wearing similarly colored shirts or wristbands. In fact, in a new experiment, my colleague Piercarlo Valdesolo and I have shown that morality can be influenced even by simply tapping your hands in time with someone else’s.

What these findings suggest is that any marker of association with others – even seemingly trivial ones – can alter our underlying moral calculus. The implications are significant, for both individuals and society as a whole. Our intuitive morality has a huge impact on our decisions about people – whether we help, disparage, trust, or convict them. So if those unconscious instincts run contrary to our cherished moral beliefs about justice, it is surely in our interest to begin to recognize this fact – and perhaps even to correct for it.

We all know from experience that many moral judgments are not neutral. They can be colored by team spirit, or nationalism: We would all expect an American soldier in Afghanistan to feel more compassion for a wounded fellow soldier than for a wounded member of the Taliban. Likewise, we know that people are willing to excuse loved ones for transgressions they would abhor in a stranger: There’s a reason the family members of a defendant would quickly get booted off a jury. And then there’s learned prejudice – caste systems, racism, or other divides that influence our moral judgments of others.

These weighted judgments are examples of what philosophers call moral relativism – the position that what is just and moral depends on the people and contexts involved. However, we tend to imagine that we can step back and overcome such impulses when we make reasoned moral decisions. But can we really? Is this moral relativism a learned, consciously directed response – a rational choice to protect people we care about, for example – or is it something more intrinsic to how the mind works? If the latter is true, our ability to be fair judges may be more compromised than we realize.
Piercarlo Valdesolo and I wanted to see just how basic moral relativism was to the mind. We suspected that if moral relativism were truly fundamental and unconscious, then the tiniest temporary differences between two people would alter responses to their plights. So we set out to observe the compassion that research subjects would feel toward various victims of the same transgression who were, in extremely trivial ways, made to seem either more or less similar to them.

We knew we couldn’t simply ask people how they would respond. If morality were guided by mechanisms beneath human awareness, our subjects would be unlikely to accurately predict what they would feel or do in response to a hypothetical event. Instead, we needed to stage a situation in which research participants would view one person causing distress to another, and then see how they actually judged the situation.

So, at the lab, we had subjects watch two men engage in a task where one cheated the other to avoid onerous work. As a result of the cheating, the “victim” was wrongly assigned to complete an hour of drudgery while the “cheater” got off scot-free. Afterward, we had the observers, alone and on a computer, report their evaluations of the events and how they felt toward the victim. The computer noted that they were then free to leave – or, if they wanted, they could come find us and ask to help the victim complete his work.

But here was the catch: Right before participants watched these events unfold, we had them sit at a table and tap their hands to musical beats with the person who would later play the role of the victim. By design, some people were made to tap their hands in unison with him; others tapped asynchronously. Why the hand tapping? It was a way to create a sense of similarity based on an entirely meaningless criterion – a criterion that no participant had ever cared, much less thought, about.

And that was all it took for moral relativism to emerge. Simply having tapped their hands in time with the victim increased the level of similarity to him that people felt. It also increased the degree of compassion they felt for his plight, and directly altered the odds of their offering help. People who had tapped in time with the victim were more than 30 percent more likely to offer to help him, even though they seemed to have no conscious idea of why they decided to do so.

In other work, we have demonstrated that simply having people wear different-colored wristbands to indicate their use of a (fictitious) style of solving math problems can lead to the same kind of bias. People judged the transgressions of individuals wearing the same color wristband as themselves to be much less objectionable than the exact same transgressions committed by someone wearing another color. Once again, individuals seemed to have no idea that the wristbands they had just put on were shaping their views, but this relatively meaningless marker nonetheless caused their moral principles to be applied differentially.

Why does the mind work this way? Over the past decade, research in moral psychology has demonstrated time and again that much of moral “reasoning” does not involve conscious reasoning at all. Rather, as moral psychologists such as Joshua Greene and Jonathan Haidt have shown, our moral judgments are strongly shaped by unconscious responses and situational influences. Greene’s neuroimaging work was among the first to document the interaction of conscious and unconscious mechanisms in moral decision-making.
As his findings suggest, morality is determined to a large extent by ancient, adaptive mechanisms that operate outside of awareness. In essence, we often choose to help or to harm others not because we believe it is right, but because we feel it is right.

If the evolutionary chisel shaped the moral mind, it’s important to recognize that its goal is not virtue, but survival for reproductive advantage. And from that perspective, basing morality on the sliding scale of similarity to others may be a good strategy. Similarity, after all, can be a marker for people whose goals we share and who will help us in the future. Greater compassion toward those who resemble us, then, makes a certain kind of sense: It stands to improve our own chances.

Still, even if our unconscious relativism is biologically advantageous, that doesn’t mean we must see it as morally right. It is the role of science to tell us how our minds work, not what we should ultimately decide. In cases where we want to be fair, these new findings suggest that avoiding moral relativism is not as simple as deliberately clearing our minds of bias. In many cases, the things that bias us are too subtle for us to perceive, meaning that we’d be unlikely to feel the need to “correct” our views.

A better strategy might be to change the way we conceive of others on a more basic level – by framing them as more similar to ourselves. The trick is to make such categorizations become habitual, and thereby automatic. A white Bostonian with a new Pakistani neighbor who is a Yankees fan could, instead of dwelling on their differences, try thinking of the same person as a new fellow Bostonian who, like himself, is an aficionado of the local Starbucks.

In fact, some moral systems encourage just such strategies. In Buddhist techniques of compassion meditation, practitioners train their minds to see all individuals – friends and foes alike – as interchangeable and interconnected. Based on the above experiment, it’s easy to imagine that repeatedly drawing attention to links as opposed to differences would increase our sense of the radius of similarity that surrounds us – and with it, our experience of a more universal compassion.

On an anecdotal level, at least, there is evidence that such a feeling of commonality can transcend even violent conflicts. On Christmas Eve 1914, during World War I, a strange event occurred outside Ypres, Belgium. The British and German troops had been locked in a long and bloody battle. Yet as the Brits stared across the battlefield, they began to hear their German counterparts singing Christmas carols. The Brits soon answered in kind, and quite rapidly, these men, who moments before had been bitter enemies, emerged from their trenches to fraternize and celebrate the holiday together. Instead of being British and German, they were fellow Christians, and with this increased similarity, the hostility miraculously melted away.

At that moment, of course, the British high command was more concerned with survival than with universal compassion. Irate, and with an excellent understanding of the instincts that allow us to fight our fellow men, they forbade fraternizing with the enemy and began rotating the troops, to ensure any such fellow feeling would not arise again.
David DeSteno is an associate professor of psychology at Northeastern University and coauthor, along with Piercarlo Valdesolo, of "Out of Character: Surprising Truths About the Liar, Cheat, Sinner (and Saint) Lurking in All of Us."