Four soldiers escape heavy gunfire after their platoon has been massacred. To survive, they have to take cover and pray that they will not be found. As they run for cover, they notice a local woman and her baby girl hiding behind a burning truck. They grab the woman and her child and duck into a nearby building. The soldiers hurry down the first staircase they see and find a hidden room behind a trap door. Everyone goes dead quiet the moment they hear the building’s front door open – it’s the enemy. The baby girl starts fussing, and if she doesn’t calm down, she’ll give away the hiding spot, which will inevitably lead to her death and the deaths of the five others. With only seconds to act, the soldiers have to decide whether or not to suffocate the baby girl in order to save their own lives. What should they do?
Can taking an innocent life ever be justified? Should the soldiers kill the baby girl in order to spare the lives of five adults? The intuitive utilitarian in me leans toward yes, but a follow-up question may cast some doubt on my automatic conclusion. Can it ever be considered moral to kill an innocent member of our species? Indeed, it may be necessary, but can taking someone else’s life actually be the right thing to do? The answer may not be as easy as it first seems. What does it mean to do the right thing? What does it mean to be moral? Does morality concern itself with questions of good and bad, or right and wrong? Are there any meaningful differences between good and right, or bad and wrong?
In trying to address this issue, Jonathan Haidt, now a professor at the NYU Stern School of Business, has done some interesting work in moral psychology. Along with his colleagues, Haidt has come up with a way to assess what people believe to be moral. Moral Foundations Theory (MFT) is a working psychological theory that claims to explain why people have different beliefs about what constitutes morality. In short, Haidt argues that liberals and conservatives have divergent interpretations of what it means to be moral. Drawing on data aggregated over the past decade, Haidt suggests that liberals tend to be more concerned with the individualizing foundations – that is, they rank high on the foundations labeled “care” and “fairness.” Conservatives, on the other hand, tend to be more concerned with the binding foundations – that is, they rank high on “in-group loyalty,” “authority,” and “purity.” Haidt concludes, based on this evidence, that liberals and conservatives view the world through different moral lenses. What one group views as moral, the other may view as amoral, or even immoral. For example, conservatives typically value religious belief and deference to God, whereas liberals typically rank these values much lower than something like equal rights for all citizens. This is an important theory for understanding the way that people from all over the world reason about morality. The only problem with stopping here is that MFT is purely descriptive – it can only tell us what people believe to be moral, not what actually is moral.
This brings us to a common philosophical problem. We’ve all heard the classic trope, “you can’t get an ought from an is.” This goes back to the Scottish philosopher David Hume. Hume argued that it is impossible, through pure reason, to derive a normative (or prescriptive – an “ought”) statement from a descriptive (an “is”) statement. His justification was simple: you can’t reason how something ought to be simply by observing how it is. This idea has had powerful implications for the conduct of science and philosophy. Historically, people have believed science to have one purpose and philosophy another. Science can answer questions about what is, and philosophy can answer questions about what ought to be. But because of Hume’s Law, science can never determine what ought to be, as science is purely descriptive. Many scientists and philosophers still cite Hume whenever anyone is bold enough to overstep the role of “is-finder” by trying to play the “ought” game. Recently, one man seems to have stood out from the rest in making claims about the purpose of science – Sam Harris. Harris graduated from Stanford with a degree in philosophy and from UCLA with a PhD in neuroscience. He has written some influential yet provocative books about religion and morality. Harris claims that science and philosophy have never been partitioned in the way most people believe and, even more controversially, that the is-ought problem is not a real problem.
I have to admit, before reading Harris’ “The Moral Landscape,” I was a moral relativist. By this, I mean that I believed morality was relative because what was good for one person wasn’t necessarily good for another. As I read Harris’ arguments, my beliefs about morality were thoroughly shaken. He argues that there has been a persistent claim that facts and values are two separate entities: science deals in the business of facts but cannot say anything about values. Harris contends that facts and values are not separate, but two sides of the same coin. His justification is that science has always been in the values business, because we simply cannot talk about facts without embracing certain values. His example of water drives home the point: one of the most basic scientific statements – water is two parts hydrogen and one part oxygen – seems as value-free an utterance as anything we could ever ask for. But what if you found someone who didn’t share this view of water? How could you convince him that this is what water is? Harris concludes that in order to make statements of fact in science, we have to first embrace the value of understanding the universe. If someone doesn’t share the value of understanding the universe, the conversation is over (and any statements of fact are rendered arbitrary)! He then goes on to articulate a clever musing: “If someone doesn’t value evidence, what evidence could you show him to prove to him that he should value evidence? If someone doesn’t value logic, what logical argument could you make to prove that he should value logic?” It is with this reasoning that Harris inverts Hume’s Law, stating, “We simply cannot derive an is without first embracing certain oughts.”
I have only recently been persuaded out of moral relativism, so this is a relatively new worldview for me. Before this past year, I had never contemplated the deeper question: what constitutes morality – what do we even mean when we say that something is moral? Luckily, last summer I started working in an applied social psychology lab that utilized Haidt’s Moral Foundations Theory. As I learned more about the theory, a new thought crossed my mind: “What do we have to be talking about when we say that something is moral?” A few months later, I stumbled upon the work of Kurt Gray, an experimental psychologist at the University of North Carolina. Gray wrote one of the most provocative (and, to me, accurate) papers in the history of moral psychology, “Mind perception is the essence of morality.” After reading it, I felt like every idea I had ever read on the concept of morality finally made sense. It made sense because I realized that in order to talk about morality in any meaningful way, we have to be talking about the experience of minds. I suddenly understood why Moral Foundations Theory wasn’t the complete answer to studying morality: Haidt and his colleagues didn’t circumscribe moral concerns, which allowed their participants to construe anything in moral terms, even cases in which minds were not involved. Haidt and Gray have since argued about their competing psychological theories of morality, and I personally attended their latest debate at the Society for Personality and Social Psychology conference in San Diego. After thinking about that debate, I believe it’s possible to answer the question of what constitutes morality – with the help of both current and past scientists and philosophers, of course.
Imagine a world full of sand. In this world, would there be such a concept as morality? Given what we know about consciousness, we would have to assume that in a world full of sand there would be no conscious life. Given that there would be no conscious life, we would have to assume that there would be no mind perception. And given that there would be no mind perception, we would have to assume that there would be no such thing as good and bad, or right and wrong. Why? In the context of morality, we use these words only to describe the thoughts and behaviors of sentient life, and their consequences. For example, what makes a murderer an immoral person? Is it because we believe that killing is intrinsically wrong? Not necessarily. Most of us kill cockroaches, flies, and hornets without flinching. So what makes killing a human being seem so much more wrong? We believe (as we believe about ourselves) that other humans have minds! We assume that an entity with a mind can feel. If we assume that a mind can feel, we must assume that a mind can experience bliss and suffering. It follows that what we find immoral about murderers is their intent to cause suffering. Gray argues that we have a cognitive template that intuitively assumes an intentional moral agent and a suffering patient. For example, if you’re walking through the grocery store and you see a woman crying, your moral cognitive template kicks in to categorize her as a suffering patient. Gray argues that this process is automatic: you intuitively assume that there is an intentional moral agent who intended and caused this suffering. This process would be impossible without the ability to perceive minds!
I believe this is important not only because it may be possible to convince moral relativists that there actually are constraints on what can be considered moral, but also because it may be possible to convince people that some things they believe to be immoral actually have nothing to do with suffering. For example, in the West there are no clear prohibitions about what you can eat after a loved one’s death. In India, on the other hand, Oriya Hindu Brahmans believe it to be immoral for the eldest son to eat chicken after his father’s death. According to the University of Chicago cultural anthropologist Richard Shweder, these Hindus believe that the eldest son is responsible for processing the father’s “death pollution” by eating a vegetarian diet; by eating chicken, the son condemns the father’s soul to eternal suffering. While this may be a controversial claim in itself (and subject to a much richer debate about metaphysics), we can see that it may be possible to convince Hindus that eating chicken after a father’s death isn’t in fact immoral. Given the lack of evidence for the existence of souls (or minds beyond death), the act of eating chicken loses its inherent moral worth. A few psychologists have tested a similar idea.
Joshua Greene and Joe Paxton of Harvard University experimented with a story that reliably gives rise to intuitions of immorality. They used one of Jonathan Haidt’s social-intuitionist vignettes to see if they could change participants’ reactions to perceived immoral actions. The vignette goes something like this: “A brother and sister are alone in a cabin. They decide they want to have some fun and do some experimenting. They agree to have sex, but only on the condition that the brother wears a condom and the sister is on birth control. This guarantees that there will be no procreation as a result of their actions. Is this immoral?” In one condition, participants had to answer right away, which encouraged more intuition-driven reasoning. In a second condition, participants couldn’t answer until a certain amount of time had passed, and they were shown an evolutionary explanation for why incest is intuitively disgusting.
When Haidt conducted his original experiments with this vignette, he found that people intuitively judged the brother and sister’s actions as immoral, but when asked for justification, they were morally dumbfounded – they gave answers such as, “It’s just wrong,” or “It’s gross.” Greene and Paxton’s aim was to see whether an elongated period of time, along with a scientific explanation for disgust, was enough to steer participants away from intuition-driven justifications (claiming that incest is immoral). Effortful rationalization would lead one to argue something along the lines of, “Well, it’s wrong because they could end up having a child with developmental problems,” or “Well, it’s obvious that one of them didn’t want to do it, but the other forced him/her to do it.” These justifications seem to support Kurt Gray’s contention that when we argue about morality, what we’re really arguing about is perceived suffering, not whether something is gross or impure. I would argue that concerns about consent and about children born from incestuous relations are really arguments about perceived suffering. Greene and Paxton’s results showed that a significant number of participants did not judge the incest case to be immoral when given an elongated period of time and a scientific explanation for their disgust-related reactions. I consider this study to be (hopefully) one of the first of many to suggest that we can flee the nest that evolution has woven for us.
What makes studying morality so important? Primarily, I believe that understanding moral concerns (descriptively, objectively, and prescriptively) is imperative to building a society in which everyone can thrive. If it is possible to nudge people away from their intuition-driven justifications, it may be possible to persuade them to embrace a morality built upon a continuum of harm and suffering. By doing so, we can learn to avoid meaningless suffering and maximize positive experience. Secondarily, I think it is important to recognize the value of remaining open to philosophical and scientific inquiry. In the context of morality, these tools of deliberation have changed at least one mind – I can’t imagine that I’m the only one.
Let us return to the opening scenario. What should be done? I don’t claim to have a good answer. Should the soldiers kill the baby girl? I don’t know. Neither answer satisfies my intuitions. If the girl is killed, my intuitions about individual rights kick in: doesn’t the baby girl have the right not to be killed? If the soldiers decide not to kill her, my intuitions about the common good kick in: doesn’t it make sense, actuarially, to kill one for the sake of five? It could be that scenarios like this are psychologically impossible to solve given Gray’s cognitive template. Either way, we’re left with suffering patients who have incurred the wrath of intentional agents. Because of this, no matter how well-reasoned our justifications are, the answer may never feel right. Maybe a deeper understanding of science and philosophy can get us closer to the true moral answer.
Book References:
Edmonds, D. (2014). Would You Kill the Fat Man?: The Trolley Problem and What Your Answer Tells Us about Right and Wrong. Princeton University Press.
Greene, J. D. (2013). Moral tribes: Emotion, reason, and the gap between us and them. New York: The Penguin Press.
Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. New York: Pantheon Books.
Harris, S. (2010). The moral landscape: How science can determine human values. New York, N.Y.: Free Press.
Manuscript References:
Graham, J., Haidt, J., & Nosek, B. A. (2009). Liberals and conservatives rely on different sets of moral foundations. Journal of Personality and Social Psychology, 96(5), 1029-1046.
Gray, K., & Keeney, J. E. (2015). Impure, or just weird? Scenario sampling bias raises questions about the foundation of morality. Social Psychological and Personality Science.
Gray, K., Young, L., & Waytz, A. (2012). Mind perception is the essence of morality. Psychological Inquiry, 23, 101-124.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814-834.
Paxton, J. M., Ungar, L., & Greene, J. D. (2012). Reflection and reasoning in moral judgment. Cognitive Science, 36(1), 163-177.
Schein, C., Goranson, A., & Gray, K. (2015). The uncensored truth about morality. The Psychologist.
Shweder, R. A. (2012). Relativism and universalism. In A Companion to Moral Anthropology.
There are obviously more papers and books that contributed to these ideas, but these are the ones I considered most central to articulating the ideas above.
Ryan M. McManus is currently a post-bac psychology research assistant at a California university. His goal is to pursue graduate studies in experimental psychology, with an emphasis on morality. In his leisure time, he enjoys reading moral philosophy and thinking about the deep mysteries that human life has to offer. He believes that embracing philosophy and psychology can give us a greater understanding of what it means to live the good life, and how we may be able to mitigate moral conflict. Some of his idols include Sam Harris, Jonathan Haidt, Kurt Gray, and many other scientists who have helped us understand our moral selves. He believes that we need to embrace cordial conversation, even if it means setting our ideological differences aside, as this may be the only way we can search for unbiased truth.