
Two months ago, I wrote a piece titled “An Evolving View of Morality,” in which I argued that we could operationalize morality in a meaningful, scientific way. The purpose of the argument was to explain why I thought it was important to reduce moral infractions to concerns about harm, because I thought doing so was necessary to study morality objectively from a psychological standpoint. I also attempted to argue that an oversimplified version of consequentialism was the morally superior theory for how to “live the good life” and how to operate in our day-to-day lives. Well, I’m back to do the work of American ventriloquist Jeff Dunham: argue with myself.

One might wonder, “How much could really have changed in your thought process over such a short span as two months?” If I were reading this as one of you, I have no doubt that I would have the same intuition. Is it really possible to change one’s worldview so drastically in so little time? Well, let me quickly summarize my first write-up on morality before I get into what the voices in my head are compelling me to believe.

I originally argued that Kurt Gray’s work on mind perception (and Dyadic Morality) was the best foundation to start from when talking about the psychological study of moral values. In some sense, I still believe this is true, but not to the extreme that I first believed. I stated that because the construct of morality is mind-dependent, when making claims about moral actions, we have to be talking about the experience of minds. And by this logic, when we speak of moral infractions, we must be talking about harm to some mind(s). This harm could manifest in many ways. We could speak of direct harm, such as slapping someone across the face, or of indirect harm, such as burning a Bible (scripture sacred to most Christians), but the overall point I tried to make was that, in any case, we only claim to be speaking of moral infractions if we perceive a mind that can suffer as a consequence of some action. However, I’ve recently learned to appreciate the work of Jonathan Haidt and Jesse Graham (and Moral Foundations Theory), as well as the work of Tage Rai and Alan Fiske (and Relationship Regulation Theory). To echo David Pizarro’s opening to the 2016 SPSP debate on purity, I can’t help but agree with all of these theories (and others) as approaches to studying morality.

In my earlier post, I also argued that David Hume’s is-ought distinction could be overcome, settling the question of what science could contribute to normative debates. I quoted Sam Harris from his book, “The Moral Landscape,” to support my argument. Harris communicates a clever non-distinction between facts and values, and it is incredibly persuasive if you are already on his team. Looking back, Peter Ditto could probably use my arguments as a textbook example of motivated reasoning. I had every motivation to argue in favor of objective morality. I had, at the time, favored Gray’s work over Haidt’s, favored Harris over Hume, and favored simplistic explanations over simple ones. Then I realized that my motivation to “operate on pure reason” had actually been preceded by my capacity to feel empathy, and I found myself in a tough situation. If my version of consequentialism was built upon an amalgam of empathetic intuitions, then I was not being purely rational; I was being rational within a certain context. This realization made me question everything I had believed about moral values. I was no longer certain that I could move beyond a descriptive account of morality in any way that one could reasonably consider “ultimately objective.”

Why might this realization matter? After all, it may only be a realization for me. You may be reading this thinking, “Way to go, you’re just relaying my current set of beliefs.” Well, it matters because I happen to know that many people find the arguments for objective morality persuasive. I know this not only from personal experience, but from the plethora of work on morality in the science and philosophy communities. This realization has made me completely rethink the approach I want to employ when studying morality in a social-psychological context. It has also made me wary of some of the work that has been steadily published in the moral psychology community.

The first potential problem in studying morality is that it is often difficult (for anyone) to disentangle descriptive accounts of moral beliefs from normative ones. My earlier post was a great example of that. Because of this, some bias inevitably creeps into the clockwork of methodology. If a researcher really does believe that consequentialism is the normative answer to moral concerns, that belief may affect the way he or she conducts research on moral judgments and decision-making. For example, if you believe an answer to the trolley problem measures someone’s endorsement (or lack thereof) of consequentialism, you may think that a “no” to pushing the fat man off the bridge translates to an illogical judgment clouded by emotions. This may indeed be the case, but it may also be the case that the trolley problem does not capture whether someone endorses consequentialism. Perhaps the trolley problem merely measures whether you are bothered by pushing someone to his or her death. This leads to another potential problem in the moral psychology literature: the overuse of thought experiments.

We all enjoy thought experiments. In fact, thinking about the trolley problem was what originally motivated me to pursue the field of experimental psychology with an emphasis on moral judgments and decision-making. It can be captivating to think about moral dilemmas. We can even use moral dilemmas as a way to persuade other people. When in the middle of an argument about the greater good, all you need to do is line up the trolley problem, the lifeboat scenario, and the soldier’s dilemma, and you’ll be able to persuade most people of your position (provided they have not heard of these wildly particular cases). But there is a problem with using these thought experiments to build psychological theories about the moral mind. Part of the problem is the artful move from hypothetical cases to actual ones. How often are we confronted with a dilemma as emotionally daunting as the trolley problem? I’ve never personally had to choose to kill one to save five, but I’ve sure given it enough thought after reading work in moral philosophy and moral psychology. I would love to think that I’d have the courage to make the choice I consider, from the armchair, to be morally appropriate, but how do I know that merely thinking about these hypothetical cases elicits the same intuitions that would be evoked if the situation actually presented itself? I’m not entirely confident that my philosophical musings would map onto reality in the way that many people believe they would.

Another part of the problem is that the purpose of a thought experiment changes entirely when we move from philosophy to psychology. In philosophy, thought experiments are often employed as counter-arguments to some general principle. They are used in a manner in which the basic communication is, “You think you adhere to this broad principle, but let me give you an example that will cause you to reevaluate your stance.” In psychology, the purpose is different: the answers to thought experiments are the data points. Some psychologists studying morality tend to use thought experiments to categorize someone as a proponent of one kind of ethical system. If the criterion for categorizing a participant as a deontologist is that he or she answers “no” to the lifeboat scenario, I think we ought to rethink that criterion. It is an interesting approach, but if we are going to build a psychological theory of the moral mind, we would need to employ variation after variation of many, many thought experiments for them to hold serious weight in a conversation about the descriptive status of moral beings.

The move from hypothetical to actual has also made me question ideas I communicated in my earlier post, namely regarding the objective moral good. I attempted to argue that reducing morality to concerns about harm was the one true way we could descriptively show that everyone was on the same page about what constitutes a moral infraction (which I still believe is somewhat true, but it’s not the whole picture). Where I made an egregious error was in also claiming that we could use this moral reduction to make normative arguments. I reasoned, optimistically, that if we could convince 7 billion people that they actually are concerned about harm in every perceived moral infraction, we could probably convince them that some of those instances didn’t actually contain the harm they had perceived (as in the case of the Oriya Hindu Brahmans, who believe it is immoral for the eldest son to eat chicken after his father’s death).

Obviously, my argument depended upon a lot of hypothetical cases turning out the way I had hoped. But thinking back on the example of the Hindu Brahmans, I believe I was missing something hugely important. By asking the Oriya Hindu Brahmans to suspend that belief (because there might not be any actual harm), I would likely be asking them to suspend a core part of their identities. I’m not convinced that it would be practical, or that it would even result in any greater good, to suggest a revision of another culture’s moral norms. We would, in some sense, be foisting our norms onto an entirely different culture without being certain of the consequences. It may be true that we all have a conception of what the greater good means, but what we often forget is that we are viewing the world through a very prejudiced lens. Jonathan Haidt conveys this message extremely well in his talk on “the rationalist delusion.” When we make claims about how the world would be better off if everyone adhered to “xyz,” we are almost certainly forgetting that we are dealing with emotional creatures who happen to think, not thinking creatures who happen to feel emotions. Who are we to say that our imposition of norms would actually make everyone happier? I’m not so confident anymore.

While I realize that I harbor some frustration with current work in moral psychology, I ought to commend the work that has considerably helped the field. Despite my criticisms above, a lot of interesting work has come from utilizing thought experiments in the study of moral judgments and decision-making. Joshua Greene’s work has shown us that different brain regions become more active when a decision carries more emotional content. Jonathan Haidt’s work has suggested that we are often under the power of our intuitions and emotions, with rationalization following after the fact. His work has also persuaded many people (now including me) that moral pluralism is the only reasonable approach to studying morality, a position that follows from his championing of the Humean is-ought distinction. David Pizarro’s work has shown us that it is extremely important to infuse abstract thought experiments with social information, because that is how the real world works. There are many other great examples that I do not have the time or space to mention. Overall, we shouldn’t only complain about the research of the past, because it has led to invaluable insights on how to move forward.

Psychology as a discipline, especially social psychology (in which a lot of moral psychology work is done), has been the subject of some hard criticism in the recent past. What do we do to fix the biases among researchers and methodological approaches? We keep experimenting, and we keep doing science. Continuing to think and conduct experiments is what has allowed us to make it this far. If some psychological theory of the moral mind gains prominence in the future, the only way we’ll know whether it’s descriptively true is by dissecting it and rigorously testing it. Psychological science, like every other science, is a self-correcting domain of inquiry that will only get better as we continue to do it.

As you may be able to gather, I am interested in pursuing a career in experimental psychology. I wish to study the way moral values permeate our daily lives, how they shape our judgments of one another, and what they mean for predicting behavior. I was recently accepted into the graduate program in experimental psychology at California State University, Northridge. Over the next two years there, I hope to continue questioning and revising my beliefs about morality and every other important domain of human inquiry. I then hope to start a PhD program in which I can deepen my knowledge and expertise in the psychological study of morality.

The main reason I am interested in communicating my thoughts and interests is to exchange ideas and receive constructive criticism. I realize that I am one person who sees the world through a very particular lens, and I want to know when I’m being reasonable and when I’m not. If some (or all) of this sounds like mere opinion, it may be so. But I think we ought to recognize that we are all, in some sense, justified in what we believe. I may not have all of the information I need to make a persuasive argument, and I concede that. It is only through this open exchange that we can revise our beliefs in any substantive way. If any of us is to be a productive member of the scientific community, it pays to continue these conversations.
