Our lab aims to organize the astonishing complexity of human morality around basic functional principles. It is animated by a simple idea: Because we use punishments and rewards to modify others’ behavior, one function of morality is to teach others how to behave, while another complementary function is to learn appropriate patterns of behavior. Reflecting this relationship between teaching and learning, our research may be divided into a few broad categories:

  1. Reacting: How do we judge people who cause harm?
  2. Teaching: How is punishment designed to teach them a lesson?
  3. Learning: How do our morals reflect the way we learn?
  4. Deciding: How do learned morals affect actual decisions?
  5. Reflecting: How do we make sense of our own moral decisions and attitudes?

Reacting: Why do we punish accidents?

Consider two friends who share beers over football on a snowy Sunday evening and then each drive home. Both fall asleep at the wheel and run off the road. The “lucky” one careens into a snow bank, while the “unlucky” one hits and kills a person. Our laws treat these two individuals dramatically differently. In Massachusetts, for instance, the first would receive a small fine and perhaps a point off her license, while the second would face 2.5–15 years in prison. Philosophers have long understood this asymmetry—they call it “moral luck”. We have shown that moral luck is not just a peculiar feature of our laws; it also shapes the punishment judgments of ordinary people (Cushman, 2008; Martin & Cushman, 2016), as well as their behavior in economic games (Cushman, Dreber, Yang & Costa, 2009; Martin & Cushman, 2015), and it does so from a young age (Cushman, Sheketoff, Wharton & Carey, 2012).

Past theories have treated moral luck as a very general heuristic, mistake or emotional bias, but our research points toward a different conclusion. Moral judgment results from competition between multiple systems (not a single, coherent process), and moral luck exposes the dilemma that arises when these systems disagree. Moreover, the specific system responsible for condemning accidents has this design for good reason: It helps to teach corrective lessons to social partners. In this manner, moral luck illuminates basic principles of the structure and function of moral judgment.

Here’s what we’ve learned:

1. Moral judgment is not a unitary, sequential process. Rather, it is accomplished by parallel and dissociable processes (Cushman, 2015). One is concerned with a person’s causal responsibility for harm (“who shot the gun?”), while the other is concerned with their culpable mental states (“did she mean to?”). Moral luck, among other cases, puts these processes in conflict (Cushman, 2008).

2. The causal process contributes strongly to punishment judgments, but only weakly to other kinds of moral judgment, such as judgments of character and wrongness (Cushman, 2008; Cushman et al., 2013; Martin & Cushman, 2015; Martin & Cushman, 2016). As a result, people judge that those who cause accidental harm should be punished even if they aren’t bad people and didn’t act wrongly (a toy sketch of this two-process account follows this list).

3. The fact that we punish accidents is not a mistake or bias. Rather, it is a form of rational pedagogy. Accidents are teachable moments (Martin & Cushman, 2017): In stochastic economic games, people learn effectively from the punishment of accidental harm-doing (Cushman & Costa, in prep). And, in circumstances where a person cannot control her behavior—and therefore there is no opportunity for behavior modification—the “moral luck” effect on punishment is diminished (Martin & Cushman, 2016).

4. A longstanding body of research in cognitive development demonstrates that young children are especially susceptible to moral luck. It is traditionally held that this early outcome-based system is replaced with an intent-based system around age 8. But our work shows that the outcome-based system never disappears: It remains operative in judgments of punishment and blame (Cushman, Wharton, Sheketoff & Carey, 2013). Moreover, when adults are subject to intense cognitive load, even their judgments of moral wrongness begin to converge to the childhood form characteristic of punishment (Martin, Buon & Cushman, in prep).
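
To make points 1 and 2 concrete, here is a toy sketch in Python. The weights and scales are purely illustrative stand-ins, not fitted parameters from our studies: a causal-responsibility signal and a mental-state signal feed different judgments with different weights, so accidental harm draws substantial punishment but little condemnation of wrongness.

```python
# Toy illustration of the two-process account (weights are illustrative, not fitted).

def judgments(caused_harm: bool, intended_harm: bool) -> dict:
    """Return toy punishment and wrongness judgments on a 0-1 scale."""
    causal = 1.0 if caused_harm else 0.0       # "who shot the gun?"
    mental = 1.0 if intended_harm else 0.0     # "did she mean to?"
    punishment = 0.5 * causal + 0.5 * mental   # outcomes weigh heavily here
    wrongness = 0.1 * causal + 0.9 * mental    # driven mainly by intent
    return {"punishment": punishment, "wrongness": wrongness}

# Moral luck: accidental harm is caused but not intended.
print("accident:      ", judgments(caused_harm=True, intended_harm=False))
# A failed attempt is intended but not caused.
print("failed attempt:", judgments(caused_harm=False, intended_harm=True))
```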

Teaching: Punishment as pedagogy

Our study of moral luck taught us a very general lesson: In order to understand the structure of punishment, you need to understand its pedagogical function (Martin & Cushman, 2017; Cushman, 2014; Cushman & Macindoe, 2009). We have therefore focused intensively on understanding the structure of human teaching and learning.

Our research shows that punishment can contribute to moral learning, but that it functions primarily as a form of communication, not as a form of incentive. When stung by a nettle, people simply avoid nettles. But when stung by criticism, people don’t simply avoid the critic—they try to understand what she meant. Because ordinary people naturally use social punishment in a way that reflects this fact, punishment is a ubiquitous and important feature of human morality.

To formalize the distinction between incentive and communication, we borrow computational methods from a branch of machine learning: Reinforcement learning. Our analysis shows why it is rational to respond to non-social rewards as a form of incentive, but to respond to social rewards as a form of communication (Ho et al., 2017). Our experimental research confirms basic predictions of this model (Ho et al., 2015). And in extreme settings, such as “cultures of honor”, the value of communication can lead people to punish individuals who are not even causally responsible for a harm (Cushman, Durwin & Lively, 2012). We have also borrowed Bayesian models of language comprehension to understand how principles of communicative intent apply to other forms of social learning (Ho et al., 2016).
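
As a rough illustration of this distinction (a minimal sketch with made-up action names, likelihood, and parameters, not the model published in Ho et al., 2017), compare a learner that treats feedback as an incentive, simply caching action values, with one that treats it as a message and infers which action the teacher intends to single out:

```python
import numpy as np

# A minimal sketch contrasting two readings of a teacher's feedback
# (action names, likelihood, and parameters are illustrative stand-ins).

ACTIONS = ["share", "steal", "wait"]

def incentive_learner(feedback, alpha=0.5, n_passes=20):
    """Treat feedback as reward: cache action values by trial and error."""
    q = np.zeros(len(ACTIONS))
    for _ in range(n_passes):
        for a, r in feedback:                  # (action index, reward) pairs
            q[a] += alpha * (r - q[a])         # model-free value update
    return q

def communicative_learner(feedback, consistency=3.0):
    """Treat feedback as a message: infer which action the teacher wants.

    Assumes the teacher rewards the target action and punishes the others,
    with reliability governed by `consistency`."""
    log_post = np.zeros(len(ACTIONS))          # uniform prior over targets
    for a, r in feedback:
        for target in range(len(ACTIONS)):
            expected = 1.0 if a == target else -1.0
            log_post[target] += -consistency * (r - expected) ** 2
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

# The teacher rewards "share" and punishes the alternatives.
feedback = [(0, +1.0), (1, -1.0), (2, -1.0)]
print("cached values:   ", incentive_learner(feedback))
print("P(target action):", communicative_learner(feedback))
```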

One of our most exciting current projects addresses a fundamental paradox about punishment. At a mechanistic level, human punishment is usually blindly retributive—we punish with startlingly little regard for its potential to teach social partners. But if the ultimate function of punishment is pedagogical, why would this be? By formalizing this problem mathematically, we show that it reflects a much more general question about how natural selection balances flexible vs. rigid behavioral strategies in competitive games. We draw convergent evidence from three methods—game theory, evolutionary dynamics and reinforcement learning—to show why evolution favors inflexible punishment. And, by embedding reinforcement learning agents in an evolutionary dynamic, we demonstrate how evolution selects for intrinsic social rewards, such as a “taste for revenge” (Morris, MacGlashan, Littman & Cushman, 2017).
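
The commitment logic behind this result can be sketched in a few lines (payoffs and function names are our own illustrative stand-ins; the published analysis uses game theory, evolutionary dynamics and reinforcement learning rather than this toy): a punisher known to retaliate even when retaliation is costly deters exploitation, while a flexible punisher who would decline costly revenge after the fact invites it.

```python
# Toy commitment game (illustrative payoffs, hypothetical function names).

COST_OF_PUNISHING = 1.0        # cost to the punisher of carrying out revenge
HARM_FROM_PUNISHMENT = 3.0     # cost imposed on an exploitative partner
GAIN_FROM_EXPLOITING = 2.0     # what the partner gains by exploiting

def partner_exploits(punisher_retaliates_when_costly: bool) -> bool:
    """The partner best-responds: exploit only if it expects to get away with it."""
    expected_loss = HARM_FROM_PUNISHMENT if punisher_retaliates_when_costly else 0.0
    return GAIN_FROM_EXPLOITING > expected_loss

def punisher_payoff(inflexible: bool) -> float:
    exploited = partner_exploits(punisher_retaliates_when_costly=inflexible)
    payoff = -GAIN_FROM_EXPLOITING if exploited else 0.0
    if exploited and inflexible:   # revenge is actually carried out, at a cost
        payoff -= COST_OF_PUNISHING
    return payoff

print("flexible punisher:  ", punisher_payoff(inflexible=False))  # gets exploited
print("inflexible punisher:", punisher_payoff(inflexible=True))   # deters exploitation
```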

Learning: Grounding harm aversion in computational models of learning

Most people think it’s wrong to harm one person, even if doing so would help many more. Yet they also think it makes sense to maximize welfare. This glaring contradiction, embodied in the (in)famous “trolley problem”, is a cornerstone of contemporary moral psychology. Much of our research seeks to characterize the aversion to harm: It is widespread (Hauser et al., 2007), it depends upon prefrontal function (Koenigs et al., 2007), it is automatic and largely resistant to introspection (Cushman et al., 2006), and it is grounded in specific causal and intentional properties of action representation (Cushman et al., 2011). A much more profound challenge, however, is to describe the precise cognitive architecture responsible for the competing demands of harm aversion and welfare maximization.

We have studied how this conflict reflects a general division between two systems of value-guided decision making with different targets: Actions versus outcomes. This model builds upon current models in computational neuroscience and computer science, which distinguish between two basic methods of reinforcement learning, “model-based” and “model-free”. The model-free system assigns value directly to specific actions based on their reward history (“pointing gun at face = usually bad”), and functions primarily through the midbrain dopamine system and basal ganglia. The model-based system, in contrast, derives value from a causal model of expected outcomes (“pulling a trigger causes shooting, which causes harm, which is bad”), and draws on a network of cortical brain areas. Our theory posits that both systems contribute to moral judgment and behavior, and that conflict between them contributes to classic moral dilemmas like the trolley problem (Cushman, 2013).
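
The contrast can be illustrated with a minimal sketch (a toy two-step environment with made-up states and rewards, not a model of any of our experiments): the model-free learner caches action values from reward history, while the model-based learner reads values off a causal model of the environment.

```python
import numpy as np

# A toy two-step environment (made-up states and rewards) contrasting the
# model-free and model-based systems described above.

N_STATES, N_ACTIONS = 3, 2
TRANSITIONS = {(0, 0): 1, (0, 1): 2}          # causal model: action -> outcome state
REWARDS = {1: 1.0, 2: 0.0}                    # value of each outcome state

def model_free_values(n_episodes=200, alpha=0.1, seed=0):
    """Cache action values directly from reward history (no causal model)."""
    q = np.zeros((N_STATES, N_ACTIONS))
    rng = np.random.default_rng(seed)
    for _ in range(n_episodes):
        a = int(rng.integers(N_ACTIONS))                # explore randomly
        s_next = TRANSITIONS[(0, a)]
        q[0, a] += alpha * (REWARDS[s_next] - q[0, a])  # TD-style update
    return q[0]

def model_based_values():
    """Derive action values by planning over the known causal model."""
    return np.array([REWARDS[TRANSITIONS[(0, a)]] for a in range(N_ACTIONS)])

print("model-free (learned from history):", model_free_values())
print("model-based (planned from model): ", model_based_values())
# If the outcome values suddenly change, the model-based values update at once,
# while the cached model-free values lag until they are retrained.
```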

This approach predicts that harm aversion is principally grounded not in empathy (a concern for harmful outcomes), but rather in intrinsic properties of the action itself. We first tested this by asking people to perform typically harmful actions in harmless circumstances (Cushman, Gray, Gaffey & Mendes, 2012). For instance, participants were asked to shoot the experimenter in the face with a disabled handgun. (Of course our participants knew it was disabled, and that no harm would be caused.) This preserved the sensory-motor properties of a typical harmful action, but removed any actual expectation of a harmful outcome. Participants exhibited a large increase in peripheral vasoconstriction, a physiological state associated with an aversive emotional response. This indicates an intrinsic aversion to the sensory-motor properties of a canonically harmful act, even when no harmful outcome is expected.

In another study, we asked participants how averse they would be to helping a terminally ill individual commit suicide in different ways: Giving a poison pill, shooting, suffocating, etc. (Miller, Hannikainen & Cushman, 2014). We found that their reported aversion was not significantly predicted by their empathy for the victim’s likely suffering (an outcome), but was almost perfectly predicted by their reported aversion to pretending to kill a person in the specified manner as part of a theatrical performance (preserving the sensory-motor properties of the action). In addition, and as predicted by our theory, moral judgments of trolley-type dilemmas exhibit sensitivity to both action-based and outcome-based value representations, which are themselves dissociable (Miller et al., 2014).

Notably, these studies imply that one’s own aversion to performing an action ultimately contributes to the moral condemnation of third-party action. In current research we are testing a model for this transfer from personal aversion to third-party evaluation, which we call “evaluative simulation.” Past research has emphasized the role that simulation may play in describing, explaining and predicting others’ behavior (i.e., theory of mind). We propose a parallel process by which simulation is used to morally evaluate another person’s behavior (Miller & Cushman, 2013). In other words, a common way of answering “was it wrong for her to do it?” is to instead evaluate “how would it make me feel to do it?” Our research indicates that evaluative simulation is especially prevalent among political conservatives (Hannikainen, Miller & Cushman, 2017), and it may explain why most people—and especially conservatives—tend to moralize actions they find disgusting.

Deciding: Knowing what not to think

The reinforcement learning (RL) framework offers a powerful model of the competition between two systems of human decision-making—a habitual system that is computationally cheap but inflexible, and a planning system that is computationally expensive but flexible and accurate. Yet, as we applied this framework to classic topics in social psychology, we became frustrated by a limitation of a strict dual process approach. Human thought seems especially powerful because of the way that it efficiently integrates cheap habits and careful planning to solve complex problems. We are studying new hybrid computational architectures that integrate habit and planning. These models turn out to have crucial implications for understanding moral decision-making.

Contemporary RL distinguishes two classes of algorithms for learning and decision-making, model-free (“habits”) and model-based (“planning”), which occupy distinct points on the tradeoff between accuracy and efficiency. For the past ten years, research on human value-guided decision-making has invested great effort in understanding the competition between model-based and model-free approaches. But how might they cooperate (Kool, Cushman & Gershman, in press)?

Our research has established new formal models of three basic kinds of cooperation between model-based and model-free systems. The simplest form of cooperation is turn-taking: Deciding from moment to moment whether it is most efficient to allocate control to habit or planning. We propose that people solve this problem by learning the task-specific value of planning by trial-and-error, weighing it against an intrinsic cost of cognitive control. We discovered that in the tasks typically used to dissociate model-based from model-free control, there is no advantage to planning; we then designed a new task that generates this advantage (Kool, Gershman & Cushman, 2016). Next we showed that people can learn the difference between the tasks, and respond adaptively to shifting prospects of reward (Kool, Cushman & Gershman, 2017).
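
A minimal sketch of this arbitration scheme (our own simplification, with illustrative parameters and payoffs rather than the published model): the agent learns by trial-and-error how much reward each controller earns in the current task, charges planning a fixed effort cost, and allocates control accordingly.

```python
import numpy as np

# A minimal sketch of cost-benefit arbitration between habit and planning
# (our own simplification; parameters and payoffs are illustrative).

def arbitrate(task_reward, n_trials=300, alpha=0.3, control_cost=0.3,
              epsilon=0.2, seed=0):
    """task_reward(controller) -> payoff; planning pays an intrinsic effort cost."""
    rng = np.random.default_rng(seed)
    value = {"habit": 0.0, "planner": 0.0}   # learned value of each controller
    n_planner = 0
    for _ in range(n_trials):
        # Net worth of planning = its learned value minus the cost of control.
        net = {"habit": value["habit"],
               "planner": value["planner"] - control_cost}
        if rng.random() < epsilon:                        # occasional exploration
            pick = rng.choice(["habit", "planner"])
        else:
            pick = max(net, key=net.get)
        r = task_reward(pick)
        value[pick] += alpha * (r - value[pick])          # trial-and-error update
        n_planner += (pick == "planner")
    return n_planner / n_trials

# When planning barely improves reward, it is rarely engaged...
print(arbitrate(lambda c: 1.0 if c == "planner" else 0.9))
# ...but when accuracy pays, the planner is recruited despite its cost.
print(arbitrate(lambda c: 1.0 if c == "planner" else 0.2))
```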

Our second approach involves a deeper integration between habit and planning. Specifically, we show that these systems of control are composed hierarchically (Cushman & Morris, 2015). Psychologists have long noted that human action is governed by hierarchies of goals and subgoals. This form of goal-directed reasoning is fundamental to model-based RL (i.e., planning). Prior research offers little insight, however, into how goals are selected. We defined and empirically validated a new computational model in which goals and subgoals are selected by habit—colloquially, a “habit of thought”. For instance, every time you have the goal to get coffee, selecting the subgoal of grinding beans has been useful; thus, the very thought “grind beans!” becomes habitual. In current research we apply a similar approach to show habitual control of “action sequences”, another form of behavioral control. This resolves a major debate over the cognitive structure of habits: Do they reflect model-free value assignment, or chunks of routinized action sequences? Our research provides definitive evidence for both mechanisms, and then shows that they can be hierarchically composed by humans (Morris & Cushman, in prep).
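
A minimal sketch of the “habit of thought” idea (the goal names, plans, and parameters are hypothetical stand-ins rather than our experimental stimuli): cached values over (goal, subgoal) pairs select the subgoal by habit, and planning operates only beneath it.

```python
import numpy as np

# A minimal sketch of a "habit of thought" (all names and numbers hypothetical):
# cached values over (goal, subgoal) pairs pick the subgoal; planning runs below it.

rng = np.random.default_rng(0)

SUBGOALS = {"get coffee": ["grind beans", "buy instant"]}
PLANS = {"grind beans": ["grind", "brew", "pour"],          # known action sequences
         "buy instant": ["walk to store", "buy", "mix"]}

q = {(g, s): 0.0 for g, subs in SUBGOALS.items() for s in subs}

def choose_subgoal(goal, beta=3.0):
    """Habitually select a subgoal: softmax over cached (goal, subgoal) values."""
    subs = SUBGOALS[goal]
    prefs = np.array([q[(goal, s)] for s in subs])
    p = np.exp(beta * (prefs - prefs.max()))
    p /= p.sum()
    return subs[rng.choice(len(subs), p=p)]

def pursue(goal, reward, alpha=0.2):
    """Pick a subgoal by habit, plan the actions beneath it, cache the outcome."""
    s = choose_subgoal(goal)
    actions = PLANS[s]                      # model-based planning below the subgoal
    q[(goal, s)] += alpha * (reward(s) - q[(goal, s)])
    return s, actions

# Grinding beans keeps paying off, so the thought "grind beans!" becomes habitual.
for _ in range(30):
    pursue("get coffee", reward=lambda s: 1.0 if s == "grind beans" else 0.2)
print(max(q, key=q.get), "->", round(max(q.values()), 2))
```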

In current research we are exploring a sequential form of integration. This offers an appealing solution to one of the most basic problems in cognitive science: There is just too much to think about. For instance, if you want lunch in Manhattan, there are 22,000 restaurants to choose from. How does your brain settle on the five or six that you will actually consider—your “choice set”? We propose that computationally cheap, cached (model-free) values are sampled in order to construct a choice set, and that items within the choice set are then subject to more rigorous model-based evaluation. In essence, this model shows how the brain plucks a few thoughts worth thinking from the many better left unthought.
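
A minimal sketch of this two-step scheme (the numbers and stand-in value functions are illustrative, not our task or data): cached values propose a handful of candidates, and a more expensive model-based evaluation is run only within that short list.

```python
import numpy as np

# A minimal sketch of choice-set construction (illustrative stand-in values).

rng = np.random.default_rng(0)
N_OPTIONS = 22_000                                   # e.g., restaurants in Manhattan

cached_value = rng.normal(size=N_OPTIONS)            # cheap, habit-like values

def model_based_value(i):
    """Stand-in for a costly, deliberative evaluation of option i in context."""
    return cached_value[i] + rng.normal(scale=0.5)   # context adjusts the estimate

def choose(k=6):
    # Step 1: sample a small choice set in proportion to cached value.
    p = np.exp(cached_value - cached_value.max())
    p /= p.sum()
    choice_set = rng.choice(N_OPTIONS, size=k, replace=False, p=p)
    # Step 2: deliberate (model-based evaluation) only within the choice set.
    return max(choice_set, key=model_based_value)

print("chosen option:", choose())
# Options with very low cached value (the irrational ones, and perhaps the
# immoral ones) almost never enter the choice set, so they are never considered.
```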

It also has interesting implications for the study of moral decision-making. Our computational model suggests that people will exclude irrational options from their choice sets (e.g., a restaurant fifty miles away). Might they also exclude immoral options (e.g., a churrascaria, for a vegetarian)? In order to test this possibility, we constructed a task in which people had to judge whether various courses of action were “possible” or “impossible” under time pressure (Phillips & Cushman, 2017). We reasoned that a key effect of time pressure would be to restrict the size of the choice set—i.e., the set of options deemed possible. As predicted, we found that time pressure made people slightly more likely to judge irrational behaviors “impossible”. Morality produced a similar effect—and a dramatically larger one. In other words, people treat immoral actions as impossible by default. Thus, moral values may affect decision-making by excluding the very thought of immoral action (Morris & Cushman, 2017).

Reflecting: The origin of moral principles and errors of induction

Much of our moral knowledge takes the form of intuitions—judgments about particular cases that arise spontaneously, and without conscious awareness of the relevant cognitive processing (Cushman, Young & Hauser, 2006). But, from the courtroom to the church to the classroom, human morality is also structured around explicit rules. Our final major area of research asks where these moral principles come from.

We have studied how moral principles arise as people attempt to make sense of their own intuitive judgments (Cushman & Young, 2011; Cushman & Greene, 2011; Barak-Corren, Tsay, Cushman & Bazerman, 2017). Yet, as we show, this process of inductive generalization is highly error-prone. Put simply, we ask how people come to know themselves, and how they go wrong.

A particularly compelling demonstration of an errant moral induction comes from our research on professional philosophers (Schwitzgebel & Cushman, 2012, 2016). By manipulating the order in which hundreds of professors judged a series of well-known moral dilemmas, we amplified or muted salient contrasts between them. Not only did this affect philosophers’ judgments, it subsequently produced a large shift in the proportion of philosophers that professed allegiance to prominent moral principles widely debated in the literature. These effects were significantly larger among philosophers than non-philosophers, and largest of all among specialists in ethics. In other words, philosophical training makes people good at generalizing principles from their intuitions, but not especially good at understanding where the intuitions come from in the first place. Inspired by this work, we have recently shown similar effects in the judgments of ordinary people (Barak-Corren et al, 2017).

Order effects can be produced in laboratory environments, but what are the influences on moral judgment that shape our explicit moral principles in more ordinary circumstances? From the U.S. Supreme Court to the American Medical Association to the average person on the street, many people endorse the view that there is a morally significant distinction between killing a person actively and passively allowing a person to die when it could have been prevented (Cushman et al., 2006). Our research indicates that this explicit principle may represent another error of induction. It arises in part from a general bias, not specific to the moral domain, to regard actions as more causal and intentional than omissions (Cushman & Young, 2011). These biases appear to be present even in infants’ earliest-emerging capacity for interpreting others’ actions (Feinman, Cushman & Carey, 2015).

Evidence from functional neuroimaging indicates that this bias has a natural interpretation in terms of automatic versus controlled processing (Cushman et al., 2011): Judging harmful omissions recruits greater activation in the frontoparietal control network than judging harmful actions, and individuals who exhibit the greatest levels of activation in this network also show the least difference in their judgments of actions and omissions. Thus, institutions like the AMA and Supreme Court may explicitly endorse the view that active harm is morally worse than passive harm because seeing harmful actions as wrong is cognitively easy, while seeing omissions as wrong requires cognitive effort.