Judgment and Decision Making

  • Valerie F. Reyna, Departments of Human Development and Psychology, Cornell University
  • Priscila G. Brust-Renck, Graduate School of Psychology, Universidade do Vale do Rio dos Sinos, Brazil
  • Rebecca B. Weldon, Department of Social and Behavioral Sciences, SUNY Polytechnic Institute

Summary

Everyday life consists of a series of decisions, from choosing what to wear to deciding what major to declare in college and whom to share a life with. Modern-era economic theories were first brought into psychology in the 1950s and 1960s by Ward Edwards and Herbert Simon. Simon suggested that individuals do not always choose the best alternative among the options because they are bounded by cognitive limitations (e.g., memory). People who choose the good-enough option “satisfice” rather than optimize, because they are bounded by their limited time, knowledge, and computational capacity. Daniel Kahneman and Amos Tversky were among those who took the next step by demonstrating that individuals are not only limited but also inconsistent in their preferences, and hence irrational. Describing a series of biases and fallacies, they elaborated intuitive strategies (i.e., heuristics) that people tend to use when faced with difficult questions (e.g., “What proportion of long-distance relationships break up within a year?”) by answering based on simpler, similar questions (e.g., “Do instances of swift breakups of long-distance relationships come readily to mind?”).

More recently, the emotion-versus-reason debate has been incorporated into the field as an approach in which judgments can be governed by two fundamentally different processes, such as intuition (or affect) and reasoning (or deliberation). A series of dual-process approaches by Seymour Epstein, George Loewenstein, Elke Weber, Paul Slovic, and Ellen Peters, among others, attempts to explain how decisions based on emotional and/or impulsive judgments (i.e., system 1) should be distinguished from those based on a slow process governed by rules of reasoning (i.e., system 2). Valerie Reyna and Charles Brainerd and other scholars take a different approach to dual processes and propose a theory—fuzzy-trace theory—that incorporates many of the prior theoretical elements but also introduces the novel concept of gist mental representations of information (i.e., essential meaning) shaped by culture and experience. Adding to processes of emotion or reward sensitivity and reasoning or deliberation, fuzzy-trace theory characterizes gist as insightful intuition (as opposed to crude system 1 intuition) and contrasts it with verbatim or precise processing that does not consist of meaningful interpretation. Some of these new perspectives explain classic paradoxes and predict new effects that allow us to better understand human judgment and decision making. More recent contributions to the field include research in neuroscience, in particular from neuroeconomics.

Overview: Judgment and Decision Making in Psychology Research

Judging and deciding what to do can involve seemingly simple tasks in some circumstances, such as continuing to read this article or choosing what to eat, but it can also involve larger life choices, such as whom to marry or what subject to study in college. Research on judgment and decision making within the field of psychology has been devoted to unraveling the way humans make their decisions on a day-to-day basis. Overall, judgment per se can be characterized as the thought, opinion, or evaluation of a stimulus, and the decision is the behavior of choosing among alternative options. In the traditional view, the decision-making process is complex given that one must analyze alternative options, estimate the consequences of choosing each option, and deal with conditions of uncertainty (von Neumann & Morgenstern, 1944). Research in judgment and decision making has also grown increasingly interdisciplinary.

Historically, behaviorism was the primary school of thought in psychology until the 1950s or so, but critics of behaviorism recognized that stimulus–response accounts are not sufficient for explaining human behavior (Greenwood, 1999). For example, two stimuli can elicit the same response, and one stimulus can lead to two different responses. Furthermore, it is too simplistic to draw conclusions about human behavior without considering the underlying mental processes. In the early years, the judgment and decision-making field was primarily based on theory and data from economics and psychology (notably Edwards, 1954), but judgment and decision making also integrates law, political science, social policy, management science, marketing, engineering, and medicine, among others (Arkes & Hammond, 1986; Hammond, 1996; Slovic et al., 1977).

Research on judgment concerns such topics as perceptions of consequences and predictions about future outcomes, and research on decision making concerns understanding preferences (for reviews, see Fischhoff & Broomell, 2020; Mellers et al., 1998; Weber & Johnson, 2009). Psychological processes (e.g., Kahneman & Tversky, 1979) have been studied to explain phenomena of judgment and choice that date back to original predictions of economics models (e.g., von Neumann & Morgenstern, 1944). To best understand the advances psychology has made in predicting judgment and decision processes, a brief overview of relevant economic theories is necessary. In particular, the normative approach from economic theory, which was based on axioms of coherence in preferences, showed that following these axioms would ultimately deliver decisions that maximized an individual’s expected utility. Expected utility is the weighted average of the extent to which an outcome is preferred relative to its alternatives. For example, these axioms include transitivity of preferences: if option A is preferred to option B, and option B is preferred to option C, then option A should be preferred to option C. One goal of this important work was to establish normative rules defining rational choices in terms of each individual’s preference structure. Without identifying the best option per se, coherence, or consistency in decision making, is deemed to be normative (Baron, 2012).

Although these normative criteria were also treated as descriptions of behavior, judgments about risk and probability do not always obey rules of consistency and coherence (e.g., Tversky & Kahneman, 1983). The apparent failure of people to reason coherently raises larger concerns about the ability of humans to function well in real-life situations (e.g., Allais, 1953; Tversky & Kahneman, 1974; but see Simon, 1955, under the “Early Milestones in Psychology” section). In contrast to models that assumed rationality, a new set of descriptive models was developed to account for how individuals actually make decisions, based on cognitive psychology research. The distinction among normative, descriptive, and prescriptive models is needed to clarify research goals: Normative models apply to how people should decide; descriptive models refer to how people actually make decisions; and prescriptive approaches help people make better decisions (Bell et al., 1988).

This article provides an overview of the historical path of research in the field of psychology. Because the early milestones were a direct reaction to economics research, the first step is an overview of the key relevant models, such as expected utility theory (a theory of rational choice), which assumed that normative and descriptive models were the same. This is followed by a review of the early research on violations of normative standards of the economic models, including Simon’s satisficing hypothesis (accepting an available option as satisfactory rather than maximizing), ambiguity aversion (a preference for known risks rather than unknown risks), and other paradoxes (i.e., Allais, 1953; Ellsberg, 1961), which suggested that descriptive models violated normative assumptions. These ideas and associated empirical phenomena challenged normative models, and they provided the foundation for significant departures from rational models. These challenges set the stage for insights and methods from psychology that could explain why human behavior did not follow the tenets of rational choice theories.

A turning point for psychology was when a substantial amount of research demonstrated that deviations from the rational rules of judgments and decision making were systematic. Daniel Kahneman and Amos Tversky’s (1979) prospect theory was central to this new era of research based on descriptive models, generating phenomena that deviated from normative predictions. This article reviews current models of psychological processes involved in making judgments and choices, noting those that account for the roles of affect, rationality, intuition, and other psychological processes (e.g., Kahneman, 2003; Peters & Slovic, 2007; Reyna & Brainerd, 2011). The conclusion includes recent contributions to the field from neuroscience, in particular, from neuroeconomics.

Prelude: Classical Economics

The advent of research in judgment and decision making in psychology was directly related to how these topics were studied in the field of economics (see Becker & McClintock, 1967, for a review). Economic theory proposed to identify the best possible solution to a problem given the decision maker’s values and preferences (for reviews, see Baron, 2012; Fischhoff, 2010). Such preferences are a result of the probability of winning multiplied by the value of that outcome—expected value—a concept that dates back to the mathematical work of Blaise Pascal in the 17th century (for a review, see Edwards, 2001). A decision problem may involve a set of alternative possible outcomes (e.g., winning a $100 prize in a lottery), the uncertainty of information in terms of probability of occurrence (e.g., the chances of winning the prize), ambiguity (in which the decision maker lacks knowledge or information about the probabilities or outcomes), or outcomes that occur sooner versus later in time (e.g., Luce & Shipley, 1962; O’Donoghue & Rabin, 1999). Note that probabilities can be known, as in decisions under risk, or unknown, as in decisions under ambiguity. To clarify, “ambiguity is epistemic uncertainty about probability created by missing information that is relevant and could be known” (Camerer & Weber, 1992, p. 330).

A key contribution to the field was Daniel Bernoulli’s (1954) concept that later was called diminishing marginal utility (i.e., that small changes to extremely large values have little impact on choice, but identical changes to small amounts are more likely to make a difference). Interestingly, mathematicians and physicists showed very little interest in Bernoulli’s 1738 work. It is so fundamental to economic theory, however, that economists translated it from Latin and published it in 1954 in a top journal, over 200 years after it was written. There continues to be substantial interest in this work long after Bernoulli’s death (Stearns, 2000). Bernoulli presented one of the first accounts of why people prefer the sure-gain option over a gamble when the expected value is the same (i.e., risk aversion). The deviation from expected value was explained by assuming that utility, a subjective function of value, is not linear with objective value but rather a concave function of it.
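To illustrate the logic, the following minimal sketch (in Python) assumes a square-root utility function, which is concave but is not Bernoulli’s original logarithmic function, and the dollar amounts are arbitrary. It shows how concavity alone makes a sure outcome more attractive than a gamble with the same expected value.

# Illustrative sketch (not Bernoulli's original log utility): a concave utility
# function makes a sure $50 more attractive than a 50% chance at $100, even
# though both options have the same expected value.
import math

def utility(x):
    return math.sqrt(x)  # assumed concave utility; any concave function would do

expected_value_gamble = 0.5 * 100 + 0.5 * 0                      # 50.0
expected_utility_gamble = 0.5 * utility(100) + 0.5 * utility(0)  # 5.0
utility_sure_thing = utility(50)                                 # about 7.07

print(expected_value_gamble, expected_utility_gamble, utility_sure_thing)
# 7.07 > 5.0, so the sure $50 is preferred despite equal expected values.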

The ideas about maximizing utility and rational choice that eventually were developed in the 20th century stemmed from intellectual debates of the 19th century. During that period, philosophers debated about policies to benefit the greater good (what type of policy would benefit the greatest number of people?) while simultaneously trying to predict economic outcomes (how does an economy filled with self-interested individuals thrive?; for a review, see Levin & Milgrom, 2004). Rational choice theory has been used to explain choices about saving and spending, crime, marriage, and childbearing, with an emphasis on individuals doing what is best for themselves and choosing the action that has the greatest perceived utility (in a cost–benefit analysis of options). Rational choice theory has been useful in that it has helped generate clear and falsifiable hypotheses, in turn advancing the field of judgment and decision making. Rational choice theory made assumptions of human rationality and maximization of utility.

The concept of rationality within this framework is expressed as internal coherence of a set of preferences (see Mellers et al., 1998, for a review). In this view, real-world deviations from consistency of revealed preferences were considered irregular or trivial and eliminated from the rational choice model (Samuelson, 1938; Suzumura, 1976). According to these types of models, individuals are assumed to be rational, that is, they choose coherently, with the chosen option reflecting utilities or personal preferences. Thus, if a person shows a preference for one particular object (or activity) when compared to another one, the utility of that object is higher than that of the rejected object, and the preference relation follows principles of coherence, such as transitivity. Transitivity refers to the coherence of preferences, such that, for example, if a person prefers bananas over apples and apples over oranges, that person would consistently choose bananas over oranges (Levin & Milgrom, 2004). The true nature of preferences is revealed by choices themselves; in classical rational choice theory, there is no underlying preference beyond what can be inferred from people’s choices.

Von Neumann and Morgenstern (1944) showed that when people’s choices obeyed these rules of coherence, they would maximize their expected utility or overall satisfaction. The theorem proving maximization of expected utility was a major achievement, the details of which are beyond the scope of this article. Expected utility is related to expected value; the latter is the result of multiplying each possible outcome by its probability of occurrence and summing across outcomes. For example, a gamble with a 50% chance of a $100 gain would be preferred over a sure gain of $40 because $100 × 0.5 = $50, which is greater than $40 × 1.0 = $40. However, expected utility theory assumes that satisfaction with outcomes is not linearly related to objective magnitude.
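The expected-value comparison in this example can be written as a short illustrative calculation; the helper function and option lists below are a sketch, not a formal model.

# Minimal sketch of the expected-value comparison described above.
def expected_value(outcomes):
    """outcomes: list of (amount, probability) pairs."""
    return sum(amount * prob for amount, prob in outcomes)

gamble = [(100, 0.5), (0, 0.5)]
sure_thing = [(40, 1.0)]

print(expected_value(gamble))      # 50.0
print(expected_value(sure_thing))  # 40.0, so the gamble has the higher expected value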

Theories that followed expected utility theory introduced the idea that probabilities are not perceived objectively (e.g., Luce & Raiffa, 1957; Markowitz, 1952; Savage, 1954). Such theories as expected utility theory and subsequently subjective expected utility theory (e.g., Keeney & Raiffa, 1976; Savage, 1954; Schoemaker, 1982; Stigler, 1950; von Neumann & Morgenstern, 1944) became well established in economics research, and the assumption of individual rationality was applied to markets and policies (e.g., Frank, 2015). According to these theories of rationality, people should choose consistently among their options, and they maximize their expected utility by choosing the option with the overall greatest value.

Also, expected utility theories continue to influence modern economic approaches, including those using econometric techniques. These techniques are used to predict human behavior based on large economic data sets applied to consumer behavior, health policies, and the social and political sciences, among others (see Pope & Sydnor, 2015, for a review). Even though these models are still considered mainstream and are the current view in many areas of economics (e.g., Frank, 2015), they do not account for key phenomena of behavior, as discussed in “Early Milestones in Psychology: Departures from Economic Theories.” To preview, in order to deal with the growing list of behavioral violations of rational choice, and the need to accept behavioral assumptions and insights from psychology, the field of behavioral economics emerged in the late 1980s. Richard Thaler, one of the founding fathers of the field and first director of the Center for Behavioral Economics and Decision Research, founded in 1989 at Cornell University, combined work from psychologists and empirical economists to attempt to account for biases and examine alternative frameworks, for which Thaler was awarded the 2017 Nobel Prize in Economic Sciences (e.g., Kahneman, 2012; Pope & Sydnor, 2015; Rabin, 1998, 2002; Rangel et al., 2008). One of the main goals of behavioral economics was to acknowledge and incorporate psychology into descriptive assumptions in order to improve economic analysis. The research that served as inspiration for this change in mindset is the focus of the next section, “Early Milestones in Psychology: Departures from Economic Theories.”

Early Milestones in Psychology: Departures from Economic Theories

From the 1950s to the 1970s, judgment and decision-making research in psychology reacted to the standards of economic, normative models and identified systematic departures from those standards (i.e., biases and fallacies). Information theory, developed in the context of radio communication around World War II, influenced the Cognitive Revolution, which drew on information theories and computer technology (for a retrospective, see Miller, 2003); after this shift, psychologists began to study the rational mind in addition to the stimulus–response experience and observable behavior. In 1954, Ward Edwards brought this topic of research to psychologists by publishing an article on the principles of microeconomic theory that directly apply to psychology, such as risky choice, subjective probability, and game theory. This paper was followed by a review of the empirical and theoretical evidence from economics from 1954 to 1960 (Edwards, 1961).

Psychologists started investigating the relationship between normative and descriptive aspects of judgment and decision making. They discovered that people’s behavior and preferences violated normative theories; when compared against those normative standards, such behaviors and preferences constitute biases and fallacies. Psychologists focused on understanding these biases and fallacies, whereas economists downplayed them (e.g., intransitive ordering of risky choices; Tversky, 1969). The study of the discrepancies between normative and descriptive models is still a recurring theme underlying contemporary judgment and decision-making research (for a review, see Keren & Wu, 2015).

One important problem that influenced two notable researchers in judgment and decision making, Daniel Kahneman and Amos Tversky, is illustrative of systematic violations of consistency and thus challenges expected utility theory: the Allais paradox. In 1953, Maurice Allais proposed a comparison between two lotteries, one pairing a sure option with a gamble (francs are converted to dollars in the following example):

A. Receive $1 million for sure.

B. A 10% chance of receiving $5 million, an 89% chance of receiving $1 million, or a 1% chance of receiving nothing.

Then he also proposed a comparison between two additional gambles:

C. An 11% chance of receiving $1 million, or an 89% chance of receiving nothing.

D. A 10% chance of receiving $5 million, or a 90% chance of receiving nothing.

The normative prediction would be that if people choose A in the first lottery (representing risk aversion), then they should also choose C in the second lottery, in which there is a greater chance of winning, and thus they would show consistent preferences for risk (Allais, 1953). Alternatively, the same people who choose option B should also choose D, showing consistent risk-seeking preferences. However, this is not the case: people tend to be risk averse (choose A) in the first lottery and risk seeking (choose D) in the second; that is, they make choices that are not consistent. These violations of consistency are violations of rationality.
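The inconsistency can be made concrete with a small, hedged sketch: the utility numbers below are arbitrary assumptions, but the bookkeeping shows that, under expected utility, the difference between options A and B is identical to the difference between options C and D, so preferring A and D together cannot be rationalized.

# Sketch of the expected-utility bookkeeping behind the Allais paradox.
# u is an arbitrary assumed utility assignment (in "utils"); the argument
# holds for any utility function, not just this one.
u = {0: 0.0, 1_000_000: 10.0, 5_000_000: 16.0}

EU_A = 1.00 * u[1_000_000]
EU_B = 0.10 * u[5_000_000] + 0.89 * u[1_000_000] + 0.01 * u[0]
EU_C = 0.11 * u[1_000_000] + 0.89 * u[0]
EU_D = 0.10 * u[5_000_000] + 0.90 * u[0]

# C and D are A and B with the shared 89% chance of $1 million replaced by an
# 89% chance of nothing, so EU_A - EU_B equals EU_C - EU_D for any utility
# function, and preferring A over B requires preferring C over D.
print(abs((EU_A - EU_B) - (EU_C - EU_D)) < 1e-9)  # True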

Inconsistent preferences such as those illustrated in the Allais paradox could be explained by limitations of human cognition. Herbert Simon (1955) proposed the concept of “bounded rationality” to accommodate such limitations. In particular, Simon’s (1955, 1957) satisficing hypothesis was based on the need to deal with unrealistic expectations of maximization. According to Simon, individuals have cognitive limitations that should be taken into account when making judgments and decisions. Some of these limitations are related to memory capacity, attention span, and limitations of time, all of which constitute the framework Simon referred to as “bounded rationality” (Simon, 1955). Simon proposed that people tend to find solutions that are good enough instead of optimizing (i.e., finding the best possible solution), because it is not reasonable for people to exhaustively compute their expected utility (e.g., they choose the first or second car that meets a satisfactory criterion instead of researching all available cars on the market). This tendency is easily observed when there are multiple attributes, which makes the computational process more difficult, and when there are greater benefits in minimizing the time and cognitive resources needed to produce a satisfactory result. In other words, boundedly rational decision makers satisfice instead of optimizing their choices (Simon, 1956, 1990).
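A minimal sketch of the contrast between satisficing and optimizing follows; the car options, satisfaction scores, and aspiration level are hypothetical illustrations rather than part of Simon’s formal treatment.

# Hypothetical illustration of satisficing versus optimizing over car options.
cars = [
    {"name": "car A", "satisfaction": 6},
    {"name": "car B", "satisfaction": 8},  # first "good enough" option
    {"name": "car C", "satisfaction": 9},  # the true optimum, never examined
    {"name": "car D", "satisfaction": 7},
]

def satisfice(options, aspiration_level):
    # Examine options in order and stop at the first one that is good enough.
    for option in options:
        if option["satisfaction"] >= aspiration_level:
            return option
    return None  # if nothing meets the aspiration level, it may be lowered

def optimize(options):
    # Exhaustively compare every option (what full maximization would require).
    return max(options, key=lambda o: o["satisfaction"])

print(satisfice(cars, aspiration_level=8)["name"])  # car B
print(optimize(cars)["name"])                       # car C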

In 1961, another paradox was introduced by Daniel Ellsberg, who worked for the RAND Corporation on military topics (and who later released the Pentagon Papers). Unlike Allais, who tested decisions under risk, Ellsberg challenged the assumptions for decisions under ambiguity, in which the exact probabilities of the outcomes cannot be precisely determined. This is a classic ambiguity problem: There is an urn with 90 balls (30 red balls and 60 balls that are black or yellow, with the latter in unknown proportion).

In round 1, a prize of $100 is offered for a correct guess of which color would be drawn at random from the urn: (a) red or (b) black.

In round 2, a prize of $100 is offered for a correct guess of which color would be drawn at random from the same urn, but with different options: (c) red or yellow or (d) black or yellow.

The most common pattern of response is to prefer to bet on red (option A) in the first round and to prefer to bet on black or yellow (option D) in the second round. This finding is contrary to normative predictions: people who bet on the known probability in the first round (option A, red, for which there is a one-third chance of winning because they know how many red balls there are) reveal a belief that red is more likely than black and, by the same reasoning, should choose option C (red or yellow) in the second round. In making this choice, people appear to ignore the fact that the probability of drawing a yellow ball is identical in both options of the second bet, and thus the remaining probabilities match the first round of bets (i.e., the choice again comes down to red versus black). However, according to Ellsberg (1961), people prefer to avoid the ambiguity of unknown probabilities and prefer the options for which they know the probability of each outcome.
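A short sketch can make the normative argument explicit. Assuming only that the decision maker holds some fixed belief about the number of black balls (anywhere from 0 to 60), the code below shows that no such belief favors red in the first round and black-or-yellow in the second.

# Sketch of the Ellsberg urn: 30 red balls and 60 black-or-yellow balls in
# unknown proportion. For every possible number of black balls, compare the
# chance of winning each bet in the two rounds.
for black in range(0, 61):
    yellow = 60 - black
    p_red = 30 / 90
    p_black = black / 90
    p_red_or_yellow = (30 + yellow) / 90
    p_black_or_yellow = (black + yellow) / 90  # always 60/90, i.e., two-thirds
    # Red beats black exactly when black < 30, and red-or-yellow beats
    # black-or-yellow under exactly the same condition, so the modal pattern
    # (red in round 1, black-or-yellow in round 2) is never justified by a
    # single fixed belief about the urn.
    assert (p_red > p_black) == (p_red_or_yellow > p_black_or_yellow)
print("No single belief about the urn justifies the modal choice pattern.")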

A few years later, Edwards et al. (1963) wrote an important paper introducing Bayesian reasoning in probability assessment to psychological researchers. Edwards believed that humans behaved as if they had Bayes’s rule ingrained in their minds. Edwards’s work inspired Tversky and Kahneman to generate new hypotheses and explore new topics with experimentation that ultimately led to the questioning of normative standards. Thus, although Edwards thought people’s choices approximated those predicted by classical economic theory, this conclusion was rejected by the work of Tversky and Kahneman, for which Kahneman later was awarded the 2002 Nobel Prize in Economic Sciences (Tversky was deceased by the time the prize was awarded). They are recognized as some of the founders of behavioral economics (Lewis, 2016; Smith, 2001).
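For readers unfamiliar with the normative standard at issue, the following is a minimal sketch of Bayes’s rule for two hypotheses; the prior and likelihoods are illustrative numbers, not values from Edwards et al. (1963).

# Minimal sketch of Bayes's rule with illustrative numbers.
def posterior(prior_h, likelihood_given_h, likelihood_given_not_h):
    """P(H | evidence) for one piece of evidence and two mutually exclusive hypotheses."""
    numerator = prior_h * likelihood_given_h
    denominator = numerator + (1 - prior_h) * likelihood_given_not_h
    return numerator / denominator

# A hypothesis with a 0.5 prior and evidence three times as likely under H:
print(posterior(0.5, 0.75, 0.25))  # 0.75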

Not all decisions are simple choices between lotteries. For example, when deciding what car to buy, there are several factors besides cost that should be considered, such as insurance and average miles traveled per gallon of gas. To deal with this situation, multi-attribute expected utility theory was developed alongside Tversky and Kahneman’s work. According to this theory, utility can be determined for each attribute and ordered by preference, such that the downside of one attribute (e.g., cost) can be compensated (traded off) by the benefits of another attribute (e.g., average miles per gallon). The theory combined models of measurement and scaling with economic assessment of utility through weight assignment to each attribute to account for overall utility (e.g., Fishburn, 1967; see also Becker & McClintock, 1967). Nevertheless, most day-to-day situations are complex and require a rather sophisticated computation of overall utility, which is likely to be beyond the average person’s numerical and computational ability (for a review, see Reyna et al., 2009).
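A weighted additive model is one common way such a computation can be sketched; the attributes, weights, and scores below are hypothetical and serve only to show how a weakness on one attribute can be traded off against a strength on another.

# Hypothetical multi-attribute utility: each attribute gets a weight and a 0-1 score,
# so a weakness on one attribute (cost) can be offset by a strength on another.
weights = {"cost": 0.5, "fuel_economy": 0.3, "insurance": 0.2}

cars = {
    "car A": {"cost": 0.9, "fuel_economy": 0.4, "insurance": 0.7},
    "car B": {"cost": 0.6, "fuel_economy": 0.9, "insurance": 0.8},
}

def multiattribute_utility(scores):
    return sum(weights[attr] * scores[attr] for attr in weights)

for name, scores in cars.items():
    print(name, round(multiattribute_utility(scores), 2))
# car A 0.71, car B 0.73: car B's fuel economy compensates for its higher cost.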

Turning Point: Heuristics, Biases, and Framing Effects

From the 1970s to the 1990s, psychological research continued to pursue evidence against normative models, following several governmental incentives to promote the use of evidence-based outcomes in developing best practices. Daniel Kahneman and Amos Tversky took center stage with descriptive theories and discovered a host of deviations from normative models, called “biases” and “fallacies” (for reviews, see Gilovich et al., 2002; Lewis, 2016). They also identified intuitive strategies—heuristics or mental shortcuts—that allow people to make judgments and decisions quickly, which often leads to the aforementioned systematic biases and fallacies (Tversky & Kahneman, 1974). They also reported research on framing effects, which are well-established biases related to decisions that involve risk (Kahneman & Tversky, 1979).

Heuristics and Biases

In the early 1970s, Amos Tversky proposed the elimination-by-aspects model, which describes a psychological strategy for making choices given some specified features, such as cost (Tversky, 1972). The process is to sequentially identify options that do not meet predefined criteria (i.e., desirable features) and eliminate them until only one alternative remains available for choice. For example, among five cars available for purchase, perhaps only three meet the criterion of high average miles per gallon of gas, and thus the other two cars are eliminated. Next, one of the three remaining models has a very high insurance premium, which is undesirable and leads to its elimination from the option set. Finally, the less expensive of the two cars left is the final choice. Note that this strategy does not maximize across the multiple attributes because options are eliminated, even though the magnitudes of good attributes might offset the magnitudes of bad attributes. Elimination-by-aspects is a plausible psychological strategy and an elegant model; it was another nail in the coffin of rational choice theories that assumed utility maximization.
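The sequential screening process can be sketched as follows; the cars, cutoffs, and ordering of aspects are hypothetical, and the sketch omits the probabilistic aspect-selection mechanism of Tversky’s (1972) formal model.

# Hypothetical elimination-by-aspects: screen options on one aspect at a time.
cars = [
    {"name": "car 1", "mpg": 35, "insurance": 900,  "price": 22000},
    {"name": "car 2", "mpg": 22, "insurance": 800,  "price": 18000},
    {"name": "car 3", "mpg": 40, "insurance": 2500, "price": 25000},
    {"name": "car 4", "mpg": 38, "insurance": 950,  "price": 21000},
    {"name": "car 5", "mpg": 20, "insurance": 700,  "price": 17000},
]

# Aspects are applied in order of importance; options failing an aspect are dropped.
aspects = [
    lambda car: car["mpg"] >= 30,          # keep only fuel-efficient cars (3 remain)
    lambda car: car["insurance"] <= 1500,  # drop the car with the costly policy (2 remain)
]

remaining = cars
for passes in aspects:
    remaining = [car for car in remaining if passes(car)]

choice = min(remaining, key=lambda car: car["price"])  # cheapest of what is left
print(choice["name"])  # car 4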

Research on heuristics and biases in judgment under uncertainty was a direct reaction to Simon’s (1955) idea of bounded rationality. According to Tversky and Kahneman (Kahneman, 2003; Kahneman & Tversky, 1972; Tversky & Kahneman, 1971, 1974), people’s judgments violate principles of coherence. Three basic heuristics—representativeness, availability, and anchoring and adjustment—were introduced as evidence of how people tend to process information in a highly economical and effective way, even though they are subject to biases (Kahneman & Tversky, 1972; Tversky & Kahneman, 1974).

The first heuristic, representativeness, is when people judge probability by similarity. Specifically, when identifying whether an object is part of a category, they assess how similar the object is to the typical member of that category (Baron, 2012; Kahneman & Tversky, 1972, 1973). For example, in estimating the likelihood or frequency of event A compared to the conjunction of events A and B, the representativeness heuristic leads to what is called the conjunction fallacy (Tversky & Kahneman, 1983). Consider the classic Linda problem:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.

In Tversky and Kahneman’s (1983) study, participants were given two options and were asked which is most likely; 85% of participants ranked the option “Linda is a bank teller and is an active feminist” (events A and B) above the option “Linda is a bank teller” (event A). This result demonstrates how human mental operations do not always correspond to the laws of probability (Tversky & Kahneman, 1983). The probability of two events occurring together (in “conjunction”) is always less than or equal to the probability of either one occurring alone: P(A ∩ B) ≤ P(A) and P(A ∩ B) ≤ P(B). The observed ranking is a conjunction fallacy because the probability that Linda is either a bank teller, P(A), or an active feminist, P(B), should be judged as more probable than (or equally probable as) the probability that she is both, P(A ∩ B). The description of Linda, however, was more representative of a stereotype (Linda was deeply concerned with issues of discrimination and social justice), and therefore people judged the conjunction as more probable because it was more representative than the unrepresentative class of bank tellers.

Examples of inconsistent joint probability judgments are also observed as disjunction fallacies (Bar-Hillel, 1973; Tversky & Shafir, 1992). A disjunction fallacy occurs when the disjunction of two events, A or B, is judged as less probable than at least one of its components individually. However, the disjunction of two events is at least as likely as either of the events occurring individually: P(A ∪ B) ≥ P(A) and P(A ∪ B) ≥ P(B). For example, the chance that Linda is either a bank teller or a feminist (or both) should be greater than the chance that she is just one of those things. On average, however, people tend to choose the single event that better fits their stereotype instead of the disjunctive option (e.g., Bar-Hillel & Neter, 1993; Tversky & Koehler, 1994).
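These two rules can be checked with a small numerical sketch; the probabilities assigned to the Linda events below are assumed for illustration, not data.

# Illustrative check of the conjunction and disjunction rules (numbers are assumed).
p_teller = 0.05      # P(A): Linda is a bank teller
p_feminist = 0.60    # P(B): Linda is an active feminist
p_both = 0.04        # P(A and B), which can never exceed either component
p_either = p_teller + p_feminist - p_both  # P(A or B) by the addition rule

assert p_both <= min(p_teller, p_feminist)    # conjunction rule
assert p_either >= max(p_teller, p_feminist)  # disjunction rule
print(p_both, p_either)  # 0.04 0.61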

A series of other biases was also documented, such as insensitivity to prior probabilities; insensitivity to predictability (i.e., making a prediction based on the representativeness of a scenario description, not the reliability of the evidence); illusions of validity (i.e., showing great confidence in a prediction based on the good fit of the description and the available options, even when people are aware of the factors that limit the accuracy of the prediction); and belief in the law of small numbers (i.e., that long runs and streaks cannot be random even in small samples of behavior) (e.g., Gilovich et al., 1985; Kahneman & Tversky, 1972, 1973; Tversky & Kahneman, 1971, 1974).

The second heuristic, availability, refers to judging the frequency of a class or the probability of an event by the ease with which instances or similar occurrences are remembered or come to mind (Kahneman & Tversky, 1972; Tversky & Kahneman, 1974). For example, one may assess the risk of a hurricane based on memory for recent events or may estimate the chance of a car accident as a result of driving under the influence of alcohol by recalling such events among one’s acquaintances. In this case, the availability of information (easy-to-retrieve memories) can create biases because judgments based on recollections of specific events often are affected by factors other than frequency and probability. Some of these biases are a result of the retrievability of instances due to familiarity (e.g., how many times one has driven under the influence) or the salience of an event (e.g., the impact of being in a hurricane zone during the storm surge), or the effectiveness of a search set, which is influenced by cues such as the first letter of a word or the retrieval context in which that information appears. They can also result from how easily one can imagine the events, such as contingencies (e.g., the risk involved in not heeding a hurricane evacuation warning is evaluated by imagining contingencies such as flooding), or even from illusory correlation, which is the overestimation of the likelihood that two events will co-occur (e.g., believing that small cities have generally nicer people than larger cities without any factual basis in objective probabilities; Chapman & Chapman, 1967; Tversky & Kahneman, 1973, 1974).

Another heuristic is anchoring and adjustment, in which people tend to estimate a value starting from an initial value and then adjust. However, the adjustment is usually insufficient (e.g., Slovic & Lichtenstein, 1971; Tversky & Kahneman, 1974). An example of insufficient adjustment can be illustrated by the attempt to quickly estimate the product of two computations: (A) 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1 and (B) 1 × 2 × 3 × 4 × 5 × 6 × 7 × 8. In both cases, the initial values (i.e., 8 and 1) serve as anchors, and quick estimation of the result leads to insufficient adjustment: the median estimates were 2,250 and 512, respectively, even though the correct answer is identical, 40,320.
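The arithmetic behind this example is easy to verify; the short sketch below simply multiplies out both orderings and can be compared with the median estimates reported above.

# The two orderings in the anchoring example have the same product.
from functools import reduce

descending = reduce(lambda x, y: x * y, [8, 7, 6, 5, 4, 3, 2, 1])
ascending = reduce(lambda x, y: x * y, [1, 2, 3, 4, 5, 6, 7, 8])

print(descending, ascending)  # 40320 40320, versus median estimates of 2,250 and 512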

Other heuristics and biases were identified later (the following examples represent influential effects, though they are not presented in chronological order). One such example is the confirmation bias, in which people seek out and give more weight to evidence that is consistent with their hypotheses while failing to test disconfirming hypotheses or ignoring disconfirming evidence (e.g., if I favor candidate A in an upcoming election, I will seek out and remember favorable press on candidate A while not seeking out unfavorable news about candidate A that would undermine my initial impression of the candidate; Wason, 1960). Klayman and Ha (1987) pointed out that seeking only to confirm hypotheses could be a defensible strategy under specific conditions. Hindsight bias captures the idea that people tend to believe that an event was more predictable than it was prior to the event occurring (e.g., “I always knew my team would win”; Fischhoff, 1975; Fischhoff & Beyth, 1975; Klein et al., 2017; Roese & Vohs, 2012). There is also the overconfidence effect, in which people tend to believe that their own abilities, knowledge, and/or judgments are greater than they actually are (Brenner et al., 1996; Dunning et al., 1990). This list is not exhaustive but is meant to provide examples of influential judgment heuristics that shaped and continue to shape the field of judgment and decision making (see also Gilovich et al., 2002).

Amos Tversky and Daniel Kahneman argued that heuristics were adaptive but also produced biases and fallacies. Gerd Gigerenzer and his colleagues challenged the claim that biases and fallacies were errors and, in that sense, argued that heuristics are adaptive (e.g., Gigerenzer, 1991, 1996; Gigerenzer & Gaissmaier, 2011; Gigerenzer et al., 1999). These researchers suggest that heuristics must have been favored by evolution (although the fact that a behavior occurs does not make it a product of natural selection; that is a fallacy). In addition, evolutionary arguments are post hoc and thus difficult to test scientifically (but see Cosmides & Tooby, 1996). Gigerenzer and Hoffrage (1995) claimed that heuristics do not necessarily lead to biases if people are asked questions in terms of frequencies (instead of probabilities), which they asserted to be more “natural.” However, evidence disentangling multiple causes of performance has shown that frequency formats do not improve performance (Barbey & Sloman, 2007; Cuite et al., 2008; Evans et al., 2000; Koehler & Macchi, 2004; Reyna, 2004; Wolfe & Reyna, 2010). Other “fast and frugal” heuristics (i.e., heuristics that do not take much processing time or many cognitive resources), such as the recognition heuristic and the gaze heuristic, have also been studied (Gigerenzer & Goldstein, 1996; Goldstein & Gigerenzer, 2002). Researchers point to the need to specify the environmental circumstances that bound the accurate use of heuristics (e.g., Dougherty et al., 2008; Hogarth & Karelaia, 2006; Kahneman & Tversky, 1996; Newell & Shanks, 2004).

Framing Effects

Unlike most decisions made based on heuristics, which involve judgments under uncertainty, decisions under risk involve knowledge of the probabilities (i.e., a gamble) associated with the available outcomes. When facing a choice between a sure win (e.g., $50 for sure) and a gamble (e.g., a 50% chance to win $100), people are often risk averse and prefer the sure gain to the gamble when the expected value is the same (even if they prefer a gamble when its expected value is higher). When faced with losses, however, they show a preference for the risky gamble (e.g., a 50% chance to lose $100) over the certain loss (e.g., losing $50 for sure); that is, they are more risk seeking (Tversky & Kahneman, 1986, 1991; see also Steiger & Kühberger, 2018). (Note that risk-taking patterns change with very small probabilities; e.g., Kahneman & Tversky, 1979.)

To explain the gain–loss change in response pattern, Kahneman and Tversky (1979) showed that irrational biases occur even when the expected value is the same in all four options (i.e., $50 for both gains and losses), a phenomenon called the framing effect. The framing effect is the display of conflicting risk preferences despite quantitatively equivalent options. Consider the classic example of the dread-disease problem (Tversky & Kahneman, 1981):

Imagine that the United States is preparing for the outbreak of an unusual disease that is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows:

(A) If program A is adopted, 200 people will be saved.

(B) If program B is adopted, there is a one-third probability that 600 people will be saved, and a two-thirds probability that no people will be saved.

Alternatively,

(C) If program C is adopted, 400 people will die.

(D) If program D is adopted, there is a one-third probability that nobody will die, and a two-thirds probability that 600 people will die.

In this example, people choose between two different programs to combat the disease depending on the condition to which they were assigned. The expected value is the same across all four options (i.e., 200 would live and 400 would die), but preferences change across the gain and loss versions of the problem (i.e., in the gain frame, the majority choose program A, the risk-averse option, whereas in the loss frame, the majority choose program D, the risk-seeking option). Framing effects have been widely investigated, and preferences seem to replicate across multiple contexts and cultures (e.g., Edwards et al., 2001; Gallagher & Updegraff, 2011; Kühberger, 1995, 1998; Kühberger & Tanner, 2010; Levin et al., 1998; McGettigan et al., 1999; van Schie & van der Pligt, 1995). Yet some researchers suggest that individuals are more likely to produce the traditional framing effect in situations that are simply described to them as hypothetical scenarios rather than in situations learned from experience (e.g., Barron & Erev, 2003; Estes, 1976; Hadar & Fox, 2009; Hertwig & Erev, 2009).

This preference reversal (i.e., risk aversion for gains and risk seeking for losses) was predicted by a highly influential descriptive theory, prospect theory (Kahneman & Tversky, 1979), and later by cumulative prospect theory (Tversky & Kahneman, 1992). Prospect theory is an attempt to explain the process by which people make choices between different gambles (or prospects) associated with different probabilities, using a psychological value function for outcomes and a psychological weighting function for probabilities. The value function differentiates gains and losses based on deviations from a reference point and is assumed to be concave for gains and convex (and steeper) for losses. In the nonlinear weighting function for probabilities, small probabilities tend to be overweighted relative to their objective magnitudes, and large probabilities tend to be underweighted. Prospect theory also influenced subdisciplines of economics that emerged in this period: behavioral game theory (Camerer, 1990), behavioral decision theory (Einhorn & Hogarth, 1981), and behavioral finance theory (Thaler, 1980, 1993).
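The shapes of these two functions can be sketched using the functional forms and median parameter estimates commonly associated with Tversky and Kahneman (1992); the code below handles only single-outcome prospects and a single weighting function, so it is an illustration of the general idea rather than a full implementation of cumulative prospect theory.

# Sketch of prospect-theory-style value and weighting functions (single-outcome
# prospects only). Parameters are the often-cited median estimates from
# Tversky and Kahneman (1992).
ALPHA, BETA, LAMBDA, GAMMA = 0.88, 0.88, 2.25, 0.61

def value(x):
    # Concave for gains, convex and steeper for losses (loss aversion).
    return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** BETA)

def weight(p):
    # Overweights small probabilities and underweights large ones.
    return p ** GAMMA / ((p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA))

print(round(value(100), 1), round(value(-100), 1))      # about 57.5 and -129.5
print(round(weight(0.01), 3), round(weight(0.99), 3))   # about 0.055 and 0.912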

Modern Era: Rationality and Intuition

After the 1990s, several approaches were used to distinguish two processes responsible for cognitive function in judgment and decision making: one process based on rationality, which is largely about making consistent choices, and another that is a result of intuition or affect, often leading to biases (see Kahneman, 2003, 2011; Stanovich & West, 2000, for a detailed overview). Dual-process models incorporate rationality in addition to intuition (and sometimes affect or emotion) as two sides of a coin. In a simplified way, dual-process approaches (which were prevalent in several subdisciplines of psychology) recognize the influence of both rational thought and irrational intuition on judgment and decision making. For example, if the Linda problem is revisited, one would likely make the wrong judgment again because cognition would most likely rely on heuristic and intuitive processes (system 1), even though a rational, deliberative process (system 2) would most likely yield the correct answer.

One of the psychologists to discuss a conflict between these processes was Seymour Epstein, building directly on Freudian dualism as well as Cartesian dualism (between the immaterial mind/soul and the material body). For Epstein (1994) and his cognitive-experiential self-theory, the two modes of information processing are distinct. That is, the intuition–rationality distinction was based on Freud’s psychodynamic distinction between primary and secondary processes (i.e., pleasure and control systems, respectively). Even though Epstein was not a decision researcher, his contribution to the field was instrumental to the systematic understanding of individual differences in these processes (see also Reyna & Brainerd, 2008; Stanovich & West, 2008).

Several other researchers have attempted to describe these processes and, despite differences, the common features are that the intuitive or emotional process, often called system 1, is associative, experiential, fast, and impulsive, whereas the rational process, or system 2, is analytical, deliberative, rule-based, slow, and cognitively effortful, and is held responsible for well-thought-out judgments and putatively advanced choices (Epstein, 1994; Epstein et al., 1996; Evans & Stanovich, 2013; Kahneman, 2003, 2011; Sloman, 1996; Stanovich, 1999; Stanovich & West, 2000).

According to these theories, people can rely on one process more than the other when making decisions. Susceptibility to framing effects, for example, should be a result of high intuitive thinking and low rationality, because they occur when options of the same objective value are evaluated differently (e.g., Kahneman, 2003; Porcelli & Delgado, 2009). However, there is relevant empirical evidence contesting standard dual-process theory (e.g., Reyna & Brainerd, 2008; Shiloh et al., 2002), suggesting that even a seemingly all-inclusive rational–irrational dualism needs updating.

One version of the dual-process approach assumes that intuition (or affect) is the default, although rationality can override intuition (Epstein et al., 1996; see also Kahneman, 2003, 2011). Kahneman and Frederick (2002, 2005) tested the hypothesis that the overriding function of rationality is part of a monitoring feature that allows expressions of intuition but intervenes when necessary (see also Kahneman, 2003, 2011, for a review). Frederick (2005) introduced the Cognitive Reflection Test to assess individual differences in these processes. People answer questions in which the immediate, impulsive guess is incorrect, and thus they have to inhibit the erroneous thought and check for the correct response. For example, people are asked to indicate the cost of a ball in the following scenario: “A bat and a ball cost $1.10 in total. The bat costs $1 more than the ball.” Most people (more than 50%) tend to answer 10 cents because subtracting $1 from $1.10 is the first response that occurs to them. However, on reflection, the correct answer turns out to be 5 cents.
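The item reduces to simple algebra, as the short sketch below shows.

# The bat-and-ball item as algebra: ball + bat = 1.10 and bat = ball + 1.00,
# so 2 * ball + 1.00 = 1.10 and the ball costs 5 cents, not the intuitive 10 cents.
total = 1.10
difference = 1.00
ball = (total - difference) / 2
bat = ball + difference
print(round(ball, 2), round(bat, 2))  # 0.05 1.05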

A slightly different approach to this dualism is the affect heuristic proposed by Slovic and colleagues (e.g., Finucane et al., 2000). They point to an important role for feelings (or affective responses that occur fast), not cognition, as a basis for judgment and decision processes (Slovic et al., 2002, 2005). According to this perspective, how people feel about a topic (i.e., their subjective feeling of risk) is what allows them to construct preferences (e.g., between wind energy and nuclear power plants). Both negative and positive affect are argued to play a role in the overall evaluation of alternatives (Loewenstein et al., 2001; Weber & Johnson, 2009).

Other researchers have qualified their view of dual-system approaches by replacing the term “systems” with “types,” to avoid oversimplification of the processes underlying decision making in a two-system view (Evans, 2008, 2009, 2010, 2011; Stanovich, 2009, 2010; Stanovich et al., 2011). In this view, type 1 processes are intuitive, fast, and automatic. The defining attribute is that type 1 processes are not limited by cognitive capacity, in contrast to type 2 processes, which are slow because of working memory limitations. Type 2 processes are also associated with executive functions (for counterevidence, see Reyna & Brainerd, 1995). Finally, type 3 is a reflective mind responsible for monitoring and inhibiting conflicting responses between types 1 and 2, or even overriding type 1 responses as needed (Evans, 2011). More generally, according to Barrouillet (2011, p. 83), “the developmental predictions that can be drawn from this [dual-process] theory are contradicted by facts,” which bears on the validity of theories about adults (certainly about which process is less vs. more advanced). Keren and Schul (2009) also argued that most standard dual-process accounts had ill-defined theoretical structures for the two systems and were not formulated as testable hypotheses.

In another descriptive theory that went beyond prior theories to make new predictions for judgment and decision making, Valerie Reyna and Charles Brainerd (1995, 2011) proposed a distinction based on how information is mentally represented, that is, gist or verbatim representations and associated processes, as well as social values, reward sensitivity (sensation seeking), and inhibition (Reyna et al., 2015). The theory’s description of mental representation distinguishes how people represent information along a verbatim-to-gist continuum (i.e., from the most precise and literal to the simplest meaningful distinction between options). Verbatim representations support rote analytical processes (e.g., 20% risk = 2 × 10% risk). Gist representations support intuitive processing that is imprecise (i.e., fuzzy) but also supports insightful, advanced, and meaningful interpretations of information (e.g., some as opposed to no risk, or, if needed, low as opposed to high risk). This gist process is considered a more advanced form of processing because it incorporates factors that affect the understanding of information, such as background knowledge, life experience, culture, education, and emotional import (e.g., whether a patient should feel worried or relieved about a 20% risk). Gist and verbatim processing occur in parallel as a means of representing information that is relevant to the decision process, in contrast to standard default-interventionist approaches to dual processes (e.g., Brust-Renck et al., 2016; Reyna, 2012; Reyna & Adam, 2003; Reyna & Farley, 2006).

Most adults have a fuzzy preference to rely on gist-based processes to make decisions, relying on the bottom-line, qualitative interpretation of the meaning of information (e.g., the difference between some versus none, or more versus less, such as the categorical difference between 200 saved and saving no one) rather than a rote, meaningless approach (Broniatowski & Reyna, 2018). Thus, in the dread-disease problem, choices would be a result of the simplest qualitative distinctions. Information is encoded from the two options based on gist distinctions, such as “saving some people” (i.e., 200) versus “possibly saving some people or saving none” (i.e., one-third of 600 or two-thirds of 0; Kühberger & Tanner, 2010; Reyna & Brainerd, 1991, 2011). According to this theory, a fuzzy preference to rely on gist representations of the options helps people apply their values to that gist (values such as “saving lives is good”). This can explain the choice of the sure option in the gain frame because of adults’ preference for “saving some lives” compared to “saving none.” In the loss frame, people are given the choice between the safe option, “If program C is adopted, 400 people will die,” and the risky option, “If program D is adopted, there is a one-third probability that nobody will die, and a two-thirds probability that 600 people will die.” Given these alternatives, people tend to opt for the risky option because they derive the gist of the options for program C versus program D, and they prefer “none dying” (i.e., one-third of 0) to “some dying” (i.e., 400). Hence, these simple gist distinctions produce risk aversion for gains and risk seeking for losses in the dread-disease problem and many similar risky decisions (Reyna, 2012; Reyna et al., 2014).

This research also rules out alternative explanations for gain–loss framing effects, such as prospect theory (Kahneman & Tversky, 1979; Tversky & Kahneman, 1992). According to Kühberger and Tanner (2010), one of several critical tests of prospect theory and fuzzy-trace theory is to show the question without the “zero complement” of the risky option (i.e., two-thirds of 0 surviving in the gain frame, and one-third of 0 dying in the loss frame), for which the proportion of people that preferred the risky option in the gain frame (52%) and in the loss frame (48%) is approximately the same. This result disconfirms prospect theory because removing zero should have no effect on framing differences. The authors showed the classical effect when the “zero complement” was present, namely, that people preferred the risky option 30% of the time in the gain frame and 61% of the time in the loss frame, as predicted by both theoretical perspectives (see Broniatowski & Reyna, 2018; Reyna et al., 2014).

Other approaches, such as information leakage, explain attribute framing effects, but not risky-choice framing effects (Sher & McKenzie, 2006). Attribute framing is when a single dimension is expressed positively (e.g., 80% correct on a test) as opposed to negatively (e.g., 20% wrong on a test). Speakers’ choice of positive wording conveys additional information about valence, such that the test is perceived more positively when expressed as 80% correct than as 20% wrong. (For an elegant discussion of the differences among attribute framing, risky-choice framing, and goal framing, see Keren, 2012.) Consistent with the assumption of the information-leakage account that people respond similarly when information is perceived to be equivalent, van Buiten and Keren (2009) found that there were no reversals in risk preference when all participants (speakers and listeners) were provided with both frames and told that both sets of options were mathematically equivalent. Therefore, separate but related theories are needed to account for both attribute framing and risky-choice framing effects (but see Gamliel & Kreiner, 2020).

Fuzzy-trace theory also predicts individual differences across adults and developmental differences across the lifespan (Reyna & Brainerd, 2011). For example, individuals with certain kinds of autism are higher in verbatim processing and lower in gist processing. Therefore, fuzzy-trace theory makes the surprising prediction that they will be technically more rational because they are less likely to demonstrate gist-based biases such as framing effects and the conjunction fallacy; these predictions were supported. The theory also predicts that framing effects and other biases become greater from childhood to adulthood, as information processing becomes more gist-based (also observed; Reyna & Farley, 2006; see also Paulsen et al., 2012). These studies remove the burden of symbolic and formal mathematical processing by using piles of prizes (e.g., stickers or toys) as outcomes and shaded areas of spinners to convey probability (Reyna & Ellis, 1994). Research on fuzzy-trace theory has further shown that prospect theory and utility theories cannot explain framing and other classic effects, and that novel phenomena of memory, judgment, and decision making can be explained with a small set of testable assumptions (Corbin et al., 2015). These ideas have been applied with the goal of improving decision making in law, medicine, and public health (Blalock & Reyna, 2016; Reyna et al., 2016).

Conclusion: What the Future Holds

Historically, the study of judgment and decision making in the field of psychology has centered on questions related to evaluation of options, preferences, and choice, focusing on deviations from economic, normative behavior and proposing descriptive models of behavior that account for these deviations. Current psychological models increasingly emphasize process-level explanations and behavioral predictions rather than mere demonstrations of biases and fallacies. Recently, neuroeconomics has emerged as an interdisciplinary field at the intersection of psychology, economics, and the growing field of neuroscience (Loewenstein et al., 2008). Neuroeconomics builds on data and theory from behavioral economics and decision research to further understanding of the brain.

Neuroscience findings, in turn, can further our understanding of current models of judgment and decision making (Reyna et al., 2012). Neuroscientists use tools such as functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) to lend insight into human judgment and decision making that could not easily be investigated using solely behavioral paradigms—for example, findings from neuroscience studies suggest that decision making involves so-called “default mode” (neural) networks (DMN; internally oriented processing as opposed to engagement with external tasks) along with task-engaged controlled processes (Li et al., 2017; Loewenstein et al., 2008). Recent findings from an extensive meta-analysis on the DMN and the subjective value network suggest that there is overlap in the functional connectivity of these neural networks, specifically in the central ventromedial prefrontal cortex (cVMPFC) and the dorsal posterior cingulate cortex (Acikalin et al., 2017). These findings are consistent with the current understanding of the VMPFC as an area involved in subjective value assessment, and it has been shown that subjective value is positively associated with VMPFC activation (Levy & Glimcher, 2012).

Neuroscience tools can be used to look at neural activation during different decision strategies and to observe activity in the brain after winning versus losing a gamble (Venkatraman et al., 2009; Xue et al., 2011). Neuroscience can also be used to understand the neural circuitry of systematic inconsistencies and errors that have been established in the judgment and decision-making literature. For example, a substantial amount of work has been devoted to examining the neural underpinnings of framing effects (e.g., De Martino et al., 2006; Li et al., 2017; Reyna et al., 2018; Roiser et al., 2009; Weller et al., 2007; Zheng et al., 2010). Several studies have shown that the amygdala is activated when people are making framing-consistent choices (i.e., choosing the sure gain or risky loss; De Martino et al., 2006; Li et al., 2017; Roiser et al., 2009). Findings from a recent meta-analysis of neuroimaging studies of framing suggest that activation during framing-consistent choices resembles activation that closely corresponds to the DMN, whereas the pattern of activation during framing-inconsistent choices (i.e., choosing the risky gain or sure loss) most closely corresponds to areas activated during task engagement (Li et al., 2017). Note that these results do not simply suggest that frame-consistent choices require limited effort or engagement, and for that reason, they are associated with the neural profiles of the DMN. Lack of effort would merely predict random or indifferent responses. Instead, critical tests indicate that systematic framing biases are attributable to gist representations (e.g., Reyna et al., 2014), which might be reflected in coactivation between DMN and PFC, and the latter can also reflect inhibition of noticed biases (see Broniatowski & Reyna, 2018; McCormick et al., 2019; Spreng & Turner, 2019).

Developmental neuroscience has also used behavioral paradigms to examine neural activity during judgments and decisions involving risk in adolescence, a period of development that involves a heightened amount of risky decision making in real life (Casey et al., 2016; Chein et al., 2011; Reyna, 2018; Steinberg, 2008). For example, using a simulated driving task, Chein and colleagues (2011) found that adolescents take more risks and have greater activation in reward-related areas such as the ventral striatum (VS) and orbital frontal cortex when driving with a peer present versus when they are driving alone. These findings suggest that peers may elicit a response in reward centers of the brains of adolescents that is similar to the response to food, sex, or drugs. Casey et al. (2016) illustrate a hierarchy of the changes that occur in the brain to explain the neural substrates of adolescent risky decision making. The authors describe a transition from subcortico-subcortical to cortico-cortical connectivity across development. In childhood, subcortical systems are driving behavior, whereas adolescence is characterized by a strengthening of connections to cortical frontal areas. Finally, in young adulthood, the cortico-cortical networks are more developed, with increased lateral PFC modulation of the medial PFC, resulting in more top-down control and goal-oriented behavior.

Neuroscience shows great promise for furthering our understanding of human judgment and choice. However, the interpretation of neuroscientific findings rests crucially on the behavioral tasks that are used. Brain activation by itself is meaningless. Together, carefully designed laboratory tasks and neuroscientific methods have extensive ecological implications: Judgment and decision making affect who is elected to office, what kinds of policies are supported, risky choices (e.g., drinking and driving), and unhealthy behaviors (e.g., smoking cigarettes). Understanding more about the underlying processes by testing theoretical predictions is fundamental to designing effective behavioral interventions and ultimately improving judgment and decision making.

Further Reading

  • Ariely, D. (2009). Predictably irrational, revised and expanded edition. Harper Collins.
  • Baron, J. (2007). Thinking and deciding (4th ed.). Cambridge University Press.
  • Belsky, G., & Gilovich, T. (2010). Why smart people make big money mistakes and how to correct them: Lessons from the life-changing science of behavioral economics. Simon & Schuster.
  • Fischhoff, B., Brewer, N. T., & Downs, J. S. (2012). Communicating risks and benefits: An evidence-based user’s guide. Government Printing Office.
  • Frank, R. H. (2018). The economic naturalist: In search of explanations for everyday enigmas. Basic Books.
  • Hammond, J. S., Keeney, R. L., & Raiffa, H. (2015). Smart choices: A practical guide to making better decisions. Harvard Business Review Press.
  • Hanoch, Y., Barnes, A. J., & Rice, T. (2017). Behavioral economics and healthy behaviors: Key concepts and current research. Taylor & Francis.
  • Hastie, R., & Dawes, R. M. (2009). Rational choice in an uncertain world: The psychology of judgment and decision making. SAGE.
  • Kahneman, D. (2011). Thinking, fast and slow. Macmillan.
  • Reyna, V. F., & Zayas, V. E. (2014). The neuroscience of risky decision making. American Psychological Association.
  • Russo, J. E., & Schoemaker, P. J. (2002). Winning decisions: Getting it right the first time. Currency.
  • Thaler, R. H., & Sunstein, C. R. (2009). Nudge: Improving decisions about health, wealth, and happiness. Penguin.
  • Wilhelms, E. A., & Reyna, V. F. (Eds.). (2014). Neuroeconomics, judgment, and decision making. Psychology Press.

References