

Printed from Oxford Research Encyclopedias, Psychology. Under the terms of the licence agreement, an individual user may print out a single article for personal use (for details see Privacy Policy and Legal Notice).

date: 22 January 2022

Inference in Social Cognition


  • D. Vaughn Becker, Arizona State University
  • Christian Unkelbach, University of Cologne
  • Klaus Fiedler, Heidelberg University


Inferences are ubiquitous in social cognition, governing everything from first impressions to the communication of meaning itself. Social cognitive inferences are typically varieties of diagnostic reasoning or, more properly, “abductive” reasoning, in which people infer simple but plausible (although not deductively certain) underlying causes for observable social behaviors. Abductive inference and its relationship to inductive and deductive inference are first introduced. Abductive inferences are then described as operating on a continuum between those that arise rapidly and automatically (and appear like deductions) and those that inspire more deliberative efforts (and thus often recruit more inductive information gathering and testing). Next, many classic findings in social cognition, and social psychology more broadly, are explored, revealing how widespread this type of inference is. Indeed, both judgments under uncertainty and dual-process theories can be illuminated by incorporating the abductive frame. A discussion follows of work in ecological and evolutionary approaches suggesting that, although these inferences often go beyond the information given and are prone to predictable errors, people are good enough at social inference to qualify as being “ecologically rational.” The conclusion explores emerging themes in social cognition that only heighten the need for this broader understanding of inference processes.


  • Social Psychology

Inference in Social Cognition

Imagine someone walking down a street at night in an unfamiliar city who sees a stranger walking towards them. Even at a distance, inferences about what this person might afford, both threats and opportunities, begin almost immediately. One might use the gait, body size, and shape to infer that the stranger is most likely a young male with an athletic build. One might use clothing or accent to identify them as an outgroup member. If one has preexisting ideas that physically strong young males are also likely to be aggressive, one may accordingly infer that the person represents a threat and cross to the other side of the street. However, if the individual suddenly begins to whistle a melody by Vivaldi, the fit with the preexisting category of aggressive young males is reduced and the feeling of threat subsides (Neel et al., 2013).

In such scenarios, people may face uncertainty because they lack complete knowledge of the person and the situation. Nevertheless, they have access to some objective information (clothing type, type of tune), and they can go beyond the information given (Bruner, 1973) to infer unseen intentions and inclinations about whether a stranger is friendly or hostile. Like a doctor diagnosing an illness from symptoms, or a detective using forensic clues to solve a crime, such inferences are neither inductive nor deductive, but abductive: a diagnostic inference from an effect to a cause. This is typical of inference problems in the social world, which often require decision making without complete evidence and without all the relevant information. People make such inferences about others’ attitudes and personalities, about what they are thinking in the moment, and about how they can be counted on to behave in the future. Although these inferences run the risk of error, they nevertheless coalesce with past experiences of similar social targets to guide behavior.

People do not just make such abductive inferences about other individuals. They make these inferences about groups, extracting stereotypes that simplify decision making, but which carry with them the cost of unfair biases against individuals. People make these inferences about novel situations, the causes of events, and the way the future will play out. People even make these inferences about their own selves and motivations, and they often serve purposes that preserve their self-esteem and may even perpetuate their ignorance.

This article first explores the nature of abductive inference (Eco, 1983; Thagard, 1978) and the reasons why it is indispensable to social cognition; in fact, it subsumes many other existing frameworks. For example, abduction captures both inferences that are very quick (e.g., inferring that a salient skin color indicates an outgroup difference) and inferences that prompt more deliberate investigation (e.g., engaging in casual conversation to assess the person’s accent as a further cue to their social group); abduction thus includes both fast and slow dual-process perspectives (e.g., Kahneman, 2011). Next, the article surveys empirical findings about how people make social inferences, reviewing how many classic empirical investigations of inferences have revealed that human performance apparently diverges markedly from typical economic models of rationality (e.g., Tversky & Kahneman, 1973). It then explores more recent work in the ecological and evolutionary traditions, demonstrating that some of these violations are ultimately quite adaptive if one considers a more general cognitive economy that pursues multiple goals in the face of limited time and information processing capacity (Simon, 1955). Finally, the article examines some new trends in the study of social inference, including the turn to implicit associations, how low-level perceptual confounds might be difficult for any social perceiver to ignore, and whether the basic level of social categorization is additive or reflects unique intersectionalities of identity (e.g., Asian females may have unique stereotypes that are not a simple combination of those for Asians and for women).

Abduction, Induction, and Deduction

What is abductive inference? Simply put, it is people’s capacity to generate hypotheses about the cause of observed effects. Peirce often framed this as a response to a surprising fact that resists easy explanation, but it can be more automatic and unconscious as well—Peirce, in fact, argued that the whole of perception was abductive. Abductive inference happens when people observe behaviors and infer the actors’ motivation. It happens when people perceive a cue that a person belongs to a particular group and infer that this person will act in accordance with that group’s stereotype. It happens when people see an emotional expression and intuit that the person feels what they themselves feel when making such a face. Abduction describes inferences to meaning that people make when others speak, and it explains why people might be deceived by those words. Abduction can even characterize the scientific approach to knowledge-building and the creative element in theory generation; Planck’s suggestion of the quantum hypothesis is an oft-cited example. As Peirce (quoted in Sebeok & Sebeok, 1981) noted, “neither deduction nor induction can ever add the smallest item to the data of perception. . . . The truth is that the whole fabric of our knowledge is one matted felt of pure hypothesis confirmed and refined by induction. Not the smallest advance can be made in knowledge beyond the stage of vacant staring, without making an abduction at every step” (p. 28).

Differentiating Abduction, Deduction, and Induction

Abduction infers causes from effects (i.e., given effect Y → potential cause X). It thereby contrasts with deduction, which allows deriving effects when the cause is present (i.e., given cause X → effect Y follows), as well as with induction, which involves observations from which rules are extracted (i.e., X is always present when Y occurs). McAuliffe (2015) described the relationship among these three in this way: “abduction generates and chooses hypotheses to test; deduction determines the entailments of a hypothesis; induction ascertains whether the evidence accords with the hypothesis in question” (p. 300). Abductive inferences are thus never certain, unlike deductive inferences, where the antecedent entails the consequent (so long as the rule is valid), and they are not tested like inductive inferences which typically build on regularities across many observations.

As an example, one may repeatedly observe that gang members in the Sharks always wear red bandanas. This allows the induction that IF one is a Shark, then one wears a red bandana. Conversely, deduction means that IF Joe is a Shark, then he wears a red bandana (without observing the bandana). An abductive inference might instead start with the observation of Travis in a red bandana and give rise to the inference that he is a Shark: this might be correct, and given the right context (he is in Shark territory, and few other gangs wear red bandanas) even likely. Yet there are other possible explanations. Travis might simply enjoy bandanas and, by mere coincidence, wear a red one, oblivious that it is a cue to gang membership.
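The directional difference between deduction and abduction in the Sharks example can be sketched in a few lines of code. This is a minimal illustration only; the rule, names, and "explains" sets are toy constructions, not anything from the literature:

```python
# Toy sketch of the rule "IF one is a Shark THEN one wears a red bandana",
# contrasting deduction (cause -> effect, certain) with abduction
# (effect -> candidate causes, plausible but never certain).

def deduce(facts):
    """Deduction: if the cause holds, the effect follows with certainty."""
    return {"wears_red_bandana"} if "is_a_shark" in facts else set()

def abduce(observed_effect, hypotheses):
    """Abduction: return every cause that could explain the observation."""
    return [name for name, explains in hypotheses.items()
            if observed_effect in explains]

# Deduction: Joe is a Shark, so he wears a red bandana.
print(deduce({"is_a_shark"}))

# Abduction: Travis wears a red bandana; being a Shark is only one of
# several live explanations for the same observation.
hypotheses = {
    "is_a_shark": {"wears_red_bandana"},
    "just_likes_bandanas": {"wears_red_bandana"},
}
print(abduce("wears_red_bandana", hypotheses))
```

Deduction returns a single certain conclusion, whereas abduction returns a set of competing hypotheses that context (Shark territory, rival gangs' colors) must then narrow down.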

Abduction involves diagnosing unobserved causes from observed effects, cues, and symptoms, and context can vastly constrain the likely causes. Successful abductive inferences can be highly complex because there are often many potential causes of observed facts. Due to this complex interplay of effects, rules, and hidden causes, abductive inference is prone to errors, and so are many social inferences.

Two kinds of abduction. Philosophers, from Peirce (quoted in Sebeok & Sebeok, 1981) onwards, have suggested at least two broad kinds of abductive inference (see the lower part of Figure 1). One kind comprises abductive inferences that are so fast and automatic that they seem like deduction (e.g., when a male African American in Compton in his 20s wearing a red bandana is categorized as a gang member). In the words of Eco (1976), these are “overcoded abductions.” Peirce’s assertion that perception itself is abductive falls into this variety. Indeed, it explains why people perceive objects as unchanging across rotation and illumination changes, despite the fact that the retinal projection and cone response clearly detect changes (a phenomenon known as perceptual constancy) (Gillam, 2000).

Figure 1. Philosophers have differentiated among deduction, induction, and abduction. Following Eco (1976), abduction is seen as a continuum of reasoning acts, some of which are so overcoded that they act like deductions in the speed and automaticity with which they give rise to causal inferences, while at the other extreme, they become more deliberative and systematic, like a physician diagnosing an unseen illness from symptoms, and can inspire empirical testing that could yield inferences to the best explanation if sufficient knowledge exists. Almost all of the inferences in social cognition have an abductive character.

The other kind concerns more deliberative abductions, for example, when people seek the cause of an observed (sometimes surprising) effect. This involves generating hypotheses, and what people generate will depend on how they already understand the world. It may also entail creative new hypotheses that explain more than the effect at hand and can even lead to new insights about the world. It may also involve probing and testing hypotheses until the hidden cause is deciphered. This later stage might amount to inference to the best explanation (IBE); while IBE is often confused with abduction, it actually only describes a final possible stage of abductive inference (McAuliffe, 2015). These more elaborate abductive efforts are called diagnostic abductions.1

Admittedly, both overcoded and elaborate abduction are ultimately diagnostic, but the first is fast and feels almost deductive, while the second is slower and more methodical, involving information search and iterative testing. This information search could be largely inductive, but could instead involve searching for the cause of converging lines of evidence. For example, explaining why Travis wears a red bandana, carries a switchblade, and shows such concern about who is on his “turf” has a single explanation—he is a gang member in the Sharks—that is rendered more plausible by the number of observed things it explains, in a way that is quite different from induction, though each of the individual associations may have been inductively derived. Diagnostic abduction is thus more like a medical practitioner or detective assessing symptoms and clues to infer the underlying cause. These two broad classes of abduction—overcoded and diagnostic—will be used in the exploration of social cognitive inference, but it should be noted that philosophers are now making even finer distinctions (e.g., Magnani, 2009).

Although these two broad classes fit well with some of the ideas coming out of dual-process thinking, these are better seen as falling on a continuum influenced by a given person’s processing capacity, motivation, and subjective importance, among other factors (see Kruglanski et al., 2003). To illustrate, consider inferring character traits from observed behaviors. If one sees a new coworker, Jim, have an angry argument with another coworker, one might quickly infer that Jim is argumentative. This inference would be supported by the context if Jim is a member of a group with the stereotype of pugnacity: the overcoded association allows the trait inference to quickly outcompete other hypothetical causes of the argument. However, if one needs to add another employee to a new project, one might seek further information about whether this is typical behavior, inquire about the provoking event, and even test the consistency of Jim’s behavior by creating a situation that will better diagnose whether this is a disposition or was merely situationally provoked. Both overcoded and diagnostic abductions can not only lead to accurate inferences, but also be ignorance-preserving, yielding errors that have been well charted by social cognition researchers. For example, people’s tendency to abduct dispositional over situational causes for behaviors has long been considered the “fundamental attribution error” (or correspondence bias) (Gilbert & Malone, 1995).

It is important to note here that overcoded abductions often appear to perceivers as deductive operations. When a prejudiced individual sees an outgroup member and infers they are pugnacious (or criminal, or dangerous, or carry diseases, or threaten economic livelihood), they may well be wrong, but they often behave as if this is a deductive certainty. If they are instead motivated to be accurate (e.g., Neuberg, 1989), they may expend cognitive resources and seek additional information. Nevertheless, the perceived costs and benefits for misdiagnosing an outgroup member may still induce high false alarm rates for threats (i.e., falsely inferring threat when none exists), and high miss rates for opportunities (i.e., failing to recognize the potential for a positive interaction and new affiliation resource). Nesse (2001) called these predictable patterns of error the “smoke detector principle” (drawing on the metaphor of smoke detectors designed to false alarm to things like over-enthusiastic cooking, so that they never miss actual fires) and has argued that they characterize a wide variety of psychological disorders. They apply to an even wider set of (subclinical) mistakes in social inference.
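The asymmetry behind the smoke detector principle can be made concrete with a little expected-cost arithmetic. The sketch below is illustrative only; both cost values are invented, and the threshold formula is the standard expected-cost break-even point, not a model from the cited work:

```python
# Minimal sketch of the "smoke detector principle": when a miss is far
# costlier than a false alarm, the break-even criterion for treating a
# stranger as a threat drops very low. Both costs are invented.
#
# Avoiding costs cost_false_alarm if there is no threat (prob 1 - p);
# approaching costs cost_miss if there is a threat (prob p).
# Avoid whenever p * cost_miss > (1 - p) * cost_false_alarm, i.e.
# whenever p exceeds cost_false_alarm / (cost_false_alarm + cost_miss).

def act_threshold(cost_false_alarm, cost_miss):
    """Threat probability above which avoiding is cheaper in expectation."""
    return cost_false_alarm / (cost_false_alarm + cost_miss)

# Crossing the street needlessly is cheap; missing a real threat is not.
print(act_threshold(cost_false_alarm=1.0, cost_miss=100.0))  # ~0.0099
```

With symmetric costs the threshold is 0.5, but with a 100:1 cost ratio even a faint felt probability of threat rationally triggers avoidance, producing exactly the high false-alarm, low-miss error pattern described above.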

Using this abductive frame, the article surveys empirical examples that reveal social inferences as an abductive process that is adaptive (i.e., it yields more correct than false conclusions) yet far from perfect. It first reviews classic work in attribution, including inferences about other individuals, groups, and the self. It then examines the focus on errors in inference that arose in the 1970s, stemming from the judgments under uncertainty literature (Tversky & Kahneman, 1973). It finally pivots to modern ecological and evolutionary perspectives, which challenge the emphasis on errors (Krueger & Funder, 2004) by noting how adaptive these inferential shortcuts are when information is presented in an ecologically sound way (e.g., Gigerenzer et al., 1999).

Attributions and Social Inference

Classical attribution work examined how people infer the causes of others’ behaviors. Heider (1958) proposed that the social observer’s key task is to distinguish between dispositional/internal causes (e.g., character traits, abilities) and situational/external causes (e.g., situational pressures, luck). Edward Jones (1979) made a related attempt to describe how people infer traits from behavior. Accurately inferring whether a behavior is caused by factors within the person’s character or the situation provides highly relevant information (e.g., “Jim is pugnacious” vs. “provocation increases people’s tendency to argue”). Subtler calculations may introduce situational constraints and explanatory components (e.g., people become pugnacious when provoked; Jim is sensitive to provocations).

Early empirical investigations revealed that people have a general tendency to infer traits rather than situational factors as causes for behavior. Even when situational factors fully explain behavior, social perceivers often assume dispositional causes (Jones & Harris, 1967; Nisbett & Ross, 1980). That is, if Jim argues because he was strongly provoked (i.e., the situation explains his behavior), people nevertheless make inferences about his personality. This has come to be known as the fundamental attribution error: observers tend to favor internal over external explanations when trying to understand actors’ behaviors. Some have conjectured that this is because situational factors are not as focal to the social perceiver (Ross et al., 1977), or that it reflects a lack of motivation to attend to these external factors (see Gilbert & Malone, 1995, for an overview).

Other biases to infer internal or external causes have also been reliably observed. For example, people show self-serving biases to explain their successes with traits and abilities, but explain their failures with external factors (Campbell & Sedikides, 1999). Group-serving biases also follow this pattern (Brewer & Brown, 1998). People also show defensive biases when confronted with others’ misfortunes, inferring that it is due to the sufferer’s negative traits, which allows observers to avoid acknowledging that bad things might happen to them as well (Burger, 1981). When thinking about the cause of an event, people tend to think that agents should have anticipated what would happen (Shaver, 1985). People also have a difficult time ignoring the outcome when thinking about what someone else should have predicted, which is known as the hindsight bias (Fischhoff et al., 1977).

Inferring the causes of behavior is a difficult task. Kelley (1967) explored how people seek and survey social information to manage this task. The author suggested that people assess behaviors on three dimensions to attribute causes: distinctiveness, consistency, and consensus. For example, Jim may also argue when unprovoked (i.e., low distinctiveness), no one else but Jim argues under stress (i.e., low consensus), and Jim argues not only during work, but also after work and with everybody (i.e., high consistency). This pattern indicates that something about Jim causes his behavior, a hidden cause that people infer to be the trait of pugnacity. Conversely, Jim may argue only when provoked (i.e., high distinctiveness), everybody else argues when provoked (i.e., high consensus), and Jim only argues at work (i.e., low consistency). This pattern instead indicates that something about the provocations at work makes Jim argumentative.
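The two covariation patterns just described can be sketched as a simple lookup. This is a deliberate simplification for illustration; the mapping encodes only the two canonical patterns from the Jim example, not Kelley's full model:

```python
# Minimal sketch of Kelley's covariation logic: the two canonical
# high/low patterns on distinctiveness, consensus, and consistency
# map to internal vs. external attributions; anything else is left open.

def attribute(distinctiveness, consensus, consistency):
    pattern = (distinctiveness, consensus, consistency)
    if pattern == ("low", "low", "high"):
        return "internal"   # something about Jim causes the behavior
    if pattern == ("high", "high", "low"):
        return "external"   # something about the provoking situation
    return "ambiguous"      # mixed patterns call for more information

# Jim argues unprovoked, alone among his peers, and everywhere: blame Jim.
print(attribute("low", "low", "high"))
# Jim argues only when provoked, as everyone does, and only at work.
print(attribute("high", "high", "low"))
```

The "ambiguous" branch matters: as the next paragraph notes, people rarely sample all three dimensions systematically, so real inferences often proceed from exactly such incomplete patterns.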

Considerable work has confirmed that these dimensions do influence inferences from behaviors to traits, but that most people do not systematically sample the relevant information, and so they often stray from ideal performance (Fiedler et al., 1999). For example, most participants neglect consensus information (e.g., overlooking that most people may tend to be argumentative and aggressive when provoked). Kelley’s work also introduced the idea that multiple causes, which are themselves either necessary or sufficient, can influence each other.

Inferences about individuals influence inferences about the groups they belong to and vice versa. Dispositional inferences at the individual level feed into inductive processes that generate stereotypes of that group. In a perfectly rational and unbiased world, stereotypes would reflect the true properties of a given group. Of course, they often do not. There are several reasons for the inaccuracy of many stereotypes. For example, equal sampling of observations is difficult and people do not assess covariations very accurately (Crocker, 1981). One particularly insidious example is the illusory correlation (Chapman, 1967); Hamilton and Gifford (1976) reported a striking case. The most relevant variables are majority/minority status and positive/negative behaviors. Majorities are more frequent than minorities by default, and positive behaviors are more frequent than negative behaviors (Unkelbach et al., 2019). Illusory correlations describe cases in which people infer that minorities show more negative behaviors, even if there is no correlation in the data given. Initial theorizing suggested that this occurred because people preferentially encoded and remembered the rare events, but empirical work did not support this.

Another explanation that does not rely on the joint occurrence of rare events is the pseudo-contingency (Fiedler & Freytag, 2004). The mere alignment of the base rates of variables such as majority/minority and positive/negative behaviors suffices to induce subjective contingencies. For example, a teacher who teaches two classes, one with mostly girls and good performances mainly in languages, the other with mostly boys and good performances mainly in mathematics, will perceive a correlation between gender and academic domain, even if factually the girls are better in math and the boys in languages (Fiedler et al., 2007). Induction provides these base rates, but abduction produces the illusory correlation as an explanation, and if it provokes further testing, could reveal the error.

Consistent with this, illusory correlation effects are stronger when behaviors are aligned with expectations, which may allow negative stereotypes to intrude, and they particularly distort evaluative judgments (Klauer & Meiser, 2000). This suggests that illusory correlations arise when people fail to engage in diagnostic abductions that would prompt searching for more information. Intriguingly, mood seems to penetrate this process: negative moods appear to facilitate covariation detection, while positive moods inhibit it and support reliance on top-down confirmatory practices (Braverman, 2005).

In addition to inferring the causes of others’ behavior, and inferring shared properties within social groups, people also make inferences about the causes of their own behavior. It may seem a transparent task in many cases—“I eat because I am hungry”—but most motivations are more opaque. Bem (1967) postulated that people infer their own attitudes, preferences, and beliefs the same way they make inferences about others, namely by surveying their past actions and observing their own behavior. This is more likely when preexisting attitudes are weak rather than strong (Bem, 1972). As people do when observing other actors, they also understand that there are both internal and external factors that govern behavior. If one is asked whether she likes country music, and she surveys her behaviors and discovers that she listens to country music frequently, she might make an inference to an internal cause, namely a stable attitude: “I like country music.” Yet, if she realizes that she listens to it because it is on the radio at work, and it is her supervisor’s favorite station, she may be more hesitant to make an internal inference, because there is an external factor that explains the behavior. Thus, social inferences about the self and others might be very different in their apparent form, extent, and content, but still be very similar in their underlying processes.

This sampling of diverse kinds of social inference provides merely an overview, but it underscores the wide-ranging role that abduction plays in the inferential processes studied by social cognition. This abductive framing explains why the process of inference in social cognition is often far from perfectly accurate. Like Sherlock Holmes, people believe they deduce when in reality they merely hypothesize explanations. Still, abductions are often good enough. Understanding why abduction works—even if it is also sometimes ignorance preserving—will require first delving into the “judgment under uncertainty” work that dominated the field in the 1970s and 1980s (Nisbett & Ross, 1980; Tversky & Kahneman, 1973).

Heuristics in Social Inference

Whether making inferences about an individual’s traits, a group’s characteristics, or an event’s meaning, gathering information must balance accuracy with efficiency, which introduces new potential for biases. Heuristics are mental strategies that neglect complete information in exchange for more rapid inferences. Very often, however, there is no alternative to using heuristics when making abductive inferences. Even scientists, experts, and robot systems must rely on some heuristic or proxy (e.g., historical analogies or the availability heuristic, Tversky & Kahneman, 1973) when inferring risks, economic trends, or planning the costs of a project, simply because a non-heuristic algorithm that uses complete information often does not exist.

Abductive social inference can be broadly categorized into two classes, overcoded and diagnostic, although they most likely constitute a continuum rather than two qualitatively distinct classes (see De Houwer, 2019). People may quickly infer a cause, like an agent’s disposition causing a behavior. There are also more laborious diagnostic efforts that sample relevant information or seek new converging evidence to clarify which inferences are warranted. Many dual-process models capture aspects of this apparent dichotomy (e.g., Gawronski & Bodenhausen, 2011). For example, Smith (1984) posited that in a new domain people will be deliberative until they induce rules that are reliable and come to act automatically. Trope (1986) proposed a two-stage model of specific inferences, in which people first automatically identify behaviors as dispositional (an overcoded abduction), and then test and subtract situational factors from the dispositional account (diagnostic abductions). Other models (e.g., Gilbert & Malone, 1995) also take a two-stage approach, with early rapid abductions anchoring the more deliberate inferences. What is common is that the automatic, overcoded process resolves itself earlier, while the diagnostic process is laborious and can suffer when time or motivation is lacking. Classical rational models of inference assume that the second stage happens, but in practice it may not, particularly under cognitive or emotional load. Dual-process accounts thus emphasize a dichotomy that looks like the overcoded–diagnostic distinction, but seeing them as falling on a continuum may better capture the full range of possible inference styles. Indeed, substantial experimental evidence suggests that the assumption of two distinct processing classes is more apparent than real (De Houwer, 2019). Rather, the same processes may be at work, but what differs is the weighing, scrutiny, or validation of the given information (see also Kruglanski et al., 2003).

Independent of the specific conceptual models, human behavior has long been known to fall short of “rational” models (Simon, 1955). There are a number of well-established fallacies within the heuristics and biases tradition (Tversky & Kahneman, 1973) that should be reviewed, as they motivated much of the field and still inspire insightful work. In this realm, Kahneman and Tversky (1974, 1981) introduced several heuristics that bias inference toward certain explanations over others.

One of the biggest challenges to “rational” social inferences is social perceivers’ tendency to search for information that confirms what they already know: the confirmation bias. Ample research shows that once a rule, belief, stereotype, or hypothesis is formed, social perceivers tend to look for confirming evidence rather than disconfirming evidence (Arkes & Harkness, 1980). In the example of observing co-worker Jim in an argument, perceivers may infer the hypothesis that “Jim is pugnacious.” In addressing this hypothesis, the confirmation bias is evident if the perceiver tries to remember, hear from others, or even imagine instances in which Jim was argumentative or quarrelsome. It is important to note, however, that sound reasoning also entails searching for disconfirming evidence; the perceiver must also look for situations in which Jim behaved in a friendly or conciliatory manner, or made concessions. In fact, the counterevidence is more informative from a logical point of view. As most people will be argumentative sometimes, neglecting the counterevidence will confirm the initial hypothesis, independent of the true state of the world.

Another prominent challenge for correct inferences is base-rate neglect, in which people use information (e.g., Jim’s behavior) to make inferences without considering the ecological distributions. This abstract notion is easily illustrated with the co-worker example. Imagine that the social perceiver observes Jim’s quarrel but does not know him personally. What kind of position does Jim have? At the company, there are some lawyers (e.g., 10%), but mostly customer relations operators (90%). Most social perceivers would guess Jim to be a lawyer based on his behavior, although the base rate strongly suggests otherwise. A related pitfall is the tendency to estimate the probability of a conjunction to be higher than the probability of either conjunct: the conjunction fallacy. This is a logical fallacy, as the probability of the conjunction of two events cannot be larger than the probability of a single event. Of course, such inferences, while logically fallacious, may actually produce accurate results because they allow simultaneous constraints to come into play. Consider the classic conjunction fallacy problem investigated by Tversky and Kahneman (1983): Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations. Which of the following is more probable?

Linda is a bank teller.

Linda is a bank teller and is active in the feminist movement.

Most people commit the conjunction fallacy and find the latter more probable (see Figure 2). Although it is a logical error, it may yield quite accurate practical consequences, because the information in the anecdote does converge on the notion that Linda is a feminist. Logicians may balk, but in everyday reasoning this extra information has been reliably linked with “feminism” and it is quite likely that Linda is indeed a feminist, regardless of her occupation. Such abductive inferences go beyond the information given, but even in erring, they reveal something about inherent motivations that people have to make sense of their social world with whatever salient information is provided.
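The logical structure of the Linda problem fits in a few lines. All numbers below are invented; only the inequality is guaranteed:

```python
# Minimal sketch of the conjunction rule behind the Linda problem:
# for any events A and B, P(A and B) <= P(A). Numbers are illustrative.

p_bank_teller = 0.05             # assumed base rate of the occupation
p_feminist_given_teller = 0.60   # plausibly high, given the description

p_teller_and_feminist = p_bank_teller * p_feminist_given_teller

# The conjunction can never exceed the single event, whatever the numbers:
print(p_teller_and_feminist <= p_bank_teller)  # True

# The intuition people likely follow is P(feminist) itself, which the
# description makes high -- but that was not the question asked.
p_feminist = 0.95
print(p_feminist > p_bank_teller)  # True
```

The error, then, is not in inferring that Linda is a feminist (an often accurate abduction) but in substituting that judgment for the probability of the conjunction.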

Figure 2. The first Venn diagram illustrates that the probability that Linda is a feminist bank teller is always less than or equal to the probability that she is just a bank teller. People may get this wrong because additional diagnostic information, such as that she is “concerned with issues of discrimination and social justice” (added to the second Venn diagram), automatically elicits an abductive inference about what kind of person Linda must be. While logically the conjunction of two events (i.e., being a bank teller and being a feminist) can never be more probable than either event alone (i.e., the bank teller circle will always be at least as large as the intersection), the overcoded association elicited by the description of Linda may lead to the inference that she is a feminist. As illustrated in the figure’s right half, people infer (probably correctly) that Linda may be a feminist, but fail to relate this to the conjunction with her being a bank teller. Thus, a conjunction fallacy emerges.
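Both pitfalls reduce to a few lines of probability arithmetic. The sketch below uses hypothetical numbers throughout; the likelihoods for Jim and the probabilities for Linda are illustrative assumptions, not values from the studies cited.

```python
# Base-rate neglect: is Jim a lawyer or a customer relations operator?
p_lawyer, p_operator = 0.10, 0.90      # company base rates from the example
p_argue_given_lawyer = 0.80            # assumed: lawyers often argue
p_argue_given_operator = 0.20          # assumed: operators rarely argue

# Bayes' rule: P(lawyer | argues)
p_argue = p_argue_given_lawyer * p_lawyer + p_argue_given_operator * p_operator
p_lawyer_given_argue = p_argue_given_lawyer * p_lawyer / p_argue
print(round(p_lawyer_given_argue, 2))  # 0.31: despite the diagnostic behavior,
                                       # Jim is still probably an operator

# Conjunction rule: P(teller and feminist) can never exceed P(teller)
p_teller = 0.05                        # arbitrary illustrative value
p_feminist_given_teller = 0.60         # arbitrary illustrative value
p_both = p_teller * p_feminist_given_teller
assert p_both <= p_teller              # holds for any choice of probabilities
```

Because a conditional probability is at most 1, the product defining the conjunction can never exceed either conjunct, whatever illustrative values are plugged in.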

A related but distinct influence on social inferences is the availability heuristic: people’s tendency to judge the probability of an event by how easily relevant information comes to mind (Tversky & Kahneman, 1973). In the case of the arguing co-worker Jim, perceivers might follow a confirmatory strategy and try to remember other occasions when Jim argued, which might be easy or difficult. If it is easy, people may infer that this must be frequent behavior; if it is difficult, people may infer that it must be infrequent. This ease with which things come to mind can bias inferences (MacLeod & Campbell, 1992). It entails that more salient social information can be overweighted even if it is task irrelevant. Stereotypes, for example, can intrude even when they decrease accuracy (Hamilton & Rose, 1980).

Another well-studied influence on inferences is anchoring. When people make inferences under uncertainty, they often begin with an estimate that is anchored or framed by other, sometimes irrelevant, information. In the co-worker example, the relevant anchor is the level of argumentation the perceiver might expect. In one of the classic examples, Tversky and Kahneman (1974) had participants spin a wheel of fortune. The wheel was rigged so that one group would get a high number (“65”) and the other a low number (“10”). Participants then judged whether the percentage of African countries in the United Nations was higher or lower than the fortune-wheel number, and then estimated that percentage. In the high-anchor condition, participants’ median estimate was 45, but only 25 in the low-anchor condition. Anchoring also occurs in a self-referential fashion, when people assume others will behave in the same way, and for the same reasons, that they do (e.g., Epley et al., 2004). Anchoring is particularly likely when the domain is unfamiliar (Mussweiler & Strack, 2000).

In the related case of framing effects, the way a social situation is described can affect the inferences made, because it narrows down the abductive set of hypothetical causes that are considered and the outcomes to which they might lead. When making inferences about a course of action, for example, participants are more likely to take chances when the choices are framed in terms of losses rather than gains, even if the actuarial outcome is equivalent across options (Roney et al., 1995). Tversky and Kahneman (1981) introduced a scenario known as the Asian disease problem. Participants are asked to think about a new disease that is expected to kill 600 people in an Asian community and to choose a course of action. One group had to choose between a treatment that would be certain to save 200 people and another treatment with a one-third probability of saving all 600 people and a two-thirds probability that no one would be saved. Another group saw the same actuarial outcomes framed in terms of lives lost: with the first treatment option, 400 people would die, while with the second treatment there was a one-third probability that nobody would die and a two-thirds probability that all 600 people would perish. Participants in the first group opted for the certain option, while those who saw things framed in terms of lives lost preferred the risky option.
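The two framings are actuarially identical, which a quick expected-value calculation makes explicit. The figures below come from the scenario itself:

```python
# Expected number of lives saved under each option (600 people at risk).
at_risk = 600

# Gain frame: "save 200 for sure" vs. "1/3 chance to save all 600"
ev_certain_gain = 200
ev_risky_gain = (1 / 3) * 600 + (2 / 3) * 0

# Loss frame: "400 die for sure" vs. "2/3 chance all 600 die"
ev_certain_loss = at_risk - 400
ev_risky_loss = at_risk - ((2 / 3) * 600 + (1 / 3) * 0)

# All four options save 200 lives in expectation; only the framing differs.
print(ev_certain_gain, ev_risky_gain, ev_certain_loss, ev_risky_loss)
```

Every option has the same expected outcome of 200 lives saved, so any systematic preference reversal between the two frames must come from the description, not the arithmetic.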

A preference for riskier options under loss framing has been found in many domains, including business and medical decision making (Brockner et al., 1995; Rothman & Salovey, 1997). There are, of course, individual differences that affect the degree to which people are susceptible to these inferential biases. Individuals who are more prevention-focused are known to be more affected by loss framings. Moreover, those with low working memory resources—the executive construct that subsumes focused attention (Engle, 2001)—have a harder time resisting irrelevant information (Becker, 2017; Eder et al., 2011). Approach motivations increase reliance on heuristics (Forgas, 1998); this includes not only positive mood but also the approach emotion of anger (Tiedens, 2001).

Viewing social inference as largely abductive makes these kinds of biases and errors more comprehensible. When asking why these biases exist in the first place, one plausible explanation is that they are, on average, more frequently correct than not (e.g., Reber & Unkelbach, 2010). Instead of considering heuristics as faulty, there is research questioning whether the classic heuristics really violate norms of rationality (e.g., Funder, 1987; Gigerenzer et al., 1999). There is a growing consensus that people rely on these abductive heuristics because they are often “satisficing,” that is, good enough, providing valuable shortcuts in uncertain environments. Moreover, many errors in inference simply do not arise when information is presented in ways that conform to people’s natural proclivities to sample experience and generalize from it.

Ecological and Evolutionary Perspectives on Social Inference

Ecological and evolutionary perspectives offer a meta-theoretical grounding for the diverse heuristics that people employ by maintaining that the mind’s inferential capabilities are tuned to a world with reliable regularities. If one attends to how the mind is prepared for these regularities, the inferential errors highlighted by the “judgement under uncertainty” tradition can lessen or disappear. For instance, base-rate neglect can be quite insidious when people are presented with probabilities in abstract ways; if, however, participants actually sample stimuli (Gigerenzer & Hoffrage, 1995) or see pictorial representations of the base-rate frequencies (Cosmides & Tooby, 1996), they perform quite well (Fiedler & Unkelbach, 2011).
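The contrast between abstract probabilities and sampled frequencies can be illustrated with the widely used mammography figures (1% prevalence, 80% hit rate, 9.6% false-alarm rate); these are standard textbook values for this kind of demonstration, not data from the studies cited.

```python
# Probability format (hard for most people): apply Bayes' rule directly.
prevalence, sensitivity, false_alarm = 0.01, 0.80, 0.096
p_pos = sensitivity * prevalence + false_alarm * (1 - prevalence)
p_cancer_given_pos = sensitivity * prevalence / p_pos
print(round(p_cancer_given_pos, 3))  # 0.078

# Natural frequency format (much easier): imagine 1,000 women.
women = 1000
with_cancer = round(prevalence * women)                  # 10 have cancer
true_pos = round(sensitivity * with_cancer)              # 8 of them test positive
false_pos = round(false_alarm * (women - with_cancer))   # 95 healthy women also test positive
print(true_pos / (true_pos + false_pos))                 # 8/103, the same ~8%
```

The frequency version makes the base rate visible as countable cases (8 true positives among 103 positives), which is why sampled or pictorial presentations dramatically improve performance.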

Gigerenzer et al. (1999) made a powerful argument that the human toolkit of fast and frugal heuristics is often good enough, which saves cognitive resources for other tasks. They call this ecological rationality. The authors argued that the mind is unbiased in an ultimate sense (e.g., Gigerenzer et al., 2012) because it efficiently balances multiple goals. Abductive inference is critical to this balancing act, because overcoded abductions allow people to avoid deliberating overlong on likely inferences so that they can devote deliberative efforts to murkier inferential problems. These ecological perspectives have led to many new insights and explanations (e.g., Unkelbach et al., 2020). Evolutionary perspectives add ancestral constraints to this formulation and refer to the result as deep rationality (Kenrick & Griskevicius, 2013).

Another perspective that considers the social perceiver as more or less rational or unbiased is the sampling approach (Fiedler, 2000; Fiedler & Juslin, 2006). The basic idea is that biases in inference arise not from irrational or flawed processes, but from which hypotheses come to mind and from the way information is searched. The confirmation bias discussed in the section “Heuristics in Social Inference” falls within the explanatory scope of this approach. People lack the metacognitive ability to correct for biases in the generation of hypotheses and information, and thereby inference biases occur. There are several prominent sampling biases. First, people are insensitive to search direction. For example, when assessing the relation between a positive mammography and breast cancer, looking for cases of breast cancer will reveal that almost all cases of breast cancer had positive mammography results. However, the correct search direction is to look for positive mammography results and assess how many are cases of factual breast cancer (e.g., Fiedler et al., 2000). Second, people are insensitive to redundant information. For example, repeating an identical incident symbolically (e.g., observing Jim’s quarrel and then hearing about it later) influences people’s inferences about Jim being pugnacious (e.g., Unkelbach et al., 2007). This repetition effect is prominent when it comes to evaluating a statement’s veracity or validity (Brashier & Marsh, 2020; Unkelbach et al., 2019). And third, people are insensitive to the impact of aggregation level. In Simpson’s paradox (Fiedler et al., 2003), for instance, people have a hard time understanding that in graduate admission decisions, male applicants can be more successful than female applicants overall, although in each of two different graduate programs females systematically outperform males. This is possible if most females apply to the more difficult graduate program whereas most males apply to a generally much easier program.
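A small set of hypothetical admission counts, chosen only to reproduce the pattern, shows how the paradox arises:

```python
# (applicants, admitted) per program and gender; numbers are invented
# solely to illustrate the aggregation effect.
hard_program = {"female": (100, 30), "male": (20, 4)}   # 30% vs. 20% admitted
easy_program = {"female": (20, 16), "male": (100, 70)}  # 80% vs. 70% admitted

def rate(apps, adm):
    return adm / apps

# Within each program, women are admitted at a higher rate...
assert rate(*hard_program["female"]) > rate(*hard_program["male"])
assert rate(*easy_program["female"]) > rate(*easy_program["male"])

# ...yet aggregated over both programs, men come out ahead, because
# most women applied to the harder program.
f_apps = hard_program["female"][0] + easy_program["female"][0]  # 120
f_adm = hard_program["female"][1] + easy_program["female"][1]   # 46
m_apps = hard_program["male"][0] + easy_program["male"][0]      # 120
m_adm = hard_program["male"][1] + easy_program["male"][1]       # 74
print(round(f_adm / f_apps, 2), round(m_adm / m_apps, 2))       # 0.38 0.62
```

The reversal is driven entirely by the uneven distribution of applicants across programs, which is exactly the aggregation-level information that perceivers tend to neglect.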

One reorientation that ecological/evolutionary perspectives emphasize is that people make inferences about affordances, which change as a function of the observer’s needs and goals (McArthur & Baron, 1983; Zebrowitz & Montepare, 2006). A stick can be a crutch if one is injured, a weapon if one is threatened, or a tool to knock an apple out of a tree if one is hungry. The affordances of a social agent change with that agent’s needs and capabilities as well as with the social perceiver’s needs and capabilities. This dovetails well with the abductive frame, which narrows down the set of candidate causal explanations based on context. An attractive individual can be a romantic opportunity or a threat to a current relationship, depending on who is doing the perceiving. A sign of anger in the workplace is not equivalent to the same display at a sporting event, or when watching actors in a play. Even one’s own capacity to escape a situation can change how one interprets a stranger’s motives and one’s own reactions (Gawronski & Cesario, 2013). For example, Ferguson et al. (2019) suggested that first impressions, long thought to be highly automatic and resistant to change, can be shifted by information that is particularly diagnostic or believable.

Thus far, the discussion has centered on inferential problems that cut across content domains, but thinking in terms of abducing affordances entails that inferential processes will sometimes have more domain-specific nuances. Kenrick et al. (2009) suggested that fundamental motivational systems like self-protection, disease avoidance, mate attraction, and status striving reflect deep and highly conserved biological systems. These motivational systems are vigilant to specific kinds of information in the environment that facilitate specific goals and prevent specific threats; they are thus attuned to different affordances. Self-protection is attuned to a social target’s size and facial expression, while disease avoidance attunes to coughing and sneezing. Many pieces of social information are relevant to multiple goals (e.g., detecting that someone belongs to an outgroup may evoke concerns about both aggression and disease), which entails that these motivational systems are sometimes in competition for the most adaptive inference. More proximate social needs can trump more distal needs. For example, an attractive and flirtatious conspecific might elicit inferences about sexual interest, but the presence, or even the knowledge, of one’s own romantic partner might bias people to infer merely friendly interest.

If many social inferences serve ancestrally important goals, those ancestral challenges can be used to better understand the kinds of information people seek, the behaviors that this information releases, and perhaps even which abductions are likely to become overcoded (and resist more effortful and deliberative processes). Consider one emerging area of study: the behavioral immune system (BImS; Schaller & Park, 2011). Just as the body has elaborate biological systems to fight off pathogens and infections, this research is predicated on the idea that people are predisposed to make inferences about the pathogen load of other people in particular and environments in general. Inferences about whether a person might harbor a contagious disease can automatically arise from obvious symptoms (coughing, sneezing, skin lesions), but because the cost of infection is so high, the system seems to be designed with biases to over-detect signs of disease. As noted, Nesse (2001) likens such over-detection to a fire alarm, which is designed to sometimes sound false alarms in response to minor cooking events lest it ever miss an actual fire. The BImS appears to be similarly oversensitive and can be elicited by non-contagious cues like race and obesity (Schaller & Park, 2011). Such errors can be seen as ecologically rational if they ultimately help people avoid the costlier error of becoming sick across a broader range of circumstances. Natural selection does not optimize; it merely selects against the most maladaptive mechanisms and behaviors.
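The fire-alarm logic can be expressed in expected-cost terms: when a miss is far more expensive than a false alarm, the threshold that minimizes expected cost tolerates many false alarms. The costs, probabilities, and toy detection model below are arbitrary illustrative assumptions, not parameters from the cited work.

```python
# Illustrative signal-detection arithmetic for the smoke-detector principle.
# All numbers are arbitrary; only their relative sizes matter.
cost_miss = 100.0        # becoming infected / missing a real fire
cost_false_alarm = 1.0   # needlessly avoiding a healthy person

def expected_cost(threshold, p_sick=0.05):
    """Expected cost of a detector that raises the alarm whenever perceived
    evidence exceeds `threshold` (lower threshold = more frequent alarms)."""
    p_alarm_if_sick = 1.0 - threshold             # toy detection model
    p_alarm_if_healthy = (1.0 - threshold) ** 2   # alarms are rarer for healthy targets
    misses = p_sick * (1.0 - p_alarm_if_sick) * cost_miss
    false_alarms = (1.0 - p_sick) * p_alarm_if_healthy * cost_false_alarm
    return misses + false_alarms

# A lenient (oversensitive) criterion beats a strict one when misses are costly.
print(expected_cost(0.1) < expected_cost(0.9))  # True
```

Under these assumptions the oversensitive detector is the cheaper one overall, which is the sense in which the BImS’s frequent false alarms can count as ecologically rational.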

Evolutionary considerations of ancestral challenges often entail hypothesizing which internal computations needed to be accomplished to satisfy basic social goals. For example, the welfare trade-off ratio (WTR) has been proposed as an internal regulatory variable that helps people infer someone else’s affordance value as a cooperative partner (Tooby & Cosmides, 2008). It is often measured by the degree to which people accept costs to themselves in order to benefit another—the extent to which they share resources with someone. The theory of inclusive fitness—the idea that people enhance the fitness of relatives who share their genes and so increase the representation of those genes in the future population (which can happen in the absence of reproducing oneself)—suggests that known kin will have higher WTRs than strangers. However, more nuances can be added to the WTR than just kinship. The mechanism is sensitive to learning, and repeated successful cooperative interactions with non-kin can increase their WTR relative to a less cooperative partner; it is thus a key calculation undergirding affiliation goals. This reliance on learning and development is a core consideration in most evolutionary psychology, despite claims to the contrary (Cosmides & Tooby, 2001). It is also plausible that the WTR is sensitive to a novel target’s similarity to the perceiver, or to other past interaction partners, which opens up the possibility that other races and cultures may be penalized and receive a lower WTR than they might otherwise deserve. The WTR is thus not just a guide to with whom people should share resources; it is a cue to the kinds of abductive inferences that are most adaptive. Nonreciprocating kin automatically get the benefit of overcoded positive abductions (and continued sharing behavior), and only a series of surprising violations of reciprocity provokes more deliberative diagnostic abductions to seek the cause of their consistent defection.
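The regulatory logic of the WTR can be sketched as a simple decision rule: deliver a benefit b to another at personal cost c whenever WTR × b > c. The specific numbers and the incremental update rule below are illustrative assumptions, not part of the published formulation.

```python
def should_help(wtr: float, benefit_to_other: float, cost_to_self: float) -> bool:
    """Help when the other's welfare, weighted by the WTR, outweighs the cost."""
    return wtr * benefit_to_other > cost_to_self

# Kin start with a high WTR (inclusive fitness); strangers start low.
wtr_sibling, wtr_stranger = 0.5, 0.05  # assumed starting values

print(should_help(wtr_sibling, benefit_to_other=10, cost_to_self=3))   # True
print(should_help(wtr_stranger, benefit_to_other=10, cost_to_self=3))  # False

# Repeated successful cooperation can raise a non-kin partner's WTR.
for _ in range(5):          # five reciprocated exchanges (assumed update rule)
    wtr_stranger += 0.1
print(should_help(wtr_stranger, benefit_to_other=10, cost_to_self=3))  # True
```

The same threshold structure captures both points in the text: kin clear the bar by default, while a stranger’s history of reciprocation can gradually move them above it.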

It is still an unresolved question whether such social problems are domain-specific and modular, or whether common inferential resources must be shared. Using the Wason (1968) selection task, Cosmides (1989) provided evidence for the modular view. Cosmides showed that a notoriously difficult hypothesis-testing task becomes quite solvable when cast in terms of cheater detection rather than the more abstract variations that were traditionally used. Imagine four cards that each have a number on one side and a letter on the other (Figure 3). Given the rule “If a card has a vowel on one side it must have an even number on the other,” which cards should be turned over to test that rule? Fewer than 10% of respondents choose the correct two cards (most correctly choose “E,” but few choose the “3,” which, if it had a vowel on the other side, would violate the rule). However, when Cosmides (1989) recast this in terms of detecting a social transgression (the lower panel of Figure 3), most correctly choose to check the 17-year-old to ensure that she is not drinking beer. Why can people do the social task but not the logically equivalent traditional version? This social variation seems to be one place where people use deduction correctly, but it is the abductive hypothesis that deception needs to be investigated which is invoked in the social cheating case and not in the more abstract versions. Cosmides maintained that such inferences are modular: the information in the cheater detection variation automatically engages mechanisms that look for social rule violations, but the more abstract version does not activate the cheater detection machinery. This suggests a modular kind of inference, which requires information in specific formats for reasoning to function properly.

Figure 3. (a) The traditional Wason task: which cards do you have to turn over to test the rule “If a card has a vowel on one side, it must have an even number on the other”? (b) The cheater detection version of the Wason task: which cards do you have to turn over to test the rule “If a person is drinking a beer, they must be over 21”?
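The normative answer to the abstract task follows from a single deductive principle: a card needs checking only if its visible face could conceal a violation of the rule. A short sketch, using the card faces from the example in the text:

```python
# A card must be flipped if its visible side could reveal a rule violation:
# either it shows a vowel (antecedent true, consequent unseen) or it shows
# an odd number (consequent false, antecedent unseen).
def cards_to_flip(visible_sides):
    vowels = set("AEIOU")
    must_flip = []
    for side in visible_sides:
        if side in vowels:                           # could hide an odd number
            must_flip.append(side)
        elif side.isdigit() and int(side) % 2 == 1:  # could hide a vowel
            must_flip.append(side)
    return must_flip

print(cards_to_flip(["E", "K", "4", "3"]))  # ['E', '3']
```

The “K” and “4” cards are irrelevant: whatever is on their hidden sides, the rule cannot be falsified, which is exactly the step most respondents miss in the abstract version.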

Not all evolutionary approaches to social inference are modular in the strong sense (e.g., fully encapsulated); work in social perception (e.g., Becker & Neuberg, 2019a, 2019b; Freeman & Ambady, 2011) suggests that there may also be common internal representational faculties that are shared by these more domain-specific faculties and are critical to people’s ability to monitor, simulate, predict, and adapt to their social environment. This would allow various social motivations and modules to interact with each other while they monitor the environment for cues that release their more specific computational efficiencies. The same social target might elicit activity in multiple competing motivational systems, each offering its own goal-specific interpretation of ambiguous social information. The kind of inferences people draw about social targets would then be much more context-sensitive, evoking overcoded abductions in certain cases (in which such automatic associations have proven reliable), but eliciting more diagnostically deliberative processes when context demands (and scarce cognitive resources can afford) further investigation. Domain-specific and domain-general evolutionary perspectives are thus compatible, and their fusion may offer new ways of carving the processes of social inference at their joints.

Evolutionary considerations may also ground what kinds of inferences people are intrinsically designed to make about another’s affordances, including which personality traits are the most commonly sought. Buss (1991) suggested that the factors extracted in personality and stereotyping research, such as the “Big 5” (i.e., conscientiousness, openness to experience, agreeableness, extraversion, and neuroticism; see Costa & McCrae, 1992), the “Big 2” (i.e., warmth and competence; see Fiske, 2018), or the “Big 3” (i.e., agency, beliefs, and communion; see Koch et al., 2016), actually reflect the critical inferences that people need to make about social others. People might be attuned to figuring out whether others have good or bad intentions and the capacity to act on those intentions, because these inferences were the most valuable ones to make in ancestral hunter–gatherer conditions, and they continue to facilitate adaptive functioning in the modern world.

If part of human nature is attuned to information about enduring social regularities, people nevertheless may prioritize these regularities differently as a function of individual differences like gender or developmental stage. Sometimes the good or bad affordance is proximate and releases appetitive or defensive behavior immediately. Other times people take a longer-term perspective and eschew the proximate good to pursue a future good, or to avoid a more ultimate bad. What is good right now is not necessarily good over time; conversely, sometimes a greater risk is necessary to secure a better long-term future. For example, in many species intra-male competition can bring status loss and even death if unsuccessful, but access to superior mating options if successful. Error management theory (Haselton & Buss, 2000) provides one influential evolutionary perspective on how inferential biases might have been selected to manage these costs and benefits. For example, these authors suggest that different parental investment demands on the sexes (i.e., women must carry a child to term in order to reproduce, while men can escape after one copulatory act) have led men to overperceive sexual interest in women, and women to underperceive a man’s willingness to commit. Such inferential biases could arise because (at least ancestrally) the benefits of biased perceiving outweighed the costs. It is also possible that such biases arise from more general cognitive processes, like sex differences in what people pay attention to, how readily they learn and remember social information, and the nagging neglect of searching for disconfirming evidence in specific domains. Emerging work from evolutionary psychologists emphasizes how such processes guide the development of the attunements and biases that underlie people’s social inferences, and may provide a meta-theoretical stimulus for emerging work in a broad set of domains.

Emerging Themes in Social Inference Research

One strong theme in newer work is the implicit nature of many inferences. Indeed, the notion of “implicit bias” has entered common parlance. Much of this work stems from the efforts of Greenwald et al. (1998) in developing the implicit association test (IAT) as a measure of “implicit” cognitions, but broader precedents can be found in work on affective priming (e.g., Fazio et al., 1986) and evaluative conditioning (Levey & Martin, 1975). This work clearly shows that even in the absence of explicit (e.g., reported, or accessible to consciousness) biases against social targets, many have difficulty suppressing or ignoring the influences of stereotypes about those targets. One may explicitly disavow negative feelings about outgroup men, but still identify them faster when the decision is paired with decisions about other negative items (i.e., the IAT effect). The association works the other way as well: an outgroup male face can speed the identification of an ambiguous item as a weapon (Payne, 2006) and facilitate other negative reactions (e.g., Unkelbach et al., 2008).

Importantly, similar to the point about dual systems of social inference, the distinction between “explicit” and “implicit” inference processes represents a simplification, and going forward it is promising to consider this a continuum rather than a categorical distinction (see Corneille & Hütter, 2020). Using the terms “implicit” and “explicit” locates an effect on this dimension, rather than indicating two distinct underlying process classes or even distinct systems. At the most general level, the abductive continuum (running from overcoded abduction to diagnostic abduction; see Figure 1) may well subsume many of the dichotomies that the field of social cognition currently postulates as fundamentally explanatory. As an example, from the perspective of ecological rationality (Gigerenzer et al., 1999), such implicit effects could be understood as situations in which, in the interest of cognitive economy, people are more inclined to rely on overcoded abductions even though they allow errors to intrude. The evolutionary error management perspective (Haselton & Buss, 2000) would additionally suggest that these errors persevere in certain cases, like biases against minority outgroups, because they have historically carried little cost for members of the majority culture; indeed, they may even be culturally condoned. Such perspectives do not excuse biases, but they do provide important considerations when trying to remediate implicit biases. This is critical because implicit (or overcoded abduction) effects can be very subtle, even leading people to correctly identify signals of social threat but incorrectly remember them as being displayed by another social target whose stereotype better fits that threat (Becker et al., 2010; Boon & Davies, 1987). Such biases have significant implications for social problems like racial profiling, and efforts to ameliorate them are of intense interest (Payne et al., 2017).

Although many implicit biases are learned and culturally arbitrary, some may arise from deeper sources, like the structure of the face itself. For example, baby-faced individuals are seen as more trustworthy and less competent (Berry & McArthur, 1985), which appears to be an overextension of the reflexive responses to babies that support parent–child bonding. The common facial features and proportions that signify youth across many species (i.e., big eyes relative to the head) might also account for people’s positive reactions to puppies, kittens, and anime art. Coevolution of social signals and preexisting perceptual acuities in receivers could also play a key role in implicit biases. For example, it could explain why happy facial expressions are so distinct and detectable relative to other facial expressions (e.g., Becker & Srinivasan, 2014). Hager and Ekman (1979) demonstrated that happy facial expressions are more detectable at a distance. They suggested that the happy facial expression evolved this discriminability to unambiguously convey prosociality at distances outside the range of prehistoric projectile weapons; in other words, such displays preempt conflict by promoting inferences that the displayer is not a threat. Another example is the lowered brow of angry facial displays, which resembles the low brow ridge that signals masculinity. The angry brow not only speeds the detection of and inferences about threat, but it also leads to erroneous inferences that angry female faces are “male” (Becker et al., 2007) and increases ratings of dominance in the displayer (Hess et al., 2005). 
Although this could be an effect of stereotypes about men and anger, given that the lowered brow clearly influences both the expression and the gender recognition system, such effects may have a deeper source: ancestral pressures selected against “browless” anger (or displays that made less use of the corrugator muscle) because they had less impact than signal forms that give rise to parallel inferences about masculinity, another sign of threat and dominance.

Such hypotheses are hard to test directly (i.e., one cannot roll back evolutionary time and manipulate selective forces), but one can use them to generate new and surprising predictions. For example, because the lowered brow is a much older indicator of sex, it should be harder to ignore in expression judgements than expressions are to ignore in a gender-detection task. Becker (2017) found support for this using a Garner interference method, in which one feature of the social target is judged while another is supposed to be ignored, but nevertheless slows down the primary judgement. This lends converging evidence to the theory that one signaling system (emotional expression) converged to take advantage of another, earlier signaling system (sex discrimination). One can even find evidence that cultural innovations sometimes take advantage of preexisting perceptual acuities, as when social groups choose to identify themselves with easy-to-discriminate apparel (one gang choosing red bandanas vs. a rival gang’s blue bandanas) that speeds social categorization and allows overcoded abductions to operate quickly even if they preserve error.

Another aspect of emerging interest is the intersectionality of social categories. Although the features of a social target are sometimes evaluated in an additive way, certain intersections may come to act as basic-level categories (Rosch et al., 1976). The most famous example is the “warm–cold” effect (Asch, 1946) on social inferences; for example, the “independent, intelligent, and warm” social target is evaluated much more positively than the “independent, intelligent, and cold” target, more so than is predicted by the evaluative implications of the traits alone. Similarly, the young outgroup male is likely to evoke more rapid categorization than other intersections of age, race, and gender, because each of these categories is related to a different kind of threat, and all of them coalesce in this intersection (Adams et al., 2015). Indeed, the prototype of aggression may be the young male of a specific outgroup, though what this outgroup is will vary with an individual’s experience and social milieu (Cottrell & Neuberg, 2005). Abductive inference’s context sensitivity makes it an ideal frame from which to study intersectionality.

In contrast to these explorations of inference that presume the goal is a truer picture of the world, it remains possible that truth is not the ultimate goal of reasoning. The goals of social inference may actively suppress accuracy in favor of other aims, like winning arguments or conforming to group norms and beliefs. Mercier and Sperber (2017) have advanced the first thesis, and the notion that people reason primarily to win arguments gains increased credence in what many see as an emerging “post-truth” era. Belonging to a social group can also evoke different strategies for solving moral and reasoning problems. For example, Cummins (1999) had participants investigate a social cheating scenario involving an employee and their supervisor, and found that different information was checked depending on whether they took a dominant or a subordinate perspective. Although epistemological goals appear wedded to social cognitive inference at first blush, and despite the fact that people can often agree on what is a sound and what is a silly argument, one must be open to the possibility that social inference might inherently stray from truth if other, equally adaptive goals are served. The present abductive inference framework is particularly useful here, as it explains how people may perpetuate false beliefs and preserve certain ignorances while still successfully navigating their social environment.


Conclusion

Inferences are ubiquitous in social cognition, governing everything from first impressions to the communication of meaning itself. Fiske (1992) noted three recurring pragmatic themes in social cognition: “People are good-enough social perceivers; people construct meaning through traits, stereotypes, and stories; and people's thinking strategies depend on their goals” (p. 877). This article has tried to make clear how well all three of these pragmatic themes apply to how inferences are made in social cognition. The abductive framing adopted here reflects an emerging consensus in philosophy and cognitive science that people’s inferences rarely have deductive certainty, while emphasizing that social inference is often good enough, albeit not perfect. The biases and heuristics that people typically display are adequate to achieve multiple social goals, but they also lead to predictable errors, some of which produce insidious social problems (e.g., confirmation bias in the face of negative stereotypes). The goals that affect information gathering in social inference may be very task-specific (e.g., inferring the threat that a stranger poses), but they are also influenced by pragmatic constraints (speed vs. accuracy). The abductive frame is inherently pragmatic, which may explain why and when speed–accuracy tradeoffs occur, and it opens up new areas for research and theory in social inference, and cognitive science more broadly.

Further Reading

  • Eco, U., & Sebeok, T. (1983). The sign of three: Dupin, Holmes, Peirce. Indiana University Press.
  • Forgas, J. P., Williams, K. D., & Von Hippel, W. (2003). Social judgments: Implicit and explicit processes. Cambridge University Press.
  • Neuberg, S. L., Becker, D. V., & Kenrick, D. T. (2014). Evolutionary social cognition. In D. Carlston (Ed.), Handbook of social cognition (pp. 656–679). Oxford University Press.
  • Todd, P. M., Gigerenzer, G., & the ABC Research Group. (2012). Ecological rationality: Intelligence in the world. Oxford University Press.
  • Woods, J. (2013). Errors of reasoning: Naturalizing the logic of inference. College Publications.


References

  • Adams, R. B., Hess, U., & Kleck, R. E. (2015). The intersection of gender-related facial appearance and facial displays of emotion. Emotion Review, 7(1), 5–13.
  • Arkes, H. R., & Harkness, A. R. (1980). Effect of making a diagnosis on subsequent recognition of symptoms. Journal of Experimental Psychology: Human Learning and Memory, 6(5), 568–575.
  • Asch, S. E. (1946). Forming impressions of personality. Journal of Abnormal and Social Psychology, 41, 258–290.
  • Becker, D. V. (2017). Facial gender interferes with decisions about facial expressions of anger and happiness. Journal of Experimental Psychology: General, 146(4), 457–461.
  • Becker, D. V., Kenrick, D. T., Neuberg, S. L., Blackwell, K. C., & Smith, D. M. (2007). The confounded nature of angry men and happy women. Journal of Personality and Social Psychology, 92, 179–190.
  • Becker, D. V., Neel, R., & Anderson, U. S. (2010). Illusory conjunctions of angry facial expressions follow intergroup biases. Psychological Science, 21, 38–40.
  • Becker, D. V., & Neuberg, S. L. (2019a). Archetypes reconsidered as emergent outcomes of cognitive complexity and evolved motivational systems. Psychological Inquiry, 30(2), 59–75.
  • Becker, D. V., & Neuberg, S. L. (2019b). Pushing archetypal representational systems further. Psychological Inquiry, 30(2), 103–109.
  • Becker, D. V., & Srinivasan, N. S. (2014). The vividness of the happy face. Current Directions in Psychological Science, 23, 189–194.
  • Bem, D. J. (1967). Self-perception: An alternative interpretation of cognitive dissonance phenomena. Psychological Review, 74, 183–200.
  • Bem, D. J. (1972). Self-perception theory. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 6, pp. 1–62). Academic Press.
  • Berry, D. S., & McArthur, L. Z. (1985). Some components and consequences of a babyface. Journal of Personality and Social Psychology, 48(2), 312–323.
  • Boon, J. C., & Davies, G. M. (1987). Rumours greatly exaggerated: Allport and Postman’s apocryphal study. Canadian Journal of Behavioural Science, 19(4), 430–440.
  • Brashier, N. M., & Marsh, E. J. (2020). Judging truth. Annual Review of Psychology, 71, 499–515.
  • Braverman, J. (2005). The effect of mood on detection of covariation. Personality and Social Psychology Bulletin, 31(11), 1487–1497.
  • Brewer, M. B., & Brown, R. J. (1998). Intergroup relations. In D. T. Gilbert, S. T. Fiske, & G. Lindzey (Eds.), The handbook of social psychology (pp. 554–594). Wiley.
  • Brockner, J., Wiesenfeld, B. M., & Martin, C. L. (1995). Decision frame, procedural justice, and survivors’ reactions to job layoffs. Organizational Behavior and Human Decision Processes, 63(1), 59–68.
  • Brockner, J., & Wiesenfeld, B. M. (1996). An integrative framework for explaining reactions to decisions: Interactive effects of outcomes and procedures. Psychological Bulletin, 120(2), 189–208.
  • Bruner, J. S. (1973). Beyond the information given: Studies in the psychology of knowing. W. W. Norton.
  • Burger, J. M. (1981). Motivational biases in the attribution of responsibility for an accident: A meta-analysis of the defensive-attribution hypothesis. Psychological Bulletin, 90(3), 496–512.
  • Buss, D. M. (1991). Evolutionary personality psychology. Annual Review of Psychology, 42, 459–491.
  • Campbell, W. K., & Sedikides, C. (1999). Self-threat magnifies the self-serving bias: A meta-analytic integration. Review of General Psychology, 3(1), 23–43.
  • Chapman, L. J. (1967). Illusory correlation in observational report. Journal of Verbal Learning & Verbal Behavior, 6(1), 151–155.
  • Corneille, O., & Hütter, M. (2020). Implicit? What do you mean? A comprehensive review of the delusive implicitness construct in attitude research. Personality and Social Psychology Review, 24(3), 212–232.
  • Cosmides, L. (1989). The logic of social exchange: Has natural selection shaped how humans reason? Studies with the Wason selection task. Cognition, 31, 187–276.
  • Cosmides, L., & Tooby, J. (1996). Are humans good intuitive statisticians after all? Rethinking some conclusions from the literature on judgment under uncertainty. Cognition, 58, 1–73.
  • Cosmides, L., & Tooby, J. (2001). Unraveling the enigma of human intelligence: Evolutionary psychology and the multimodular mind. In R. J. Sternberg & J. C. Kaufman (Eds.), The evolution of intelligence (pp. 145–198). Lawrence Erlbaum.
  • Costa, P. T., & McCrae, R. R. (1992). Normal personality assessment in clinical practice: The NEO Personality Inventory. Psychological Assessment, 4(1), 5–13.
  • Cottrell, C. A., & Neuberg, S. L. (2005). Different emotional reactions to different groups: A sociofunctional threat-based approach to “prejudice.” Journal of Personality and Social Psychology, 88, 770–789.
  • Crocker, J. (1981). Judgment of covariation by social perceivers. Psychological Bulletin, 90(2), 272–292.
  • Cummins, D. D. (1999). Cheater detection is modified by social rank: The impact of dominance on the evolution of cognitive functions. Evolution and Human Behavior, 20(4), 229–248.
  • De Houwer, J. (2019). Moving beyond System 1 and System 2: Conditioning, implicit evaluation, and habitual responding might be mediated by relational knowledge. Experimental Psychology, 66(4), 257.
  • Eco, U. (1976). A theory of semiotics. Indiana University Press.
  • Eco, U. (1983). Horns, hooves, insteps: Some hypotheses on three kinds of abduction. In U. Eco & T. Sebeok (Eds.), The sign of three: Dupin, Holmes, Peirce (pp. 119–134). Indiana University Press.
  • Eder, A. B., Fiedler, K., & Hamm-Eder, S. (2011). Illusory correlations revisited: The role of pseudocontingencies and working-memory capacity. The Quarterly Journal of Experimental Psychology, 64(3), 517–532.
  • Engle, R. W. (2001). What is working memory capacity? In H. L. Roediger III, J. S. Nairne, I. Neath, & A. M. Surprenant (Eds.), The nature of remembering: Essays in honor of Robert G. Crowder (Science conference series, pp. 297–314). American Psychological Association.
  • Epley, N., Keysar, B., Van Boven, L., & Gilovich, T. (2004). Perspective taking as egocentric anchoring and adjustment. Journal of Personality and Social Psychology, 87(3), 327–339.
  • Fazio, R. H., Sanbonmatsu, D. M., Powell, M. C., & Kardes, F. R. (1986). On the automatic activation of attitudes. Journal of Personality and Social Psychology, 50, 229–238.
  • Ferguson, M. J., Mann, T. C., Cone, J., & Shen, X. (2019). When and how implicit first impressions can be updated. Current Directions in Psychological Science, 28(4), 331–336.
  • Fiedler, K. (2000). Beware of samples! A cognitive-ecological sampling approach to judgment biases. Psychological Review, 107, 659–676.
  • Fiedler, K., & Freytag, P. (2004). Pseudocontingencies. Journal of Personality and Social Psychology, 87(4), 453–467.
  • Fiedler, K., Freytag, P., & Meiser, T. (2009). Pseudocontingencies: An integrative account of an intriguing cognitive illusion. Psychological Review, 116(1), 187–206.
  • Fiedler, K., Freytag, P., & Unkelbach, C. (2007). Pseudocontingencies in a simulated classroom. Journal of Personality and Social Psychology, 92(4), 665–677.
  • Fiedler, K., & Juslin, P. (2006). Taking the interface between mind and environment seriously. In K. Fiedler & P. Juslin (Eds.), Information sampling and adaptive cognition (pp. 3–29). Cambridge University Press.
  • Fiedler, K., & Unkelbach, C. (2011). Lottery attractiveness and presentation mode of probability and value information. Journal of Behavioral Decision Making, 24(1), 99–115.
  • Fiedler, K., Walther, E., Freytag, P., & Nickel, S. (2003). Inductive reasoning and judgment interference: Experiments on Simpson’s paradox. Personality and Social Psychology Bulletin, 29, 14–27.
  • Fiedler, K., Walther, E., & Nickel, S. (1999). The auto-verification of social hypotheses: Stereotyping and the power of sample size. Journal of Personality and Social Psychology, 77(1), 5–18.
  • Fischhoff, B., Slovic, P., & Lichtenstein, S. (1977). Knowing with certainty: The appropriateness of extreme confidence. Journal of Experimental Psychology: Human Perception and Performance, 3(4), 552–564.
  • Fiske, S. T. (1992). Thinking is for doing: Portraits of social cognition from Daguerreotype to laserphoto. Journal of Personality and Social Psychology, 63(6), 877–889.
  • Fiske, S. T. (2018). Stereotype content: Warmth and competence endure. Current Directions in Psychological Science, 27(2), 67–73.
  • Forgas, J. P. (1998). On being happy and mistaken: Mood effects on the fundamental attribution error. Journal of Personality and Social Psychology, 75(2), 318–331.
  • Freeman, J. B., & Ambady, N. (2011). A dynamic interactive theory of person construal. Psychological Review, 118(2), 247–279.
  • Funder, D. C. (1987). Errors and mistakes: Evaluating the accuracy of social judgment. Psychological Bulletin, 101(1), 75–90.
  • Gawronski, B., & Bodenhausen, G. V. (2011). The associative-propositional evaluation model: Theory, evidence, and open questions. In J. M. Olson & M. P. Zanna (Eds.), Advances in experimental social psychology (Vol. 44, pp. 59–127). Academic Press.
  • Gawronski, B., & Cesario, J. (2013). Of mice and men: What animal research can tell us about context effects on automatic responses in humans. Personality and Social Psychology Review, 17(2), 187–215.
  • Gigerenzer, G., Fiedler, K., & Olsson, H. (2012). Rethinking cognitive biases as environmental consequences. In P. M. Todd, G. Gigerenzer, & ABC Research Group (Eds.), Ecological rationality: Intelligence in the world (pp. 80–110). Oxford University Press.
  • Gigerenzer, G., & Hoffrage, U. (1995). How to improve Bayesian reasoning without instruction: Frequency formats. Psychological Review, 102(4), 684.
  • Gigerenzer, G., & Todd, P. M. (1999). Fast and frugal heuristics: The adaptive toolbox. In G. Gigerenzer, P. M. Todd, & The ABC Research Group (Eds.), Simple heuristics that make us smart (pp. 3–34). Oxford University Press.
  • Gilbert, D. T., & Malone, P. S. (1995). The correspondence bias. Psychological Bulletin, 117, 21–38.
  • Gillam, B. (2000). Perceptual constancy. In A. E. Kazdin (Ed.), Encyclopedia of psychology (Vol. 6, pp. 89–93). American Psychological Association.
  • Greenwald, A. G., McGhee, D. E., & Schwartz, J. L. K. (1998). Measuring individual differences in implicit cognition: The implicit association test. Journal of Personality and Social Psychology, 74(6), 1464–1480.
  • Hager, J. C., & Ekman, P. (1979). Long-distance transmission of facial affect signals. Ethology and Sociobiology, 1, 77–82.
  • Hamilton, D. L., & Gifford, R. K. (1976). Illusory correlation in interpersonal perception: A cognitive basis of stereotypic judgments. Journal of Experimental Social Psychology, 12(4), 392–407.
  • Hamilton, D. L., & Rose, T. L. (1980). Illusory correlation and the maintenance of stereotypic beliefs. Journal of Personality and Social Psychology, 39(5), 832–845.
  • Haselton, M. G., & Buss, D. M. (2000). Error management theory: A new perspective on biases in cross-sex mind reading. Journal of Personality and Social Psychology, 78(1), 81–91.
  • Heider, F. (1958). The psychology of interpersonal relations. John Wiley & Sons.
  • Hess, U., Adams, R. B., Jr., & Kleck, R. E. (2005). Who may frown and who should smile? Dominance, affiliation, and the display of happiness and anger. Cognition and Emotion, 19, 515–536.
  • Jones, E. E. (1979). The rocky road from acts to dispositions. American Psychologist, 34(2), 107–117.
  • Jones, E. E., & Harris, V. A. (1967). The attribution of attitudes. Journal of Experimental Social Psychology, 3(1), 1–24.
  • Kahneman, D. (2011). Thinking, fast and slow. Macmillan.
  • Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological Review, 80(4), 237–251.
  • Kelley, H. H. (1967). Attribution theory in social psychology. Nebraska Symposium on Motivation, 15, 192–238.
  • Kenrick, D. T., & Griskevicius, V. (2013). The rational animal: How evolution made us smarter than we think. Basic Books.
  • Kenrick, D. T., Neuberg, S. L., Griskevicius, V., Becker, D. V., & Schaller, M. (2009). Goal-driven cognition and functional behavior: The fundamental motives framework. Current Directions in Psychological Science, 19, 63–67.
  • Klauer, K. C., & Meiser, T. (2000). A source-monitoring analysis of illusory correlations. Personality and Social Psychology Bulletin, 26(9), 1074–1093.
  • Koch, A., Imhoff, R., Dotsch, R., Unkelbach, C., & Alves, H. (2016). The ABC of stereotypes about groups: Agency/socioeconomic success, conservative–progressive beliefs, and communion. Journal of Personality and Social Psychology, 110(5), 675–709.
  • Krueger, J. I., & Funder, D. C. (2004). Towards a balanced social psychology: Causes, consequences, and cures for the problem-seeking approach to social behavior and cognition. Behavioral and Brain Sciences, 27(3), 313–376.
  • Kruglanski, A. W., Chun, W. Y., Erb, H. P., Pierro, A., Mannetti, L., & Spiegel, S. (2003). A parametric unimodel of human judgment: Integrating dual-process frameworks in social cognition from a single-mode perspective. In J. P. Forgas, K. D. Williams, & W. von Hippel (Eds.), Social judgments: Implicit and explicit processes (pp. 137–161). Cambridge University Press.
  • Levey, A. B., & Martin, I. (1975). Classical conditioning of human evaluative responses. Behavioral Research & Therapy, 13, 221–226.
  • MacLeod, C., & Campbell, L. (1992). Memory accessibility and probability judgments: An experimental evaluation of the availability heuristic. Journal of Personality and Social Psychology, 63(6), 890–902.
  • Magnani, L. (2009). Abductive cognition: The epistemological and eco-cognitive dimensions of hypothetical reasoning. Springer.
  • McArthur, L. Z., & Baron, R. M. (1983). Toward an ecological theory of social perception. Psychological Review, 90(3), 215–238.
  • McAuliffe, W. (2015). How did abduction become confused with inference to the best explanation? Transactions of the Charles S. Peirce Society: A Quarterly Journal in American Philosophy, 51(3), 300–319.
  • Mercier, H., & Sperber, D. (2017). The enigma of reason. Harvard University Press.
  • Mussweiler, T., Strack, F., & Pfeiffer, T. (2000). Overcoming the inevitable anchoring effect: Considering the opposite compensates for selective accessibility. Personality and Social Psychology Bulletin, 26(9), 1142–1150.
  • Neel, R., Neufeld, S. L., & Neuberg, S. L. (2013). Would an obese person whistle Vivaldi? Targets of prejudice self-present to minimize appearance of specific threats. Psychological Science, 24, 678–687.
  • Nesse, R. M. (2001). The smoke detector principle: Natural selection and the regulation of defensive responses. Annals of the New York Academy of Sciences, 935, 75–85.
  • Neuberg, S. L. (1989). The goal of forming accurate impressions during social interactions: Attenuating the impact of negative expectancies. Journal of Personality and Social Psychology, 56(3), 374–386.
  • Nisbett, R. E., & Ross, L. (1980). Human inference: Strategies and shortcomings of social judgment. Prentice-Hall.
  • Payne, B. K. (2006). Weapon bias: Split-second decisions and unintended stereotyping. Current Directions in Psychological Science, 15(6), 287–291.
  • Payne, B. K., Vuletich, H. A., & Lundberg, K. B. (2017). The bias of crowds: How implicit bias bridges personal and systemic prejudice. Psychological Inquiry, 28(4), 233–248.
  • Reber, R., & Unkelbach, C. (2010). The epistemic status of processing fluency as source for judgments of truth. Review of Philosophy and Psychology, 1, 563–581.
  • Roney, C. J. R., Higgins, E. T., & Shah, J. (1995). Goals and framing: How outcome focus influences motivation and emotion. Personality and Social Psychology Bulletin, 21(11), 1151–1160.
  • Rosch, E., Mervis, C. B., Gray, W. D., Johnson, D. M., & Boyes-Braem, P. (1976). Basic objects in natural categories. Cognitive Psychology, 8(3), 382–439.
  • Ross, L. D., Amabile, T. M., & Steinmetz, J. L. (1977). Social roles, social control, and biases in social-perception processes. Journal of Personality and Social Psychology, 35(7), 485–494.
  • Rothman, A. J., & Salovey, P. (1997). Shaping perceptions to motivate healthy behavior: The role of message framing. Psychological Bulletin, 121(1), 3–19.
  • Schaller, M., & Park, J. H. (2011). The behavioral immune system (and why it matters). Current Directions in Psychological Science, 20(2), 99–103.
  • Sebeok, T. A., & Umiker-Sebeok, J. (1981). You know my method. In T. A. Sebeok (Ed.), The play of musement (pp. 17–52). Indiana University Press.
  • Shaver, K. (1985). The attribution of blame: Causality, responsibility, and blameworthiness. Springer-Verlag.
  • Simon, H. A. (1955). A behavioral model of rational choice. The Quarterly Journal of Economics, 69(1), 99–118.
  • Smith, E. R. (1984). Model of social inference processes. Psychological Review, 91(3), 392–413.
  • Srull, T. K., & Wyer, R. S. (1979). The role of category accessibility in the interpretation of information about persons: Some determinants and implications. Journal of Personality and Social Psychology, 37(10), 1660–1672.
  • Stone, M. (2012). Denying the antecedent: Its effective use in argumentation. Informal Logic, 32, 327–356.
  • Thagard, P. (1978). The best explanation: Criteria for theory choice. The Journal of Philosophy, 75, 76–92.
  • Tiedens, L. Z. (2001). Anger and advancement versus sadness and subjugation: The effect of negative emotion expressions on social status conferral. Journal of Personality and Social Psychology, 80(1), 86–94.
  • Tooby, J., & Cosmides, L. (2008). The evolutionary psychology of emotions and their relationship to internal regulatory variables. In M. Lewis & J. Haviland-Jones (Eds.), Handbook of emotions (3rd ed., pp. 114–137). Guilford Press.
  • Trope, Y. (1986). Identification and inferential processes in dispositional attribution. Psychological Review, 93(3), 239–257.
  • Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5(2), 207–232.
  • Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211(4481), 453–458.
  • Tversky, A., & Kahneman, D. (1983). Extension versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review, 90(4), 293–315.
  • Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.
  • Unkelbach, C., Alves, H., & Koch, A. (2020). Negativity bias, positivity bias, and valence asymmetries: Explaining the differential processing of positive and negative information. In B. Gawronski (Ed.), Advances in experimental social psychology (pp. 115–187). Academic Press.
  • Unkelbach, C., Fiedler, K., & Freytag, P. (2007). Information repetition in evaluative judgments: Easy to monitor, hard to control. Organizational Behavior and Human Decision Processes, 103(1), 37–52.
  • Unkelbach, C., Forgas, J. P., & Denson, T. F. (2008). The turban effect: The influence of Muslim headgear and induced affect on aggressive responses in the shooter bias paradigm. Journal of Experimental Social Psychology, 44(5), 1409–1413.
  • Unkelbach, C., Koch, A., & Alves, H. (2019). The evaluative information ecology: On the frequency and diversity of “good” and “bad.” European Review of Social Psychology, 30(1), 216–270.
  • Unkelbach, C., Koch, A., Silva, R. R., & Garcia-Marques, T. (2019). Truth by repetition: Explanations and implications. Current Directions in Psychological Science, 28(3), 247–253.
  • Wason, P. C. (1968). Reasoning about a rule. Quarterly Journal of Experimental Psychology, 20(3), 273–281.
  • Zebrowitz, L. A., & Montepare, J. (2006). The ecological approach to person perception: Evolutionary roots and contemporary offshoots. In M. Schaller, J. A. Simpson, & D. T. Kenrick (Eds.), Evolution and social psychology (pp. 81–113). Psychosocial Press.


Notes

  • 1. More broadly, although abduction includes diagnosing the causes of observed behaviors (e.g., attitudes, beliefs, dispositions, and so on), it also includes discriminating among competing explanations, generalizing existing knowledge frames to new data, and even generating new experimental approaches to gather pertinent data and new theories to organize and explain it; Eco (1983) called this latter phenomenon meta-abduction.