
PRINTED FROM the OXFORD RESEARCH ENCYCLOPEDIA, POLITICS ( (c) Oxford University Press USA, 2020. All Rights Reserved. Personal use only; commercial use is strictly prohibited (for details see Privacy Policy and Legal Notice).

date: 27 October 2020

Decision Strategies in Politics

  • Richard R. Lau, Department of Political Science, Rutgers University


A decision strategy is a set of mental and physical operations that a decision maker uses to reach a choice among two or more alternatives. Once the alternatives have been identified, a decision strategy involves gathering information about at least some of the different alternatives under consideration and making judgments about them. A decision strategy will include a mechanism for selecting the best alternative—for example, select the alternative with the highest probability of success. Decision strategies differ along two primary dimensions: how much information is gathered, and how comparable that information is across alternatives. Four major types of decision strategies include classic rational choice (relatively deep search, equally distributed across alternatives), confirmatory motivated reasoning (relatively deep search, unequally distributed across alternatives), fast and frugal (relatively shallow search, equally distributed across alternatives), and heuristic-based intuitive (shallow search, unequally distributed across alternatives). Although standard rating scales have been developed to help ascertain which strategies a decision maker prefers, the best method for determining which strategy is being employed is to directly observe information gathering while the decision is being made. An important task for future research is to more clearly explicate the situations when different decision strategies perform particularly well or particularly poorly.


  • Political Behaviour

Decision Strategies

A judgment is an evaluation of a single entity—for example, how good a job is Donald Trump doing as President? Is the economy doing better or worse than it was a year ago? Did the Russian government try to interfere with the 2016 U.S. presidential election? How much do I like Joe Biden? How effective is providing job training to prisoners in reducing recidivism? A preference is a comparative evaluation, a relative ranking, of two or more entities that are somehow “members of the same class” (e.g., I prefer chocolate over vanilla and strawberry ice cream; I think Marco Rubio is the best candidate running in the New Hampshire Republican primary election; see Druckman & Lupia, 2016). A decision is a choice among two or more alternatives, a formal statement of and a commitment to act upon a preference. It is often accompanied by actual behavior in support of that preference: for example, actually buying a particular ice cream flavor at the supermarket or voting for a particular candidate in an election.

A decision strategy, then, is a set of mental and physical operations that a decision maker—a person or institution—uses to reach a decision (Redlawsk & Lau, 2013). When decisions are relatively unstructured (Langley, Simon, Bradshaw, & Zytkow, 1987)—as foreign policy crises typically are—a decision strategy must begin with generating alternative courses of action, which implicitly always include “doing nothing.” Well-structured decisions, on the other hand, such as an application for a visa or an election, provide decision makers with a fixed set of alternatives from which to choose (e.g., approve or deny the visa application; vote for the Democrat, the Republican, the Green party candidate, or the Libertarian party candidate). Once the alternatives have been identified, a decision strategy involves gathering information about at least some of the different alternatives under consideration and making judgments about them. What possible outcomes are associated with each alternative under consideration? How likely are they to occur? How much do I value or like or care about each of those different consequences? A decision strategy must also include a method for choosing among the identified alternatives.

This article limits attention to decision strategies for making well-structured political choices—with the vote choice being the prototypic example—for the simple reason that generating alternatives—for example, by brainstorming—involves very different psychological procedures than learning about/evaluating/reacting to a fixed set of alternatives—too much to cover in a single review article. Once the set of alternatives under consideration have been given to (or generated by) the decision maker, however, what this article will cover—choosing among them—comes into play for all types of decisions.1 For the same limited-space reason, this article restricts attention primarily to ordinary, everyday individual decision makers, although it makes passing reference to expert decision makers (e.g., political elites; see Levy, 2013) or to institutional/organizational decision making procedures (see March, 1988; March & Olson, 1989; March & Simon, 1958).

The article begins by sketching out a dual process theory of human cognition that provides a basic framework for studying individual decision making. By so doing, the article consciously avoids starting with the dominant economic/rational choice perspective (e.g., von Neumann & Morgenstern, 1944) that frames virtually all discussions of decision making, because the assumptions it makes almost never apply in the real world. The major distinction is whether cognition (including decision making) is largely automatic, preconscious, and very fast; or deliberate, conscious, and slow. With this background, four broad models of decision strategies developed by Lau & Redlawsk (2006) are introduced and used as a framework for discussing more specific strategies often employed in political decision making.

Once these different types of decision strategies have been described and discussed, a question that should be of paramount importance to empirical researchers is considered, “How do you know what decision strategy a decision maker is using?” If decision strategies are assumed to reflect different (semi-permanent) cognitive styles, and we believe decision makers are (or can be) aware of them, then one way to proceed is to employ a number of different self-report scales aimed at describing political decision making. One recently developed and validated self-report scale will be described. A more direct though typically much more complicated and often intrusive procedure is to observe decision makers while they are making a decision, and somehow record theoretically important behaviors during the decision process. Several different “process tracing” methods are described that decision researchers have developed specifically for this purpose.

The article concludes with a brief discussion of the consequences of using the different decision strategies that are presented and discussed. Some of these strategies involve a great deal of time and effort to gather and evaluate relevant information, while other strategies proceed almost effortlessly, so the choice of strategy clearly matters to the decision maker. But are the decisions reached any different—and more importantly, any better or worse—if strategy A is used rather than strategy B?

A Dual Processing Account of Reasoning, Cognition, and Decision Making

It has become standard in psychology to view human beings as limited information processors (Anderson, 1983, 1996; Simon, 1957, 1979, 1985) whose ability to perceive, reason, and learn about the world around them is limited, first by their basic sense organs (Zimmermann, 1989), but then more dramatically by a severely limited working or short-term memory—constrained to considering “7 plus or minus 2” bits of information at any point in time (Miller, 1956)—that people must use to process the incoming information from their eyes and ears, and transmit (store) a subset of it to a long-term memory of, effectively, unlimited capacity (Anderson, Bjork, & Bjork, 1994; Fiske & Taylor, 2008; Hastie, 1986; Lau & Sears, 1986; Norman, 1982).

It is within this basic underlying architecture of human cognition that a variety of dual process theories have been developed over the past 40 years in psychology to help explain basic reasoning and thinking (e.g., Barrett, Tugade, & Engle, 2004; Evans, 1996, 2003; Evans & Curtis-Holmes, 2005; Johnson-Laird, 1983; Osman, 2005; Sloman, 1996; Wason & Evans, 1975), social cognition (Chaiken, 1980; Chaiken & Trope, 1999; Devine, 1989; Forgas, Williams, & von Hippel, 2011; Petty & Cacioppo, 1981; Smith & DeCoster, 2000), and most relevant to current purposes, judgment and decision making (Gigerenzer & Todd, 1999; Gilovich, Griffin, and Kahneman, 2002; Kahneman, 2011; Kahneman, Slovic, & Tversky, 1982; Reyna, 2004; Tversky & Kahneman, 1983). Although there is not a one-to-one matching between these different types of theories, they share many features in common, the most prominent of which are a description of two very different modes of thinking.

System 1 processes are automatic (and thus uncontrollable), unconscious (or preconscious), implicit, very fast, high capacity, parallel, mostly perceptual in nature, and easy (if not effortless) to perform. These processes are presumed to be evolutionarily old, exclusively nonverbal, and shared with animals. They tend to be domain specific, and mostly pragmatic in orientation. They are generally viewed as universal, and are believed to be unrelated to general intelligence and working memory capacity.

System 2 processes, on the other hand, are controlled, conscious, explicit, relatively slow, low capacity, and high effort/difficult to perform. They are evolutionarily recent, language-based, and thus unique to humans. They are very general processes that are abstract and logical in orientation. They are presumed to be positively associated with intelligence, but are strictly limited by the capacity of working memory, and thus are presumed to involve serial processing of one topic or problem at a time.

An excellent recent review of this literature is provided by Evans (2008). All people make decisions utilizing both System 1 and System 2 processes, either alone or in combination. Many psychologists view System 1 as the default mode of processing that results in rapid intuitive judgments that operate outside of conscious awareness and subsequently provide the basic inputs for decision making and/or the effective decisions themselves, which an “executive monitor” operating in System 2 may occasionally overrule. Evans (2008) calls this view “default-interventionist” (e.g., Evans, 2006; Kahneman, 2011; Kahneman & Frederick, 2005) and contrasts it to theories of dual cognition that are more “parallel-competitive” in nature (e.g., Sloman, 1996).

An Information Processing Framework for Studying Political Decision Making

Lau and Redlawsk (2006) have published a comprehensive study of voter decision making, a perspective that fits nicely with the limited cognition/dual process view of human information processing presented above. Lau and Redlawsk start with the perspective that decision making is a process of gathering information about the candidate or policy alternatives under consideration, and the dimensions or attributes across which those alternatives differ. Because humans have limited cognitive abilities, everyday citizens have two primary goals in making political decisions: the desire to make a good choice, and the desire to make an easy choice—that is, to spend as little time as possible in reaching those decisions (Payne, Bettman, & Johnson, 1988). Voting in elections is the most consequential political decision that people will ever make. But because most people realize that the probability that their individual vote will determine the outcome of an election is extremely small, in mass politics, the motivation for easy just about always trumps the motivation for good.

Two criteria for analyzing decision strategies immediately suggest themselves: How much information is gathered before making the decision, and how equally that information search is distributed across the alternatives under consideration. The depth and comparability of search then become the two dimensions that together define basic categories or models of decision making. This basic framework will be used to discuss different strategies that citizens use in making political decisions.

Model 1: Deep Information Search, Equally Distributed Across Alternatives

Classic Economic/Statistical Rational Choice

Rational choice (Arrow, 1951; Chong, 2013; Machina, 1982; Riker, 1995; Savage, 1954; von Neumann & Morgenstern, 1944) is a normative model of decision making that describes how people ought to make decisions. It is the oldest and probably best-known model of decision making. Although it is meant to be prescriptive rather than descriptive, it is frequently presented as a simplification of how decisions are actually made. There are a number of very precise statistical requirements that must be assumed by rational choice theory (see Austen-Smith & Banks, 1999; Hastie & Dawes, 2009; Lau, 2003; Raiffa, 1968; Redlawsk & Lau, 2013), but informally, the theory says decision makers should a) gather as much objective and accurate information as possible about every conceivable alternative; b) consider every possible outcome that could be associated with all of those alternatives; c) assign a value to each of the possible outcomes according to some fixed, and exogenous, value function; d) weigh those different considerations according to how likely they are to occur; and finally, e) combine all of this information to select the most preferred alternative according to a logic of consequences—that is, by some predetermined criterion for choosing—for example, maximize expected utility.2 Hence more than any other decision strategy, rational choice involves System 2 cognition. It is deliberate, slow, intentional, and difficult to perform, placing exceedingly high demands on cognitive resources.
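The informal steps (a) through (e) can be sketched as a toy expected-utility calculation. Everything below is hypothetical—the candidates, outcomes, probabilities, and utilities are invented for illustration—so this is a minimal sketch of the choice rule, not a model of any real election:

```python
# Toy sketch of the expected-utility rule: for each alternative, weight the
# value of every possible outcome by its probability of occurring, then
# choose the alternative with the highest expected utility.
# All names and numbers are hypothetical.

alternatives = {
    "Candidate A": [   # (probability of outcome, utility of outcome)
        (0.6, 8.0),    # e.g., enacts the voter's preferred economic policy
        (0.4, -3.0),   # e.g., fails to enact it
    ],
    "Candidate B": [
        (0.5, 5.0),
        (0.5, 2.0),
    ],
}

def expected_utility(outcomes):
    """Sum of probability-weighted utilities across possible outcomes."""
    return sum(p * u for p, u in outcomes)

# step (e): the predetermined criterion—maximize expected utility
best = max(alternatives, key=lambda a: expected_utility(alternatives[a]))
```

Note how the rule is compensatory: the negative outcome for the first candidate partially offsets the positive one, yet the candidate can still come out ahead overall.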

The popularity of rational choice theory derives from its promise of selecting the alternative with the highest probability of producing a value-maximizing decision—if all of its assumptions are met. Well-known examples of this general approach to political decision making are provided by Enelow and Hinich (1984), Hinich and Munger (1994), Palfrey and Rosenthal (1983), Riker and Ordeshook (1968, 1973), and Stokes (1963). This classic economic model of decision making is sometimes described as requiring “omniscient” or “demonic” rationality because its information processing and calculation demands greatly exceed the capabilities of normal people (Gigerenzer & Todd, 1999; Kahneman, 1994; Lupia, McCubbins, & Popkin, 2000; March, 1994).

Omniscient rationality is necessary for classic rational decision making, not only because of the amount of information that must be processed, but also because of the nature of the information processing. Rational decision making is inherently compensatory—that is, it inevitably involves weighing both positive and negative information about the same alternative. This is called compensatory reasoning in that positive information about a particular alternative can trade off against or compensate for negative information about that same alternative. A familiar example of compensatory choice is the prediction made by a standard regression equation, where some predictors are associated with positive coefficients, while others have negative coefficients. The weighted additive rule, the expected utility rule, and the additive difference rule are all formally specified decision strategies that conform to the dictates of classical rational choice (Ford, Schmitt, Schechtman, Hults, & Doherty, 1989). Compensatory choice is very difficult because it requires decision makers to formulate explicit value tradeoffs—for example, to support the candidate whose economic policies you strongly prefer, even though you disagree with many of her foreign policy stands. In the real world, decision makers typically go to great lengths to avoid making such value trade-offs (Hogarth, 1987; Jervis, 1976; Payne, Bettman, & Johnson, 1992, 1993).
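The compensatory character of the weighted additive rule can be sketched in the same spirit. The attributes, importance weights, and scores below are hypothetical; the point is only that a strong positive score on one attribute can offset a negative score on another:

```python
# Sketch of the compensatory weighted additive rule: each attribute of
# judgment receives an importance weight, and an alternative's overall
# value is the weighted sum of its attribute scores. Positive scores on
# one attribute can compensate for negative scores on another.
# Attributes, weights, and scores are hypothetical.

weights = {"economy": 0.5, "foreign_policy": 0.3, "integrity": 0.2}

def weighted_additive(scores):
    """Weighted sum of attribute scores for one alternative."""
    return sum(weights[attr] * scores[attr] for attr in weights)

candidate = {"economy": +4.0, "foreign_policy": -2.0, "integrity": +1.0}
# Strong economic agreement (+4.0, weight 0.5) compensates for
# foreign-policy disagreement (-2.0, weight 0.3): the net value is positive.
overall = weighted_additive(candidate)
```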

Rationality With Constraints

Downs (1957) added one tremendously important insight to the rational choice approach to political decision making—the idea that information search is costly, and as a consequence, decision makers might—indeed, most should—rationally decide to remain ignorant. The cost of gathering information becomes one more consideration that must factor into the decision calculus. Downs’ approach is often referred to as rational choice with constraints, because the decision maker is not assumed to have perfect or complete information. Value maximization is still the presumed goal, and compensatory decision making the necessary guiding process, but now the costs of gathering information are added to the negative side of the ledger.

Downs is rightfully celebrated for keeping the decision maker’s cognitive limits in mind, although he was writing before we had much theory about what those limits were, and how they influenced actual decision making. His insight was that decision makers have stopping rules that tell them when they have gathered enough information—essentially, when the costs of gathering additional information exceed its expected marginal utility. If information search cannot be comprehensive, Downs argued, rational citizens must then somehow pass on to others the costs of gathering information. Although he did not use the term, the strategies Downs discussed for simplifying information gathering (e.g., relying on the advice of interest groups or experts) could be called heuristics—a topic discussed more thoroughly below.3 Still, decision making is considered a conscious, deliberate procedure that would clearly involve System 2 processes.

Kelley and Mirer’s (1974) simple act of voting, a memory-based moment-of-decision positive minus negative tally, is an intentionally simplified version of rational decision making. According to Kelley and Mirer, when citizens are deciding how to vote, they search through memory and simply count the number of reasons they have for voting for each candidate on the ballot, subtract from that a count of the number of reasons they have for voting against each candidate, and choose the candidate who has the highest net positive tally. If two candidates have the same net tally, voters are assumed to choose the one who is consistent with their party identification—and if they have no partisan leanings, to flip a coin and choose randomly. Using responses to open-ended questions that have been asked since 1952 by the American National Election Study surveys as their indication of the contents of memory, Kelley and Mirer show that their simple rule correctly predicts reported vote choices almost 90% of the time.

Kelley and Mirer’s strategy simplifies rational choice in two important ways, first by treating all reasons as simply good or bad rather than trying to evaluate them on a more nuanced scale (the frequency of good and bad features strategy); and second by weighing each reason equally (the equal weights heuristic), rather than applying more discriminating importance or probability weights that are required by the weighted additive and expected utility rules. Decision researchers would call this editing—simplifying a difficult decision by eliminating (i.e., ignoring) potentially relevant information. Still, it clearly invokes System 2 cognition, in that it is a deliberate, conscious, and compensatory strategy, in which reasons to vote for a candidate trade off against reasons to vote against that candidate, and one that is applied in an even-handed, comparable way to all major alternatives on the ballot.
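Kelley and Mirer's rule is simple enough to state directly as code. The sketch below uses hypothetical candidates, reason lists, and party labels; the tie-breaking order (party identification first, then a random choice) follows their description:

```python
# Sketch of Kelley and Mirer's "simple act of voting": count reasons for
# minus reasons against each candidate, pick the highest net tally, break
# ties by party identification, and break remaining ties randomly.
# Candidate names, reason lists, and party labels are hypothetical.
import random

def simple_act_of_voting(reasons_for, reasons_against, parties, party_id=None):
    # net tally = (# reasons for) - (# reasons against), per candidate
    tallies = {c: len(reasons_for.get(c, [])) - len(reasons_against.get(c, []))
               for c in parties}
    best = max(tallies.values())
    tied = [c for c, t in tallies.items() if t == best]
    if len(tied) == 1:
        return tied[0]
    # tie: fall back on party identification, then on a coin flip
    partisan = [c for c in tied if parties[c] == party_id]
    return partisan[0] if partisan else random.choice(tied)
```

A usage example: a voter with two reasons for candidate X and one reason against candidate Y would choose X; a voter with one reason for each, and a Republican identification, would choose the Republican.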

Model 2: Deep Information Search, Unequally Distributed Across Alternatives

If Model 1 includes the oldest and best known general decision strategies that can be applied to any domain of decision making, Model 2 includes the best known theories of voter decision making. Campbell, Converse, Miller, and Stokes’ (1960) social psychological American voter “funnel of causality” model begins, for most voters, with a social identification with either the Democratic or Republican party that is typically learned early in life via standard social learning/childhood socialization processes (Sears, 1975) and remains stable throughout adulthood.4 As an election approaches, voters’ decisions are, according to the social psychological model, influenced by the information they are exposed to during the campaign, including agreement with the policy stands of the competing candidates, the past performance in office by incumbents seeking re-election, and by judgments of the competing candidates’ competence and integrity (Funk, 1997; Kinder, 1986; Markus & Converse, 1979; Miller & Shanks, 1996; Nie, Verba, & Petrocik, 1976). This is a lot of information to process. The major difference between the social psychological model and a rational choice perspective is the presumption that perceptions are not accurate but rather are colored by party identification. For this reason, Lau and Redlawsk (2006) refer to this model as confirmatory decision making.

The American Voter model does not directly address information search, although it implicitly assumes that most citizens are passive recipients of whatever information political and media elites provide in performing their jobs. Thus most citizens in presidential elections will bring a lot of information to bear on their vote choice, because there is so much information about the competing candidates available, free of charge, during presidential elections, but citizens would have concomitantly less information available about candidates running for statewide offices—as there is typically much less information about those races in the media—and even less information still about the candidates running in local elections. There is no requirement or even desire to gather the same information about all of the competing candidates, however; and given the motivated biases that inevitably result from party identification coloring the perceptions of the candidates during any ongoing election campaign, and from similarly biased media consumption habits (e.g., Coe et al., 2008; Dilliplane, 2011; Garrett, 2009; Lazarsfeld, Berelson, & Gaudet, 1948), there is every expectation that information search will be unbalanced, and in fact slanted in favor of the party with which the voter identifies. The typical voter, then, employs some combination of automatic, preconscious System 1 processes, along with more deliberate and conscious System 2 considerations—at least in higher-level national elections where such information is readily available.

Lodge, Steenbergen, and Brau (1995; see also Lodge, McGraw, & Stroh, 1989) present a hot cognition impression-driven on-line running tally model of candidate evaluation and choice that is guided by more recent social cognition research and an explicit awareness of cognitive limitations. Following Hastie and Park (1986), Lodge et al. assume that everything in the social world comes affectively charged, that is, hot cognition. Whenever new information about some candidate or political policy is encountered, any prior impression of that person or policy is immediately retrieved from memory and more or less automatically (and thus typically unconsciously) updated on-line according to the new information. The revised on-line tally is again stored in long-term memory, but the information itself, the reasons for any change in evaluation, is, in the name of cognitive economy, simply discarded and usually forgotten. A vote decision is then nothing more than retrieving the on-line tallies associated with the competing candidates, and selecting the one with the highest tally (that is, affect referral; see Wright, 1975). This is an extreme form of editing down a more complex decision to its most basic element. There is no requirement that those comparative judgments are based on the same type of information, however. Indeed, an on-line impression-driven model simplifies an inherently comparative judgment even more by decomposing the decision into separate and independent processes of forming impressions of the different candidates—another one of the standard ways that cognitively challenging tasks can be simplified.
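The mechanics of the on-line tally can be sketched as follows. The candidates and affect values are hypothetical; the point of the sketch is that only the running tally survives in memory, not the reasons behind it:

```python
# Sketch of the on-line running tally: each new piece of affectively
# charged information immediately updates a stored evaluation, and the
# information itself is then discarded. Only the tally persists.
# Candidates and affect values are hypothetical.

tallies = {"Candidate A": 0.0, "Candidate B": 0.0}

def encounter(candidate, affect):
    # retrieve the prior tally, update it on-line, and re-store it;
    # the reason behind the affect is not retained in memory
    tallies[candidate] += affect

# a hypothetical stream of campaign information
for cand, affect in [("Candidate A", +2.0),   # e.g., a liked policy stand
                     ("Candidate A", -0.5),   # e.g., a minor scandal
                     ("Candidate B", +1.0)]:  # e.g., a strong debate
    encounter(cand, affect)

# the vote decision is simple affect referral: highest tally wins
vote = max(tallies, key=tallies.get)
```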

In many ways, the on-line model is quite similar to Kelley and Mirer’s (1974) simple act of voting, except that it assumes that the tallying occurs automatically as relevant information is encountered, rather than retrieved from memory whenever a decision must be reached. The importance of the on-line model is that it suggests that political judgments and decisions are typically based on much more information than citizens can typically recall if asked at some point after they make a political decision. The survey respondents who provided the supporting evidence for Kelley and Mirer’s model could, on average, report only a total of 4 or 5 reasons for voting for and against the two major candidates in those presidential elections. However, their actual decision process was probably based on much more information than those 4 or 5 reasons, Lodge, Steenbergen, and Brau (1995) would argue, suggesting that voters do a much more thorough job of holding politicians accountable for their performance in office and promises during an election campaign than would be apparent based on the small number of reasons they are able to report to survey researchers.

The on-line running tally model is almost exclusively automatic System 1 cognition. Because positive information is assumed to increase the running tally while negative information is assumed to decrease it, clearly some sort of compensatory “updating” procedure is employed. Subsequent research by Lodge and Taber (2005, 2013) explicitly argues that the hot cognition assumed by their on-line impression-driven model typically results in “motivated reasoning” whereby any pre-existing affect immediately (and unconsciously) enters into the information processing stream, biasing any subsequent perceptions and judgments of related information (Kunda, 1990). This motivated reasoning sounds very similar to the “coloring” or short-term forces that long-standing party identification is assumed by Campbell et al. (1960) to interject into voter decision processes.

Model 3: Shallow Information Search, Equally Distributed Across Alternatives

The decision strategies discussed in the previous two sections require—or at least allow for—relatively deep information search. With the exception of classic economic rational choice and its assumption of omniscient cognitive abilities, all of the other decision strategies already discussed employ procedures such as editing and decomposition to reduce the cognitive demands of the decision strategy. The remaining decision strategies to be discussed in this and the following section produce cognitive economy by explicitly invoking stringent restrictions on information gathering. A term that is widely employed to describe such cognitive shortcuts is heuristic.

A heuristic is a decision strategy that, consciously or unconsciously, ignores some of the available information, with the primary goal of making decisions quickly and easily, rather than focusing on maximizing utility—or making the “best” possible decision.

Single issue voting is a decision strategy invoked by citizens who feel so strongly about an issue that they are willing to ignore everything else and base their decisions on one issue alone. Such a strategy obviously provides enormous cognitive savings, as the only thing voters need to learn is the candidate’s stand on this single issue. For example, Conover, Gray, and Coombs (1982) list abortion and the Equal Rights Amendment as issues that were treated in this manner by many voters in the 1980 national elections. Similarly, since the early 1990s, many Republican voters have expressed a mantra of “no new taxes,” and in the 2016 presidential elections, there were many voters who seemingly supported “anyone but Hillary” or “anyone but Trump.” Media reports on current elections rarely have reliable information on how many voters are thinking in a particular way, but shortly before the 2012 national elections Gallup reported that 17% of registered voters (1 in 6) stated that they would only vote for candidates who share their views on abortion (Saad, 2012).

Tversky (1969) formalized such a strategy with his lexicographic heuristic, which holds that decision makers compare alternatives on the attribute of judgment they perceive to be most important and select the alternative that is preferred on this most important consideration. If two or more alternatives “tie” on this most important criterion, the others are eliminated, and then the remaining alternatives are compared on the second most important criterion, and so on until only one alternative remains.
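A minimal sketch of the lexicographic heuristic, with hypothetical alternatives and attribute scores:

```python
# Sketch of Tversky's lexicographic heuristic: compare alternatives on the
# most important attribute; eliminate any that are not tied for best; then
# repeat on the next most important attribute until one alternative remains.
# Alternatives, attribute names, and scores are hypothetical.

def lexicographic(scores, attributes_by_importance):
    """Return the surviving alternative(s) after lexicographic elimination."""
    remaining = list(scores)
    for attr in attributes_by_importance:
        best = max(scores[a][attr] for a in remaining)
        remaining = [a for a in remaining if scores[a][attr] == best]
        if len(remaining) == 1:
            break  # a unique winner: no further attributes are consulted
    return remaining

# Example: A and B tie on the most important attribute, so the second
# attribute breaks the tie; C's high second-attribute score never matters.
scores = {
    "A": {"abortion": 3, "economy": 1},
    "B": {"abortion": 3, "economy": 2},
    "C": {"abortion": 2, "economy": 5},
}
winner = lexicographic(scores, ["abortion", "economy"])
```

Note the non-compensatory character of the rule: alternative C is eliminated on the first attribute, and its superiority on the second attribute can never compensate.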

Gigerenzer (2000; Gigerenzer & Gaissmaier, 2011; Gigerenzer & Goldstein, 1996, 1999; Gigerenzer & Todd, 1999) has described various fast and frugal decision heuristics that are often consciously chosen by decision makers explicitly because they are both fast and accurate—and often, more accurate than the decisions recommended by more complex decision rules and algorithms, such as multiple regression. For example, the recognition heuristic says that, when one of two alternatives is recognized and the other is not, infer that the recognized alternative has a higher value. The recognition heuristic is very accurate in answering questions such as: which of two tennis players is more likely to win a second-round match at Wimbledon, or which of two stocks is more likely to be profitable over the next month? The fluency heuristic says that, if both alternatives are recognized, infer that the one that is recognized faster has the higher value. The take-the-best heuristic holds that people make decisions based on recalling cues (criteria) that discriminate between two alternatives and then apply the simple decision rule of selecting the alternative that has a higher value on the recalled discriminating cue.
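Take-the-best can be sketched as a loop over cues considered in order of validity, stopping at the first cue that discriminates. The cue names and values below are hypothetical, and `None` marks a cue the decision maker cannot recall for that alternative:

```python
# Sketch of the take-the-best heuristic: go through cues in order of
# validity and decide as soon as one cue discriminates between the two
# alternatives; all remaining cues are ignored.
# Cue names and values (True/False/None) are hypothetical.

def take_the_best(cues_a, cues_b, cue_order):
    """Return "A" or "B" at the first discriminating cue, else None."""
    for cue in cue_order:
        a, b = cues_a.get(cue), cues_b.get(cue)
        if a is not None and b is not None and a != b:
            # this cue discriminates: choose the alternative with the
            # positive cue value and stop searching
            return "A" if a else "B"
    return None  # no cue discriminates; guess or fall back on another rule

# Example: the first cue ties, so the second cue decides.
choice = take_the_best(
    {"recognized": True, "endorsed_by_experts": False},
    {"recognized": True, "endorsed_by_experts": True},
    ["recognized", "endorsed_by_experts"],
)
```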

Model 4: Shallow Information Search, Unequally Distributed Across Alternatives

The Model 3 decision strategies discussed in the previous section rely on shallow—often very shallow—information search, but they gather that information about all alternatives in the choice set in order to choose among them. As such, it is not too difficult to imagine that they could often provide reasonably accurate, high quality judgments and decisions with far less effort than would be required to conduct any sort of detailed, rational, high information decision process with careful weighting of different types of information, and so on. This accuracy/effort trade-off could make Model 3 decision strategies a rational choice for many decision makers. The final category of decision strategies curtails information search even further, now not only ignoring a lot of the available information by conducting shallow search, but also sometimes even ignoring (or quickly eliminating) alternatives altogether. As such, the cognitive effort savings are even greater, but it becomes harder to imagine that decision quality would not significantly suffer from the trade-off.

Satisficing, one of the best-known decision heuristics (first described by Simon in 1947), illustrates this dilemma well. Decision makers are assumed to have some sort of aspiration level, a “latitude of acceptance” about each of the consequences (the considerations) associated with any decision they must make. Alternatives are considered one-at-a-time, and as soon as an alternative is found that equals or exceeds the aspiration level on every dimension of judgment, this satisfactory alternative is chosen, and the decision maker moves on to other problems. If no satisfactory alternative is found, the decision maker must lower his or her aspiration level, and try again.

There is no goal of finding the “best” possible solution with this decision strategy, just an acceptable one (i.e., one that “suffices”). Obviously, the order in which alternatives are considered will in many situations (e.g., whenever there are several satisfactory alternatives in the mix) have a strong influence over which alternative is selected. Satisficing is formally agnostic about the order in which alternatives are considered, but it is easy to combine this strategy with the recognition heuristic to predict order of consideration, and thus choice, in any real-world setting. If there is a “status quo” or somehow otherwise familiar alternative in the choice set, in practice that alternative will usually be among the first alternatives considered.
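The satisficing rule, including the lowering of aspiration levels when nothing suffices, can be sketched in Python as follows; the candidates, judgment dimensions, and ratings are invented for illustration:

```python
def satisfice(alternatives, aspiration):
    """Consider alternatives one at a time, in the order given; choose the
    first one that meets or exceeds the aspiration level on every dimension."""
    for name, values in alternatives:
        if all(values.get(dim, 0) >= level for dim, level in aspiration.items()):
            return name                  # good enough: stop searching
    return None                          # nothing suffices

def satisfice_with_relaxation(alternatives, aspiration, step=1):
    """If no alternative suffices, lower every aspiration level and retry."""
    choice = satisfice(alternatives, aspiration)
    while choice is None and any(v > 0 for v in aspiration.values()):
        aspiration = {d: max(0, v - step) for d, v in aspiration.items()}
        choice = satisfice(alternatives, aspiration)
    return choice

# Hypothetical candidates rated 0-10 on two dimensions.  Because search stops
# at the first satisfactory alternative, the order of consideration matters.
alts = [("Candidate A", {"economy": 6, "honesty": 4}),
        ("Candidate B", {"economy": 7, "honesty": 8})]
print(satisfice(alts, {"economy": 5, "honesty": 5}))  # Candidate B (A fails on honesty)
print(satisfice(alts, {"economy": 5, "honesty": 3}))  # Candidate A (first to suffice)
```

The two calls at the bottom show the order-dependence discussed above: with a lower honesty aspiration, Candidate A is chosen simply because it is considered first, even though Candidate B is better on both dimensions.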

Retrospective judgments (Fiorina, 1981; Healy & Malhotra, 2013) comprise another well-known Model 4 decision strategy. Although it is often irrational to consider “sunk costs” in making decisions, Fiorina argues that using incumbents’ past performance in office as an indicator of likely future performance is a very efficient decision strategy for voters to employ. There is no requirement for additional information search because, by election day, each decision maker has already lived through most of the incumbent’s term in office, and all a voter must do is search through memory to decide whether I (or my country) am better off than I (it) was some time ago. Making prospective judgments about what the competing candidates are promising to do if elected, on the other hand, is a much more demanding cognitive task. Decision makers must seek out information about the policy proposals of the competing candidates or parties (in every policy domain that concerns the voter), discount those promises by the likelihood that an elected politician will actually be able to implement those policies and by the likelihood that, if implemented, they will actually achieve the goals they are meant to achieve, and so on. This is a daunting task, well beyond the time and effort most people are willing to devote to deciding how to vote. But if many people are walking around with some sort of global “times are good/bad” judgment about their country, all they need to do is attribute responsibility for the nature of the times to an incumbent leader and vote accordingly at the next election.

There are several ways that retrospective judgments can be made more rational—for example, by comparing the performance of the current incumbent to a counterfactual judgment about what would have happened if another party or candidate with a different set of policy priorities and promises had been elected, or by comparing what the current incumbent has achieved to what an alternative party achieved the last time it was in office. Any such extension, however, takes much more cognitive effort—much additional information gathering from memory or some outside source, weighting by uncertainty, and so on—and is thus much more Model 1 than Model 4, which makes it extremely unlikely that very many people would ever do this. There are also several well-documented ways in which retrospective judgments lead to poor decisions: for example, when they reflect recency biases (Am I better off than I was a year ago?) rather than considering an entire term of office (Bartels, 2008), or when incumbents are credited or blamed for events over which they had little or no control (e.g., good harvests, natural disasters, shark attacks; see Achen & Bartels, 2016).

Lau and Redlawsk have labeled low-information, noncomparable, noncompensatory decision making as intuitive, heuristic-based decision strategies. Important political heuristics (that often have non-political analogs) include:

Partisanship. Voters who have a strong party stereotype or schema (Lodge & Hamill, 1986) can apply it indiscriminately (and save a great deal of cognitive effort) by assuming that all Democratic politicians share the liberal policy views and priorities of the prototypic Democratic political leader, and that all Republican politicians share the conservative policy views and priorities of the prototypic Republican leader. Indeed, in any political system with a strong, well-established party system, partisanship is such a powerful cue—and one that sharply discriminates between the different alternatives—that in many lower-level elections it serves as the only attribute that citizens who identify with a party consider in making their vote choices.

Common social stereotypes (about women, blacks, Southerners, policemen, gays and lesbians, older people, etc.). Stereotypes are applied in all aspects of social perception (Fiske & Taylor, 2008; Judd & Park, 1993; Jussim, Nelson, Manis, & Soffin, 1995; Macrae, Milne, & Bodenhausen, 1994; Smith & DeCoster, 1998). They operate in the same way as partisanship to provide cognitive savings when decision makers assume that all members of some class or category (e.g., all female politicians, all Jewish voters) share the prototypic characteristics that are commonly associated with that category.

Endorsements. Rather than doing all of the hard work of gathering and comparing a candidate’s policy positions in any policy area, the endorsement heuristic delegates that work to a trusted expert (see Arceneaux & Kolodny, 2009; McDermott, 2006; Rapoport, Stone, & Abramowitz, 1991). In politics, lobbying groups such as the American Association of Retired Persons (AARP), the National Organization for Women (NOW), the National Rifle Association (NRA), the Veterans of Foreign Wars (VFW), and any number of different labor unions and business groups are all more than willing to do all of that work for us, and to share their recommendations with the public. If I like the National Right to Life (NRL) organization, for example, and I know who they are endorsing in an election, I can follow their recommendations and be confident that my abortion views will be supported by their preferred candidate. The NRL endorsements are equally useful to me if I dislike that group and their views toward abortion, of course; in those circumstances, I know I should prefer the candidate they oppose. Either way, I am saving a great deal of cognitive effort by trusting the group to do the work for me. Nonpartisan groups with particular expertise—for instance, the American Bar Association, the American Medical Association—will also make recommendations or provide endorsements within their areas of expertise, for example, by rating an individual nominated by the President to a federal judgeship as “qualified” or not.

Prominent individuals, including former political leaders, sports stars, and popular celebrities, will also publicly endorse a candidate in an election, in the same way they might recommend any number of commercial products, and decision makers are free to follow their recommendations, if they so choose. One important difference between a political endorsement and a commercial endorsement is that if LeBron James speaks out on some political topic, we assume he is doing so because he really cares about that topic; whereas, if he recommends that we all go out and drink Coca-Cola®, we all assume it is because he is being paid a lot of money to say so.

Viability. Poll results provide reliable, widely available (when elections are near) information on the popularity of a particular individual or option that can be used strategically by voters as elections approach (Lau, Ditonto, & Love, 2017; Utych & Kam, 2014). Leaving aside the delusional nature of such considerations—that is, thinking my individual vote has some probability of determining the outcome of the election—many people do not want to waste their votes, and can use poll information to eliminate from consideration candidates or parties who effectively have no chance in an upcoming election.

Habit is perhaps the easiest heuristic of all in any area of decision making—although it says nothing about how the habit was established in the first place. Whenever you are in a familiar situation, don’t think, just do what you did the last time: always vote Republican; always turn on CNN to follow the news at 7 p.m.; never attend local city council meetings; stand up and take off your hat whenever the national anthem is played; offer your seat to an elderly person on the bus.

Measuring/Observing Decision Strategies

Although these various strategies have been separated into distinct categories for didactic purposes, they are, in practice, not as distinct as this listing suggests. Nor are decision makers limited to using only one strategy in reaching a decision. Decision makers will often employ several different strategies more or less simultaneously, or use them serially to help simplify (edit) a decision—for example, start with a viability strategy to eliminate all alternatives from the choice set that have no real chance of winning, but then employ a rational choice strategy in choosing among the smaller number of remaining candidates.
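Such a serial combination of strategies—a viability screen followed by a weighted-additive (rational choice) rule applied to the survivors—might be sketched as follows; all candidate names, ratings, poll figures, and the viability threshold are invented for illustration:

```python
def two_stage_choice(candidates, poll_shares, weights, min_viability=0.15):
    """Stage 1: a viability heuristic eliminates candidates polling below a
    threshold.  Stage 2: a weighted-additive rule picks the best survivor."""
    viable = [c for c in candidates if poll_shares[c["name"]] >= min_viability]

    def utility(c):
        # Classic weighted-additive evaluation over the remaining attributes
        return sum(weights[attr] * c[attr] for attr in weights)

    return max(viable, key=utility)["name"]

# Invented example: three candidates, one polling too low to be viable
candidates = [
    {"name": "A", "economy": 8, "experience": 3},
    {"name": "B", "economy": 5, "experience": 9},
    {"name": "C", "economy": 9, "experience": 9},  # best overall, but not viable
]
polls = {"A": 0.45, "B": 0.40, "C": 0.05}
weights = {"economy": 0.5, "experience": 0.5}
print(two_stage_choice(candidates, polls, weights))  # "B": C eliminated, B beats A
```

The example makes the editing stage visible: candidate C would win a pure weighted-additive comparison, but the viability screen removes C before the compensatory stage ever sees it.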

Discussing different decision strategies raises the obvious question of how we can tell what strategy a decision maker is using when making a decision. The first and most obvious answer is to ask them. Psychology has a long history of developing self-report scales to measure “unseen” aspects of human psychology, such as attitudes, personality, and different aspects of cognition. Cognitive style refers to “fairly stable individual differences in the way people perceive, think, solve problems, learn, and relate to others” (Kozhevnikov, 2007), which would certainly include making decisions. Many psychologists have concluded that cognitive style is often a better predictor of subsequent success in some field than measures of general intelligence.

Lau, Kleinberg, and Ditonto (2018) recently presented a 15-item “PolDec-5” instrument that measures general tendencies to pursue the four general types of political decision strategies described here, plus a fifth, “going with your gut.” These items have been administered to several large national samples and have been validated against observational measures of decision strategy use. For convenience, representative items from each subscale are presented in Figure 1. Interested readers should refer to the original article for more details of the analysis and the full set of items. Because these scales are meant to describe general cognitive styles, they are unlikely to predict behavior in any given situation particularly strongly, although they should be better predictors of summary performance across a variety of different political/vote decisions (see Davidson & Jaccard, 1979; Fazio & Williams, 1986; Fazio & Zanna, 1981; Wicker, 1969).

Figure 1. PolDec-5: A Political Decision-Making Scale.

Source: Lau, Kleinberg, and Ditonto (2018).

The larger question—which will not be answered here—is whether people have any unique insight into the causes of their own behavior. Psychologists who study cognitive style obviously believe they do. Other psychologists are quite skeptical, however (e.g., Nisbett & Wilson, 1977), and report several studies where experimental subjects seemed totally unaware of the situational influences on their behavior. If nothing else, each of us has observed more instances of our own decision making than anyone else has, and should, therefore, be able to give reasonably accurate reports about what we typically do—which these scales are trying to measure—if not why we do what we do.

A second method for trying to measure decision strategies is to directly observe decision makers while they are making a decision. This is easier said than done. Verbal protocols, for example (Ericsson & Simon, 1993), ask subjects to “think aloud” while making some decision—to verbalize the thoughts that come to mind as the decision is being made. Such protocols can provide extremely rich descriptions of the sequential contents of working memory, but they are inherently limited to System 2 cognition and decisions. Researchers also disagree about whether the very task of verbalizing thoughts changes the underlying cognitive processes, a serious potential limitation of this procedure. See Crutcher (1994), Ericsson and Simon (1993), Payne (1994), Whitney and Budd (1996), and Wilson (1994) for useful examples and discussion of the strengths and weaknesses of this technique.

Because different decision strategies require very different amounts, types, and distributions of information across alternatives, an important insight of behavioral decision research is that different patterns of information search clearly reflect the use of distinguishable decision strategies (Ford et al., 1989; Jacoby, Jaccard, Kuss, Troutman, & Mazursky, 1987). If a decision has been reached before anything close to “complete” information has been gathered, for example, then it is obvious that the decision maker is not employing any rational choice strategy. Taking advantage of modern computer technology, researchers have developed various process tracing methodologies to observe and record information search patterns as subjects are making some well-structured decision in a laboratory (see Schulte-Mecklenbeck et al., 2017, for an excellent recent review). Information boards present decision makers with an attribute by alternative matrix of information on a computer screen, where the columns of the matrix typically represent the different alternatives under consideration, and the rows of the matrix display the different attributes of judgment or “considerations” across which the alternatives may differ. Visible labels on the information boxes in each cell of the matrix describe the information that will be revealed if that box is opened (Payne et al., 1993). Only one box can remain open at a time. The computer records what boxes are opened, the order in which they are opened, how long each box remains open—and of course, ultimately, what choice is made—providing a great deal of information about the strategies people use in making many types of decisions, and greatly facilitating research on questions such as how information display affects information search, how decision strategies affect choices and decision accuracy, and so on.
With information boards, we get direct measures of the depth of search and the comparability of search across alternatives, and thus we can directly observe decision strategies in operation. Willemsen and Johnson (2011) have developed an excellent software package that can be downloaded by registered users for free from MouselabWEB. See Schulte-Mecklenbeck, Kühberger, and Ranyard (2011) for a review of the Mouselab program and process tracing methods more generally, and see Mintz, Geva, Redd, and Carnes (1997) for an interesting application of it to elite decision making.
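As an illustration of how depth and comparability might be computed from an information-board log, the following Python sketch uses two simple, invented operationalizations (the share of all available boxes opened, and the spread of openings across alternatives); published studies operationalize these quantities in various ways:

```python
from collections import Counter

def search_measures(openings, alternatives, n_attributes):
    """Summarize an information-board log.  `openings` is the ordered list of
    (alternative, attribute) boxes a subject opened.  Depth is the share of
    all available boxes examined at least once; comparability is indexed by
    the range of per-alternative counts (0 = perfectly even search)."""
    unique = set(openings)  # ignore repeat openings of the same box
    depth = len(unique) / (len(alternatives) * n_attributes)
    per_alt = Counter(alt for alt, _ in unique)
    counts = [per_alt.get(a, 0) for a in alternatives]
    spread = max(counts) - min(counts)
    return depth, spread

# Hypothetical log: the subject examines candidate A thoroughly but candidate
# B hardly at all -- shallow, uneven search rather than rational choice
log = [("A", "economy"), ("A", "health"), ("A", "crime"), ("B", "economy")]
depth, spread = search_measures(log, ["A", "B"], n_attributes=4)
print(depth, spread)  # 0.5 2: half the boxes opened, very uneven across candidates
```

A pattern like this one—moderate depth concentrated on a single alternative—is the kind of observable signature that distinguishes, say, confirmatory or heuristic-based search from an equally distributed rational choice search.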

Lau and Redlawsk (2001b, 2006) refer to this technique as a static information board, and see it as an ideal mechanism for studying relatively simple decisions where all of the relevant information is available simultaneously, where information acquisition is completely under the decision maker’s control, and where the decision must be reached in a relatively short period of time—conditions that describe many consumer decisions (e.g., Which brand of cereal should I buy off of a supermarket shelf? Which company provides the best auto insurance where I live?) but would not describe many political decisions. Election campaigns, for example, occur over extended periods of time and involve a mix of easily available information (party affiliation) and much more difficult-to-find information (detailed policy proposals), the availability of which changes over time. Indeed, the information itself (e.g., policy stands) sometimes changes over time.

Lau and Redlawsk have developed a Dynamic Process Tracing Environment (DPTE) program to study judgment and decision making in more complex and dynamic social situations such as political campaigns (Andersen, Redlawsk, & Lau, forthcoming; Lau, 1995; Lau & Redlawsk, 1997, 2001b, 2006). DPTE studies retain the control and detailed record keeping of static information boards but change the context in which decisions must be reached. The typical DPTE study provides much more information than any decision maker can process and integrate, hence forcing discrimination and selectivity in information search. While the entire information environment—the full attributes by alternatives matrix—is clear from the outset with a static information board, in the typical DPTE study, only a small portion of the total information environment can be seen and accessed at any point in time, as information boxes scroll down the computer screen to simulate the ongoing flow of information during an election campaign. This makes it easy to manipulate the relative ease of accessing different types of information, as the probability that any information box appears on the screen can be controlled. Figure 2 provides a screenshot of the information available (albeit only for a few seconds) during a DPTE study. Interested readers can learn more about DPTE by going to the Dynamic Process Tracing Environment web page.

Figure 2. Screenshot of Dynamic Information Board.

Although the larger ethos of a DPTE study is to provide decision makers with great discretion over the information they gather in the process of making a decision, it is also quite possible to expose all decision makers to certain relevant information without their explicitly choosing to look at it, for example in the form of campaign ads that can take over the computer screen for 20 seconds or so. Although the information environment is much more complex and variable than it is with a static information board, it is still possible with the information recorded by the DPTE program to measure the relative depth and comparability of information search across the dynamic environment, and thus to make very informed inferences about the types of decision strategies that subjects are employing.

Conclusion: The Consequences of Utilizing Different Decision Strategies

An alternative method for studying decision strategies tries to simulate, statistically, what decision makers should be doing if they followed a specific strategy, rather than observing actual decision making behavior (e.g., Kim, Taber, & Lodge, 2010; Kollman, Miller, & Page, 1992; Laver, 2005; Taber & Steenbergen, 1995). Now the goal is evaluation rather than description—not, “How do decision makers actually make decisions,” but “What would the consequences be if decision makers followed a specific decision strategy?” Computers do not have the same cognitive limits as humans, however, and it is not clear how much these studies can tell us about the effectiveness of different decision strategies in the real-world conditions under which they must operate.

Nonetheless, this article concludes with a brief discussion of the “so what?” question because it can be addressed with both observational and experimental research. We should care about decision strategies because they obviously influence the nature and accuracy of the choices that people make. If a strategy such as satisficing eliminates some alternatives from consideration, then obviously, the eliminated alternatives cannot be chosen. If decision strategies rely on particular cognitive processes—for example, recall from long term memory—then decisions will be influenced by biases such as primacy and recency in those cognitive processes. If decision strategies, and/or the judgmental processes that underlie them, are biased by pre-existing implicit or explicit preferences, then the resulting decisions will be biased in the same direction.

Rational choice strategies guarantee the highest probability of a value-maximizing choice, but only under conditions that rarely if ever hold in the real world. Perhaps the most important question about decision strategies is how well do they perform under conditions that we are likely to experience in the real world? Gigerenzer and colleagues (Gigerenzer, 2000; Gigerenzer & Gaissmaier, 2011; Gigerenzer & Goldstein, 1996; Gigerenzer & Todd, 1999) discuss several examples where fast and frugal heuristic strategies produce decisions that are as good as, and often better than, the decisions reached by more complex decision algorithms.

Lau and Redlawsk (1997, 2006) discuss several experiments where low information Model 3 and Model 4 decision strategies were more likely to result in higher quality (“correct”) vote decisions than decision strategies that resembled rational choice. A clear task for future research is to more clearly explicate the situations when different decision strategies perform particularly well or particularly poorly. For example, decision strategies that rely on stereotypic beliefs will suffer whenever those stereotypes do not hold (Lau & Redlawsk, 2001a). Having to “opt out,” rather than “opt in” to being an organ donor, makes a huge difference in the number of available organs for patients needing organ transplants (Johnson & Goldstein, 2003). How can we devise institutional factors that “nudge” citizens to employ strategies and/or make decisions that are especially useful or socially beneficial in particular situations that frequently occur in the real world?

Further Reading

  • Brandstatter, E., Gigerenzer, G., & Hertwig, R. (2006). The priority heuristic: Making choices without trade-offs. Psychological Review, 113(2), 409–432.
  • Lerner, J. S., Li, Y., Valdesolo, P., & Kassam, K. S. (2015). Emotion and decision making. Annual Review of Psychology, 66, 799–823.
  • Mellers, B. A., Schwartz, A., & Cooke, A. D. J. (1998). Judgment and decision making. Annual Review of Psychology, 49(1), 447–477.
  • Oppenheimer, D. M., & Kelso, E. (2015). Information processing as a paradigm for decision making. Annual Review of Psychology, 66, 277–294.
  • Weber, E. U., & Johnson, E. J. (2009). Mindful judgment and decision making. Annual Review of Psychology, 60, 53–85.
    Elite Decision Making
    • Allison, G. T., & Zelikow, P. D. (1999). Essence of decision: Explaining the Cuban missile crisis (2nd ed.). New York, NY: Longman.
    • Geva, N., Mayhar, J., & Skorick, J. M. (2000). The cognitive calculus of foreign policy decision making: An experimental assessment. Journal of Conflict Resolution, 44(4), 447–471.
    • Jervis, R. (2010). Why intelligence fails: Lessons from the Iranian Revolution and the Iraq War. Ithaca, NY: Cornell University Press.
    • Kuperman, R. D. (2006). Making research on foreign policy decision making more dynamic. International Studies Review, 8(3), 537–544.
    • Levy, J. S. (2013). Psychology and foreign policy decision-making. In L. Huddy, D. O. Sears & J. S. Levy (Eds.), Oxford handbook of political psychology (2nd ed.). New York, NY: Oxford University Press.
    • Mintz, A. (2005). Applied decision analysis: Utilizing poliheuristic theory to explain and predict foreign policy and national security decisions. International Studies Perspectives, 6(1), 94–98.
    • Rapport, A. (2018). Cognitive approaches to foreign policy analysis. Oxford Research Encyclopedias: Politics.
    • Redd, S. B., Brule, D., & Mintz, A. (2018). Poliheuristic theory and foreign policy analysis. Oxford Research Encyclopedias: International Studies.
    • Tetlock, P. E. (2005). Expert political judgment: How good is it? How can we know? Princeton, NJ: Princeton University Press.


References
  • Achen, C. H., & Bartels, L. M. (2016). Democracy for realists: Why elections do not produce responsive government. Princeton, NJ: Princeton University Press.
  • Andersen, D. J., Redlawsk, D. P., & Lau, R. R. (Forthcoming). The dynamic process tracing environment (DPTE) as a tool for studying political communication. Political Communication.
  • Anderson, J. R. (1983). The architecture of cognition. Cambridge, MA: Harvard University Press.
  • Anderson, J. R. (1996). ACT: A simple theory of complex cognition. American Psychologist, 51(4), 355–365.
  • Anderson, M. C., Bjork, R. A., & Bjork, E. L. (1994). Remembering can cause forgetting: Retrieval dynamics in long-term memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20(5), 1063–1087.
  • Arceneaux, K., & Kolodny, R. (2009). Educating the least informed: Group endorsements in a grassroots campaign. American Journal of Political Science, 53(4), 755–770.
  • Arrow, K. J. (1951). Social choice and individual values. New York, NY: Wiley.
  • Austen-Smith, D., & Banks, J. S. (1999). Positive political theory I: Collective preference. Ann Arbor, MI: University of Michigan Press.
  • Barrett, L. F., Tugade, M. M., & Engle, R. W. (2004). Individual differences in working memory capacity and dual-process theories of the mind. Psychological Bulletin, 130(4), 553–573.
  • Bartels, L. M. (2008). Unequal democracy: The political economy of the new gilded age. Princeton, NJ: Princeton University Press.
  • Campbell, A., Converse, P. E., Miller, W. E., & Stokes, D. E. (1960). The American voter. Chicago, IL: University of Chicago Press.
  • Chaiken, S. (1980). Heuristic versus systematic information processing and the use of source versus message cues in persuasion. Journal of Personality and Social Psychology, 39(5), 752–766.
  • Chaiken, S., & Trope, Y. (Eds.). (1999). Dual-process theories in social psychology. New York, NY: Guildford.
  • Chong, D. (2013). Degrees of rationality in politics. In L. Huddy, D. O. Sears, & J. S. Levy (Eds.), Oxford handbook of political psychology (2nd Ed.), New York, NY: Oxford University Press.
  • Coe, K., Tewksbury, D., Bond, B. J., Drogos, K. L., Porter, R. W., Yahn, A., & Zhang, Y. (2008). Hostile news: Partisan use and perceptions of cable news programming. Journal of Communication, 58(2), 201–219.
  • Conover, P. J., Gray, V., & Coombs, S. (1982). Single-issue voting: Elite-mass linkages. Political Behavior, 4(4), 309–331.
  • Crutcher, R. J. (1994). Telling what we know: The use of verbal report methodologies in psychological research. Psychological Science 5(5), 241–244.
  • Davidson, A. R., & Jaccard, J. J. (1979). Variables that moderate the attitude–behavior relation: Results of a longitudinal survey. Journal of Personality and Social Psychology, 37(8), 1364–1376.
  • Devine, P. G. (1989). Stereotypes and prejudice: Their automatic and controlled components. Journal of Personality and Social Psychology, 56(1), 5–18.
  • Dilliplane, S. (2011). All the news you want to hear: The impact of partisan news exposure on political participation. Public Opinion Quarterly, 75(2), 287–316.
  • Downs, A. (1957). An economic theory of democracy. New York, NY: Harper & Row.
  • Druckman, J. N., & Lupia, A. (2016). Preference change in competitive political environments. Annual Review of Political Science, 19, 13–31.
  • Enelow, J. M., & Hinich, M. J. (1984). The spatial theory of voting: An introduction. New York, NY: Cambridge University Press.
  • Ericsson, K. A., & Simon, H. A. (1993). Protocol analysis: Verbal reports as data (rev. ed.). Cambridge, MA: MIT Press.
  • Evans, J. S. B. T. (1996). Deciding before you think: Relevance and reasoning in the selection task. British Journal of Psychology, 87(2), 223–240.
  • Evans, J. S. B. T. (2003). In two minds: Dual process accounts of reasoning. Trends in Cognition Science, 7(10), 444–459.
  • Evans, J. S. B. T. (2006). The heuristic-analytic theory of reasoning: Extension and evaluation. Psychonomic Bulletin and Review, 13, 378–395.
  • Evans, J. S. B. T. (2008). Dual-processing accounts of reasoning, judgment, and social cognition. Annual Review of Psychology, 59, 255–278.
  • Evans, J. S. B. T., & Curtis-Holmes, J. (2005). Rapid responding increases belief bias: Evidence for the dual process theory of reasoning. Thinking & Reasoning, 11(4), 382–389.
  • Fazio, R. H., & Williams, C. J. (1986). Attitude accessibility as a moderator of the attitude-perception and attitude-behavior relations: An investigation of the 1984 presidential election. Journal of Personality and Social Psychology, 51(3), 505–514.
  • Fazio, R. H., & Zanna, M. P. (1981). Direct experience and attitude–behavior consistency. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 14, pp. 161–202). New York, NY: Academic Press.
  • Fiorina, M. P. (1981). Retrospective voting in American national elections. New Haven, CT: Yale University Press.
  • Fiske, S. T., & Taylor, S. E. (2008). Social cognition: From brains to culture. Boston, MA: McGraw-Hill.
  • Ford, J. K., Schmitt, N., Schechtman, S. L., Hults, B. M., & Doherty, M. L. (1989). Process tracing methods: Contributions, problems, and neglected research questions. Organizational Behavior and Human Decision Processes, 43(1), 75–117.
  • Forgas, J. P., Williams, K. D., & Von Hippel, W. (Eds.). (2011). Social judgments: Implicit and explicit processes. New York, NY: Cambridge University Press.
  • Funk, C. L. (1997). Implications of political expertise in candidate trait evaluations. Political Research Quarterly, 50(3), 675–697.
  • Garrett, R. K. (2009). Politically motivated reinforcement seeking: Reframing the selective exposure debate. Journal of Communication, 59(4), 676–699.
  • Gigerenzer, G. (2000). Adaptive thinking: Rationality in the real world. New York, NY: Oxford University Press.
  • Gigerenzer, G. (2008). Why heuristics work. Perspectives on Psychological Science, 3(1), 20–29.
  • Gigerenzer, G., & Gaissmaier, W. (2011). Heuristic decision making. Annual Review of Psychology, 62, 451–482.
  • Gigerenzer, G., & Goldstein, D. G. (1996). Reasoning the fast and frugal way: Models of bounded rationality. Psychological Review, 103(4), 650–669.
  • Gigerenzer, G., & Goldstein, D. G. (1999). Betting on one good reason: The take-the-best heuristic. In G. Gigerenzer, P. M. Todd, & ABC Research Group (Eds.), Simple heuristics that make us smart (pp. 75–95). New York, NY: Oxford University Press.
  • Gigerenzer, G., & Todd, P. M. (1999). Fast and frugal heuristics: The adaptive toolbox. In G. Gigerenzer, P. M. Todd, & ABC Research Group (Eds.), Simple heuristics that make us smart (pp. 3–34). New York, NY: Oxford University Press.
  • Gilovich, T., Griffin, D. W., & Kahneman, D. (Eds.). (2002). The psychology of intuitive judgment. New York, NY: Cambridge University Press.
  • Hastie, R. (1986). A primer of information-processing theory for the political scientist. In R. R. Lau & D. O. Sears, (Eds.), Political cognition: The 19th Annual Carnegie Symposium on Cognition (pp. 11–39). Hillsdale, NJ: Erlbaum.
  • Hastie, R., & Dawes, R. M. (2009). Rational choice in an uncertain world (2nd ed.). Thousand Oaks, CA: SAGE.
  • Hastie, R., & Park, B. (1986). The relationship between memory and judgment depends on whether the task is memory-based or on-line. Psychological Review, 93(3), 258–268.
  • Healy, A., & Malhotra, N. (2013). Retrospective voting reconsidered. Annual Review of Political Science, 16, 285–306.
  • Herstein, J. A. (1981). Keeping the voter’s limits in mind: A cognitive process analysis of decision making in voting. Journal of Personality and Social Psychology, 40(5), 843–861.
  • Hinich, M. J., & Munger, M. C. (1994). Ideology and the theory of political choice. Ann Arbor, MI: University of Michigan Press.
  • Hogarth, R. M. (1987). Judgment and choice (2nd ed.). New York, NY: Wiley.
  • Jacoby, J., Jaccard, J., Kuss, A., Troutman, T., & Mazursky, D. (1987). New directions in behavioral process research: Implications for social psychology. Journal of Experimental Social Psychology, 23(2), 146–175.
  • Jervis, R. (1976). Perception and misperception in international politics. Princeton, NJ: Princeton University Press.
  • Johnson, E. J., & Goldstein, D. (2003). Do defaults save lives? Science, 302(5649), 1338–1339.
  • Johnson-Laird, P. N. (1983). Mental models. New York, NY: Cambridge University Press.
  • Judd, C. M., & Park, B. (1993). Definition and assessment of accuracy in social stereotypes. Psychological Review, 100(1), 109–128.
  • Jussim, L., Nelson, T. E., Manis, M., & Soffin, S. (1995). Prejudice, stereotypes, and labeling effects: Sources of bias in person perception. Journal of Personality and Social Psychology, 68(2), 228–246.
  • Kahneman, D. (1994). New challenges to the rationality assumption. Journal of Institutional and Theoretical Economics, 150(1), 18–36.
  • Kahneman, D. (2011). Thinking, fast and slow. New York, NY: Farrar, Straus & Giroux.
  • Kahneman, D., & Frederick, S. (2005). A model of heuristic judgment. In K. Holyoak & R. G. Morrison (Eds.), Cambridge handbook of thinking and reasoning (pp. 267–294). New York, NY: Cambridge University Press.
  • Kahneman, D., Slovic, P., & Tversky, A. (Eds.). (1982). Judgment under uncertainty: Heuristics and biases. New York, NY: Cambridge University Press.
  • Kelley, S., Jr., & Mirer, T. W. (1974). The simple act of voting. American Political Science Review, 68(2), 572–591.
  • Kinder, D. R. (1986). Presidential character revisited. In R. R. Lau & D. O. Sears (Eds.), Political cognition: The 19th Annual Carnegie Symposium on Cognition (pp. 233–256). Hillsdale, NJ: Erlbaum.
  • Kim, S.-Y., Taber, C. S., & Lodge, M. (2010). A computational model of the citizen as motivated reasoner: Modeling the dynamics of the 2000 presidential election. Political Behavior, 32(1), 1–28.
  • Kinder, D. R., & Kiewiet, D. R. (1979). Economic discontent and political behavior: The role of personal grievances and collective economic judgments in congressional voting. American Journal of Political Science, 23(3), 495–527.
  • Kollman, K., Miller, J. H., & Page, S. E. (1992). Adaptive parties in spatial elections. American Political Science Review, 86(4), 929–937.
  • Kozhevnikov, M. (2007). Cognitive styles in the context of modern psychology: Toward an integrated framework of cognitive style. Psychological Bulletin, 133(3), 464–481.
  • Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498.
  • Langley, P., Simon, H. A., Bradshaw, G. L., & Zytkow, J. M. (1987). Scientific discovery. Cambridge, MA: MIT Press.
  • Lau, R. R. (1995). Information search during an election campaign: Introducing a process tracing methodology to political science. In M. Lodge & K. McGraw (Eds.), Political judgment: Structure and process (pp. 179–205). Ann Arbor: University of Michigan Press.
  • Lau, R. R. (2003). Models of decision making. In D. O. Sears, L. Huddy, & R. Jervis (Eds.), Oxford handbook of political psychology (pp. 19–59). New York, NY: Oxford University Press.
  • Lau, R. R., Ditonto, T. M., & Love, J. (2017). Showdown at the OK Corral: Testing competing theories of political judgment. Presented at the New York Area Political Psychology Meeting (November 4), Columbia University.
  • Lau, R. R., Kleinberg, M. S., & Ditonto, T. M. (2018). Measuring voter decision strategies in political behavior and public opinion research. Public Opinion Quarterly, 82(Suppl. 1), 325–350.
  • Lau, R. R., & Redlawsk, D. P. (1997). Voting correctly. American Political Science Review, 91(3), 585–599.
  • Lau, R. R., & Redlawsk, D. P. (2001a). Advantages and disadvantages of cognitive heuristics in political decision making. American Journal of Political Science, 45(4), 951–971.
  • Lau, R. R., & Redlawsk, D. P. (2001b). An experimental study of information search, memory, and decision making during a political campaign. In James Kuklinski (Ed.), Citizens and politics: Perspectives from political psychology (pp. 136–159). New York, NY: Cambridge University Press.
  • Lau, R. R., & Redlawsk, D. P. (2006). How voters decide: Information processing during election campaigns. New York, NY: Cambridge University Press.
  • Lau, R. R., & Sears, D. O. (Eds.). (1986). Political cognition: The 19th Annual Carnegie Symposium on Cognition. Hillsdale, NJ: Erlbaum.
  • Laver, M. (2005). Policy and the dynamics of political competition. American Political Science Review, 99(2), 263–275.
  • Lazarsfeld, P. F., Berelson, B. R., & Gaudet, H. (1948). The people’s choice. New York, NY: Columbia University Press.
  • Levy, J. S. (2013). Psychology and foreign policy decision-making. In L. Huddy, D. O. Sears, & J. S. Levy (Eds.), Oxford handbook of political psychology (2nd ed.). New York, NY: Oxford University Press.
  • Lodge, M., & Hamill, R. (1986). A partisan schema for political information processing. American Political Science Review, 80(2), 505–519.
  • Lodge, M., McGraw, K. M., & Stroh, P. (1989). An impression-driven model of candidate evaluation. American Political Science Review, 83(2), 399–419.
  • Lodge, M., Steenbergen, M. R., & Brau, S. (1995). The responsive voter: Campaign information and the dynamics of candidate evaluation. American Political Science Review, 89(2), 309–326.
  • Lodge, M., & Taber, C. S. (2005). The automaticity of affect for political leaders, groups, and issues: An experimental test of the hot cognition hypothesis. Political Psychology, 26(3), 455–482.
  • Lodge, M., & Taber, C. S. (2013). The rationalizing voter. New York, NY: Cambridge University Press.
  • Lupia, A., McCubbins, M. D., & Popkin, S. L. (2000). Beyond rationality: Reason and the study of politics. In A. Lupia, M. D. McCubbins, & S. L. Popkin (Eds.), Elements of reason: Cognition, choice, and the bounds of rationality. New York, NY: Cambridge University Press.
  • Machina, M. (1982). “Expected Utility”: Analysis without the independence axiom. Econometrica, 50(2), 277–323.
  • Macrae, C. N., Milne, A. B., & Bodenhausen, G. V. (1994). Stereotypes as energy-saving devices: A peek inside the cognitive toolbox. Journal of Personality and Social Psychology, 66(1), 37–47.
  • March, J. G. (1988). Decisions and organizations. Oxford: Blackwell.
  • March, J. G. (1994). A primer on decision making. New York, NY: Free Press.
  • March, J. G., & Olsen, J. P. (1989). Rediscovering institutions: The organizational basis of politics. New York, NY: Free Press.
  • March, J. G., & Simon, H. A. (1958). Organizations. New York, NY: Wiley.
  • Markus, G. B., & Converse, P. E. (1979). A dynamic simultaneous equation model of electoral choice. American Political Science Review, 73(4), 1055–1070.
  • McDermott, M. L. (2006). Not for members only: Group endorsements as electoral information cues. Political Research Quarterly, 59(2), 249–258.
  • Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81–97.
  • Miller, W. E., & Shanks, J. M. (1996). The new American voter. Cambridge, MA: Harvard University Press.
  • Mintz, A., Geva, N., Redd, S. B., & Carnes, A. (1997). The effect of dynamic and static choice sets on political decision making: An analysis using the decision board platform. American Political Science Review, 91(3), 553–566.
  • Nie, N. H., Verba, S., & Petrocik, J. R. (1976). The changing American voter. Cambridge, MA: Harvard University Press.
  • Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84(3), 231–259.
  • Norman, D. A. (1982). Learning and memory. San Francisco, CA: Freeman.
  • Osman, M. (2005). An evaluation of dual-process theories of reasoning. Psychonomic Bulletin & Review, 11(6), 988–1010.
  • Palfrey, T. R., & Rosenthal, H. (1983). A strategic calculus of voting. Public Choice, 41(1), 7–53.
  • Payne, J. W. (1994). Thinking aloud: Insights into information processing. Psychological Science, 5(5), 241–248.
  • Payne, J. W., Bettman, J. R., & Johnson, E. J. (1988). Adaptive strategy selection in decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14(3), 534–552.
  • Payne, J. W., Bettman, J. R., & Johnson, E. J. (1992). Behavioral decision research: A constructive processing perspective. Annual Review of Psychology, 43(1), 87–131.
  • Payne, J. W., Bettman, J. R., & Johnson, E. J. (1993). The adaptive decision maker. New York, NY: Cambridge University Press.
  • Petty, R. E., & Cacioppo, J. T. (1981). Attitudes and persuasion: Classical and contemporary approaches. Dubuque, IA: Brown.
  • Raiffa, H. (1968). Decision analysis: Introductory lectures on choice under uncertainty. Reading, MA: Addison-Wesley.
  • Rapoport, R. B., Stone, W. J., & Abramowitz, A. I. (1991). Do endorsements matter? Group influence in the 1984 Democratic caucuses. American Political Science Review, 85(1), 193–204.
  • Redlawsk, D. P., & Lau, R. R. (2013). Behavioral decision making. In L. Huddy, D. O. Sears, & J. S. Levy (Eds.), Oxford handbook of political psychology (2nd ed.). New York, NY: Oxford University Press.
  • Reyna, V. F. (2004). How people make decisions that involve risk: A dual-processes approach. Current Directions in Psychological Science, 13(2), 60–66.
  • Riker, W. H. (1995). The political psychology of rational choice theory. Political Psychology, 16(1), 23–44.
  • Riker, W. H., & Ordeshook, P. C. (1968). A theory of the calculus of voting. American Political Science Review, 62(1), 25–42.
  • Riker, W. H., & Ordeshook, P. C. (1973). An introduction to positive political theory. Englewood Cliffs, NJ: Prentice-Hall.
  • Saad, L. (2012, October 4). Abortion is threshold issue for one in six U.S. voters. Gallup.
  • Savage, L. J. (1954). The foundations of statistics. New York, NY: Wiley.
  • Schulte-Mecklenbeck, M., Johnson, J. G., Böckenholt, U., Goldstein, D. G., Russo, J. E., Sullivan, N. J., & Willemsen, M. C. (2017). Process-tracing methods in decision making: On growing up in the 70s. Current Directions in Psychological Science, 26(5), 442–450.
  • Schulte-Mecklenbeck, M., Kühberger, A., & Ranyard, R. (Eds.). (2011). A handbook of process tracing methods for decision research: A critical review and user’s guide. New York, NY: Psychology Press.
  • Sears, D. O. (1975). Political socialization. In F. I. Greenstein & N. W. Polsby (Eds.), Handbook of political science (Vol. 2, pp. 93–127). Menlo Park, CA: Addison-Wesley.
  • Sears, D. O., & Funk, C. (1991). The role of self-interest in social and political attitudes. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 24, pp. 1–91). New York, NY: Academic Press.
  • Simon, H. A. (1947). Administrative behavior. New York, NY: Macmillan.
  • Simon, H. A. (1957). Models of man: Social and rational. New York, NY: Wiley.
  • Simon, H. A. (1979). Information processing models of cognition. Annual Review of Psychology, 30(1), 363–396.
  • Simon, H. A. (1985). Human nature in politics: The dialogue of psychology with political science. American Political Science Review, 79(2), 293–304.
  • Sloman, S. A. (1996). The empirical case for two systems of reasoning. Psychological Bulletin, 119(1), 3–22.
  • Smith, E. R., & DeCoster, J. (1998). Knowledge acquisition, accessibility, and use in person perception and stereotyping: Simulation with a recurrent connectionist network. Journal of Personality and Social Psychology, 74(1), 21–35.
  • Smith, E. R., & DeCoster, J. (2000). Dual-process models in social and cognitive psychology: Conceptual integration and links to underlying memory systems. Personality and Social Psychology Review, 4(2), 108–131.
  • Stokes, D. E. (1963). Spatial models of party competition. American Political Science Review, 57(2), 368–377.
  • Taber, C. S., & Steenbergen, M. R. (1995). Computational experiments in electoral behavior. In M. Lodge & K. M. McGraw (Eds.), Political judgment: Structure and process (pp. 141–178). Ann Arbor: University of Michigan Press.
  • Tversky, A. (1969). Intransitivity of preferences. Psychological Review, 76(1), 31–48.
  • Tversky, A., & Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review, 90(4), 293–315.
  • Utych, S. M., & Kam, C. D. (2014). Viability, information seeking, and vote choice. Journal of Politics, 76(1), 152–166.
  • von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior. Princeton, NJ: Princeton University Press.
  • Wason, P. C., & Evans, J. S. B. T. (1975). Dual processes in reasoning? Cognition, 3(2), 141–154.
  • Whitney, P., & Budd, D. (1996). Think-aloud protocols and the study of comprehension. Discourse Processes, 21(3), 341–351.
  • Wicker, A. W. (1969). Attitudes versus actions: The relationship of verbal and overt behavioral responses to attitude objects. Journal of Social Issues, 25(4), 41–78.
  • Willemsen, M. C., & Johnson, E. J. (2011). Visiting the decision factory: Observing cognition with MouselabWEB and other information acquisition methods. In M. Schulte-Mecklenbeck, A. Kühberger, & R. Ranyard (Eds.), A handbook of process tracing methods for decision research: A critical review and user’s guide (pp. 19–42). New York, NY: Psychology Press.
  • Wilson, T. D. (1994). The proper protocol: Validity and completeness of verbal reports. Psychological Science 5(5), 249–252.
  • Wright, P. (1975). Consumer choice strategies: Simplifying vs. optimizing. Journal of Marketing Research, 12(1), 60–67.
  • Zimmermann, M. (1989). The nervous system and the context of information theory. In R. F. Schmidt & G. Thews (Eds.), Human physiology (2nd ed., pp. 166–175). Berlin: Springer-Verlag.


  Notes

    1. This brief description makes the process of decision making for unstructured decisions seem like a series of discrete steps, which oversimplifies reality. In most situations where decision makers must generate possible courses of action, the actual process is iterative: alternatives are generated, evaluated, improved, evaluated again, and so on.

    2. Note that rational choice theory does not attempt to explain or advise people as to what values they ought to hold, but rather what decision they should make given the values they do hold. It is commonly assumed—particularly within economics—that values are material in nature (e.g., wealth, social status), but that is not required by the theory itself. Values can be symbolic or self-actualizing or anything else that provides a basis for forming a preference.

    3. Downs recognized that the “others” to whom information gathering is delegated could be biased in their perceptions and reports—which is perfectly fine, as long as the decision maker is aware of and compensates for those biases.

    4. The American Voter model applies best to well-established democracies with stable party systems, such as the United States and many other advanced democracies. Party identification would not play such a prominent role in new democracies without well-established party systems, or whenever an older democracy has undergone a serious realignment in which long-established party coalitions around social cleavages have been overturned.

    5. People can make retrospective judgments with respect to their own personal well-being, or with respect to some larger group of which the person is a member. Research generally shows that larger “sociotropic” concerns are usually much more important than narrower self-interest (Kinder & Kiewiet, 1979; Sears & Funk, 1991).