Bounded Rationality and Cognitive Limits in Political Decision Making
- Brooke N. Shannon, Department of Government, The University of Texas at Austin
- Zachary A. McGee, Department of Political Science, The University of Texas at Austin
- Bryan D. Jones, Department of Government, The University of Texas at Austin
Summary
Bounded rationality conceives of people engaging in politics as goal oriented but endowed with cognitive and emotional architectures that limit their abilities to pursue those goals rationally. Political institutions provide the critical link between micro- and macro-processes in political decision-making. They act to (a) compensate for those bounds on rationality; (b) make possible cooperative arrangements not possible under the assumptions of full or comprehensive rationality; and (c) fall prey to the same cognitive and emotional limits, or canals, that individual humans do. The cognitive limitations that hamper individuals are not only replicated at the organizational level; the link between the individual and organizational levels is causal.
Subjects
- Governance/Political Change
- Policy, Administration, and Bureaucracy
- Political Behavior
- Political Psychology
Introduction
Much of political science is concerned with decision making at the individual, collective, or social level. Although the discipline has accomplished a great deal in connecting individual decision making to collective political action, the study of institutions continues to be characterized by ad hoc assumptions about underlying human behavior—perhaps best characterized as unstated prior assumptions. Micro and macro processes, cognitive processes at the individual level, and social processes in organized collectives—particularly government—are certainly related, but how? Individuals make decisions and act within institutions that have preexisting rules, norms, and processes that govern these actions. But political institutions can and do act as well, by producing policy actions. Yet these outputs are not a sum of individual actions, as is the case in elections (even if the rules for summing votes are complex). In organizations, it is the rules and norms that actually comprise the institution; without an understanding of these rules and norms, we can’t grasp the relationship between individual decision making and collective actions.
Political scientists studying institutions have harnessed two distinct approaches to studying the relationships between individuals and institutions. One is based in the rational choice framework most generally associated with economics. Rational actors deploy the most efficient means of achieving their goals. Politics is centered on strategic action deployed in pursuit of goals; as a consequence, goal-oriented behavior must be a large component of any decision-making approach in political science. The approach has a long history in the discipline (e.g., Riker, 1962). In the study of legislatures, rational models of maximizing actors subject to the constraints of rules have proved fruitful.
Yet large swaths of behavior in institutions remain unexplained within the rational-choice framework, and in many cases the models have failed to provide predictive successes. Simon (1985) argues that these models never will be able to because they postulate knowledge of goals of actors as auxiliary assumptions, which cannot be known with any certainty without empirical inquiry. Moreover, all political actors have multiple goals, and the specific trade-offs may not be understood even by the actors themselves.
The second approach used by political scientists in the study of institutions is bounded (or behavioral) rationality, in which actors are goal oriented but are limited by their cognitive and emotional architectures in achieving these goals. Moreover, examining institutions from the perspective of bounded rationality leads quickly to the understanding that the cognitive architectures of individuals affect the institutions they inhabit. The rational choice perspective tends to concentrate on the formal rule structure of institutions, but bounded rationality allows for the evolution of existing formal rules and informal norms and the differential attention to some rules at the expense of others. Viewing institutions as being comprised of boundedly rational individuals operating within them leads to two distinct forms of organizational behavior that are not contradictory. The first, the Simon–March approach (March & Simon, 1958; Simon, 1947) sees organizations as compensatory for the cognitive limitations of individual actors. Scholars in this tradition see specialization of function and hierarchical assembly of subparts as mechanisms that compensate for human limitations. The second, born from the study of policy processes, notes that “human organizations fall prey to canalizations of behavior in the same way that human decision-makers do” (Jones, 2001, p. 131). Scholars in this tradition tend to focus on the attention spans of organizations, termed their agendas, and their still-limited information-processing abilities (Jones & Baumgartner, 2012; Jones, Workman, & Jochim, 2009; Workman, Shafran, & Bark, 2017).
How do these differences contribute to decision making in politics? As of the early 21st century, we lack a coherent bridge between the micro and macro levels of information processing that is representative of individual decision making. At a broader level, such a bridge should include organizations, social processes, and other macrophenomena such as public policy processes and economic considerations. New developments in political science, behavioral economics, cognitive studies, and psychology are opening new possibilities of unifying the decision-making processes of the individual, a primary focus of the behavioral sciences, and the operation of social collectives, the domain of social science (Jones, 2017). The shortcomings of comprehensive rationality, commonly known as rational choice, can be addressed by a more robust understanding of choice through the lens of behavioral rationality.
In this article, we argue that a shift toward the use of a bounded rationality framework will provide scholars with more realistic models of political decision making without sacrificing the strategic component of human behavior. Bounded or behavioral rationality does not imply non-rational behavior. By integrating institutions as the link between micro and macro processes, scholars can better understand interactions among elites as well as the causal nature of their cognitive underpinnings as they relate to organizational outcomes. In making our argument, we explore the evolution of bounded rationality from its beginnings in public administration to being a regularly utilized model for decision making in the public policy process literature and political science more generally (displacing, in some cases, the model of comprehensive rationality).
Theoretical Foundations: Comprehensive Rationality Versus Bounded Rationality
Comprehensive Rationality as a Limited Framework for Understanding Behavior
Comprehensive rationality makes sweeping assumptions about cognitive abilities and the decision-making processes of humans in general. Also known as rational choice theory, comprehensive rationality has served as a durable framework for individual decision making, conceptualizing individuals as rational, calculating, and strategic. Individuals are seen as decision-making actors who can weigh alternatives and trade off seamlessly among competing goals (represented in economics as indifference curves) before making a decision. These are Herculean assumptions, as empirical studies of choice have shown. The approach does not require the use of all information, but it does require that information be acquired until the likely value of the next bit of information is less than the cost of acquisition. This requirement ought to be recognized as almost as Herculean as a full-information assumption, and it is fundamentally impossible to satisfy in an uncertain world.
Decision makers decide among alternatives under several theoretical assumptions within the framework of comprehensive rationality. First, the list of alternatives is to be exhaustive; second, alternatives must be directly comparable; and last, alternatives must be transitive (i.e., a preference of a to b and b to c implies a strict preference of a to c) (Jones & McGee, 2018; Jones, McGee, & Shannon, 2018). From here, rational individuals make decisions that maximize their benefits and best match their preferences (Jones, 2017). In this view, people are able to routinely check in with, update, and affirm their preconceived preferences, which are relatively stable. In terms of making political decisions, the motivation for comprehensive rationality is chiefly to understand why individuals vote for candidates who may or may not represent these preconceived preferences.
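These assumptions can be restated compactly in standard choice-theoretic notation; the symbols below are conventional and are not drawn from the cited works:

```latex
% Comprehensive rationality over an exhaustive set of alternatives A
\begin{align*}
&\text{Comparability (completeness):} && \forall\, a, b \in A:\; a \succeq b \;\text{ or }\; b \succeq a \\
&\text{Transitivity:} && a \succeq b \;\wedge\; b \succeq c \;\Rightarrow\; a \succeq c \\
&\text{Maximization:} && \text{choose } a^{*} \in A \text{ such that } a^{*} \succeq a \;\;\forall\, a \in A
\end{align*}
```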
Decisions made in this environment conform to these preconstructed preferences, which people seek to maximize. Comprehensive rationality is grounded in transactional ends; every actor in politics, including individuals, parties, and coalitions, behaves rationally and is goal oriented. Any actor or agent “proceeds towards its goals with a minimal use of scarce resources and undertakes only those actions for which marginal return exceeds marginal cost” (Downs, 1957b, p. 137). Theoretically, humans are constantly making decisions based on their conscious weighing of all alternatives. This theory’s assumptions are rigid in both the cognitive abilities and the time availability of individuals. Yet the approach is capable of integrating individual behavior of decision makers into institutional behavior under strict conditions. Most importantly, the analyst utilizing comprehensive rationality must examine maximizing behavior subject to a set of rules that combine the preferences of many members of a decision-making unit; this task is difficult given that the level of analysis is most often the aggregated behavior of individual vote choices (Krehbiel, 1992, 1998). Therefore, if the strict conditions for integrating individual behavior into institutions are not met, then the findings from such studies will be inconsistent with empirical realities, usually because of a failure to account for limits in individuals’ cognitive abilities.
Nonetheless, comprehensive rationality has limitations even where outcomes are more clear-cut. Common theoretical applications rest on social dilemmas in which individually rational action leads to irrational collective behavior. In the well-known rational choice game called the prisoner’s dilemma, two-person cooperation in a single encounter breaks down as players seek to maximize their own benefit while minimizing their losses—in resources such as time, money, and labor—all in the absence of communication. An individual will not trust his or her partner to cooperate, so he or she will not cooperate either. In the classic example, two people are arrested for a burglary. Each can stay silent, refusing to cooperate with the police, or confess to the crime in exchange for a lesser sentence. Each lacks information about what the other will do. In this scenario, both criminals are expected to confess, ratting the other out, because the potential loss to one’s own well-being is greatest if the other confesses while one stays silent (Shepsle, 2010, p. 236). The prisoner’s dilemma, part of game theory and emblematic of comprehensive rationality, displays a collective action problem that is very common in politics. The dilemma is represented as a game because, when simulated, it can be run many times and its various outcomes explored.
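A minimal sketch of this logic follows; the sentence lengths are hypothetical and chosen only to reproduce the standard structure of the game, not taken from the cited sources:

```python
# Prisoner's dilemma with illustrative payoffs: entries are years in prison
# (lower is better) for (row player, column player).
payoffs = {
    ("silent", "silent"):   (1, 1),    # both stay silent: light sentence each
    ("silent", "confess"):  (10, 0),   # the confessor goes free, the silent partner is punished
    ("confess", "silent"):  (0, 10),
    ("confess", "confess"): (5, 5),    # both confess: moderate sentence each
}

def best_reply(other_action: str) -> str:
    """Return the action that minimizes the row player's sentence, given the other's action."""
    return min(["silent", "confess"],
               key=lambda my_action: payoffs[(my_action, other_action)][0])

# Whatever the partner does, confessing yields a shorter sentence for oneself,
# so both players confess even though mutual silence would leave both better off.
for other in ["silent", "confess"]:
    print(f"If the other player stays {other}, my best reply is to {best_reply(other)}")
```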
The problem for a full-blown rational actor model is that social dilemmas of the kind illustrated by the prisoner’s dilemma are overcome all the time. How can that happen? Although the prisoner’s dilemma highlights collective action problems characteristic of politics, the resulting decisions maximizing benefit and minimizing losses are unrealistic accounts of political decision making for individuals and institutions alike. Policymakers face one-off situations when creating policy, in that a decision is made for policy to be supported and implemented or it is not. But they also face repeated situations in which expectations are formed and collectively optimal equilibria can be reached, at least formally (Taylor, 1987). Such repeated games do not occur unless rules are stable. How are such structures built and maintained? The repeated-game scenario has served only to push back the boundedly rational elements of institutions. Moreover, most social dilemmas occur in much more complex circumstances than the classic scenarios. Although the prisoner’s dilemma may be successful in predicting behavior in competitive simulations where two alternatives can be identified, it lacks explanatory power in political institutions such as the United States Congress, which are characterized by multiple interacting repeated games.
In government, the stakes are high and policy choices result in the distribution and regulation of common pool resources; therefore, a prisoner’s dilemma that ends in no action would have much stronger consequences for the general public than in the classic game. The prisoner’s dilemma implies a zero-contribution thesis, which does not reflect observations of everyday life, including many governmental processes (Ostrom, 2000). Congress is an ideal environment for “predicting behavior in one-shot or finitely repeated social dilemmas in which the theoretical prediction is that no one will cooperate” (Ostrom, 1997, p. 1). Nonetheless, cooperation is common, resulting in policy outcomes (Craig, 2017; Curry & Lee, 2019). Collective choices, which are the outcomes of the policy process, require a more robust explanation. Under the strict assumptions of comprehensive rationality, cooperation is not predicted, because with collective goods rational actors can easily become free riders. Instead, collective action happens more regularly than not in policymaking. Elinor Ostrom advocated new models of rationality, what she called “better than rational” (Jones, 2017; Ostrom, 1997).
Political scientists in the third quarter of the 20th century marveled at the powerful theoretical frameworks that could be developed by relying on comprehensive rationality assumptions. Downs (1957a) provides the classic example with several key arguments (e.g., the median voter theorem), but here we focus on the so-called calculus of voting. Downs argues that, given the costs and benefits of voting, it is not rational to cast a vote because one’s vote is unlikely to be the deciding vote; however, we know this prediction is not always correct because millions of Americans vote every election cycle. Riker and Ordeshook (1968) add an additional term to Downs’s formal model, measuring civic duty and correcting the equation to allow voting to be rational. However, this was a strange addition to the classic cost–benefit approach—indeed, it is more sociological than economic calculus. How many such ad hoc variables would be needed to explain the vote? And none of these additions takes away from the primary point that the voting calculus fails to explain the choice to vote.
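The logic is conventionally summarized in the following notation, where R is the net reward from voting, P the probability of casting the decisive vote, B the benefit of one’s preferred candidate winning, C the cost of voting, and D the Riker–Ordeshook civic-duty term:

```latex
\begin{align*}
\text{Downs:} \quad & R = PB - C
  && \text{(with } P \approx 0,\; R < 0 \text{, so turning out appears irrational)} \\
\text{Riker--Ordeshook:} \quad & R = PB - C + D
  && \text{(voting is rational whenever } D > C - PB\text{)}
\end{align*}
```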
William H. Riker, in his classic book The Theory of Political Coalitions (1962), argues that legislative coalitions follow the size principle. That is, coalitions will be minimum winning, where there are just enough votes to pass the bill and not one more. As with Downs’s original prediction, we know that Riker is not completely right because we do see bills passed by large margins regularly in Congress. Scholars again work to update his theory, adding in communication between actors, supermajoritarian features that mirror American political institutions, or multiple branches of government (Beckmann, 2010; Cameron, 2000; Hinckley, 1972; Krehbiel, 1998; Walker, 1983). However, even with these adjustments, point-prediction accuracy, outside of controlled, elite-based environments for models of comprehensive rationality, is low (Jones, 2003). Coalitions just don’t work as Riker’s brilliant exposition predicted.
Bounded Rationality’s Role in Political Institutions
According to rational-choice theory, the external environment, including institutions, is an outside variable to which the individual reacts when making decisions. The environment produces incentives (both positive and negative) for actors, who respond to them by maximizing. This understanding of an individual’s cognitive processing ability is in many ways similar to the behaviorist psychology championed by the 20th-century psychologist B. F. Skinner. Skinner’s approach gave learning a strong role, whereas rational choice economics does not, but both approaches viewed humans as responsive to positive and negative stimuli. Cognition is treated more as a product of these reactions than as “part of a causal chain from information to action” (Jones, 2017, p. 5).
As cognitive psychology developed, it became clear that a model of the environment alone could not explain human choices. Rather, the internal processing of information from the environment was critical (Simon, 1985). Put another way, psychologists began to integrate the way people think (i.e., information processing) into their broader models for understanding stimulus-response patterns.1 Bounded rationality developed independently from cognitive psychology, but both were strongly influenced by Herbert Simon, and his mastery of cognitive psychology literature certainly helped in developing his model of bounded rationality (Jones, 1999; Simon, 1985).
By introducing limits on rational adaptation and viewing organizations as influenced by those limits, bounded rationality provides a bridge between the behavioral and institutional arms of political science. Herbert Simon introduced the concept of bounded rationality in his 1947 classic book Administrative Behavior to describe these limits. The approach retains the assumption that decision makers are goal oriented, which distinguishes bounded rationality from non-rationality.
Bounded rationality takes aim at the perception of individuals as comprehensively rational, able to make choices through utility maximization and to compare alternatives in an environment with total information available to them. It also incorporates the limitations in cognitive processing that individuals experience when making decisions: people possess neither perfect knowledge nor the ability to accumulate it, they anticipate consequences imprecisely, and their emotions attach to imagined consequences, all of which contribute to our shortcomings as individual utility maximizers (Simon, 1947). Simon thought that limited calculational abilities were far less important than difficulties based in other aspects of cognitive processing, such as difficulties in searching for and comparing alternatives systematically.
Fundamental to these inabilities are the limits on attention. The nature of our mind’s attention structures limits our ability to identify relevant stimuli when supplied with excessive information or alternatives (Jones, 2001). Attention allocation has become a major focus of studies employing bounded rationality, especially in the policy process literature. These limitations cause uncertainty to take a more central role, “as far more fundamental to the probability calculus implied” (March, 1994). Bounded rationality is the relationship between the individual’s information-processing ability and the environment, complicated by the complexity of the problems faced (Bendor, 2003). The more complex the problem, the more evident the limits.
Limitations in cognitive processing explain the bounds of the individual’s ability to process the overabundance of information. Individuals are serial processors of information in that attention can be directed at only one item at a time. There is simply too much information to address or process at once. Short-term or working memory is small and limited, leading to a bottleneck in information processing; this cognitive limitation is far more important than limits in calculating ability, for example (Bendor, 2003; Hogarth, 1987; Lodge, Steenbergen, & Brau, 1995; Redlawsk & Lau, 2013; Simon, 1990).
Simon was principally interested in explaining organizations and administrative behaviors, but argued that any theory of administration must be grounded in understandings of individual decision-making processes. “Simon insisted that the study of public organizations be based in the study of decision-making, and that any model of decision-making take account of what he termed the ‘raw material’ of those conditions—human nature” (Jones, 2017, p. 4). Many facets of human organization evolved or were deliberately constructed to compensate for the inability of humans to have expertise in all fields and to process diverse streams of information, in particular, hierarchy and decentralization.
But organizations do not fully compensate for human decisional frailties. Organizations are not so separate from decision-making behavior of individuals because it is individuals who compose institutions themselves. Institutions are created, operated, and sustained by individuals for better or worse, so they are also characterized by the bounded rationality of those individual actors.
Behavioral organization theory, developed initially by Herbert Simon and James March (March & Simon, 1958; Simon, 1947), emphasized the ability of organizations to expand the capacity of human actors. Human limitations are overcome through specialization of function and coordination through hierarchy. In addition, organizations compensate for human attentional limits by providing parallel processing of diverse informational inputs. Organizations assist in distilling the overabundance of information in the external environment, as individuals rely on serial processing of information for particularly difficult problems (prioritizing a single issue at the expense of others). Formal rules and informal norms govern the day-to-day behaviors of organizational members and subparts, requiring top-down intervention only occasionally.
Yet, as we have noted, organizations also mimic the shortcomings of human actors. Organization is a dual-edged sword, allowing the expansion of capacity but falling prey to the same cognitive limits as single humans do (Jones, 2001). Although organizations act to expedite or facilitate successes that would be impossible for individuals acting alone, they can canalize individuals’ cognitive processes, limiting them by creating canal-like routines of repetitive action based in rules and norms. Like individuals, organizations must process the information available in their environment through specialization, which leads to the same canalization. It is too simple to say that organizations directly reflect the individuals who compose them, as formal organizations do not necessarily collapse when a single individual departs. Yet the two levels are connected, and the relationship is causal: the methods of processing information and generating action are similar and come directly from the human raw material. That is, organizations are guaranteed to reflect individual limitations to some degree because they are in fact creations of human beings.
Institutional arrangements can offset this key limitation by allowing a legislature, for example, to process several streams of information at once via a committee system. This same hierarchical arrangement allows both task specialization (commonly noted, e.g., members on a congressional committee specializing in a policy area) and canalization of finite attention (not often appreciated). This setup still requires legislative leaders to prioritize what committee recommendations are considered by the chamber floor, but it vastly simplifies the task. Of course, leaders are bounded themselves and are not immune to selective attention, emotional involvement, and an inability to compare complex sets of alternatives systematically. For example, institutions such as committees are likely to privilege certain types of information over time, engage in constrained patterns of information search, and have similar agendas (Breunig & Koski, 2009). This limitation causes organizations to mimic human limitations in making decisions.
Integration Into Scholarship: Public Policy, Political Science, and Public Administration
Distinguishing Between the Problem Space and the Solution Space
Students of bounded rationality in the policy process distinguish between the problem space and the solution space, a distinction that harks back to classic studies of problem solving by Allen Newell and Herbert Simon (1972). Rational choices must be based not only on choosing the optimal solution, but also on modeling the problem space correctly, something almost always ignored by rational choice theorists in political science. Newell and Simon found that the subjects of their problem-solving experiments failed to analyze the problem space, barreling ahead and forging a solution. They found that decision makers use heuristics, or cognitive shortcuts, in trying to understand the problem space and design solutions to it. A common approach was trial and error; only after multiple failures did decision makers turn to rethinking the problem space.
Baumgartner and Jones (2015) argue that in the policy-making process, assessing the problem space generally requires a diversity of viewpoints, whereas solution design relies more heavily on subject-area expertise. The problem space is characterized by defining and prioritizing signals from the larger environment, which is complicated and contains many viewpoints. Moving to the solution space is determined by cognitive processes, as a solution represents a path of action. “If the problem is truly complex, one could spend a lifetime assessing it before taking any action, and people demand action on important problems” (Baumgartner & Jones, 2015, p. 41). Settling on a solution requires the actor to comprehend the problem space and then commit to a remedy.
An important solution-space heuristic is “satisficing.”2 When individuals satisfice, they choose an alternative that is satisfactory and sufficient, but not optimal. Satisficing applies to the solution space and assumes that the problem space is reasonably well understood. The idea of satisficing is dependent on the presumption that individuals have a standard, an aspirational level for satisfactory outcomes based on two sets of payoffs, satisfactory and unsatisfactory (Bendor, 2003). This process simplifies the decision calculus, allowing for quick but limited comparisons.
When actors satisfice, they look for a course of action that is satisfactory—or good enough. Satisficing is characterized by a few key attributes: (a) a limitation on human ability to plan extended behavior sequences, (b) the tendency to set aspirational levels for each of a variety of goals, (c) sequential (not simultaneous) action on goals (i.e., processing serially), and (d) satisficing instead of optimizing search behavior for alternatives (i.e., actors stop searching for alternatives once they find an alternative that meets a predefined satisfaction level) (Jones, 1997, 1999, 2003; Jones & McGee, 2018; Simon, 1957, 1996). Satisficing is important for understanding how individuals’ minds are populated with alternatives, which is critical for grasping how political issues end up on policy agendas or being implemented into law.
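A minimal sketch of satisficing as a stopping rule follows; the aspiration level and the alternative payoffs are hypothetical illustrations of attribute (d) above, not a reproduction of any model in the cited works:

```python
from typing import Iterable, Optional

def satisfice(alternatives: Iterable[float], aspiration: float) -> Optional[float]:
    """Search alternatives sequentially and stop at the first one that meets
    the aspiration level; return None if the search is exhausted."""
    for value in alternatives:          # serial processing: one alternative at a time
        if value >= aspiration:         # "good enough" test, not an optimality test
            return value
    return None

def optimize(alternatives: Iterable[float]) -> float:
    """The comprehensively rational benchmark: examine everything, take the best."""
    return max(alternatives)

options = [0.2, 0.5, 0.71, 0.9, 0.95]      # hypothetical payoffs, in the order encountered
print(satisfice(options, aspiration=0.7))  # 0.71 -- search stops here
print(optimize(options))                   # 0.95 -- the optimum requires examining every alternative
```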
Identification With the Means
In addition to classical bounded rationality, Simon developed the concept of organizational identification, which came from his exploration of vertical specialization and its role in creating efficient organizations (1947).3 More specifically, Simon saw managers of organizations facing essentially two paths: (a) authority, where managers advise employees about decisions made at the top, or (b) identification with the means, where organizational goals permeate and become individual goals of actors, helping to induce equilibria within organizations. The logic behind identification with the means is that individuals will recognize that their organization’s success is also success for them.
The concept of identification with the means, though usually applied to bureaucratic agencies, is useful when applied to American political institutions more generally. Simon (1995) suggested that the U.S. Constitution itself is premised on bounded rationality. The Founding Fathers intended for the U.S. Congress to be the most powerful branch in the federal government, and upon taking office, members of both chambers established some basic level of institutional identification with their role in the legislative branch (Jenkins & Stewart, 2012; Matthews, 1959). Over time, however, a competing identification began to creep into members’ lives. That is, members began to identify with their respective political parties as well as their institution (Aldrich, 2011; Jenkins & Stewart, 2012; Theriault, 2008). Scholars note that parties tend to wax and wane in importance within Congress, with some periods being dominated by powerful party leaders and other periods devolving into factionalism (Bloch Rubin, 2017; Cooper & Brady, 1981; Cox & McCubbins, 2005; Froman & Ripley, 1965; Jenkins & Stewart, 2012; Rohde, 1991).
Students of bounded rationality might identify these trends in party importance as being driven by whether members of Congress identify first with their role as members of the legislative branch or first with their role as party members. With this thinking in mind, the Founding Fathers were right to fear the formation of political parties, because parties can clearly shift how members view their role and therefore which goals they pursue with their power. If, for example, there is an ethics violation by a member of Congress, and the committee handling these violations is chaired by a member who identifies with her role as a member of Congress first, we expect that committee to gather information about this ethical conundrum. If the chair identified with her party first, it seems less clear that she would pursue punishment for this ethics violation. Put another way, adopting a boundedly rational point of view can alter our theoretical expectations for outcomes from political decision making. Later in this article, we explore these differences further and attempt to understand how choosing one theoretical foundation over the other affects what predictions we make about the political world.
In summary, identification with the means, combined with satisficing and the psychological underpinnings explored in the foundational discussions of bounded rationality, forms the core of Simon’s contribution to both the policy process and political science literatures. In many ways, Administrative Behavior, and the ideas discussed therein, set the stage for organizational studies in political science, and that impact is still felt in the discipline today.
Collective Action and Institutional Frameworks
Bounded rationality addresses the limitations of individuals and connects them to institutional studies by taking a macro-level perspective on the similar cognitive-processing limitations at work in collective decision making. If humans have limitations, so too do the organizations they inhabit (Jones, 2001). The relationship is causal. Where individuals have limited attention spans, organizations prioritize information streams (Jones, 2017). This is procedural bounded rationality: procedures in organizations are premised on the bounded rationality of humans.
Elinor Ostrom, political economist from Indiana University and Nobel Prize winner in economics, provided a second approach to linking micro-level behavior to collective action. At both the individual and collective levels, there are formal rules and social norms that govern and constrain action. Ostrom developed the Institutional Analysis and Development (IAD) framework to address this issue by incorporating three tiers of decision making—constitutional, collective choice, and operational decisions—into considerations of decision making (Ostrom, 2005, 2009). Institutions, commonly defined as collectively understood concepts “used by humans in repetitive situations organized by rules, norms, and strategies” (Ostrom, 2009), can be used to regulate the enforcement and conservation of public goods through social sanctions in local enforcement or self-governance. Incorporating bounded rationality and cognitive processing limitations onto comprehensive rationality leads to a more realistic view of decision making.
The IAD framework adapts the ideas of comprehensive rationality to reality by including how rational decision making individuals interact within organizations. This framework considers organizational design and how institutions develop to create an environment for decision making. Institutional analysis first conceptualizes institutions as the rules, norms, and strategies used by individuals operating within or across organizations (Ostrom, 2009, p. 23). Institutions here can be defined by their rules in use, which are most similar to norms developed through participation; rules in use are developed through socialization in an institution by its practitioners.
Rules in use imply adherence to social norms as a means of enforcing behavior and formal rules. Because institutions are understood to be the rules of the game that people adhere to when making decisions collectively, in societies and organizations, rules in use complement the game theory simulations found in comprehensive rationality by engaging directly with collective action dilemmas. Like a sport or game one picks up on the go, rules in use are learned by participants as they socialize or become accustomed to the process. For example, rules in use, governed and enforced by fellow users of a public good, reflect the reality that people do in fact cooperate, acting together to achieve an outcome.
Ostrom studied common pool resources, in which potential users have a motive to overexploit the common good. Such “tragedies of the commons” are common collective action dilemmas in natural resource policies. They emerge when individuals act solely on personal preferences, which leads to overuse. Through experiments and field studies, Ostrom and her colleagues show that enforcement can be achieved through the application of social sanctions and approbation in a manner that prevents overuse and degradation of limited common resources (1999). No formal rules or direct sanctions need be applied. This framework depends on localized control, in which norms can be enforced through boundary requirements for use, which regulate how formal rules and regulations developed by formal institutions of government are implemented. Although the rules governing use are not tied to performance, systems operating on localized self-governance have the advantages of greater knowledge and more efficient monitoring (Ostrom, 1999).
Ostrom develops a theory of collective choice that avoids the traps of collective dilemmas even when collective ownership of common pool resources exists. Norms of cooperation and problem solving can emerge in these situations. Ostrom shows that collective action dilemmas can be overcome by social norms, particularly in local self-governance. Dependent on communication and social sanctions, these enforcement mechanisms can result in cooperation. External rules and monitoring, delivered top-down from government officials, are perhaps best deployed as supplemental measures rather than the first alternative to resource overuse (Ostrom, 2000, p. 147).
Ostrom’s contributions to bounded rationality center on her emphasis on social norms as strong influences on decision making, particularly in collective action. Social norms are “shared understandings about actions that are obligatory, permitted, or forbidden” (Crawford & Ostrom, 1995; see also Ostrom, 2000). Institutions are a complex mix of formal rules and informal norms. At the constitutional, collective, and operational levels, government agencies create formal rules regulating resources, and social norms govern the enforcement and monitoring of the policy. Norms depend on trust and knowledge of rules, as well as on the costs of use, and are strengthened at lower levels of federalism. Social norms of cooperation and preservation of collective goods can be as strong as formally imposed rules, monitoring, and sanctions (Ostrom, 1999, 2000).
However, pathological social norms can also develop in a manner that can undermine collective action, and collective action itself can lead to destructive ends. Studies of the development and evolution of norms, as well as their breakdown and replacement, which clearly occurs, are sadly lacking in political science.
Behavioral Budgetary Theory
Public budgets ought to be a place where we can observe rational decision making in action, if it exists in politics. After all, budgets set priorities, involve clear trade-offs, and are quantified. Budgets begin with a proposal and end with an output in measurable units, in which objectives are clearly stated and priorities are solidified, clearly revealing the values held internally by policymakers and in coalitions (Lindblom, 1959). In other words, budget theory provides the ideal academic environment for the integration of the budding new theory of bounded rationality.
Charles Lindblom saw the heuristic approach as applying to broader policy decisions, including budgets, in “The Science of Muddling Through” (1959). “Successive limited comparisons,” dependent on cognitive limits and time restrictions for political decision making, lead to different types of approaches for making decisions. By “successive limited comparisons,” Lindblom means, in part, identifying the current state of a policy (or budget) and then examining alternatives that are already on the table, instead of endlessly searching for alternatives. In contrast to the comprehensive model of rationality, where actors are assumed to be able to debate infinite choices for how to resolve a policy problem with a clearly defined goal, Lindblom argues that decision making cannot hold to this high standard.
Characterizing decision making in policy formulation through the metaphor of a tree’s branches and roots, Lindblom sets out the cognitive processes policymakers actually use to create policy, including budgets. The root approach (built on comprehensive rationality) symbolizes a total remodel, starting from nothing to write a budget or make a decision. This rarely happens. Instead, policy formation and political decision making follow a different process. The branch approach (i.e., successive limited comparisons) represents incrementalism, or building upon an existing base. These comparisons, incremental and small, are based on an existing structure. For budgets, the base is the previous year’s budget.
Yet none of the copious studies of public budget processes find such comprehensively rational choice, and where budget procedures built on rationality are established, such as “zero-based budgeting,” they invariably fail. Rather, budgeting is rife with heuristic decision making and attention shifting. In the case of supply-side tax cuts and their impacts on budgets, magical thinking is clearly in evidence. Early budget studies found heuristic decision making in which actors employed informal routines to deal with a complex, and sometimes conflictual, reality. Wildavsky (1964) and Davis, Dempster, and Wildavsky (1966) found incremental budgeting, in which one year’s budget was built on the foundation of the previous year’s budget rather than on a zero base. Budgeting then proceeded by adding or subtracting increments from a policy area in a process of negotiation; these increments reflect priorities. Cognitive limits in institutional decision making are made clear in the slow change and incremental building of the budget each year. The external environment, with its copious information and the trade-offs that decisions require, had scant effect on most budget decisions. Policymakers made no effort to match budgets to the changing information from the environment.
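A stylized way to express the incrementalist finding, simplifying the linear decision-rule models of Davis, Dempster, and Wildavsky (1966), is that each year’s appropriation is roughly a fixed proportion of the previous year’s, plus a small disturbance:

```latex
B_{t} \;=\; \beta\, B_{t-1} \;+\; \varepsilon_{t},
\qquad \beta \approx 1, \quad \varepsilon_{t} \text{ small relative to } B_{t-1}
```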
Although subsequent studies showed that real budget data did not match the incrementalist or branch model as well as predicted (Jones & Baumgartner, 2005; Padgett, 1980), the incrementalists were far closer to the truth than advocates of Lindblom’s root approach. Budgets are best understood as responding to changes in the environment rapidly and suddenly, as if a problem that had been “off the radar,” and hence discounted by policymakers, suddenly appeared. Policymakers then had to scramble to produce the funds necessary to meet the crisis, which often could have been foretold. This pattern of disjoint underreaction and overreaction is classic institutional bounded rationality (Jones & Baumgartner, 2005). The strongest model, then, combines the routines of incrementalism and the predictable budgeting characterized by entitlements with attention shifts to address a festering problem. The external environment intrudes, but it does so episodically. Policy overreactions, policies that levy social costs without offsetting them with benefits (Jones, Thomas, & Wolfe, 2014; Maor, 2012, 2014), and underreactions, exemplified by institutional friction and gridlock (Jones & Baumgartner, 2005), are emblematic of the policy process. Instead of the incremental march of a small number of consistent policy outputs, policy is stymied by institutional gridlock or pushed through rapidly in punctuations.
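The combined pattern of friction-driven underreaction followed by overreaction can be illustrated with a deliberately simple simulation; the threshold rule and the numbers are our own illustrative assumptions, not a model from the cited studies. Attention-limited budgeting ignores an accumulating error signal until it crosses a threshold, then lurches to catch up all at once:

```python
import random

random.seed(1)

threshold = 5.0      # accumulated mismatch needed to capture attention (hypothetical)
budget = 100.0       # current appropriation
need = 100.0         # the "true" demand implied by the environment
error = 0.0          # accumulated, unaddressed mismatch between need and budget

for year in range(1, 21):
    need += random.gauss(1.0, 2.0)      # the environment drifts each year
    error += need - budget              # mismatch accumulates while attention is elsewhere
    if abs(error) > threshold:          # the problem finally captures the agenda...
        budget += error                 # ...and the budget lurches to catch up, typically
        error = 0.0                     # overshooting current need (overreaction/punctuation)
    else:
        budget += 1.0                   # otherwise, a routine incremental adjustment
    print(f"year {year:2d}  need {need:7.1f}  budget {budget:7.1f}")
```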
Budgets are a consistent example of the trade-offs made by boundedly rational decision makers. The institutional constraints on budget policymakers in Washington are strong, as interest groups and scarce resources frame the debates and compromises that take place prior to policy outputs. In this way, budgets are incremental changes in policy, made by decision makers in an uncertain environment characterized by complexity, an overabundance of information, and occasional large-scale interruptions of attention.
Organization Theory in Practice: Decision Making in Congress
With theoretical foundations in place, a more in-depth example may be fruitful for cementing our argument that bounded rationality can better inform our understanding of the operation of political institutions. If rational-choice theory holds up, then we should see predictions from models using those assumptions come to fruition. But if rational choice is too stringent in its assumptions, as we have argued, individual habits and the unconscious use of heuristics will quickly become canalized and should result in less than rational decision making by members (Jones & McGee, 2018). Rational responses to incentives alone would not be able to explain these decisions. We turn to the U.S. Congress for an informal test. Congress serves as an ideal testing ground because it is constantly faced with seemingly endless lists of issues with which it must deal.
Consider the politics surrounding the passage of the Affordable Care Act (ACA) in 2010. If rational choice theory’s emphasis on utility maximization and self-interest holds, then the observed behavior of members of Congress should not differ from theoretical expectations. The classic understanding of members’ incentives comes from Mayhew’s (1974) argument that they are single-minded seekers of reelection. Subsequent applications of this principle have often been grounded in rational choice, so that Mayhew’s concept lends itself to theories of policy creation as incremental, followed by sudden prioritization when an issue is viewed as electorally salient. Confined to Mayhew’s rational-choice conceptualization and faced with an impending decision on major healthcare reform legislation, members would be expected to evaluate whether their constituents were in favor of the plan being debated in Congress and then take a position based on that evaluation.
Of course, it is possible that a member’s goals are more diverse than simply seeking reelection (Rohde, 2013). As we have noted above, adding goals leads to difficulties in the rational choice conceptualization of member incentives. Fenno (1973) argues that members seek to achieve one or more of three possible goals: reelection, power within the chamber, and good public policy. Other scholars have added additional goals to Fenno’s original set. For example, certain members with institutional roles, like party leaders, have an additional set of both personal and institutional goals that drive their behavior (Green, 2010; Strahan, 2007), which is consistent with Simon’s (1947) concept of identification with the means.
We stop for a moment to re-evaluate this multiple goal perspective. If a researcher is free to specify goals that fit contexts, then the rational model breaks down because it becomes an unscientific story-telling exercise—a critique devastatingly leveled at the approach by Simon in the American Political Science Review (1985). If reelection is not a goal, and something else is, we have to either make up the goals as analysts or interview the participants and accept at face value the goals they communicate. If we accept multiple goals, then we need a model of how legislators trade off these goals; otherwise, rational models will not yield consistent predictions.
With the potential for multiple goals in mind, what would bounded rationality predict members of Congress will do when faced with major healthcare reform legislation? Jones (1994) argues that attention to different preferences shifts as decisional contexts shift. In other words, preferences are multidimensional, but how attention is allocated affects which set of preferences is used to make a given decision. If members are more concerned with solving the policy problems surrounding the American healthcare system, we can expect some members to support reform even if they do not benefit electorally from doing so. For other members, especially those exceptionally concerned about their margin of victory in the previous election, attention may still be set on reelection, and they may oppose the bill. Still other members, such as the Speaker of the House at the time, Nancy Pelosi, might have their attention set on winning major legislative victories to solidify their own and President Obama’s legacies. Where a given member’s attention is focused will shape his or her preference set and ultimately determine which preferences inform how he or she votes on the floor.
Scholars tend to attribute the passage of the ACA to Speaker Pelosi’s political abilities (Berkowitz, 2017), but its passage can also be attributed to the policy process that shaped the bill. Drafting was left to key members of Congress and revolved around public hearings with healthcare experts to allow for buy-in by members (Berkowitz, 2017). From the beginning, the bill was sold to members as a problem-solving opportunity that could update the national healthcare system in the United States. This framing proved crucial when the bill finally came back to the House after passage through the Senate: it allowed Speaker Pelosi to prod her members to support the bill not because it would help them win their next election, but because the American people needed healthcare reform.
One might consider this strategy to be wrapped up in Speaker Pelosi’s ability to evaluate the distribution of preferences in her caucus and pick off the votes closest to the new equilibrium point that would be established by the ACA’s passage (Krehbiel, 1998). But, in reality, the bill was not popular with the public, and members were warned daily that supporting it risked their removal from office in the midterm elections. If we lived in a rational choice world, there would be nothing Speaker Pelosi could do other than change members’ preferences. In practice, she needed to shift attention away from reelection concerns to problem-solving realities: not a change in mind but a change in focus (Jones, 1994). It is far more plausible that, rather than changing members’ preferences, Speaker Pelosi appealed to a set of goals beyond the traditional single-minded-seekers-of-reelection conception to secure members’ support.
Next Steps: Developing a Full Behavioral Model of Rationality
Linking Decision Making: Micro–Macro Connections
Key to understanding political decision making is establishing that the relationship between individual and organizational choice is causal (Jones, 2001). March and Simon (1958) argue that organizations expand the limited capacities of actors to deal with complexity in their task environments. But, restrictions on the minds of individuals carry over into restrictions on the capacities and actions of organizations (Jones, 2017). As Simon (1947) argued, the inner system will show through and produce adaptive, but not fully rational, behavior over time. That is, actors will work to accomplish their goals as rationally as they can, but the overwhelming number of choices for any given decision and their inability to effectively match problems and solutions through stimulus response patterns impact their efficiency.
Our minds have not evolved to deal with the intense complexity and information overload of the modern world; instead, they are adapted to our ancient origins (Pinker, 1997). Our basic cognitive capacities are biologically not well matched to deal with the complexities of modern environments. These cognitive features are embedded in models of bounded rationality and are critical for understanding the connection between individual and organizational decision making. Heuristics, known as cognitive shortcuts for decision making, come in a variety of forms and attempt to resolve the issue of our extremely limited short-term memory bottlenecking our nearly infinite long-term memory (Fiske, Kinder, & Larter, 1983; Jones, 2001; Kuklinski & Quirk, 2000; Lau & Redlawsk, 2001; Martignon, 2001; Simon & Newell, 1958; Tversky & Kahneman, 1974).
How adaptive are these heuristics? On the one hand, psychologists Kahneman and Tversky (1984), along with a number of behavioral economists, have produced research that suggests “not very.” Individual actors in this perspective fall into the same predictable traps again and again. Kahneman and Tversky depict human decision makers who display clear tendencies to distinguish between a “domain of gains” and a “domain of losses” and to key decisions on that distinction, conditional on the framing of those choices. On the other hand, Gigerenzer and his colleagues (Gigerenzer et al., 1999; Gigerenzer, 2001) have produced a vigorous research program documenting how often these heuristics serve us well by aiding us in achieving goals. The Simon–March perspective incorporates an organizational component that offers compensation for weaknesses in individual cognitive processing. Yet Jones (2001) notes that organizations can nevertheless fall prey to the same traps that humans do—in particular, the problem of integrating multiple flows of information into the decision-making process.4
If there exists a mismatch between information flows and organizational responses, then errors in addressing a problem will accumulate, leading to a disjointed policy response. Imagine any given bureaucrat or an actor in an organization who has a formal role. This actor is required to complete a similar set of tasks on any given day. Over time, owing to her reliance on heuristics and cognitive shortcuts (Simon & Newell, 1958), this actor will become canalized in how she handles a given task. That is, habits formed through repeatedly handling the same tasks will, over time, become institutionalized and affect how the organization operates (Jones, 2001). These canals are difficult to escape, and they become institutionalized decision rules, which can drastically shape the path of public policy. For example, members of Congress frequently opt to continue a policy program with only incremental adjustments (Baumgartner & Jones, 1993). We have already noted how this system of underreaction leads to disjoint budgetary responses. It would be impossible for Congress to evaluate every program, department, and request in full in every fiscal year. Simple formal or informal decision rules serve as a way to simplify the task environment and reach acceptable decisions consistently. Yet this leads to policy underreaction to information flows, and hence to subsequent overreaction (Jones & Baumgartner, 2005; Maor, 2012; Jones, Thomas, & Wolfe, 2014).
It is not the complexity of linking problems with solutions alone that produces the disjoint and episodic policymaking that is commonly observed by scholars of public policy (Baumgartner & Jones, 1993; Jones & Baumgartner, 2005). It is also the difficulty in selecting among the alternatives presented to an actor at any given point in time (Jones, 2017). We have already outlined the requirements for fully rational decision making and made clear that satisficing is more realistic for understanding how solution spaces are addressed (Simon, 1947). But, in politics, new information is being produced constantly and sometimes this new information will lead to shifts in issue definitions (Baumgartner & Jones, 1993; Simon, 1983). When issue definitions shift, so do the policy problem and solution spaces, which lead to rapid changes in the proposed policy solutions (Jones & Baumgartner, 2005).
It is interesting that science fiction writers have often experimented with beings far more intelligent than humans—Mr. Spock of Star Trek being the prototype—but not with beings possessing parallel processing power. Attention is dichotomous; that is, one is either paying attention to something or not (Jones, 2001). Our brains are serial processors and can respond to only one signal at a time (Simon, 1983). A world in which humans did not rely on small working memories, and hence limited attention spans, would differ far more from today’s world than would a world full of beings with vast frontal-lobe intelligence. If actors had unlimited attention spans, the introduction of new information would not be an issue; information would instead be processed immediately, and updating would occur in proportion to the strength of the information signal.
Yet we observe rapid policy changes—known as policy punctuations—regularly, even in the absence of disjoint information flows (Baumgartner & Jones, 1993; Baumgartner et al., 2009; Eissler, Russell, & Jones, 2014; Schrad, 2007; Worsham, 2006). This real-world observation indicates that heuristics, formal and informal organizational rules, and social norms may lead to more rational behavior in stable environments, but they often do not perform particularly well in rapidly changing ones, or even in those in which evaluation shifts from one dimension to another in the absence of changing facts on the ground (Jones, 1994). As a consequence, these cognitive shortcuts do not always produce fully rational outcomes, and in broader decision-making frameworks they often lead to disjoint and episodic individual decision making, which then spills over into disjoint and episodic organizational decision making (Jones, 2017; Jones & McGee, 2018). The debate over how adaptive heuristics are is therefore miscast. Often they are adaptive, but only in circumstances where they have had time to evolve toward a fit with the environment. Changing environments lead to maladaptive behavior and disjoint updating.
It is not just the mismatch between the actual problem space and the one implicitly modeled by decision makers that causes trouble. For the complex problem spaces that generally characterize public policy debates, the solution space is also large and full of difficult trade-offs (Baumgartner & Jones, 2015). Information is frequently thought of as facts and numbers (i.e., neutral and unidimensional). But information is ambiguous, and each interpretation represents a different dimension, which can be weighted by its persuasiveness or importance to a policy area (Jones & Baumgartner, 2005). Moreover, elected officials will sometimes weight a perspective at essentially zero, effectively ignoring it altogether, only to discover later that it should have been the most heavily weighted dimension from the start (Baumgartner & Jones, 2015). The impact of attention allocation on understanding the choices made by both individuals and institutions cannot be overstated. In many ways, the key link between individual and organizational decision making is the dichotomous nature of attention, which leads to disjoint and episodic institutional action.
Organizations are capable of expanding their decision-making abilities, but only so much can be done. Congress is set up to handle multiple issues at once through its committee system, which allows for parallel processing (Baumgartner & Jones, 1993). But even then, the institution at the macro level (i.e., on the floor of each chamber) is still restricted to serial processing: members can vote on only one bill or resolution at a time. And even within the committees, the cognitive limitations of individuals remain at play. Committee chairs still must decide which issues to prioritize for consideration, when to hold hearings, whom to call to testify, and so on. Each of these decisions is goal oriented and boundedly rational, relying on heuristics and information processing to set the committee's agenda. One cannot understand agenda-setting without understanding the boundedly rational nature of the agenda-setter.
Conclusions
Today, in business schools as well as in schools of public policy and administration, and in political science, professors teach their students the tenets of rational choice as if this normative model could easily be deployed by human decision makers. This is simply not the case, nor can simple reorganizations bring about rational decision making. This is especially true because these organizations are so often premised on top-down authority, which implicitly presumes some sort of global rationality at the top. A movement is emerging among some academics to see whether decisions can be improved by teaching bounded rationality to decision makers. At first, these methods were developed in policy design to “nudge” policy recipients toward making better decisions (Thaler & Sunstein, 2008). Now these approaches are being directed at decision makers themselves, focusing both on how information is sought and on how it flows, as well as on the design of institutions that more directly recognize the role organizations play in compensating for decision-making limitations (Breunig & Koski, 2009; Hallsworth, Egan, Rutter, & McCrae, 2018; John, 2018).
Although theories of political decision making begin in the brain and are shaped by cognitive processes, they have implications at the organizational and systemic levels. The father of bounded rationality, Herbert Simon, began with human nature and cognitive processes in order to grasp organization theory and systemic processes. Both individuals and organizations experience these limitations, but the jump from micro- to macro-level processes requires a demystifying connection. We argue for an institutional link between the levels: the processes that take place at the individual level, because of cognitive limitations such as canalization, are replicated at the organizational level. This relationship is not coincidental but causal. Institutions bridge this micro–macro gap. They are the “rules of the game,” the collectively understood concepts organized by rules, norms, and strategies (Ostrom, 2009). Because institutions reflect and organize behavior for both individuals and organizations, they provide a coherent connection between them.
With a shift toward a bounded rationality perspective, scholars in the political and policy sciences might find an entirely new set of theoretical outcomes for their phenomena of interest. Moreover, integrating institutions and cognitive limitations into models of political decision making will allow social scientists to build more realistic models of choice, models that align with research being done every day in cognitive and social psychology, sociology, and behavioral economics. Our hope is that integrated models of choice might become the standard across all disciplines; for political scientists, the integration of an institutional connection holds the key.
References
- Aldrich, J. H. (2011). Why parties? A second look. Chicago, IL: University of Chicago Press.
- Baumgartner, F. R., & Jones, B. D. (1993). Agendas and instability in American politics. Chicago, IL: University of Chicago Press.
- Baumgartner, F. R., & Jones, B. D. (2015). The politics of information: Problem definition and the course of public policy in America. Chicago, IL: University of Chicago Press.
- Baumgartner, F. R., Breunig, C., Green-Pedersen, C., Jones, B. D., Mortensen, P. B., Nuytemans, M., & Walgrave, S. (2009). Punctuated equilibrium in comparative perspective. American Journal of Political Science, 53(3), 603–620.
- Beckmann, M. N. (2010). Pushing the agenda: Presidential leadership in U.S. lawmaking, 1953–2004. Cambridge, U.K.: Cambridge University Press.
- Bendor, J. (2003). Herbert A. Simon: Political scientist. Annual Review of Political Science, 6(1), 433–471.
- Berkowitz, E. (2017). Getting to the Affordable Care Act. Journal of Policy History, 29(4), 519–542.
- Bloch Rubin, R. (2017). Building the bloc: Intraparty organization in the U.S. Congress. Cambridge, U.K.: Cambridge University Press.
- Breunig, C., & Koski, C. (2009). Punctuated budgets and governors’ institutional powers. American Politics Research, 37(6), 1116–1138.
- Cameron, C. M. (2000). Veto bargaining: Presidents and the politics of negative power. Cambridge, MA: Cambridge University Press.
- Cooper, J., & Brady, D. W. (1981). Institutional context and leadership style: The House from Cannon to Rayburn. American Political Science Review, 75(2), 411–425.
- Cox, G. W., & McCubbins, M. D. (2005). Setting the agenda: Responsible party government in the U.S. House of Representatives. Cambridge, U.K.: Cambridge University Press.
- Craig, A. W. (2017). Policy collaboration in the United States Congress (Doctoral dissertation). The Ohio State University, Columbus, OH.
- Crawford, S. E. S., & Ostrom, E. (1995). A grammar of institutions. American Political Science Review, 89(3), 582–600.
- Curry, J. M. & Lee, F. E. (2019). Non-party government: Bipartisan lawmaking and party power in Congress. Perspectives on Politics, 17(1), 47–65.
- Davis, O. A., Dempster, M. A. H., & Wildavsky, A. (1966). A theory of the budgetary process. American Political Science Review, 60(3), 529–547.
- Downs, A. (1957a). An economic theory of democracy. New York, NY: Harper & Row.
- Downs, A. (1957b). An economic theory of political action in a democracy. Journal of Political Economy, 65(2), 135–150.
- Eissler, R., Russell, A., & Jones, B. D. (2014). New avenues for the study of agenda-setting. Policy Studies Journal, 42(S1), S71–S86.
- Fenno, R. F. (1973). Congressmen in committees. Boston, MA: Addison–Wesley.
- Fiske, S. T., Kinder, D. R., & Larter, W. M. (1983). The novice and the expert: Knowledge-based strategies in political cognition. Journal of Experimental Social Psychology, 19(4), 381–400.
- Froman, L. A., & Ripley, R. B. (1965). Conditions for party leadership: The case of the House Democrats. American Political Science Review, 59(1), 52–63.
- Gigerenzer, G., Todd, P. M., & the ABC Research Group. (1999). Simple heuristics that make us smart. New York, NY: Oxford University Press.
- Gigerenzer, G. (2001). The adaptive toolbox. In G. Gigerenzer & R. Selten (Eds.), Bounded rationality: The adaptive toolbox (pp. 37–50). Cambridge, MA: MIT Press.
- Green, M. N. (2010). The speaker of the house: A study of leadership. New Haven, CT: Yale University Press.
- Hallsworth, M., Egan, M., Rutter, J., & McCrae, J. (2018). Behavioural government: Using behavioural science to improve how governments make decisions. London, U.K.: Behavioural Insights Team.
- Hinckley, B. (1972). Coalitions in Congress: Size and ideological distance. Midwest Journal of Political Science, 16(2), 197.
- Hogarth, R. M. (1987). Judgment and choice: The psychology of decision (2nd ed.). Oxford, U.K.: John Wiley & Sons.
- Jenkins, J. A., & Stewart, C. (2012). Fighting for the speakership: The house and the rise of party government. Princeton, NJ: Princeton University Press.
- John, P. (2018). How far to nudge? Assessing behavioral public policy. Cheltenham, U.K.: Edward Elgar.
- Jones, B. D. (1994). Reconceiving decision-making in democratic politics. Chicago, IL: University of Chicago Press.
- Jones, B. D. (1997). The rational decision-making model in politics (Technical report). Department of Political Science, University of Washington, Seattle.
- Jones, B. D. (1999). Bounded rationality. Annual Review of Political Science, 2(1), 297–321.
- Jones, B. D. (2001). Politics and the architecture of choice. Chicago, IL: University of Chicago Press.
- Jones, B. D. (2003). Bounded rationality and political science: Lessons from public administration and public policy. Journal of Public Administration Research and Theory, 13(4), 395–412.
- Jones, B. D. (2017). Behavioral rationality as a foundation for public policy studies. Journal of Cognitive Systems Research, 43, 63–75.
- Jones, B. D., & Baumgartner, F. R. (2005). The politics of attention. Chicago, IL: University of Chicago Press.
- Jones, B. D., & Baumgartner, F. R. (2012). From there to here: Punctuated equilibrium to the general punctuation thesis to a theory of government information processing. Policy Studies Journal, 40(1), 1–19.
- Jones, B. D., & McGee, Z. A. (2018). Agenda setting and bounded rationality. In A. Mintz & L. Terris (Eds.), The Oxford handbook of behavioral political science. Oxford, U.K.: Oxford University Press.
- Jones, B. D., McGee, Z. A., & Shannon, B. N. (2018). Bounded rationality in political science. In R. Viale & K. Katsikopoulos (Eds.), The handbook on bounded rationality. New York, NY: Routledge.
- Jones, B. D., Thomas, H. F., & Wolfe, M. (2014). Policy bubbles. Policy Studies Journal, 42(1), 146–171.
- Jones, B. D., Workman, S., & Jochim, A. (2009). Information processing and policy dynamics. Policy Studies Journal, 37(1), 75–92.
- Kahneman, D., & Tversky, A. (1984). Choices, values, and frames. American Psychologist, 39(4), 341–350.
- Kelman, M. (2011). The heuristics debate. New York, NY: Oxford University Press.
- Koski, C., & Workman, A. (2018). Drawing practical lessons from punctuated equilibrium theory. Policy and Politics, 46(2), 293–308.
- Krehbiel, K. (1992). Information and legislative organization. Ann Arbor, MI: University of Michigan Press.
- Krehbiel, K. (1998). Pivotal politics: A theory of U.S. lawmaking. Chicago, IL: University of Chicago Press.
- Kuklinski, J. H., & Quirk, P. J. (2000). Reconsidering the rational public: Cognition, heuristics, and mass opinion. In A. Lupia, M. D. McCubbins, & S. L. Popkin (Eds.), Elements of reason: Cognition, choice, and the bounds of rationality (pp. 153–182). New York, NY: Cambridge University Press.
- Lau, R. R., & Redlawsk, D. P. (2001). Advantages and disadvantages of cognitive heuristics in political decision making. American Journal of Political Science, 45(4), 951–971.
- Lindblom, C. E. (1959). The science of muddling through. Public Administration Review, 19(2), 79–88.
- Lodge, M., Steenbergen, M. R., & Brau, S. (1995). The responsive voter. American Political Science Review, 89(2), 309–326.
- Maor, M. (2012). Policy overreaction. Journal of Public Policy, 32(3), 231–259.
- Maor, M. (2014). Policy bubbles: Policy overreaction and positive feedback. Governance, 27(3), 469–487.
- March, J. G. (1994). Primer on decision making: How decisions happen. New York, NY: Simon & Schuster.
- March, J. G., & Simon, H. A. (1958). Organizations. New York, NY: Wiley.
- Martignon, L. (2001). Comparing fast and frugal heuristics and optimal models. In G. Gigerenzer & R. Selten (Eds.), Bounded rationality: The adaptive toolbox (pp. 14–172). Cambridge, MA: MIT Press.
- Matthews, D. R. (1959). The folkways of the United States Senate: Conformity to group norms and legislative effectiveness. American Political Science Review, 53(4), 1064–1089.
- Mayhew, D. (1974). Congress: The electoral connection. New Haven, CT: Yale University Press.
- Newell, A., & Simon, H. A. (1972). Human problem solving. New York, NY: Prentice Hall.
- Ostrom, E. (1997). A behavioral approach to the rational choice theory of collective action: Presidential address, American Political Science Association, 1997. American Political Science Review, 92(1), 1–22.
- Ostrom, E. (1999). Coping with the tragedies of the commons. Annual Review of Political Science, 2(1), 493–535.
- Ostrom, E. (2000). Collective action and the evolution of social norms. Journal of Economic Perspectives, 14(3), 137–158.
- Ostrom, E. (2005). Understanding institutional diversity. Princeton, NJ: Princeton University Press.
- Ostrom, E. (2009). Institutional rational choice: An assessment of the institutional analysis and development framework. In C. Weible & P. A. Sabatier (Eds.), Theories of the policy process (2nd ed.). Boulder, CO: Westview Press.
- Ostrom, E. (2011). Background on the institutional analysis and development framework. Policy Studies Journal, 39(1), 7–27.
- Padgett, J. F. (1980). Bounded rationality in budgetary research. American Political Science Review, 74(2), 354–372.
- Petersen, M. B. (2015). Evolutionary political psychology: On the origin and structure of heuristics and biases in politics. Political Psychology, 36(1), 45–78.
- Pinker, S. (1997). How the mind works. New York, NY: Norton.
- Redlawsk, D., & Lau, R. R. (2013). Behavioral decision-making. In L. Huddy, D. O. Sears, & J. S. Levy (Eds.), The Oxford handbook of political psychology (2nd ed., pp. 130–164). New York, NY: Oxford University Press.
- Riker, W. H. (1962). The theory of political coalitions. New Haven, CT: Yale University Press.
- Riker, W. H., & Ordeshook, P. C. (1968). A theory of the calculus of voting. American Political Science Review, 62(1), 25–42.
- Rohde, D. W. (1991). Parties and leaders in the postreform House. Chicago, IL: University of Chicago Press.
- Rohde, D. W. (2013). Reflections on the practice of theorizing: Conditional party government in the twenty-first century. Journal of Politics, 75(4), 849–864.
- Schrad, M. L. (2007). Constitutional blemishes: American alcohol prohibition and repeal as policy punctuation. Policy Studies Journal, 35(3), 437–463.
- Shepsle, K. A. (2010). Analyzing politics: Rationality, behavior and institutions (2nd ed.). New York, NY: W. W. Norton.
- Simon, H. A. (1947). Administrative behavior. New York, NY: Free Press.
- Simon, H. A. (1957). Models of man: Social and rational. Hoboken, NJ: Wiley.
- Simon, H. A. (1983). Reason in human affairs. Stanford, CA: Stanford University Press.
- Simon, H. A. (1985). Human nature in politics: The dialogue with political science and psychology. American Political Science Review, 79(2), 293–304.
- Simon, H. A. (1990). Invariants of human behavior. Annual Review of Psychology, 41(1), 1–20.
- Simon, H. A. (1996). The sciences of the artificial. Cambridge, MA: MIT Press.
- Simon, H. A., & Newell, A. (1958). Heuristic problem solving: The next advance in operations research. Operations Research, 6(1), 1–10.
- Strahan, R. (2007). Leading representatives: The agency of leaders in the politics of the U.S. House. Baltimore, MD: Johns Hopkins University Press.
- Taylor, M. (1987). Anarchy and cooperation. New York, NY: John Wiley.
- Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. New Haven, CT: Yale University Press.
- Theriault, S. M. (2008). Party polarization in Congress. Cambridge, U.K.: Cambridge University Press.
- Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.
- Walker, J. L. (1983). The origins and maintenance of interest groups in America. American Political Science Review, 77(2), 390–406.
- Wildavsky, A. (1964). The politics of the budgetary process. Boston, MA: Little, Brown.
- Worsham, J. (2006). Up in smoke: Mapping subsystem dynamics in tobacco policy. Policy Studies Journal, 34(3), 437–452.
- Workman, S., Shafran, B., & Bark, T. (2017). Problem definition and information provision by federal bureaucrats. Cognitive Systems Research, 33, 140–152.
Notes
1. Simon (1985) notes that this shift can be observed as psychologists began asking subjects to speak aloud as they completed experimental tasks. Before this shift in thinking within psychology, data on what subjects thought during experiments were not seen as wholly objective for analysis (Simon, 1985).
2. Although the term “satisficing” is not used explicitly in the first edition, the fourth edition contains a note from Simon indicating that Administrative Behavior did include the first sketch of this later-developed idea.
3. This concept is sometimes referred to as “identification with the means.”
4. Despite Gigerenzer et al.’s (1999) empirical evidence, the debate about the role of heuristics remains unresolved. The tension between the two most prominent schools of thought is not only rooted in whether heuristics are harmful to decision making. Instead, as Kelman (2011, p. 6) points out, “scholars in the two schools think differently about what it means to behave rationally, think differently about how the use of particular heuristics emerges, think differently about the processes by which we reach judgments when reasoning heuristically, and think differently about whether people are less prone to use heuristics” given their attributes and time to think. In other words, the dichotomy presented in this article is simplistic, and we encourage readers to explore the debate further. See Kelman (2011) or Petersen (2015) for excellent summaries.