Article
Agent-Based Computational Modeling and International Relations Theory: Quo Vadis?
Claudio Cioffi-Revilla
Agent-based computational modeling (ABM, for short) is a formal and supplementary methodological approach used in international relations (IR) theory and research, based on the general ABM paradigm and computational methodology as applied to IR phenomena. ABM of such phenomena varies along three fundamental dimensions: scale of organization (spanning foreign policy, international relations, regional systems, and global politics), geospatial scale, and temporal scale. ABM is part of the broader complexity science paradigm, although ABMs can also be applied without complexity concepts. There have been scores of peer-reviewed publications using ABM to develop IR theory in recent years, building on pioneering, pre-agent-based work in computational IR that originated in the 1960s. Main areas of theory and research using ABM in IR theory include dynamics of polity formation (politogenesis), foreign policy decision making, conflict dynamics, transnational terrorism, and environmental impacts such as climate change. Enduring challenges for ABM in IR theory include learning the applicable ABM methodology itself, publishing sufficiently complete models, accumulating knowledge, evolving new standards and methodology, and the special demands of interdisciplinary research, among others. Besides further development of the main themes identified thus far, future research directions include ABM applied to IR in the political interaction domains of space and cyber; new integrated models of IR dynamics across the domains of land, sea, air, space, and cyber; and world order and long-range models.
Article
Agent-Based Modeling in Political Decision Making
Lin Qiu and Riyang Phang
Political systems involve citizens, voters, politicians, parties, legislatures, and governments. These political actors interact with each other and dynamically alter their strategies according to the results of their interactions. A major challenge in political science is to understand the dynamic interactions between political actors and extrapolate from the process of individual political decision making to collective outcomes. Agent-based modeling (ABM) offers a means to comprehend and theorize the nonlinear, recursive, and interactive political process. It views political systems as complex, self-organizing, self-reproducing, and adaptive systems consisting of large numbers of heterogeneous agents that follow a set of rules governing their interactions. It allows the specification of agent properties and rules governing agent interactions in a simulation to observe how micro-level processes generate macro-level phenomena. It forces researchers to make the assumptions underlying a theory explicit, facilitates the discovery of extensions and boundary conditions of the modeled theory through what-if computational experiments, and helps researchers understand dynamic processes in the real world. ABM has been used to build models that address critical questions in political decision making, including why voter turnout remains high, how party coalitions form, how voters’ knowledge and emotion affect election outcomes, and how political attitudes change over the course of a campaign. These models illustrate the use of ABM in explicating assumptions and rules of theoretical frameworks, simulating repeated execution of these rules, and revealing emergent patterns and their boundary conditions. While ABM has limitations in external validity and robustness, it provides political scientists with a bottom-up approach to studying complex systems by clearly defining the behavior of various actors, generating theoretical insights into political phenomena.
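To make the bottom-up logic concrete, the following minimal Python sketch implements a hypothetical turnout model in the spirit described above; the imitation rule, parameters, and agent attributes are illustrative assumptions, not a reconstruction of any published model. Agents with a fixed sense of civic duty repeatedly revise their decision to vote based on how many randomly sampled peers voted, and aggregate turnout emerges from these micro-level rules.

```python
# Minimal, illustrative agent-based model of turnout (hypothetical rules,
# not a reconstruction of any specific published model).
import random

class Voter:
    def __init__(self):
        self.duty = random.random()      # fixed sense of civic duty
        self.voted = random.random() < self.duty

    def update(self, neighbors):
        # Simple imitation rule: the propensity to vote rises with the
        # share of sampled peers who voted in the previous round.
        peer_turnout = sum(n.voted for n in neighbors) / len(neighbors)
        propensity = 0.5 * self.duty + 0.5 * peer_turnout
        self.voted = random.random() < propensity

def simulate(n_agents=1000, n_rounds=20, n_neighbors=5):
    agents = [Voter() for _ in range(n_agents)]
    for _ in range(n_rounds):
        for agent in agents:
            neighbors = random.sample(agents, n_neighbors)
            agent.update(neighbors)
        turnout = sum(a.voted for a in agents) / n_agents
        print(f"turnout: {turnout:.2f}")  # macro-level outcome of micro-level rules

if __name__ == "__main__":
    simulate()
```

Varying the weight on peer turnout or the distribution of duty in such a sketch is the kind of what-if computational experiment the abstract describes.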
Article
Attitudes Toward Homosexuality and LGBT People: Causal Attributions for Sexual Orientation
Peter Hegarty
Social scientists have debated whether belief in a biological basis for sexual orientation engenders more positive attitudes toward gay men and lesbians. Belief in the biological theory has often been observed to be correlated with pro-lesbian/gay attitudes, and this gives some “weak” support for the hypothesis. There is far less “strong” evidence that biological beliefs have caused a noteworthy shift in heterosexist attitudes, or that they hold any essential promise of so doing. One reason for this divergence between the weak and strong hypothesis is that beliefs about causality are influenced by attitudes and group identities. Consequently, beliefs about a biological basis of sexual orientation have identity-expressive functions over and above their strictly logical causal implications about nature/nurture issues. Four other factors explain why the biological argument of the 1990s was intuitively appealing as a pro-gay tool, although there is no strong evidence that it had a substantial impact in making public opinion in the USA more pro-gay. These factors are that the biological argument (a) implied that sexuality is a discrete social category grounded in fundamental differences between people, (b) implied that sexual orientation categories are historically and culturally invariant, (c) implied that gender roles and stereotypes have a biological basis, and (d) framed homosexual development, not heterosexual development, as needing explanation. Understanding this literature is important and relevant for conceptualizing the relationship between biological attributions and social attitudes in domains beyond sexual orientations, such as in the more recent research on reducing transphobia and essentialist beliefs about gender.
Article
Bayesian Analyses of Political Decision Making
Kumail Wasif and Jeff Gill
Bayes’ theorem is a relatively simple equation but one of the most important mathematical principles discovered. It is a formalization of a basic cognitive process: updating expectations as new information is obtained. It was derived from the laws of conditional probability by Reverend Thomas Bayes and published posthumously in 1763. In the 21st century, it is used in academic fields ranging from computer science to social science.
The theorem’s most prominent use is in statistical inference. In this regard, there are three essential tenets of Bayesian thought that distinguish it from standard approaches. First, any quantity that is not known as an absolute fact is treated probabilistically, meaning that a numerical probability or a probability distribution is assigned. Second, research questions and designs are based on prior knowledge and expressed as prior distributions. Finally, these prior distributions are updated by conditioning on new data through the use of Bayes’ theorem to create a posterior distribution that is a compromise between prior and data knowledge.
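In standard notation (using θ for an unknown parameter and y for observed data, symbols introduced here for illustration rather than taken from the text above), the updating process just described is Bayes' theorem:

```latex
% Posterior = likelihood x prior, normalized by the probability of the data:
\[
  p(\theta \mid y) \;=\; \frac{p(y \mid \theta)\, p(\theta)}{p(y)}
  \;\propto\; p(y \mid \theta)\, p(\theta)
\]
% The posterior distribution p(theta | y) is the compromise between prior
% knowledge p(theta) and the information carried by the data p(y | theta).
```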
This approach has a number of advantages, especially in social science. First, it gives researchers the probability of observing the parameter given the data, which is the inverse of the results from frequentist inference and more appropriate for social scientific data and parameters. Second, Bayesian approaches excel at estimating parameters for complex data structures and functional forms, and provide more information about these parameters compared to standard approaches. This is possible due to stochastic simulation techniques called Markov Chain Monte Carlo. Third, Bayesian approaches allow for the explicit incorporation of previous estimates through the use of the prior distribution. This provides a formal mechanism for incorporating previous estimates and a means of comparing potential results.
Bayes’ theorem is also used in machine learning, which is a subset of computer science that focuses on algorithms that learn from data to make predictions. One such algorithm is the Naive Bayes Classifier, which uses Bayes’ theorem to classify objects such as documents based on prior relationships. Bayesian networks can be seen as a more elaborate version of the Naive Bayes Classifier that maps, estimates, and predicts relationships in a network; they are useful for more complicated prediction problems. Lastly, the theorem has even been used by qualitative social scientists as a formal mechanism for stating and evaluating beliefs and updating knowledge.
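As a rough illustration of how a Naive Bayes Classifier labels documents (the toy documents, words, and labels below are hypothetical), each candidate class is scored by its log prior plus the sum of smoothed log likelihoods of the document's words, and the highest-scoring class wins:

```python
# Illustrative Naive Bayes document classifier on hypothetical toy data.
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (list_of_words, label) pairs."""
    class_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for words, label in docs:
        word_counts[label].update(words)
        vocab.update(words)
    return class_counts, word_counts, vocab

def classify(words, class_counts, word_counts, vocab):
    total_docs = sum(class_counts.values())
    best_label, best_score = None, float("-inf")
    for label, n_docs in class_counts.items():
        # log prior + sum of log likelihoods with add-one (Laplace) smoothing
        score = math.log(n_docs / total_docs)
        total_words = sum(word_counts[label].values())
        for w in words:
            count = word_counts[label][w] + 1
            score += math.log(count / (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

docs = [("tax cut growth".split(), "economy"),
        ("troops border treaty".split(), "security"),
        ("budget deficit spending".split(), "economy")]
model = train(docs)
print(classify("spending and tax policy".split(), *model))  # -> "economy"
```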
Article
Capitalist Peace Theory: A Critical Appraisal
Gerald Schneider
Capitalist peace theory (CPT) has gained considerable attention in international relations theory and the conflict literature. Its proponents maintain that a capitalist organization of an economy pacifies states internally and externally. They portray CPT either as a complement to or a substitute for other liberal explanations, such as the democratic peace thesis, but disagree about the facet of capitalism that is supposed to reduce the risk of political violence. Key contributions have identified three main drivers of the capitalist peace phenomenon: the fiscal constraints that a laissez-faire regimen puts on potentially aggressive governments, the mollifying norms that a capitalist organization creates, and the increased ability of capitalist governments to signal their intentions effectively in a confrontation with an adversary. CPT should be based on a narrow definition of capitalism and should scrutinize motives and constraints of the main actors more deeply. Future contributions to the CPT literature should pay close attention to classic theories of capitalism, which all considered individual risk taking and the dramatic changes between booms and busts to be key constitutive features of this form of economic governance. Finally, empirical tests of the proposed causal mechanism should rely on data sets in which capitalists appear as actors and not as “structures.” If the literature takes these objections seriously, CPT could establish itself as a central theory of peace and war in two respects: First, it could serve as an antidote to “critical” approaches on the far left or far right that see in capitalism a source of conflict rather than of peace. Second, it could become an important complement to commercial liberalism that stresses external openness rather than internal freedoms as an economic cause of peace and that particularly sees trade and foreign direct investment as pacifying forces.
Article
The Challenges of Making Research Collaboration in Africa More Equitable
Susan Dodsworth
Collaborative research has a critical role to play in furthering our understanding of African politics. Many of the most important and interesting questions in the field are difficult, if not impossible, to tackle without some form of collaboration, either between academics within and outside of Africa—often termed North–South research partnerships—or between those researchers and organizations from outside the academic world. In Africa in particular, collaborative research is becoming more frequent and more extensive. This is due not only to the value of the research that it can produce but also to pressures on the funding of African scholars and academics in the Global North, alongside similar pressures on the budgets of non-academic collaborators, including bilateral aid agencies, multilateral organizations, and national and international non-government organizations.
Collaborative projects offer many advantages to these actors beyond access to new funding sources, so they constitute more than mere “marriages of convenience.” These benefits typically include access to methodological expertise and valuable new data sources, as well as opportunities to increase both the academic and “real-world” impact of research findings. Yet collaborative research also raises a number of challenges, many of which relate to equity. They center on issues such as who sets the research agenda, whether particular methodological approaches are privileged over others, how responsibility for different research tasks is allocated, how the benefits of that research are distributed, and the importance of treating colleagues with respect despite the narrative of “capacity-building.” Each challenge manifests in slightly different ways, and to varying extents, depending on the type of collaboration at hand: North–South research partnership or collaboration between academics and policymakers or practitioners. This article discusses both types of collaboration together because of their potential to overlap in ways that affect the severity and complexity of those challenges.
These challenges are not unique to research in Africa, but they tend to manifest in ways that are distinct or particularly acute on the continent because of the context in which collaboration takes place. In short, the legacy of colonialism matters. That history not only shapes who collaborates with whom but also who does so from a position of power and who does not. Thus, the inequitable nature of some research collaborations is not simply the result of oversights or bad habits; it is the product of entrenched structural factors that produce, and reproduce, imbalances of power. This means that researchers seeking to make collaborative projects in Africa more equitable must engage with these issues early, proactively, and continuously throughout the entire life cycle of those research projects. This is true not just for researchers based in the Global North but for scholars from, or working in, Africa as well.
Article
Civil War Termination
Caroline A. Hartzell
Civil wars have typically been terminated by a variety of means, including military victories, negotiated settlements and ceasefires, and “draws.” Three very different historical trends in the means by which civil wars have ended can be identified for the post–World War II period. A number of explanations have been developed to account for those trends, some of which focus on international factors and others on national or actor-level variables. Efforts to explain why civil wars end as they do are considered important because one of the most contested issues among political scientists who study civil wars is how “best” to end a civil war if the goal is to achieve a stable peace. Several factors have contributed to this debate, among them conflicting results produced by various studies on this topic as well as different understandings of the concepts of war termination, civil war resolution, peace-building, and stable peace.
Article
Comparative Political Regimes: Consensus and Majoritarian Democracy
Matthijs Bogaards
Ever since Aristotle, the comparative study of political regimes and their performance has relied on classifications and typologies. The study of democracy today has been influenced heavily by Arend Lijphart’s typology of consensus versus majoritarian democracy. Scholars have applied it to more than 100 countries and sought to demonstrate its impact on no fewer than 70 dependent variables. This article summarizes our knowledge about the origins, functioning, and consequences of two basic types of democracy: those that concentrate power and those that share and divide power. In doing so, it will review the experience of established democracies and question the applicability of received wisdom to new democracies.
Article
Comparative Public Policy
Guillaume Fontaine
We contribute to the debate surrounding comparative public policy (CPP) analysis as a method and an area of policy studies, based on the following questions: What is CPP? What is it for? How can it be conducted? We begin with a presentation of the historical evolution of the field, its conceptual heterogeneity, and the recent attempts to bridge the gap between basic and applied research through the policy design framework. We proceed with a discussion of the logics operating in CPP, their approaches to causality and causation, and their contribution to middle-range theory. Next, we explain the fundamental problems of the comparative method, starting with a review of the main protocols in use, then presenting their main methodological pitfalls. The article concludes with a reflection about the contribution of CPP to policy studies through design.
Article
Computational Models of Political Decision Making
Sung-youn Kim
A growing body of research uses computational models to study political decision making and behavior such as voter turnout, vote choice, party competition, social networks, and cooperation in social dilemmas. Advances in the computational modeling of political decision making are closely related to the idea of bounded rationality. In effect, models of full rationality can usually be analyzed by hand, but models of bounded rationality are complex and require computer-assisted analysis. Most computational models used in the literature are agent based, that is, they specify how decisions are made by autonomous, interacting computational objects called “agents.” However, an important distinction can be made between two classes of models based on the approaches they take: behavioral and information processing. Behavioral models specify relatively simple behavioral rules to relax the standard rationality assumption and investigate the system-level consequences of these rules in conjunction with deductive, game-theoretic analysis. In contrast, information-processing models specify the underlying information processes of decision making—the way political actors receive, store, retrieve, and use information to make judgments and choices—within the structural constraints on human cognition, and examine whether and how these processes produce the observed behavior in question at the individual or aggregate level. Compared to behavioral models, information-processing computational models are relatively rare, new to political scientists, and underexplored. However, by focusing on the underlying mental processes of decision making that must occur within the structural constraints on human cognition, they have the potential to provide a more general, psychologically realistic account of political decision making and behavior.
Article
Conflict Management of Territorial Disputes
Krista E. Wiegand
Despite the decline in interstate wars, there remain dozens of interstate disputes that could erupt into diplomatic crises and escalate into armed conflict. By far the most difficult interstate disputes are territorial disputes, followed by maritime and river boundary disputes. These disputes are not only costly for the states involved, but also potentially dangerous for states in the region and allies of disputant states who could become entrapped in armed conflicts. Fortunately, though many disputes remain unresolved and some endure for decades or more than a century, many other disputes are peacefully resolved through conflict management tools.
Understanding the factors that influence conflict management—the means by which governments decide their foreign policy strategies relating to interstate disputes and civil conflicts—is critical to policy makers and scholars interested in the peaceful resolution of such disputes. Though conflict management of territorial and maritime disputes can include a spectrum of management tools, including the use of force, most conflict management tools are peaceful, involving direct bilateral negotiations between the disputant states, non-binding third-party mediation, or binding legal dispute resolution. Governments usually attempt the most direct dispute resolution method, bilateral negotiations, but such negotiations often break down due to the uncompromising positions of the disputing states, leading governments to turn to other resolution methods. There are pros and cons to each of the dispute resolution methods, and certain factors influence the decisions that governments make about the management of their territorial and maritime disputes. Overall, the peaceful resolution of territorial and maritime disputes is an important but complicated issue for states both directly involved in and indirectly affected by the persistence of such disputes.
Article
Counterfactuals and Foreign Policy Analysis
Richard Ned Lebow
Counterfactuals seek to alter some feature or event of the past and, by means of a chain of causal logic, show how the present might, or would, be different. Counterfactual inquiry—or control of counterfactual situations—is essential to any causal claim. More importantly, counterfactual thought experiments are essential to the construction of analytical frameworks. Policymakers routinely use them to identify problems, work their way through problems, and select responses. Good foreign-policy analysis must accordingly engage and employ counterfactuals.
There are two generic types of counterfactuals: minimal-rewrite counterfactuals and miracle counterfactuals. They have relevance when formulating propositions and probing contingency and causation. There is also a set of protocols for using both kinds of counterfactuals toward these ends, and the article illustrates these uses and protocols with historical examples. Policymakers invoke counterfactuals frequently, especially with regard to foreign policy, both to choose policies and to defend them to key constituencies. However, they use counterfactuals in a haphazard and unscientific manner, and it is important to learn more about how they think about and employ counterfactuals in order to understand foreign policy.
Article
Decision Support Systems
Sean B. Eom
A decision support system is an interactive human–computer decision-making system that supports decision makers rather than replaces them, utilizing data and models. It solves unstructured and semi-structured problems with a focus on effectiveness rather than efficiency in decision processes. In the early 1970s, scholars in this field began to recognize the important roles that decision support systems (DSS) play in supporting managers in their semistructured or unstructured decision-making activities. Over the past five decades, DSS has made progress toward becoming a solid academic field. Nevertheless, since the mid-1990s, the inability of DSS to fully satisfy a wide range of information needs of practitioners provided an impetus for a new breed of DSS, business intelligence systems (BIS). The academic discipline of DSS has undergone numerous changes in technological environments including the adoption of data warehouses. Until the late 1990s, most textbooks referred to “decision support systems.” Nowadays, many of them have replaced “decision support systems” with “business intelligence.” While DSS/BIS began in academia and were quickly adopted in business, in recent years these tools have moved into government and the academic field of public administration. In addition, modern political campaigns, especially at the national level, are based on data analytics and the use of big data analytics. The first section of this article reviews the development of DSS as an academic discipline. The second section discusses BIS and their components (the data warehousing environment and the analytical environment). The final section introduces two emerging topics in DSS/BIS: big data analytics and cloud computing analytics. Before the era of big data, most data collected by business organizations could easily be managed by traditional relational database management systems with a serial processing system. Social networks, e-business networks, Internet of Things (IoT), and many other wireless sensor networks are generating huge volumes of data every day. The challenge of big data has demanded a new business intelligence infrastructure with new tools (Hadoop cluster, the data warehousing environment, and the business analytical environment).
Article
The Decision to Vote or to Abstain
Elisabeth Gidengil
Why voters turn out on Election Day has eluded a straightforward explanation. Rational choice theorists have proposed a parsimonious model, but its logical implication is that hardly anyone would vote since their one vote is unlikely to determine the election outcome. Attempts to save the rational choice model incorporate factors like the expressive benefits of voting, yet these modifications seem to be at odds with core assumptions of rational choice theory. Still, some people do weigh the expected costs and benefits of voting and take account of the closeness of the election when deciding whether or not to vote. Many more, though, vote out of a sense of civic duty. In contrast to the calculus of voting model, the civic voluntarism model focuses on the role of resources, political engagement, and to a lesser extent, recruitment in encouraging people to vote. It pays particular attention to the sources of these factors and traces complex paths among them.
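In the textbook calculus-of-voting notation (a standard formulation rather than one drawn from any specific study discussed here), the decision described above reads as follows:

```latex
% Calculus of voting: a citizen votes when the expected reward R is positive.
% p  = probability that one vote is pivotal (decides the election)
% B  = benefit if the preferred candidate wins
% C  = cost of voting
% D  = expressive or duty-based benefit of the act of voting itself
\[
  R = pB - C + D, \qquad \text{vote if and only if } R > 0 .
\]
% Because p is tiny in large electorates, pB - C is usually negative,
% which is why turnout is hard to explain without the D term.
```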
There are many other theories of why people vote in elections. Intergenerational transmission and education play central roles in the civic voluntarism model. Studies that link official voting records with census data provide persuasive evidence of the influence of parental turnout. Education is one of the best individual-level predictors of voter turnout, but critics charge that it is simply a proxy for pre-adult experiences within the home. Studies using equally sophisticated designs that mimic the logic of controlled experiments have reached contradictory conclusions about the association between education and turnout. Some of the most innovative work on voter turnout is exploring the role of genetic influences and personality traits, both of which have an element of heritability. This work is in its infancy, but it is likely that many genes shape the predisposition to vote and that they interact in complex ways with environmental influences. Few clear patterns have emerged in the association between personality and turnout. Finally, scholars are beginning to recognize the importance of exploring the connection between health and turnout.
Article
Demobilization Challenges After Armed Conflict
Margit Bussmann
Demobilization of ex-combatants is a major challenge in the transition to a stable postconflict society. The combatants must be convinced to abandon the armed confrontation and hand over their weapons despite security concerns and a lack of alternative means of income. The challenges of overcoming the commitment problem differ, in terms of the number of combatants who must be demobilized, between conflicts that end in a decisive victory and conflicts that reach a military stalemate. Peace agreements can offer several solutions for overcoming the parties’ commitment problems, but often the implementation of the provisions is incomplete. Third parties can offer to monitor an agreement and provide security guarantees. International actors increasingly assist with demobilization and reintegration programs for former combatants and help to overcome security-related concerns and economic challenges. Another solution offered is military power-sharing arrangements and the integration of rebel fighters into the national military. These measures are intended to reduce the pool of potential recruits for existing or new rebel groups. If ex-combatants are left without means of income to support themselves and their families, the risk is higher that they will remobilize and conflict will recur. Reintegration into the civilian labor market, however, is often difficult in the weak economies of war-affected countries.
Article
The Diversification of Deterrence: New Data and Novel Realities
Shannon Carcelli and Erik A. Gartzke
Deterrence theory is slowly beginning to emerge from a long sleep after the Cold War, and from its theoretical origins over half a century ago. New realities have led to a diversification of deterrence in practice, as well as to new avenues for its study and empirical analysis. Three major categories of changes in the international system—new actors, new means of warfare, and new contexts—have led to corresponding changes in the way that deterrence is theorized and studied. First, the field of deterrence has broadened to include nonstate and nonnuclear actors, challenging scholars to develop new types of theories and tests. Second, cyberthreats, terrorism, and diverse nuclear force structures have led scholars to consider means in new ways. Third, the likelihood of an international crisis has shifted as a result of physical, economic, and normative changes in the costs of crisis, which has led scholars to address the crisis context itself more closely. The assumptions of classical deterrence are breaking down, in research as well as in reality. However, more work needs to be done in understanding these international changes and building successful deterrence policy. A better understanding of new modes of deterrence will aid policymakers in managing today’s threats and in preventing future deterrence failures, even as it prompts the so-called virtuous cycle of new theory and additional empirical testing.
Article
Don't Expose Yourself: Discretionary Exposure to Political Information
Gaurav Sood and Yphtach Lelkes
The news media have been disrupted. Broadcasting has given way to narrowcasting, editorial control to control by “friends” and personalization algorithms, and a few reputable producers to millions with shallower reputations. Today, not only is there a much broader variety of news, but there is also more of it. The news is also always on. And it is available almost everywhere. The search costs have come crashing down, so much so that much of the world’s information is at our fingertips. Google anything and the chances are that there will be multiple pages of relevant results.
Such a dramatic expansion of choice and access is generally considered a Pareto improvement. But the worry is that we have fashioned defeat from the bounty by choosing badly. The expansion in choice is blamed both for increasing the “knowledge gap,” the gap between how much the politically interested and politically disinterested know about politics, and for increasing partisan polarization. We reconsider the evidence for these claims. The claim about the media’s role in rising knowledge gaps does not need explaining because knowledge gaps are not increasing. For polarization, the story is more nuanced. Whatever evidence exists suggests that the effect is modest, but measuring the long-term effects of a rapidly changing media landscape is hard, which may explain why the estimates are modest.
As we also find, even describing trends in basic explanatory variables is hard. Current measures are beset with five broad problems. The first is conceptual errors. For instance, people frequently equate preference for information from partisan sources with a preference for congenial information. Second, survey measures of news consumption are heavily biased. Third, behavioral measures from survey experiments are unreliable and inapt for learning how much information of a particular kind people consume in their real lives. Fourth, measures based on passive observation of behavior capture only a small (and likely biased) set of the total information consumed by people. Fifth, content is often coded crudely—broad judgments are made about coarse units, eliding important variation.
These measurement issues impede our ability to determine the extent to which people choose badly and the consequences of such choices. Improving measurement will do much to advance our ability to answer these important questions.
Article
Expected Utility and Political Decision Making
Jona Linde
Expected utility theory is widely used to formally model decisions in situations where outcomes are uncertain. As uncertainty is arguably commonplace in political decisions, being able to take that uncertainty into account is of great importance when building useful models and interpreting empirical results. Expected utility theory has provided possible explanations for a host of phenomena, from the failure of the median voter theorem to the making of vague campaign promises and the delegation of policymaking.
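In its general textbook form (the notation below is a standard definition, not drawn from any particular application named above), the expected utility of an action a over uncertain states s is:

```latex
% Expected utility of action a, with p(s) the probability of state s
% and u(a, s) the utility of the outcome of a in state s:
\[
  EU(a) \;=\; \sum_{s} p(s)\, u(a, s),
\]
% and an expected-utility maximizer chooses the action with the highest EU(a).
```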
A good expected utility model may provide alternative explanations for empirical phenomena and can structure reasoning about the effect of political actors’ goals, circumstances, and beliefs on their behavior. For example, expected utility theory shows that whether the median voter theorem can be expected to hold or not depends on candidates’ goals (office, policy, or vote seeking), and the nature of their uncertainty about voters. In this way expected utility theory can help empirical researchers derive hypotheses and guide them towards the data required to exclude alternative explanations.
Expected utility has been especially successful in spatial voting models, but the range of topics to which it can be applied is far broader. Applications to pivotal voting or politicians’ redistribution decisions show this wider value. However, there is also a range of promising topics that have received ample attention from empirical researchers, but that have so far been largely ignored by theorists applying expected utility theory.
Although expected utility theory has its limitations, more modern theories that build on the expected utility framework, such as prospect theory, can help overcome these limitations. Notably these extensions rely on the same modeling techniques as expected utility theory and can similarly elucidate the mechanisms that may explain empirical phenomena. This structured way of thinking about behavior under uncertainty is the main benefit provided by both expected utility theory and its extensions.
Article
Fast and Frugal Heuristics
Konstantinos V. Katsikopoulos
The polymath and political scientist Herbert Simon dared to point out that the amounts of time, information, computation, and other resources required for maximizing utility far exceed what is possible when real people have to make real decisions in the real world. In psychology, there are two main approaches to studying actual human judgment and decision making—the heuristics-and-biases and the fast-and-frugal-heuristics research programs. A distinctive characteristic of the fast-and-frugal-heuristics program is that it specifies formal models of heuristics and attempts to determine when people use them and what performance they achieve. These models rely on a few pieces of information that are processed in computationally simple ways. The information and computation are within human reach, which means that people rely on information they have relatively easy access to and employ simple operations such as summing or comparing numbers. Research in the laboratory and in the wild has found that most people use fast and frugal heuristics most of the time if a decision must be made quickly, information is expensive financially or cognitively to gather, or one or a few attributes of the problem strongly point toward an option. The ways in which people switch between heuristics are studied in the framework of the adaptive toolbox. Work employing computer simulations and mathematical analyses has uncovered conditions under which fast and frugal heuristics achieve higher performance than benchmarks from statistics and machine learning, and vice versa. These conditions constitute the theory of ecological rationality. This theory suggests that fast and frugal heuristics perform better than complex optimization models if the available information is of low quality or scarce, or if there exist dominant options or attributes. The bias-variance decomposition of statistical prediction error, which is explained in layperson’s terms, underpins these claims. Research on fast and frugal heuristics suggests a governance approach based not on nudging, but on boosting citizen competence.
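A well-known example of such a formal model is the take-the-best heuristic. The Python sketch below (with hypothetical cues, validities, and data) illustrates how a lexicographic, one-reason decision rule compares two options using only simple operations such as comparing numbers:

```python
# Sketch of the "take-the-best" fast-and-frugal heuristic. Cue names,
# validities, and the city data are hypothetical, for illustration only.
def take_the_best(option_a, option_b, cues):
    """cues: list of (cue_function, validity) pairs.
    Each cue_function maps an option to 1 (cue present) or 0 (absent)."""
    # Check cues in order of validity and stop at the first one that
    # discriminates between the two options (one-reason decision making).
    for cue, _validity in sorted(cues, key=lambda c: c[1], reverse=True):
        a, b = cue(option_a), cue(option_b)
        if a != b:
            return option_a if a > b else option_b
    return option_a  # no cue discriminates: guess (here, the first option)

# Hypothetical task: infer which of two cities is larger.
cities = {
    "City A": {"capital": 1, "airport": 1, "university": 1},
    "City B": {"capital": 0, "airport": 1, "university": 0},
}
cues = [
    (lambda city: cities[city]["capital"], 0.9),
    (lambda city: cities[city]["airport"], 0.8),
    (lambda city: cities[city]["university"], 0.7),
]
print(take_the_best("City A", "City B", cues))  # -> "City A"
```

The frugality is visible in the code: the decision can be reached after inspecting a single cue, rather than weighting and summing all available information.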
Article
Flood Damage Assessments: Theory and Evidence From the United States
Laura Bakkensen and Logan Blair
Flooding remains one of the globe’s most devastating natural hazards and a leading driver of natural disaster losses across many countries, including the United States. As such, a rich and growing literature aims to better understand, model, and assess flood losses. Several major theoretical and empirical themes emerge from the literature. Fundamental to the flood damage assessment literature are definitions of flood damage, including a typology of flood damage, such as direct and indirect losses. In addition, the literature theoretically and empirically assesses major determinants of flood damage, including hydrological factors, measurement of the physical features in harm’s way, as well as understanding and modeling protective activities, such as flood risk mitigation and adaptation, that all co-determine the overall flood losses. From there, common methods to quantify flood damage take these factors as inputs, translating hydrological risk, exposure, and vulnerability into quantifiable flood loss estimates through a flood damage function, and include both ex ante expected loss assessments and ex post event-specific analyses. To do so, high-quality data are key across all model steps and can be found across a variety of sources. Early 21st-century advancements in spatial data and remote sensing push the literature forward. While the topics and themes apply more generally to flood damage across the globe, examples from the United States illustrate key points. Understanding the main themes and insights in this important research area is critical for researchers, policy-makers, and practitioners to better understand, utilize, and extend the existing flood damage assessment literature in order to lessen or even prevent future tragedy.
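As a simplified illustration of a flood damage function (the depth-damage curve, structure value, and scenario probabilities below are hypothetical, not taken from any official damage model), an ex ante expected annual loss can be computed by combining hazard, exposure, and vulnerability:

```python
# Illustrative ex ante flood loss estimate: hazard (depth and probability),
# exposure (structure value), and vulnerability (depth-damage curve).
def damage_fraction(depth_m, curve):
    """Linearly interpolate a depth-damage curve: depth (m) -> fraction of value lost."""
    points = sorted(curve.items())
    if depth_m <= points[0][0]:
        return points[0][1]
    for (d0, f0), (d1, f1) in zip(points, points[1:]):
        if d0 <= depth_m <= d1:
            return f0 + (f1 - f0) * (depth_m - d0) / (d1 - d0)
    return points[-1][1]

# Hypothetical vulnerability curve for a one-story residence.
curve = {0.0: 0.0, 0.5: 0.2, 1.0: 0.4, 2.0: 0.7, 3.0: 0.9}

# Hypothetical hazard scenarios: (annual probability, flood depth in meters).
scenarios = [(0.10, 0.3), (0.02, 1.2), (0.01, 2.5)]
structure_value = 250_000  # exposure

# Expected annual loss: sum over scenarios of probability x value x damage fraction.
expected_loss = sum(p * structure_value * damage_fraction(d, curve)
                    for p, d in scenarios)
print(f"Expected annual loss: ${expected_loss:,.0f}")
```

Ex post event-specific analyses follow the same logic but replace the probabilistic scenarios with the observed depths of a single flood.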