Agent-based computational modeling (ABM, for short) is a formal and supplementary methodological approach used in international relations (IR) theory and research, based on the general ABM paradigm and computational methodology as applied to IR phenomena. ABM of such phenomena varies according to three fundamental dimensions: scale of organization—spanning foreign policy, international relations, regional systems, and global politics—as well as geospatial and temporal scales. ABM is part of the broader complexity science paradigm, although ABMs can also be applied without complexity concepts. There have been scores of peer-reviewed publications using ABM to develop IR theory in recent years, building on pioneering pre-agent-based work in computational IR that originated in the 1960s. Main areas of theory and research using ABM in IR include the dynamics of polity formation (politogenesis), foreign policy decision making, conflict dynamics, transnational terrorism, and environmental impacts such as climate change. Enduring challenges for ABM in IR theory include learning the applicable ABM methodology itself, publishing sufficiently complete models, accumulating knowledge, evolving new standards and methodology, and meeting the special demands of interdisciplinary research, among others. Besides further development of the main themes identified thus far, future research directions include ABM applied to IR in the political interaction domains of space and cyber; new integrated models of IR dynamics across the domains of land, sea, air, space, and cyber; and world order and long-range models.
Lin Qiu and Riyang Phang
Political systems involve citizens, voters, politicians, parties, legislatures, and governments. These political actors interact with each other and dynamically alter their strategies according to the results of their interactions. A major challenge in political science is to understand the dynamic interactions between political actors and extrapolate from the process of individual political decision making to collective outcomes. Agent-based modeling (ABM) offers a means to comprehend and theorize the nonlinear, recursive, and interactive political process. It views political systems as complex, self-organizing, self-reproducing, and adaptive systems consisting of large numbers of heterogeneous agents that follow a set of rules governing their interactions. It allows the specification of agent properties and the rules governing agent interactions in a simulation to observe how micro-level processes generate macro-level phenomena. It forces researchers to make the assumptions underlying a theory explicit, facilitates the discovery of extensions and boundary conditions of the modeled theory through what-if computational experiments, and helps researchers understand dynamic processes in the real world. ABM models have been built to address critical questions in political decision making, including why voter turnout remains high, how party coalitions form, how voters’ knowledge and emotion affect election outcomes, and how political attitudes change through a campaign. These models illustrate the use of ABM in explicating the assumptions and rules of theoretical frameworks, simulating repeated execution of these rules, and revealing emergent patterns and their boundary conditions. While ABM has limitations in external validity and robustness, it provides political scientists a bottom-up approach to studying a complex system by clearly defining the behavior of various actors and generating theoretical insights on political phenomena.
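The micro-to-macro logic described above can be illustrated with a minimal sketch. Everything in this example is an illustrative assumption rather than any particular published model: agents hold a binary political attitude, each step one agent conforms to the majority attitude of a small random sample of peers, and a macro-level attitude distribution emerges from these micro-level rules.

```python
import random

def run_abm(n_agents=100, n_steps=200, seed=42):
    """Minimal agent-based model: agents hold a binary attitude (0 or 1)
    and, one at a time, adopt the majority attitude of a small random
    sample of peers. Micro-level imitation tends to produce macro-level
    drift toward local consensus."""
    rng = random.Random(seed)
    attitudes = [rng.randint(0, 1) for _ in range(n_agents)]
    for _ in range(n_steps):
        i = rng.randrange(n_agents)              # pick a random agent
        peers = rng.sample(range(n_agents), 5)   # observe 5 random peers
        majority = sum(attitudes[j] for j in peers) >= 3
        attitudes[i] = 1 if majority else 0      # conform to local majority
    # macro-level outcome: share of agents holding attitude 1
    return sum(attitudes) / n_agents

share = run_abm()
```

Varying the conformity rule or adding heterogeneous agent types (e.g., stubborn agents) and re-running the simulation is the kind of what-if experiment ABM researchers use to probe a theory’s boundary conditions.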
Social scientists have debated whether belief in a biological basis for sexual orientation engenders more positive attitudes toward gay men and lesbians. Belief in the biological theory has often been observed to be correlated with pro-lesbian/gay attitudes, and this gives some “weak” support for the hypothesis. There is far less “strong” evidence that biological beliefs have caused a noteworthy shift in heterosexist attitudes, or that they hold any essential promise of doing so. One reason for this divergence between the weak and strong hypotheses is that beliefs about causality are influenced by attitudes and group identities. Consequently, beliefs about a biological basis of sexual orientation have identity-expressive functions over and above their strictly logical causal implications about nature/nurture issues. Four other factors explain why the biological argument of the 1990s was intuitively appealing as a pro-gay tool, although there is no strong evidence that it had a substantive impact in making public opinion in the USA more pro-gay. These factors are that the biological argument (a) implied that sexuality is a discrete social category grounded in fundamental differences between people, (b) implied that sexual orientation categories are historically and culturally invariant, (c) implied that gender roles and stereotypes have a biological basis, and (d) framed homosexual development, not heterosexual development, as needing explanation. Understanding this literature is important and relevant for conceptualizing the relationship between biological attributions and social attitudes in domains beyond sexual orientation, such as the more recent research on reducing transphobia and essentialist beliefs about gender.
Capitalist peace theory (CPT) has gained considerable attention in international relations theory and the conflict literature. Its proponents maintain that a capitalist organization of an economy pacifies states internally and externally. They portray CPT either as a complement or as a substitute to other liberal explanations such as the democratic peace thesis. They disagree, however, about the facet of capitalism that is supposed to reduce the risk of political violence. Key contributions have identified three main drivers of the capitalist peace phenomenon: the fiscal constraints that a laissez-faire regime puts on potentially aggressive governments; the mollifying norms that a capitalist organization creates; and the increased ability of capitalist governments to signal their intentions effectively in a confrontation with an adversary. Defining capitalism narrowly through the freedom entrepreneurs enjoy domestically, this article evaluates the key causal mechanisms and empirical evidence that have been advanced in support of these competing claims. The article argues that CPT needs to be based on a narrow definition of capitalism and that it should scrutinize the motives and constraints of the main actors more deeply. Future contributions to the CPT literature should also pay close attention to classic theories of capitalism, which all considered individual risk taking and the dramatic changes between booms and busts to be key constitutive features of this form of economic governance. Finally, empirical tests of the proposed causal mechanisms should rely on data sets in which capitalists appear as actors and not as “structures.” If the literature takes these objections seriously, CPT could establish itself as a central theory of peace and war in two respects. First, it could serve as an antidote to the theory of imperialism and other “critical” approaches that see in capitalism a source of conflict rather than of peace. Second, it could become an important complement to commercial liberalism, which stresses external openness rather than internal freedoms as an economic cause of peace and particularly sees trade and foreign direct investment as pacifying forces.
Collaborative research has a critical role to play in furthering our understanding of African politics. Many of the most important and interesting questions in the field are difficult, if not impossible, to tackle without some form of collaboration, either between academics within and outside of Africa—often termed North–South research partnerships—or between those researchers and organizations from outside the academic world. In Africa in particular, collaborative research is becoming more frequent and more extensive. This is due not only to the value of the research that it can produce but also to pressures on the funding of African scholars and academics in the Global North, alongside similar pressures on the budgets of non-academic collaborators, including bilateral aid agencies, multilateral organizations, and national and international non-government organizations.
Collaborative projects offer many advantages to these actors beyond access to new funding sources, so they constitute more than mere “marriages of convenience.” These benefits typically include access to methodological expertise and valuable new data sources, as well as opportunities to increase both the academic and “real-world” impact of research findings. Yet collaborative research also raises a number of challenges, many of which relate to equity. They center on issues such as who sets the research agenda, whether particular methodological approaches are privileged over others, how responsibility for different research tasks is allocated, how the benefits of that research are distributed, and the importance of treating colleagues with respect despite the narrative of “capacity-building.” Each challenge manifests in slightly different ways, and to varying extents, depending on the type of collaboration at hand: North–South research partnership or collaboration between academics and policymakers or practitioners. This article discusses both types of collaboration together because of their potential to overlap in ways that affect the severity and complexity of those challenges.
These challenges are not unique to research in Africa, but they tend to manifest in ways that are distinct or particularly acute on the continent because of the context in which collaboration takes place. In short, the legacy of colonialism matters. That history not only shapes who collaborates with whom but also who does so from a position of power and who does not. Thus, the inequitable nature of some research collaborations is not simply the result of oversights or bad habits; it is the product of entrenched structural factors that produce, and reproduce, imbalances of power. This means that researchers seeking to make collaborative projects in Africa more equitable must engage with these issues early, proactively, and continuously throughout the entire life cycle of those research projects. This is true not just for researchers based in the Global North but for scholars from, or working in, Africa as well.
Caroline A. Hartzell
Civil wars typically have been terminated by a variety of means, including military victories, negotiated settlements and ceasefires, and “draws.” Three very different historical trends in the means by which civil wars have ended can be identified for the post–World War II period. A number of explanations have been developed to account for those trends, some of which focus on international factors and others on national or actor-level variables. Efforts to explain why civil wars end as they do are considered important because one of the most contested issues among political scientists who study civil wars is how “best” to end a civil war if the goal is to achieve a stable peace. Several factors have contributed to this debate, among them conflicting results produced by various studies on this topic as well as different understandings of the concepts of war termination, civil war resolution, peace-building, and stable peace.
Ever since Aristotle, the comparative study of political regimes and their performance has relied on classifications and typologies. The study of democracy today has been influenced heavily by Arend Lijphart’s typology of consensus versus majoritarian democracy. Scholars have applied it to more than 100 countries and sought to demonstrate its impact on no fewer than 70 dependent variables. This article summarizes our knowledge about the origins, functioning, and consequences of two basic types of democracy: those that concentrate power and those that share and divide power. In doing so, it reviews the experience of established democracies and questions the applicability of received wisdom to new democracies.
Comparative public policy (CPP) is a multidisciplinary enterprise aimed at policy learning through lesson drawing and theory building or testing. We argue that CPP faces the challenge of conceptual and analytical standardization if it is to make a significant contribution to the explanation of policy decision-making. This argument is developed in three sections based on the following questions: What is CPP? What is it for? How should it be done? We begin with a presentation of the historical evolution of the field, its conceptual heterogeneity, and the persistence of two distinct bodies of literature made up of basic and applied studies. We proceed with a discussion of the logics operating in CPP, their approaches to causality and causation, and their contribution to middle-range theory. Next, we explain the fundamental problems of the comparative method, starting with a synthesis of the main methodological pitfalls and the problems of case selection and then reviewing the main protocols in use. We conclude with a reflection on the contribution of CPP to policy design and policy analysis.
Krista E. Wiegand
Despite the decline in interstate wars, there remain dozens of interstate disputes that could erupt into diplomatic crises and evolve into military escalation. By far the most difficult interstate disputes are territorial disputes, followed by maritime and river boundary disputes. These disputes are not only costly for the states involved but also potentially dangerous for states in the region and for allies of disputant states, who could become entrapped in armed conflicts. Fortunately, though many disputes remain unresolved and some endure for decades or even more than a century, many others are peacefully resolved through conflict management tools.
Understanding the factors that influence conflict management—the means by which governments decide their foreign policy strategies relating to interstate disputes and civil conflicts—is critical to policy makers and scholars interested in the peaceful resolution of such disputes. Though conflict management of territorial and maritime disputes can include a spectrum of management tools, including use of force, most conflict management tools are peaceful, involving direct bilateral negotiations between the disputant states, non-binding third party mediation, or binding legal dispute resolution. Governments most often attempt the most direct dispute resolution method, which is bilateral negotiations, but often, such negotiations break down due to uncompromising positions of the disputing states, leading governments to turn to other resolution methods. There are pros and cons of each of the dispute resolution methods and certain factors will influence the decisions that governments make about the management of their territorial and maritime disputes. Overall, the peaceful resolution of territorial and maritime disputes is an important but complicated issue for states both directly involved and indirectly affected by the persistence of such disputes.
Richard Ned Lebow
Counterfactuals seek to alter some feature or event of the past and, by means of a chain of causal logic, show how the present might, or would, be different. Counterfactual inquiry—or control of counterfactual situations—is essential to any causal claim. More importantly, counterfactual thought experiments are essential to the construction of analytical frameworks. Policymakers routinely use them to identify problems, work their way through problems, and select responses. Good foreign-policy analysis must accordingly engage and employ counterfactuals.
There are two generic types of counterfactuals: minimal-rewrite counterfactuals and miracle counterfactuals. They have relevance when formulating propositions and probing contingency and causation. There is also a set of protocols for using both kinds of counterfactuals toward these ends, and this article illustrates these uses and protocols with historical examples. Policymakers invoke counterfactuals frequently, especially with regard to foreign policy, both to choose policies and to defend them to key constituencies. However, they tend to use counterfactuals in a haphazard and unscientific manner, so it is important to learn more about how they think about and employ counterfactuals in order to understand foreign policy.
Sean B. Eom
A decision support system is an interactive human–computer decision-making system that supports decision makers rather than replaces them, utilizing data and models. It solves unstructured and semi-structured problems with a focus on effectiveness rather than efficiency in decision processes. In the early 1970s, scholars in this field began to recognize the important roles that decision support systems (DSS) play in supporting managers in their semi-structured or unstructured decision-making activities. Over the past five decades, DSS has made progress toward becoming a solid academic field. Nevertheless, since the mid-1990s, the inability of DSS to fully satisfy a wide range of information needs of practitioners provided an impetus for a new breed of DSS called business intelligence systems (BIS). The academic discipline of DSS has undergone numerous changes in technological environments including the adoption of data warehouses. Until the late 1990s, most textbooks referred to “decision support systems.” Nowadays, many of them have replaced “decision support systems” with “business intelligence.” While DSS/BIS began in academia and were quickly adopted in business, in recent years these tools have moved into government and the academic field of public administration. In addition, modern political campaigns, especially at the national level, are based on data analytics and the use of big data analytics. The first part of this article reviews the development of DSS as an academic discipline. The second part discusses BIS and their components (the data warehousing environment and the analytical environment). The final part introduces two emerging topics in DSS/BI: big data analytics and cloud computing analytics. Before the era of big data, most data collected by business organizations could easily be managed by traditional relational database management systems with a serial processing system. 
Social networks, e-business networks, Internet of Things (IoT), and many other wireless sensor networks are generating huge volumes of data every day. The challenge of big data has demanded a new business intelligence infrastructure with new tools (Hadoop cluster, the data warehousing environment, and the business analytical environment).
Why voters turn out on Election Day has eluded a straightforward explanation. Rational choice theorists have proposed a parsimonious model, but its logical implication is that hardly anyone would vote since their one vote is unlikely to determine the election outcome. Attempts to save the rational choice model incorporate factors like the expressive benefits of voting, yet these modifications seem to be at odds with core assumptions of rational choice theory. Still, some people do weigh the expected costs and benefits of voting and take account of the closeness of the election when deciding whether or not to vote. Many more, though, vote out of a sense of civic duty. In contrast to the calculus of voting model, the civic voluntarism model focuses on the role of resources, political engagement, and to a lesser extent, recruitment in encouraging people to vote. It pays particular attention to the sources of these factors and traces complex paths among them.
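The calculus-of-voting logic discussed above is commonly written as R = p*B - C + D (the Riker–Ordeshook formulation): a citizen votes when the pivot probability p times the benefit B of the preferred outcome, minus the cost of voting C, plus the civic-duty term D, is positive. A minimal sketch with purely illustrative numbers shows why the duty term carries the weight:

```python
def voting_utility(p, B, C, D):
    """Riker-Ordeshook calculus of voting: a citizen votes when
    R = p*B - C + D > 0, where p is the probability of casting the
    pivotal vote, B the benefit if the preferred candidate wins,
    C the cost of voting, and D the expressive/civic-duty benefit."""
    return p * B - C + D

# With a realistically tiny pivot probability, p*B is negligible,
# so the decision hinges on whether duty D outweighs cost C.
# (All values below are hypothetical illustrations.)
r_no_duty = voting_utility(p=1e-7, B=1000, C=0.5, D=0.0)  # negative: abstain
r_duty    = voting_utility(p=1e-7, B=1000, C=0.5, D=1.0)  # positive: vote
```

This is the sense in which the parsimonious model predicts near-universal abstention, and why modifications such as the D term sit uneasily with the model’s cost-benefit core.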
There are many other theories of why people vote in elections. Intergenerational transmission and education play central roles in the civic voluntarism model. Studies that link official voting records with census data provide persuasive evidence of the influence of parental turnout. Education is one of the best individual-level predictors of voter turnout, but critics charge that it is simply a proxy for pre-adult experiences within the home. Studies using equally sophisticated designs that mimic the logic of controlled experiments have reached contradictory conclusions about the association between education and turnout. Some of the most innovative work on voter turnout is exploring the role of genetic influences and personality traits, both of which have an element of heritability. This work is in its infancy, but it is likely that many genes shape the predisposition to vote and that they interact in complex ways with environmental influences. Few clear patterns have emerged in the association between personality and turnout. Finally, scholars are beginning to recognize the importance of exploring the connection between health and turnout.
Shannon Carcelli and Erik A. Gartzke
Deterrence theory is slowly beginning to emerge from a long sleep after the Cold War, and from its theoretical origins over half a century ago. New realities have led to a diversification of deterrence in practice, as well as to new avenues for its study and empirical analysis. Three major categories of changes in the international system—new actors, new means of warfare, and new contexts—have led to corresponding changes in the way that deterrence is theorized and studied. First, the field of deterrence has broadened to include nonstate and nonnuclear actors, which has challenged scholars to develop new types of theories and tests. Second, cyberthreats, terrorism, and diverse nuclear force structures have led scholars to consider means in new ways. Third, the likelihood of an international crisis has shifted as a result of physical, economic, and normative changes in the costs of crisis, which has led scholars to address the crisis context itself more closely. The assumptions of classical deterrence are breaking down, in research as well as in reality. However, more work needs to be done in understanding these international changes and building successful deterrence policy. A better understanding of new modes of deterrence will aid policymakers in managing today’s threats and in preventing future deterrence failures, even as it prompts the so-called virtuous cycle of new theory and additional empirical testing.
Gaurav Sood and Yphtach Lelkes
The news media have been disrupted. Broadcasting has given way to narrowcasting, editorial control to control by “friends” and personalization algorithms, and a few reputable producers to millions with shallower reputations. Today, not only is there a much broader variety of news, but there is also more of it. The news is also always on. And it is available almost everywhere. The search costs have come crashing down, so much so that much of the world’s information is at our fingertips. Google anything and the chances are that there will be multiple pages of relevant results.
Such a dramatic expansion of choice and access is generally considered a Pareto improvement. But the worry is that we have fashioned defeat from the bounty by choosing badly. The expansion in choice is blamed both for increasing the “knowledge gap” (the gap between how much the politically interested and the politically disinterested know about politics) and for increasing partisan polarization. We reconsider the evidence for these claims. The claim about the media’s role in rising knowledge gaps has nothing to explain, because knowledge gaps are not increasing. For polarization, the story is more nuanced. What evidence exists suggests that the effect is modest, but measuring the long-term effects of a rapidly changing media landscape is hard, which may explain the mixed results.
As we also find, even describing trends in basic explanatory variables is hard. Current measures are beset with five broad problems. First, there are conceptual errors: for instance, people frequently equate a preference for information from partisan sources with a preference for congenial information. Second, survey measures of news consumption are heavily biased. Third, behavioral survey experimental measures are unreliable and inapt for learning how much information of a particular kind people consume in their real lives. Fourth, measures based on passive observation of behavior capture only a small (and likely biased) share of the total information people consume. Fifth, content is often coded crudely: broad judgments are made about coarse units, eliding important variation.
These measurement issues impede our ability to determine the extent to which people choose badly and the attendant consequences. Improving measures will do much to advance our ability to answer important questions.
Expected utility theory is widely used to formally model decisions in situations where outcomes are uncertain. As uncertainty is arguably commonplace in political decisions, being able to take that uncertainty into account is of great importance when building useful models and interpreting empirical results. Expected utility theory has provided possible explanations for a host of phenomena, from the failure of the median voter theorem to the making of vague campaign promises and the delegation of policymaking.
A good expected utility model may provide alternative explanations for empirical phenomena and can structure reasoning about the effect of political actors’ goals, circumstances, and beliefs on their behavior. For example, expected utility theory shows that whether the median voter theorem can be expected to hold or not depends on candidates’ goals (office, policy, or vote seeking), and the nature of their uncertainty about voters. In this way expected utility theory can help empirical researchers derive hypotheses and guide them towards the data required to exclude alternative explanations.
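The reasoning in this and the preceding paragraph can be made concrete with a small sketch. The probabilities and win chances below are purely hypothetical: an office-seeking candidate, uncertain about the median voter’s location, picks the platform with the higher probability-weighted chance of winning.

```python
def expected_utility(outcomes):
    """Expected utility of an action: the sum of probability-weighted
    utilities over its possible outcomes, given as (probability, utility)
    pairs. Probabilities are assumed to sum to 1."""
    return sum(p * u for p, u in outcomes)

# Hypothetical example: the candidate believes the median voter is
# moderate with probability 0.6 and leftist with probability 0.4.
# Utilities are the candidate's (assumed) chances of winning office
# under each state of the world.
centrist_platform = [(0.6, 0.9), (0.4, 0.3)]  # strong if median is moderate
leftist_platform  = [(0.6, 0.4), (0.4, 0.8)]  # strong if median is leftist

platforms = {"centrist": centrist_platform, "leftist": leftist_platform}
best = max(platforms, key=lambda name: expected_utility(platforms[name]))
```

Changing the candidate’s goals (e.g., substituting policy payoffs for win probabilities) or her beliefs about the voter changes the optimal platform, which is exactly how such models generate comparative-statics hypotheses for empirical work.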
Expected utility has been especially successful in spatial voting models, but the range of topics to which it can be applied is far broader. Applications to pivotal voting or politicians’ redistribution decisions show this wider value. However, there is also a range of promising topics that have received ample attention from empirical researchers, but that have so far been largely ignored by theorists applying expected utility theory.
Although expected utility theory has its limitations, more modern theories that build on the expected utility framework, such as prospect theory, can help overcome these limitations. Notably these extensions rely on the same modeling techniques as expected utility theory and can similarly elucidate the mechanisms that may explain empirical phenomena. This structured way of thinking about behavior under uncertainty is the main benefit provided by both expected utility theory and its extensions.
Laura Bakkensen and Logan Blair
Flooding remains one of the globe’s most devastating natural hazards and a leading driver of natural disaster losses across many countries, including the United States. As such, a rich and growing literature aims to better understand, model, and assess flood losses. Several major theoretical and empirical themes emerge from the literature. Fundamental to the flood damage assessment literature are definitions of flood damage, including a typology of flood damage, such as direct and indirect losses. In addition, the literature theoretically and empirically assesses major determinants of flood damage including hydrological factors, measurement of the physical features in harm’s way, as well as understanding and modeling protective activities, such as flood risk mitigation and adaptation, that all co-determine the overall flood losses. From there, common methods to quantify flood damage take these factors as inputs, modeling hydrological risk, exposure, and vulnerability into quantifiable flood loss estimates through a flood damage function, and include both ex ante expected loss assessments and ex post event-specific analyses. To do so, high-quality data are key across all model steps and can be found across a variety of sources. Early 21st-century advancements in spatial data and remote sensing push the literature forward. While the topics and themes apply more generally to flood damage across the globe, examples from the United States illustrate key themes. Understanding the main themes and insights in this important research area is critical for researchers, policy-makers, and practitioners to better understand, utilize, and extend the existing flood damage assessment literature in order to lessen or even prevent future tragedy.
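The flood damage function described above can be sketched as follows. The depth-damage breakpoints and flood scenarios here are hypothetical illustrations, not an official curve (such as those published by FEMA or USACE): flood depth maps to a fraction of structure value lost, and ex ante expected annual loss sums probability-weighted losses over scenarios.

```python
def damage_fraction(depth_m, curve):
    """Linearly interpolate a depth-damage (stage-damage) curve:
    curve is a sorted list of (depth_in_meters, fraction_of_value_lost)."""
    if depth_m <= curve[0][0]:
        return curve[0][1]
    for (d0, f0), (d1, f1) in zip(curve, curve[1:]):
        if depth_m <= d1:
            return f0 + (f1 - f0) * (depth_m - d0) / (d1 - d0)
    return curve[-1][1]  # deeper than the last breakpoint: total loss fraction

def expected_annual_loss(scenarios, structure_value, curve):
    """Ex ante expected annual loss: sum over flood scenarios of
    annual probability x damage fraction x exposed value."""
    return sum(p * damage_fraction(d, curve) * structure_value
               for p, d in scenarios)

# Hypothetical inputs: breakpoints, scenario probabilities, and exposure.
curve = [(0.0, 0.0), (0.5, 0.2), (1.0, 0.4), (2.0, 0.7), (4.0, 1.0)]
scenarios = [(0.01, 2.0), (0.002, 4.0)]  # (annual probability, depth in m)
eal = expected_annual_loss(scenarios, 200_000, curve)
```

An ex post event-specific analysis would apply the same damage function to the realized flood depth rather than to a probability-weighted scenario set, and mitigation (e.g., elevating the structure) enters by shifting the effective depth or the curve itself.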
Erika Forsberg and Louise Olsson
Prior research has found robust support for a relationship between gender inequality and civil war. These results all point in the same direction: countries that display lower levels of gender equality are more likely to become involved in civil conflict, and violence is likely to be even more severe, than countries where women have a higher status. But what does gender inequality mean in this area of research? And how does research explain why we see this effect on civil war? To explore this, we start by reviewing existing definitions and measurements of gender inequality, noting that the concept has several dimensions. We then proceed to outline several clusters of explanations of how gender inequality could be related to civil war and why more equal societies are better able to prevent violent conflict, as described in previous research. It is clear that the existing misconception that gender inequality primarily involves the role of women clouds the fact that it speaks to much broader societal developments that play central roles in civil war. We conclude by identifying some remaining lacunae and directions for future research.
Brian J. Gaines and Benjamin R. Kantack
Although motivation undergirds virtually all aspects of political decision making, its influence is often unacknowledged, or taken for granted, in behavioral political science. Motivations are inevitably important in generic models of decision theory. In real-world politics, two crucially important venues for motivational effects are the decision of whether or not to vote, and how (or whether) partisanship and other policy views color information collection, so that people choose and then justify, rather than study options before choosing. For researchers, the motivations of survey respondents and experimental subjects are deeply important, but only just beginning to garner the attention they deserve.
Recognizing its power to identify causal effects, contemporary scholars of media effects commonly leverage experimental methodology. For most of the 20th century, however, political scientists and communication scholars relied on observational data, particularly after the development of scientific survey methodology around the midpoint of the century. As the millennium approached, Iyengar and Kinder’s seminal News That Matters experiments ushered in an era of renewed interest in experimental methods. Political communication scholars have been particularly reliant on experiments because of their advantages over observational studies in identifying media effects. Although what is meant by “media effects” has not always been clear or undisputed, scholars generally agree that the news media influence mass opinion and behavior through their agenda-setting, framing, and priming powers. Scholars have adopted techniques and practices for gauging these particular effects, including measuring the mediating role of affect (or emotion).
Although experiments provide researchers with causal leverage, political communication scholars must consider challenges endemic to media-effects studies, including problems related to selective exposure. Various efforts to determine if selective exposure occurs and if it has consequences have come to different conclusions. The origin of conflicting conclusions can be traced back to the different methodological choices scholars have made. Achieving experimental realism has been a particularly difficult challenge for selective exposure experiments. Nonetheless, there are steps media-effects scholars can take to bolster causal arguments in an era of high media choice. While the advent of social media has brought new challenges for media-effects experimentalists, there are new opportunities in the form of objective measures of media exposure and effects.
Modern Populism: Research Advances, Conceptual and Methodological Pitfalls, and the Minimal Definition
Takis S. Pappas
Populism is one of the most dynamic fields of comparative political research. Although its study began in earnest only in the late 1960s, it has since developed through four distinct waves of scholarship, each pertaining to distinct empirical phenomena and with specific methodological and theoretical priorities. Today, the field is in need of a comprehensive general theory that will be able to capture the phenomenon specifically within the context of our contemporary democracies. This, however, requires our breaking away from recurring conceptual and methodological errors and, above all, a consensus about the minimal definition of populism.
All in all, the study of populism has been plagued by 10 drawbacks: (1) unspecified empirical universe, (2) lack of historical and cultural context specificity, (3) essentialism, (4) conceptual stretching, (5) unclear negative pole, (6) degreeism, (7) defective observable-measurable indicators, (8) a neglect of micromechanisms, (9) poor data and inattention to crucial cases, and (10) normative indeterminacy. Most, if not all, of the foregoing methodological errors are cured if we define, and study, modern populism simply as “democratic illiberalism,” which also opens the door to understanding the malfunctioning and pathologies of our modern-day liberal representative democracies.