Agent-based computational modeling (ABM, for short) is a formal and supplementary methodological approach used in international relations (IR) theory and research, based on the general ABM paradigm and computational methodology as applied to IR phenomena. ABM of such phenomena varies along three fundamental dimensions: scale of organization—spanning foreign policy, international relations, regional systems, and global politics—as well as geospatial and temporal scales. ABM is part of the broader complexity science paradigm, although ABMs can also be applied without complexity concepts. Scores of peer-reviewed publications have used ABM to develop IR theory in recent years, building on earlier pioneering work in computational IR, originating in the 1960s, that predated agent-based approaches. Main areas of theory and research using ABM in IR include the dynamics of polity formation (politogenesis), foreign policy decision making, conflict dynamics, transnational terrorism, and environmental impacts such as climate change. Enduring challenges for ABM in IR theory include learning the ABM methodology itself, publishing sufficiently complete models, accumulating knowledge, evolving new standards and methodology, and the special demands of interdisciplinary research, among others. Besides further development of the main themes identified thus far, future research directions include ABM applied to IR in the political interaction domains of space and cyber; new integrated models of IR dynamics across the domains of land, sea, air, space, and cyber; and world order and long-range models.
Capitalist peace theory (CPT) has gained considerable attention in international relations theory and the conflict literature. Its proponents maintain that a capitalist organization of an economy pacifies states internally and externally. They portray CPT either as a complement or as a substitute to other liberal explanations such as the democratic peace thesis. They, however, disagree about the facet of capitalism that is supposed to reduce the risk of political violence. Key contributions have identified three main drivers of the capitalist peace phenomenon: the fiscal constraints that a laissez-faire regimen puts on potentially aggressive governments; the mollifying norms that a capitalist organization creates; and the increased ability of capitalist governments to signal their intentions effectively in a confrontation with an adversary. Defining capitalism narrowly through the freedom entrepreneurs enjoy domestically, this article evaluates the key causal mechanisms and empirical evidence that have been advanced in support of these competing claims. The article argues that CPT needs to be based on a narrow definition of capitalism and that it should scrutinize the motives and constraints of the main actors more deeply. Future contributions to the CPT literature should also pay close attention to classic theories of capitalism, which all considered individual risk taking and the dramatic changes between booms and busts to be key constitutive features of this form of economic governance. Finally, empirical tests of the proposed causal mechanism should rely on data sets in which capitalists appear as actors and not as “structures.” If the literature takes these objections seriously, CPT could establish itself as a central theory of peace and war in two respects. First, it could serve as an antidote to the theory of imperialism and other “critical” approaches that see in capitalism a source of conflict rather than of peace. Second, it could become an important complement to commercial liberalism, which stresses external openness rather than internal freedoms as an economic cause of peace and which particularly sees trade and foreign direct investment as pacifying forces.
Collaborative research has a critical role to play in furthering our understanding of African politics. Many of the most important and interesting questions in the field are difficult, if not impossible, to tackle without some form of collaboration, either between academics within and outside of Africa—often termed North–South research partnerships—or between those researchers and organizations from outside the academic world. In Africa in particular, collaborative research is becoming more frequent and more extensive. This is due not only to the value of the research that it can produce but also to pressures on the funding of African scholars and academics in the Global North, alongside similar pressures on the budgets of non-academic collaborators, including bilateral aid agencies, multilateral organizations, and national and international non-government organizations.
Collaborative projects offer many advantages to these actors beyond access to new funding sources, so they constitute more than mere “marriages of convenience.” These benefits typically include access to methodological expertise and valuable new data sources, as well as opportunities to increase both the academic and “real-world” impact of research findings. Yet collaborative research also raises a number of challenges, many of which relate to equity. They center on issues such as who sets the research agenda, whether particular methodological approaches are privileged over others, how responsibility for different research tasks is allocated, how the benefits of that research are distributed, and the importance of treating colleagues with respect despite the narrative of “capacity-building.” Each challenge manifests in slightly different ways, and to varying extents, depending on the type of collaboration at hand: North–South research partnership or collaboration between academics and policymakers or practitioners. This article discusses both types of collaboration together because of their potential to overlap in ways that affect the severity and complexity of those challenges.
These challenges are not unique to research in Africa, but they tend to manifest in ways that are distinct or particularly acute on the continent because of the context in which collaboration takes place. In short, the legacy of colonialism matters. That history not only shapes who collaborates with whom but also who does so from a position of power and who does not. Thus, the inequitable nature of some research collaborations is not simply the result of oversights or bad habits; it is the product of entrenched structural factors that produce, and reproduce, imbalances of power. This means that researchers seeking to make collaborative projects in Africa more equitable must engage with these issues early, proactively, and continuously throughout the entire life cycle of those research projects. This is true not just for researchers based in the Global North but for scholars from, or working in, Africa as well.
Caroline A. Hartzell
Civil wars typically have been terminated by a variety of means, including military victories, negotiated settlements and ceasefires, and “draws.” Three very different historical trends in the means by which civil wars have ended can be identified for the post–World War II period. A number of explanations have been developed to account for those trends, some of which focus on international factors and others on national or actor-level variables. Efforts to explain why civil wars end as they do are considered important because one of the most contested issues among political scientists who study civil wars is how “best” to end a civil war if the goal is to achieve a stable peace. Several factors have contributed to this debate, among them conflicting results produced by various studies on this topic as well as different understandings of the concepts war termination, civil war resolution, peace-building, and stable peace.
Ever since Aristotle, the comparative study of political regimes and their performance has relied on classifications and typologies. The study of democracy today has been influenced heavily by Arend Lijphart’s typology of consensus versus majoritarian democracy. Scholars have applied it to more than 100 countries and sought to demonstrate its impact on no fewer than 70 dependent variables. This article summarizes our knowledge about the origins, functioning, and consequences of two basic types of democracy: those that concentrate power and those that share and divide power. In doing so, it will review the experience of established democracies and question the applicability of received wisdom to new democracies.
Comparative public policy (CPP) is a multidisciplinary enterprise aimed at policy learning through lesson drawing and theory building or testing. We argue that CPP faces the challenge of conceptual and analytical standardization if it is to make a significant contribution to the explanation of policy decision-making. This argument is developed in three sections based on the following questions: What is CPP? What is it for? How should it be done? We begin with a presentation of the historical evolution of the field, its conceptual heterogeneity, and the persistence of two distinct bodies of literature made up of basic and applied studies. We proceed with a discussion of the logics operating in CPP, their approaches to causality and causation, and their contribution to middle-range theory. Next, we explain the fundamental problems of the comparative method, starting with a synthesis of the main methodological pitfalls and the problems of case selection and then reviewing the main protocols in use. We conclude with a reflection on the contribution of CPP to policy design and policy analysis.
Krista E. Wiegand
Despite the decline in interstate wars, there remain dozens of interstate disputes that could erupt into diplomatic crises and escalate into military conflict. By far the most difficult interstate disputes are territorial disputes, followed by maritime and river boundary disputes. These disputes are not only costly for the states involved but also potentially dangerous for states in the region and for allies of disputant states, who could become entrapped in armed conflicts. Fortunately, though many disputes remain unresolved, and some endure for decades or even more than a century, many others are peacefully resolved through conflict management tools.
Understanding the factors that influence conflict management—the means by which governments decide their foreign policy strategies relating to interstate disputes and civil conflicts—is critical to policy makers and scholars interested in the peaceful resolution of such disputes. Though the management of territorial and maritime disputes can involve a spectrum of tools, including the use of force, most conflict management tools are peaceful: direct bilateral negotiations between the disputant states, non-binding third-party mediation, or binding legal dispute resolution. Governments most often attempt the most direct method, bilateral negotiations, but such negotiations frequently break down over the uncompromising positions of the disputing states, leading governments to turn to other resolution methods. Each method has pros and cons, and certain factors influence the decisions governments make about the management of their territorial and maritime disputes. Overall, the peaceful resolution of territorial and maritime disputes is an important but complicated issue for states both directly involved in and indirectly affected by the persistence of such disputes.
Richard Ned Lebow
Counterfactuals seek to alter some feature or event of the past and, by means of a chain of causal logic, show how the present might, or would, be different. Counterfactual inquiry—or control of counterfactual situations—is essential to any causal claim. More importantly, counterfactual thought experiments are essential to the construction of analytical frameworks. Policymakers routinely use them to identify problems, work their way through problems, and select responses. Good foreign-policy analysis must accordingly engage and employ counterfactuals.
There are two generic types of counterfactuals: minimal-rewrite counterfactuals and miracle counterfactuals. Both are relevant when formulating propositions and probing contingency and causation. There is also a set of protocols for using both kinds of counterfactuals toward these ends, and this article illustrates those uses and protocols with historical examples. Policymakers invoke counterfactuals frequently, especially with regard to foreign policy, both to choose policies and to defend them to key constituencies. They use counterfactuals in a haphazard and unscientific manner, however, so it is important to learn more about how they think about and employ counterfactuals in order to understand foreign policy.
Why voters turn out on Election Day has eluded a straightforward explanation. Rational choice theorists have proposed a parsimonious model, but its logical implication is that hardly anyone would vote since their one vote is unlikely to determine the election outcome. Attempts to save the rational choice model incorporate factors like the expressive benefits of voting, yet these modifications seem to be at odds with core assumptions of rational choice theory. Still, some people do weigh the expected costs and benefits of voting and take account of the closeness of the election when deciding whether or not to vote. Many more, though, vote out of a sense of civic duty. In contrast to the calculus of voting model, the civic voluntarism model focuses on the role of resources, political engagement, and to a lesser extent, recruitment in encouraging people to vote. It pays particular attention to the sources of these factors and traces complex paths among them.
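The parsimonious model referred to above is conventionally formalized as the calculus of voting associated with Riker and Ordeshook; a standard statement of it, added here for reference, is:

```latex
% Calculus of voting (standard Riker-Ordeshook formulation, added for reference)
R = pB - C + D
```

Here R is the net reward from voting, p the probability that one's vote decides the election, B the benefit of seeing one's preferred candidate win, C the cost of voting, and D the expressive or civic-duty benefit. Because p is minuscule in a large electorate, pB is effectively zero, so turnout hinges on whether D exceeds C, which is precisely the modification that sits uneasily with core rational choice assumptions.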
There are many other theories of why people vote in elections. Intergenerational transmission and education play central roles in the civic voluntarism model. Studies that link official voting records with census data provide persuasive evidence of the influence of parental turnout. Education is one of the best individual-level predictors of voter turnout, but critics charge that it is simply a proxy for pre-adult experiences within the home. Studies using equally sophisticated designs that mimic the logic of controlled experiments have reached contradictory conclusions about the association between education and turnout. Some of the most innovative work on voter turnout is exploring the role of genetic influences and personality traits, both of which have an element of heritability. This work is in its infancy, but it is likely that many genes shape the predisposition to vote and that they interact in complex ways with environmental influences. Few clear patterns have emerged in the association between personality and turnout. Finally, scholars are beginning to recognize the importance of exploring the connection between health and turnout.
Shannon Carcelli and Erik A. Gartzke
Deterrence theory is slowly beginning to emerge from a long sleep after the Cold War, and from its theoretical origins over half a century ago. New realities have led to a diversification of deterrence in practice, as well as to new avenues for its study and empirical analysis. Three major categories of changes in the international system—new actors, new means of warfare, and new contexts—have led to corresponding changes in the way that deterrence is theorized and studied. First, the field of deterrence has broadened to include nonstate and nonnuclear actors, which has challenged scholars with new types of theories and tests. Second, cyberthreats, terrorism, and diverse nuclear force structures have led scholars to consider means in new ways. Third, the likelihood of an international crisis has shifted as a result of physical, economic, and normative changes in the costs of crisis, which has led scholars to address the crisis context itself more closely. The assumptions of classical deterrence are breaking down, in research as well as in reality. However, more work needs to be done in understanding these international changes and building successful deterrence policy. A better understanding of new modes of deterrence will aid policymakers in managing today’s threats and in preventing future deterrence failures, even as it prompts the so-called virtuous cycle of new theory and additional empirical testing.
Gaurav Sood and Yphtach Lelkes
The news media have been disrupted. Broadcasting has given way to narrowcasting, editorial control to control by “friends” and personalization algorithms, and a few reputable producers to millions with shallower reputations. Today, not only is there a much broader variety of news, but there is also more of it. The news is also always on. And it is available almost everywhere. Search costs have come crashing down, so much so that much of the world’s information is at our fingertips. Google anything and the chances are that there will be multiple pages of relevant results.
Such a dramatic expansion of choice and access is generally considered a Pareto improvement. But the worry is that we have fashioned defeat from the bounty by choosing badly. The expansion in choice is blamed both for increasing the “knowledge gap,” that is, the gap between how much the politically interested and the politically disinterested know about politics, and for increasing partisan polarization. We reconsider the evidence for these claims. The claim about the media’s role in rising knowledge gaps has nothing to explain, because knowledge gaps are not increasing. For polarization, the story is more nuanced. What evidence exists suggests that the effect is modest, but measuring the long-term effects of a rapidly changing media landscape is hard, which may explain why the estimated effects are small.
As we also find, even describing trends in the basic explanatory variables is hard. Current measures are beset with five broad problems. The first is conceptual error: for instance, researchers frequently equate a preference for information from partisan sources with a preference for congenial information. Second, survey measures of news consumption are heavily biased. Third, behavioral measures from survey experiments are unreliable and ill suited to learning how much information of a particular kind people consume in their real lives. Fourth, measures based on passive observation of behavior capture only a small (and likely biased) share of the total information people consume. Fifth, content is often coded crudely—broad judgments are made about coarse units, eliding important variation.
These measurement issues impede our ability to assess the extent to which people choose badly and the consequences that follow. Improving measures will do much to advance our ability to answer important questions.
Erika Forsberg and Louise Olsson
Prior research has found robust support for a relationship between gender inequality and civil war. These results all point in the same direction: countries that display lower levels of gender equality are more likely to become involved in civil conflict, and to experience more severe violence, than countries where women have a higher status. But what does gender inequality mean in this area of research? And how does research explain why we see this effect on civil war? To explore this, we start by reviewing existing definitions and measurements of gender inequality, noting that the concept has several dimensions. We then outline several clusters of explanations, described in previous research, of how gender inequality could be related to civil war and why more equal societies are better able to prevent violent conflict. Existing misconceptions that gender inequality primarily concerns the role of women obscure the fact that it speaks to much broader societal developments, which play central roles in civil war. We conclude by identifying some remaining lacunae and directions for future research.
Modern Populism: Research Advances, Conceptual and Methodological Pitfalls, and the Minimal Definition
Takis S. Pappas
Populism is one of the most dynamic fields of comparative political research. Although its study began in earnest only in the late 1960s, it has since developed through four distinct waves of scholarship, each pertaining to distinct empirical phenomena and with specific methodological and theoretical priorities. Today, the field is in need of a comprehensive general theory that will be able to capture the phenomenon specifically within the context of our contemporary democracies. This, however, requires breaking away from recurring conceptual and methodological errors and, above all, reaching a consensus about the minimal definition of populism.
All in all, the study of populism has been plagued by 10 drawbacks: (1) unspecified empirical universe, (2) lack of historical and cultural context specificity, (3) essentialism, (4) conceptual stretching, (5) unclear negative pole, (6) degreeism, (7) defective observable-measurable indicators, (8) a neglect of micromechanisms, (9) poor data and inattention to crucial cases, and (10) normative indeterminacy. Most, if not all, of the foregoing methodological errors are cured if we define, and study, modern populism simply as “democratic illiberalism,” which also opens the door to understanding the malfunctioning and pathologies of our modern-day liberal representative democracies.
More Than Mixed Results: What We Have Learned From Quantitative Research on the Diversionary Hypothesis
Benjamin O. Fordham
In the three decades since Jack Levy published his seminal review essay on the topic, there has been a great deal of quantitative research on the proposition that state leaders can use international conflict to enhance their political prospects at home. The findings of this work are frequently described as “mixed” or “inconsistent.” This characterization is superficially correct, but it is also misleading in some important respects. Judged against two of Levy’s most important concerns about previous research, the field has made substantial progress in understanding this phenomenon.
First, as Levy suggests in his essay, researchers have elaborated a range of different mechanisms linking domestic political trouble with international conflict rather than a single diversionary argument. Processes creating diversionary incentives bear a family resemblance to one another but can have different behavioral implications. Four of them are (1) in-group/out-group dynamics, (2) agenda setting, (3) leader efforts to demonstrate competence in foreign policy, and (4) efforts to blame foreign leaders or perhaps domestic minorities for problems. In addition, researchers have identified some countervailing mechanisms that may inhibit state leaders’ ability to pursue diversionary strategies, the most important of which is the possibility that potential targets may strategically avoid conflict with leaders likely to behave aggressively.
Second, research has identified scope conditions that limit the applicability of diversionary arguments, another of Levy’s concerns about the research he reviewed. Above all, diversionary uses of military force (though not other diversionary strategies) may be possible for only a narrow range of states. Though very powerful states may pursue such a strategy against a wide range of targets, the leaders of less powerful states may have this option only during fairly serious episodes of interstate hostility, such as rivalries and territorial disputes. A substantial amount of research has focused exclusively on the United States, a country that clearly has the capacity to pursue this strategy. While the findings of this work cannot be generalized to many other states, they have revealed some important nuances in the processes that create diversionary incentives. The extent to which these incentives hinge on highly specific political and institutional characteristics points to the difficulty of applying realistic diversionary arguments to a large sample of states. Research on smaller, more homogeneous samples or individual states is more promising, even though it will not produce an answer to the broad question of how prevalent diversionary behavior is. As with many broad questions about political phenomena, the only correct answer may be “it depends.” Diversionary foreign policy happens, but not in the same way in every instance and not in every state in the international system.
Josep M. Colomer
Logical models and statistical techniques have been used for measuring political and institutional variables, quantifying and explaining the relationships between them, testing theories, and evaluating institutional and policy alternatives. A number of cumulative and complementary findings refer to major institutional features of a political process of decision-making: from the size of the assembly to the territorial structure of the country, the electoral system, the number of parties in the assembly and in the government, the government’s duration, and the degree of policy instability. Mathematical equations based on sound theory are validated by empirical tests and can predict precise observations.
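Two well-known examples of such equations, added here for illustration, are the cube root law of assembly size and the seat product model associated with Rein Taagepera and Matthew Shugart:

```latex
% Cube root law and seat product model (cited for illustration)
S \approx P^{1/3}, \qquad N_S \approx (M S)^{1/6}
```

Here S is the size of the national assembly, P the country's population, M the mean district magnitude of the electoral system, and N_S the effective number of seat-winning parties. Both were derived from simple theoretical models and hold up well against cross-national data.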
Sabine C. Carey and Neil J. Mitchell
Pro-government militias are a prominent feature of civil wars. Governments in Colombia, Syria, and Sudan recruit irregular forces in their armed struggle against insurgents. The United States collaborated with Awakening groups to counter the insurgency in Iraq, just as colonizers used local armed groups to fight rebellions in their colonies. An emerging cross-disciplinary literature on pro-government non-state armed groups generates a variety of research questions for scholars interested in conflict, political violence, and political stability: Does the presence of such groups indicate a new type of conflict? What are the dynamics that drive governments to align with informal armed groups and that make armed groups choose to side with the government? Given the risks entailed in surrendering a monopoly of violence, is there a turning point in a conflict when governments enlist these groups? How successful are these groups? Why do governments use these non-state armed actors to shape foreign conflicts, whether as insurgents or counterinsurgents abroad? Are these non-state armed actors always useful to governments, or are they perhaps even an indicator of state failure?
We examine the demand for and supply of pro-government armed groups and the legacies that shape their role in civil wars. The enduring pattern of collaboration between governments and these armed non-state actors challenges conventional theory and the idea of an evolutionary process of the modern state consolidating the means of violence. Research on these groups and their consequences began with case studies, and these continue to yield valuable insights. More recently, survey work and cross-national quantitative research contribute to our knowledge. This mix of methods is opening new lines of inquiry for research on insurgencies and the delivery of the core public good of effective security.
Mathew V. Hibbing, Melissa N. Baker, and Kathryn A. Herzog
Since the early 2010s, political science has seen a rise in the use of physiological measures in order to inform theories about decision-making in politics. A commonly used physiological measure is skin conductance (electrodermal activity). Skin conductance measures the changes in levels of sweat in the eccrine glands, usually on the fingertips, in order to help inform how the body responds to stimuli. These changes result from the sympathetic nervous system (popularly known as the fight or flight system) responding to external stimuli. Due to the nature of physiological responses, skin conductance is especially useful when researchers hope to have good temporal resolution and make causal claims about a type of stimulus eliciting physiological arousal in individuals. Researchers interested in areas that involve emotion or general affect (e.g., campaign messages, political communication and advertising, information processing, and general political psychology) may be especially interested in integrating skin conductance into their methodological toolbox. Skin conductance is a particularly useful tool since its implicit and unconscious nature means that it avoids some of the pitfalls that can accompany self-report measures (e.g., social desirability bias and inability to accurately remember and report emotions). Future decision-making research will benefit from pairing traditional self-report measures with physiological measures such as skin conductance.
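To make this concrete, the sketch below scores a single event-related skin conductance response (SCR) from a raw electrodermal signal. It is a minimal illustration under stated assumptions rather than any particular lab's pipeline: the one-second baseline, the 1–4 s response window, and the 0.05 µS minimum amplitude are common conventions but vary across studies, and the function name and data are hypothetical.

```python
import numpy as np

def score_scr(eda, stim_onset_s, fs=32, window=(1.0, 4.0), min_amp_uS=0.05):
    """Score one event-related skin conductance response (hypothetical helper).

    eda          -- 1-D array of skin conductance in microsiemens (uS)
    stim_onset_s -- stimulus onset, in seconds from the start of recording
    fs           -- sampling rate in Hz
    window       -- post-stimulus latency window (s) in which an SCR may peak
    min_amp_uS   -- minimum rise counted as a response (conventions vary)
    """
    onset = int(stim_onset_s * fs)
    # Baseline: mean level over the second preceding stimulus onset.
    baseline = eda[max(0, onset - fs):onset].mean()
    # Peak level reached inside the post-stimulus response window.
    lo, hi = int(window[0] * fs), int(window[1] * fs)
    peak = eda[onset + lo:onset + hi].max()
    amplitude = peak - baseline
    return amplitude if amplitude >= min_amp_uS else 0.0

# Synthetic example: a 10 s recording with a response peaking 2 s after a
# stimulus presented at t = 3 s.
fs = 32
t = np.arange(0, 10, 1 / fs)
eda = 2.0 + 0.3 * np.exp(-((t - 5.0) ** 2) / 0.8)
print(score_scr(eda, stim_onset_s=3.0, fs=fs))  # ~0.3 uS amplitude
```

The good temporal resolution noted above comes from exactly this kind of stimulus-locked scoring: the response is attributed to a stimulus only if it rises within a short, fixed window after onset.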
Steven R. Brown
Q methodology was introduced in 1935 and has evolved to become the most elaborate philosophical, conceptual, and technical means for the systematic study of subjectivity across an increasing array of human activities, most recently including decision making. Subjectivity is an inescapable dimension of all decision making since we all have thoughts, perspectives, and preferences concerning the wide range of matters that come to our attention and that enter into consideration when choices have to be made among options, and Q methodology provides procedures and a rationale for clarifying and examining the various viewpoints at issue. The application of Q methodology commonly begins by accumulating the various comments in circulation concerning a topic and then reducing them to a smaller set for administration to select participants, who then typically rank the statements in the Q sample from agree to disagree in the form of a Q sort. Q sorts are then correlated and factor analyzed, giving rise to a typology of persons who have ordered the statements in similar ways. As an illustration, Q methodology was administered to a diverse set of stakeholders concerned with the problems associated with the conservation and control of large carnivores in the Northern Rockies. Participants nominated a variety of possible solutions that each person then Q sorted from those solutions judged most effective to those judged most ineffective, the factor analysis of which revealed four separate perspectives that are compared and contrasted. A second study demonstrates how Q methodology can be applied to the examination of single cases by focusing on two members of a group contemplating how they might alter the governing structures and culture of their organization. The results are used to illustrate the quantum character of subjective behavior as well as the laws of subjectivity. Discussion focuses on the broader role of decisions in the social order.
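As a toy illustration of the correlation and factoring steps just described, the following sketch uses made-up Q-sort data and substitutes principal components for the centroid method traditionally favored in Q methodology; all data and numbers in it are hypothetical.

```python
import numpy as np

# Made-up Q-sort data: rows are statements, columns are persons. Entries
# are rankings from agree (+2) to disagree (-2); real Q samples are far
# larger, typically sorted under a quasi-normal forced distribution.
qsorts = np.array([
    [ 2,  2, -2],
    [ 1,  2, -1],
    [ 0, -1,  2],
    [-1, -1,  1],
    [-2, -2,  0],
])

# Step 1: correlate persons rather than statements -- Q methodology's
# "inverted" correlation matrix.
r = np.corrcoef(qsorts.T)

# Step 2: factor the person-by-person correlation matrix. Persons loading
# on the same factor ordered the statements in similar ways and therefore
# share a viewpoint, yielding the typology of persons described above.
eigvals, eigvecs = np.linalg.eigh(r)
order = np.argsort(eigvals)[::-1]  # strongest factors first
loadings = eigvecs[:, order] * np.sqrt(np.clip(eigvals[order], 0.0, None))
print("person-by-person correlations:\n", r.round(2))
print("loadings on the first two factors:\n", loadings[:, :2].round(2))
```

In this toy example the first two persons load together on one factor while the third defines another, which is the kind of shared-viewpoint typology the large-carnivore study recovered with four factors.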
Qualitative Comparative Analysis (QCA) is a method, developed by the American social scientist Charles C. Ragin since the 1980s, that has since enjoyed great and ever-increasing success in research applications across various political science subdisciplines and teaching programs. It counts as a broadly recognized addition to the methodological spectrum of political science. QCA is based on set theory. Set theory models “if … then” hypotheses in a way that allows them to be interpreted as sufficient or necessary conditions. QCA differentiates between crisp sets, in which cases can only be full members or full non-members, and fuzzy sets, which allow for degrees of membership. With fuzzy sets it is, for example, possible to distinguish highly developed democracies from less developed democracies that are nevertheless more democracies than not. This means that fuzzy sets capture differences in degree without giving up differences in kind. In the end, QCA produces configurational statements that acknowledge that conditions usually appear in conjunction and that there can be more than one conjunction that implies an outcome (equifinality). There is a strong emphasis on a case-oriented perspective. QCA is usually (but not exclusively) applied in y-centered research designs. A standardized algorithm has been developed and implemented in various software packages; it takes into account the complexity of the social world, acknowledging that not every theoretically possible combination of explanatory factors also exists empirically. Parameters of fit, such as consistency and coverage, help to evaluate how well the chosen explanatory factors account for the outcome to be explained. There is also a range of graphical tools that help to illustrate the results of a QCA. Set theory goes well beyond its application in QCA, but QCA is certainly its most prominent variant.
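The parameters of fit just mentioned have simple set-theoretic definitions: for fuzzy sets, Ragin defines the consistency of condition X as sufficient for outcome Y as Σ min(x, y) / Σ x, and its coverage of Y as Σ min(x, y) / Σ y. The sketch below computes both; the membership scores are invented for illustration.

```python
import numpy as np

def consistency(x, y):
    # How far cases' membership in condition X stays within outcome Y;
    # values near 1 support X being sufficient for Y.
    return np.minimum(x, y).sum() / x.sum()

def coverage(x, y):
    # How much of the membership in outcome Y is accounted for by X;
    # low coverage means X explains only a small part of Y.
    return np.minimum(x, y).sum() / y.sum()

# Invented fuzzy membership scores for six cases (0 = fully out, 1 = fully in).
x = np.array([0.9, 0.8, 0.7, 0.3, 0.2, 0.1])  # condition (illustrative)
y = np.array([1.0, 0.9, 0.6, 0.4, 0.3, 0.0])  # outcome to be explained

print(f"consistency = {consistency(x, y):.2f}")  # 0.93
print(f"coverage    = {coverage(x, y):.2f}")     # 0.88

# The crisp-set special case restricts memberships to 0 or 1, recovering
# the familiar share of X-cases that are also Y-cases as consistency.
```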
There is a very lively QCA community that currently deals with the following aspects: the establishment of a code of standards for QCA applications; QCA as part of mixed-methods designs, such as combinations of QCA and statistical analyses, or a sequence of QCA and (comparative) case studies (via, e.g., process tracing); the inclusion of time aspects into QCA; Coincidence Analysis (CNA, in which no a priori decision is made about which factor is the outcome and which are the conditions) as an alternative to the use of the Quine-McCluskey algorithm; the stability of results; software development; and the more general question of whether QCA development activities should target research design or technical issues. From this, a methodological agenda can be derived that addresses the relationship between QCA and quantitative techniques, case study methods, and interpretive methods, and that calls for increased efforts toward a shared understanding of the mission of QCA.
Katelyn E. Stauffer and Diana Z. O'Brien
Quantitative methods are among the most useful, but also historically contentious, tools in feminist research. Despite the controversy that sometimes surrounds these methods, feminist scholars in political science have often drawn on them to examine questions related to gender and politics. Researchers have used quantitative methods to explore gender in political behavior, institutions, and policy, as well as gender bias in the discipline. Just as quantitative methods have aided the advancement of feminist political science, a feminist perspective likewise has implications for data production, measurement, and analysis. Yet the continued underrepresentation of women in the methods community needs to be addressed, and greater dialogue between feminist researchers and quantitative methodologists is required.