Article
Red Teaming and Crisis Preparedness
Gary Ackerman and Douglas Clifford
Simulations are an important component of crisis preparedness because they allow responders to be trained and plans to be tested before a crisis materializes. However, traditional simulations can all too easily fall prey to a range of cognitive and organizational distortions that tend to reduce their efficacy. These shortcomings become even more problematic in the increasingly complex, highly dynamic crisis environment of the early 21st century. This situation calls for the incorporation of alternative approaches to crisis simulation, ones that by design incorporate multiple perspectives and explicit challenges to the status quo.
As a distinct approach to formulating, conducting, and analyzing simulations and exercises, red teaming has as its central distinguishing feature the simulation of adversaries or competitors (or at least the adoption of an adversarial perspective). In this respect, red teaming can be viewed as a set of practices that simulate adversary decisions or behaviors, where the purpose is to inform or improve defensive capabilities and the outputs are measured. Red teaming, according to this definition, significantly overlaps with, but does not directly correspond to, related activities such as wargaming, alternative analysis, and risk assessment.
Some of the more important additional benefits provided by red teaming include the following:
▪ The explicit recognition and amelioration of several cognitive biases and other critical thinking shortfalls displayed by crisis decision makers and managers in both their planning processes and their decision-making during a crisis.
▪ The ability to robustly test existing standard operating procedures and plans at the strategic, operational, and tactical levels against emerging threats and hazards by exposing them to the machinations of adaptive, creative adversaries and other potentially problematic actors.
▪ Instilling more flexible, adaptive, and in-depth sense-making and decision-making skills in crisis response personnel at all levels by focusing the training aspects of simulations on iterated, evolving scenarios with high degrees of realism, unpredictability through exploration of nth-order effects, and multiple stakeholders.
▪ The identification of new vulnerabilities, opportunities, and risks that might otherwise remain hidden if relying on traditional, nonadversarial simulation approaches.
Key guidance for conducting red teaming in the crisis preparedness context includes avoiding mirror imaging, setting clear objectives and simulation parameters, remaining independent of the organizational unit being served, applying red teaming judiciously in terms of frequency, and properly recording and presenting red-teaming simulation outputs. Overall, red teaming, as a specific species of simulation, holds much promise for enhancing crisis preparedness and the crucial decision-making that attends a variety of emerging issues in the crisis management context.
Article
Qualitative Comparative Analysis: Discovering Core Combinations of Conditions in Political Decision Making
Benoît Rihoux
Qualitative Comparative Analysis (QCA) was launched in the late 1980s by Charles Ragin as a research approach bridging case-oriented and variable-oriented perspectives. It conceives of cases as complex combinations of attributes (i.e., configurations), is designed to process multiple cases, and enables one to identify, through minimization algorithms, the core equifinal combinations of conditions leading to an outcome of interest. It systematizes the analysis in terms of necessity and sufficiency, models social reality in terms of set-theoretic relations, and provides powerful logical tools for complexity reduction. It initially came with a single technique, crisp-set QCA (csQCA), which requires dichotomized coding of the data.
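As a rough illustration of this crisp-set logic, the following Python sketch codes a handful of hypothetical cases dichotomously and tests configurations for sufficiency and conditions for necessity. The condition names, cases, and outcome values are invented, and real applications rely on dedicated QCA software for truth-table construction and minimization.

```python
# Minimal illustration of crisp-set QCA logic on a toy, dichotomized dataset.
# Condition names (A, B, C) and cases are invented for illustration only;
# real applications use dedicated QCA software for truth-table minimization.

cases = [
    # (A, B, C, outcome) -- 1 = full membership, 0 = non-membership
    (1, 1, 0, 1),
    (1, 1, 1, 1),
    (0, 1, 1, 0),
    (0, 0, 1, 0),
    (1, 0, 0, 1),
]

def sufficient(configuration, cases):
    """A configuration (condition index -> value) is sufficient for the
    outcome if every case matching it also exhibits the outcome."""
    matching = [c for c in cases
                if all(c[i] == v for i, v in configuration.items())]
    return bool(matching) and all(c[3] == 1 for c in matching)

def necessary(condition_index, cases):
    """A condition is necessary if every case with the outcome exhibits it."""
    return all(c[condition_index] == 1 for c in cases if c[3] == 1)

print(sufficient({0: 1}, cases))   # True: all cases with A show the outcome
print(necessary(0, cases))         # True: every outcome case has A
```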
As it has expanded, the QCA field has been enriched by new techniques such as multi-value QCA (mvQCA) and especially fuzzy-set QCA (fsQCA), both of which enable finer-grained calibration. It has also developed further with diverse extensions and more advanced designs, including mixed- and multimethod designs in which QCA is sequenced with focused case studies or with statistical analyses.
QCA’s emphasis on causal complexity makes it well suited to addressing various types of objects and research questions touching upon political decision making—and indeed QCA has been applied in multiple related social scientific fields. While QCA can be exploited in different ways, it is most frequently used for theory evaluation purposes, following a streamlined protocol that includes a sequence of core operations and good practices. Several reliable software options are also available to implement the core of the QCA procedure. However, given QCA’s case-based foundation, much researcher input is still required at different stages.
As it has further developed, QCA has been subject to fierce criticism, especially from a mainstream statistical perspective. This has stimulated further innovations and refinements, in particular in terms of parameters of fit and robustness tests, which also correspond to the growth of QCA applications in larger-n designs. Altogether, the field has diversified and broadened, and different users may exploit QCA in various ways, from smaller-n, case-oriented uses to larger-n, more analytic uses, and following different epistemological positions regarding causal claims. This broader field can therefore be labeled as that of both “Configurational Comparative Methods” (CCMs) and “Set-Theoretic Methods” (STMs).
Article
Q Methodology in Research on Political Decision Making
Steven R. Brown
Q methodology was introduced in 1935 and has evolved to become the most elaborate philosophical, conceptual, and technical means for the systematic study of subjectivity across an increasing array of human activities, most recently including decision making. Subjectivity is an inescapable dimension of all decision making since we all have thoughts, perspectives, and preferences concerning the wide range of matters that come to our attention and that enter into consideration when choices have to be made among options, and Q methodology provides procedures and a rationale for clarifying and examining the various viewpoints at issue. The application of Q methodology commonly begins by accumulating the various comments in circulation concerning a topic and then reducing them to a smaller set for administration to select participants, who then typically rank the statements in the Q sample from agree to disagree in the form of a Q sort. Q sorts are then correlated and factor analyzed, giving rise to a typology of persons who have ordered the statements in similar ways. As an illustration, Q methodology was administered to a diverse set of stakeholders concerned with the problems associated with the conservation and control of large carnivores in the Northern Rockies. Participants nominated a variety of possible solutions that each person then Q sorted from those solutions judged most effective to those judged most ineffective, the factor analysis of which revealed four separate perspectives that are compared and contrasted. A second study demonstrates how Q methodology can be applied to the examination of single cases by focusing on two members of a group contemplating how they might alter the governing structures and culture of their organization. The results are used to illustrate the quantum character of subjective behavior as well as the laws of subjectivity. Discussion focuses on the broader role of decisions in the social order.
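The quantitative core of this procedure, correlating Q sorts by person and factoring the resulting matrix, can be sketched as follows. The Q sorts below are invented, and the plain principal-components factoring stands in for the centroid extraction and judgmental rotation that many Q studies actually employ.

```python
# Minimal sketch of the quantitative core of a Q study: correlate persons'
# Q sorts, then factor the person-by-person correlation matrix.
# The Q sorts here are invented; real studies use dedicated software rather
# than the plain principal-components shortcut shown below.
import numpy as np

# Rows = participants, columns = statements; values are the rank positions
# each person assigned in the Q sort (-3 = most disagree, +3 = most agree).
q_sorts = np.array([
    [ 3,  2,  1,  0, -1, -2, -3],
    [ 3,  1,  2, -1,  0, -3, -2],
    [-3, -2,  0,  1,  2,  3, -1],
    [-2, -3, -1,  0,  3,  2,  1],
])

# Correlate persons (not variables): similar sorts yield high correlations.
r = np.corrcoef(q_sorts)

# Principal-components factoring of the person correlation matrix.
eigvals, eigvecs = np.linalg.eigh(r)
order = np.argsort(eigvals)[::-1]
loadings = eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0))

# Persons 1-2 and persons 3-4 load on different factors, i.e., two viewpoints.
print(np.round(loadings[:, :2], 2))
```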
Article
The Search for Real-World Media Effects on Political Decision Making
Thomas J. Leeper
Empirical media effects research involves associating two things: measures of media content or experience and measures of audience outcomes. Any quantitative evidence of correlation between media supply and audience response—combined with assumptions about temporal ordering and an absence of spuriousness—is taken as evidence of media effects. This seemingly straightforward exercise is burdened by three challenges: the measurement of the outcomes, the measurement of the media and individuals’ exposure to it, and the tools and techniques for associating the two.
While measuring the outcomes potentially affected by media is in many ways trivial (surveys, election outcomes, and online behavior provide numerous measurement devices), the other two aspects of studying the effects of media present nearly insurmountable difficulties short of ambitious experimentation. Rather than find solutions to these challenges, much of the collective body of media effects research has focused on the effort to develop and apply survey-based measures of individual media exposure to use as the empirical basis for studying media effects. This effort to use survey-based media exposure measures to generate causal insight has ultimately distracted from the design of both causally credible methods and thicker descriptive research on the content and experience of media. Outside the laboratory, we understand media effects too little despite this considerable effort to measure exposure through survey questionnaires.
The canonical approach for assessing such effects, namely using survey questions about individual media experiences to measure the putatively causal variable and correlating those measures with other measured outcomes, suffers from substantial limitations. Experimental—and sometimes quasi-experimental—methods provide decidedly superior causal inference about media effects and a uniquely fruitful path forward for insight into media and their effects. At the same time, however, thicker forms of description than what closed-ended survey questions can provide hold promise for a richer understanding of the changing media landscape and changing audience experiences. Better causal inference and better description are co-equal paths forward in the search for real-world media effects.
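As a minimal sketch of the canonical correlational design criticized here, the following simulation regresses an outcome on a noisy self-reported exposure measure. All quantities are invented, and the attenuated slope illustrates one of the measurement limitations at issue.

```python
# Sketch of the canonical (and criticized) survey-based design: correlate
# self-reported media exposure with a measured attitude. Data are simulated;
# the exposure measure is deliberately noisy to mimic survey measurement error.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000
true_exposure = rng.normal(size=n)              # actual media experience (unobserved)
outcome = 0.5 * true_exposure + rng.normal(size=n)
reported_exposure = true_exposure + rng.normal(size=n)  # noisy survey measure

# OLS slope of outcome on the self-report: attenuated relative to the true 0.5
# because of measurement error -- one of the limitations the abstract stresses.
slope = np.cov(reported_exposure, outcome)[0, 1] / np.var(reported_exposure)
print(round(slope, 2))  # roughly 0.25, not 0.5
```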
Article
The Challenges of Making Research Collaboration in Africa More Equitable
Susan Dodsworth
Collaborative research has a critical role to play in furthering our understanding of African politics. Many of the most important and interesting questions in the field are difficult, if not impossible, to tackle without some form of collaboration, either between academics within and outside of Africa—often termed North–South research partnerships—or between those researchers and organizations from outside the academic world. In Africa in particular, collaborative research is becoming more frequent and more extensive. This is due not only to the value of the research that it can produce but also to pressures on the funding of African scholars and academics in the Global North, alongside similar pressures on the budgets of non-academic collaborators, including bilateral aid agencies, multilateral organizations, and national and international non-government organizations.
Collaborative projects offer many advantages to these actors beyond access to new funding sources, so they constitute more than mere “marriages of convenience.” These benefits typically include access to methodological expertise and valuable new data sources, as well as opportunities to increase both the academic and “real-world” impact of research findings. Yet collaborative research also raises a number of challenges, many of which relate to equity. They center on issues such as who sets the research agenda, whether particular methodological approaches are privileged over others, how responsibility for different research tasks is allocated, how the benefits of that research are distributed, and the importance of treating colleagues with respect despite the narrative of “capacity-building.” Each challenge manifests in slightly different ways, and to varying extents, depending on the type of collaboration at hand: North–South research partnership or collaboration between academics and policymakers or practitioners. This article discusses both types of collaboration together because of their potential to overlap in ways that affect the severity and complexity of those challenges.
These challenges are not unique to research in Africa, but they tend to manifest in ways that are distinct or particularly acute on the continent because of the context in which collaboration takes place. In short, the legacy of colonialism matters. That history not only shapes who collaborates with whom but also who does so from a position of power and who does not. Thus, the inequitable nature of some research collaborations is not simply the result of oversights or bad habits; it is the product of entrenched structural factors that produce, and reproduce, imbalances of power. This means that researchers seeking to make collaborative projects in Africa more equitable must engage with these issues early, proactively, and continuously throughout the entire life cycle of those research projects. This is true not just for researchers based in the Global North but for scholars from, or working in, Africa as well.
Article
Reconceptualizing Field Research
Diana Kapiszewski, Lauren M. MacLean, and Benjamin L. Read
Generations of political scientists have set out for destinations near and far to pursue field research. Even in a digitally networked era, the researcher’s personal presence and engagement with the field context continue to be essential. Yet exactly what does fieldwork mean, what is it good for, and how can scholars make their time in the field as reflective and productive as possible? Thinking of field research in broad terms—as leaving one’s home institution to collect information, generate data, and/or develop insights that significantly inform one’s research—reveals that scholars of varying epistemological commitments, methodological bents, and substantive foci all engage in fieldwork. Moreover, they face similar challenges, engage in comparable practices, and even follow similar principles. Thus, while every scholar’s specific project is unique, we also have much to learn from each other.
In preparing for and conducting field research, political scientists connect the high-level fundamentals of their research design with the practicalities of day-to-day inquiry. While in the field, they take advantage of the multiplicity of opportunities that the field setting provides and often triangulate by cross-checking among different perspectives or data sources. To a large extent, they do not regard initial research design decisions as final; instead, they iteratively update concepts, hypotheses, the research question itself, and other elements of their projects—carefully justifying these adaptations—as their fieldwork unfolds. Incorporating what they are learning in a dynamic and ongoing fashion, while also staying on task, requires both flexibility and discipline.
Political scientists are increasingly writing about the challenges of special types of field environments (such as authoritarian regimes or conflict settings) and about issues of positionality that arise from their own particular identities interacting with those of the people they study or with whom they work. So too, they are grappling with what it means to conduct research in a way that aligns with their ethical commitments, and what the possibilities and limits of research transparency are in relation to fieldwork. In short, political scientists have joined other social scientists in undertaking critical reflection on what they do in the field—and this self-awareness is itself a hallmark of high-quality research.
Article
Agent-Based Computational Modeling and International Relations Theory: Quo Vadis?
Claudio Cioffi-Revilla
Agent-based computational modeling (ABM, for short) is a formal and supplementary methodological approach used in international relations (IR) theory and research, based on the general ABM paradigm and computational methodology as applied to IR phenomena. ABM of such phenomena varies according to three fundamental dimensions: scale of organization—spanning foreign policy, international relations, regional systems, and global politics—as well as geospatial and temporal scales. ABM is part of the broader complexity science paradigm, although ABMs can also be applied without complexity concepts. There have been scores of peer-reviewed publications using ABM to develop IR theory in recent years, building on pioneering work in computational IR that originated in the 1960s and predated agent-based methods. Main areas of theory and research using ABM in IR include the dynamics of polity formation (politogenesis), foreign policy decision making, conflict dynamics, transnational terrorism, and environmental impacts such as climate change. Enduring challenges for ABM in IR theory include learning the applicable ABM methodology itself, publishing sufficiently complete models, accumulating knowledge, evolving new standards and methodology, and meeting the special demands of interdisciplinary research, among others. Besides further development of the main themes identified thus far, future research directions include ABM applied to IR in the political interaction domains of space and cyber; new integrated models of IR dynamics across the domains of land, sea, air, space, and cyber; and world order and long-range models.
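For readers unfamiliar with the paradigm, the following toy model sketches the generic ABM logic behind themes such as politogenesis: simple agents (here, polities on a line that absorb weaker neighbors) interact under local rules, and macro-level structure emerges. All rules and parameters are invented and do not correspond to any published IR model.

```python
# Minimal generic agent-based model, loosely inspired by the politogenesis
# theme mentioned above: polities on a line absorb weaker neighbors over time.
# Rules and parameters are invented for illustration; published IR ABMs are
# far richer (geospatial scales, decision heuristics, adaptation).
import random

random.seed(42)
N_STEPS = 50
polity = list(range(20))                        # polity[i] = id holding cell i
power = {i: random.random() for i in range(20)} # each polity's capability

for _ in range(N_STEPS):
    i = random.randrange(len(polity) - 1)       # pick a random border
    a, b = polity[i], polity[i + 1]
    if a == b:
        continue                                # same polity on both sides
    winner, loser = (a, b) if power[a] >= power[b] else (b, a)
    # The stronger polity absorbs the weaker one's territory and capability.
    polity = [winner if p == loser else p for p in polity]
    power[winner] += power.pop(loser)

print(f"{len(set(polity))} polities remain:", polity)
```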
Article
The Diversification of Deterrence: New Data and Novel Realities
Shannon Carcelli and Erik A. Gartzke
Deterrence theory is slowly beginning to emerge from a long sleep after the Cold War, and from its theoretical origins over half a century ago. New realities have led to a diversification of deterrence in practice, as well as to new avenues for its study and empirical analysis. Three major categories of changes in the international system—new actors, new means of warfare, and new contexts—have led to corresponding changes in the way that deterrence is theorized and studied. First, the field of deterrence has broadened to include nonstate and nonnuclear actors, confronting scholars with new types of theories and tests. Second, cyberthreats, terrorism, and diverse nuclear force structures have led scholars to consider means in new ways. Third, the likelihood of an international crisis has shifted as a result of physical, economic, and normative changes in the costs of crisis, which has led scholars to address the crisis context itself more closely. The assumptions of classical deterrence are breaking down, in research as well as in reality. However, more work needs to be done in understanding these international changes and building successful deterrence policy. A better understanding of new modes of deterrence will aid policymakers in managing today’s threats and in preventing future deterrence failures, even as it prompts the so-called virtuous cycle of new theory and additional empirical testing.
Article
Qualitative Comparative Analysis (QCA) and Set Theory
Claudius Wagemann
Qualitative Comparative Analysis (QCA) is a method, developed by the American social scientist Charles C. Ragin since the 1980s, which has since enjoyed great and ever-increasing success in research applications and teaching programs across various political science subdisciplines. It counts as a broadly recognized addition to the methodological spectrum of political science. QCA is based on set theory. Set theory models “if … then” hypotheses in a way that allows them to be interpreted as sufficient or necessary conditions. QCA differentiates between crisp sets, in which cases can only be full members or nonmembers, and fuzzy sets, which allow for degrees of membership. With fuzzy sets it is, for example, possible to distinguish highly developed democracies from less developed democracies that are nevertheless more democratic than not. This means that fuzzy sets account for differences in degree without giving up differences in kind. In the end, QCA produces configurational statements that acknowledge that conditions usually appear in conjunction and that there can be more than one conjunction that implies an outcome (equifinality). There is a strong emphasis on a case-oriented perspective. QCA is usually (but not exclusively) applied in y-centered research designs. A standardized algorithm has been developed and implemented in various software packages; it takes into account the complexity of the social world surrounding us, also acknowledging that not every theoretically possible combination of explanatory factors exists empirically. Parameters of fit, such as consistency and coverage, help to evaluate how well the chosen explanatory factors account for the outcome to be explained. There is also a range of graphical tools that help to illustrate the results of a QCA. Set theory goes well beyond an application in QCA, but QCA is certainly its most prominent variant.
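To make the parameters of fit concrete, the sketch below computes consistency and coverage for a sufficiency claim over fuzzy-set membership scores, using the standard formulas associated with Ragin; the membership scores themselves are invented.

```python
# Consistency and coverage for a sufficiency claim (X -> Y) over fuzzy-set
# membership scores, following the standard formulas associated with Ragin:
#   consistency = sum(min(x, y)) / sum(x)
#   coverage    = sum(min(x, y)) / sum(y)
# Membership scores below are invented for illustration.

x = [0.9, 0.7, 0.6, 0.2, 0.1]   # membership in the condition (combination) X
y = [1.0, 0.8, 0.7, 0.6, 0.3]   # membership in the outcome Y

overlap = sum(min(xi, yi) for xi, yi in zip(x, y))
consistency = overlap / sum(x)   # degree to which X is a subset of Y (sufficiency)
coverage = overlap / sum(y)      # share of Y accounted for by X (empirical relevance)

print(f"consistency = {consistency:.2f}, coverage = {coverage:.2f}")
```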
There is a very lively QCA community that currently deals with the following aspects: the establishment of a code of standards for QCA applications; QCA as part of mixed-methods designs, such as combinations of QCA and statistical analyses, or a sequence of QCA and (comparative) case studies (via, e.g., process tracing); the inclusion of time aspects into QCA; Coincidence Analysis (CNA, in which no a priori decision is taken on which factor is the outcome and which are the conditions) as an alternative to the use of the Quine-McCluskey algorithm; the stability of results; software development; and the more general question of whether QCA development activities should target research design or technical issues. From this, a methodological agenda can be derived that asks about the relationship between QCA and quantitative techniques, case study methods, and interpretive methods, but also calls for increased efforts toward a shared understanding of the mission of QCA.
Article
Counterfactuals and Foreign Policy Analysis
Richard Ned Lebow
Counterfactuals seek to alter some feature or event of the past and, by means of a chain of causal logic, show how the present might, or would, be different. Counterfactual inquiry—or control of counterfactual situations—is essential to any causal claim. More importantly, counterfactual thought experiments are essential to the construction of analytical frameworks. Policymakers routinely use them to identify problems, work their way through problems, and select responses. Good foreign-policy analysis must accordingly engage and employ counterfactuals.
There are two generic types of counterfactuals: minimal-rewrite counterfactuals and miracle counterfactuals. Both are relevant when formulating propositions and probing contingency and causation. There is also a set of protocols for using both kinds of counterfactuals toward these ends, and the article illustrates these uses and protocols with historical examples. Policymakers invoke counterfactuals frequently, especially with regard to foreign policy, both to choose policies and to defend them to key constituencies. Because they use counterfactuals in a haphazard and unscientific manner, it is important to learn more about how policymakers think about and employ counterfactuals in order to understand foreign policy.