Scholarship in international relations has taken a more quantitative turn in the past four decades. The field of foreign policy analysis was arguably the forerunner in the development and application of quantitative methodologies in international relations. From public opinion surveys to events data to experimental methods, many of the earliest uses of quantitative methodologies can be found in foreign policy analysis. On substantive questions ranging from the causes of war to the dynamics of public opinion, quantitative data analysis has informed numerous debates in foreign policy analysis and international relations. Emerging quantitative methods will be useful in future efforts to analyze foreign policy.
Micah Dillard and Jon C.W. Pevehouse
Evgeniia Iakhnis, Stefanie Neumeier, Anne Van Wijk, and Patrick James
Quantitative methodology in crisis studies is a topic of substantial scope. The principal rallying point for such research is the long-standing International Crisis Behavior (ICB) Project, which from 1975 onward has produced a comprehensive and heavily accessed data set for the study of conflict processes. A prehistory of crisis studies based on statistical methods, which identified connections among various conflict-related events, pointed increasingly toward the need for a program of research on escalation. The potential of quantitative methodology to contribute seriously to crisis studies has been realized along multiple dimensions by the ICB Project in particular. For example, quantitative methods have been applied productively to study the effects of both global and regional organizations, along with individual states, upon the process of crisis escalation. Current research in crisis studies is based on the premise that research designs so far have covered only one of multiple relevant stages in the process of escalation. This is where the concept of a “near crisis” becomes relevant: a near crisis entails perception of threat and finite time, but not an increased likelihood of military hostilities. Data analysis pertaining to multiple stages of escalation is at an early stage of development, but initial results are intriguing. A further critique of quantitative research begins with the observation that it is mostly state-centered and reductionist in nature. A key question emerges: How can the concept of crisis and associated data collection be revised to include a humanistic element that would entail new and potentially more enlightening configurations of independent and dependent variables?
Stephen L. Quackenbush
Deterrence is an important subject, and its study has spanned more than seven decades. Much research on deterrence has focused on a theoretical understanding of the subject. Particularly important is the distinction between classical deterrence theory and perfect deterrence theory. Other studies have employed empirical analyses. The empirical literature on deterrence developed at different times and took different approaches. The early empirical deterrence literature was limited for various reasons. Much of the early case study literature did not seek to test deterrence theory. Early quantitative studies did seek to do so, but they were hampered by rudimentary methods, poor research design, and/or a disconnect between quantitative studies and formal theories of deterrence. Modern empirical research on deterrence has made great strides toward bridging the formal-quantitative divide in the study of deterrence and conducting theoretically driven case studies. Further, researchers have explored the effect of specific variables on deterrence, such as alliances, reputations and credibility, and nuclear weapons. Future empirical studies of deterrence should build on these modern developments. In addition, they should build on perfect deterrence theory, given its logical consistency and empirical support.
Public diplomacy has become an essential subject for both practitioners of foreign policy and scholars of international relations/world politics. As the term has grown in popularity and spread through policy papers, magazines, academic books, and articles, the number of competing definitions of the concept has grown with it. Unfortunately, no universally agreed-upon definition exists. With regard to the international relations debate on the “-isms,” some researchers claim that public diplomacy is part of constructivism. Yet, while it may be appropriate to categorize public diplomacy as constructivist for norm-oriented reputation politics such as “naming and shaming,” many realists working from the rationalist paradigm have recognized the importance of public diplomacy in international relations. Recently, beyond discussions of the definition and scope of public diplomacy, many data-oriented, empirical studies have been published on the subject. For instance, researchers have sought to rank states by the soft power they achieve through the effective practice of public diplomacy. Moreover, quantitative text analysis (QTA) or content analysis frameworks have frequently been utilized to study how international media focus on controversial diplomatic issues between states. Even tweets and social networks are being studied to reveal what types of international diplomatic communications are supported and opposed by third-party domestic audiences. Rapid developments continue to be made in the methodological sophistication of public diplomacy studies.
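The content-analysis approach described above can be sketched in miniature: counting how often issue-related terms appear in a body of news text. The headlines and keyword list below are invented for illustration, not drawn from any study cited here.

```python
from collections import Counter

# A toy sketch of quantitative text analysis (QTA): tally how often
# issue-related keywords appear across a set of (invented) headlines.
headlines = [
    "talks stall over trade dispute",
    "trade dispute deepens amid sanctions",
    "leaders seek compromise on sanctions",
]
keywords = ["trade", "dispute", "sanctions", "compromise"]

# Tokenize naively on whitespace and count every word.
tokens = Counter(word for h in headlines for word in h.split())

# Keep only the keywords of interest.
coverage = {k: tokens[k] for k in keywords}
print(coverage)  # {'trade': 2, 'dispute': 2, 'sanctions': 2, 'compromise': 1}
```

Real QTA work adds stemming, stop-word removal, and dictionary validation on top of this basic counting logic.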
Kumail Wasif and Jeff Gill
Bayes’ theorem is a relatively simple equation, yet one of the most important principles in mathematics. It formalizes a basic cognitive process: updating expectations as new information is obtained. It was derived from the laws of conditional probability by the Reverend Thomas Bayes and published posthumously in 1763. In the 21st century, it is used in academic fields ranging from computer science to social science. The theorem’s most prominent use is in statistical inference. In this regard, three essential tenets of Bayesian thought distinguish it from standard approaches. First, any quantity that is not known as an absolute fact is treated probabilistically, meaning that a numerical probability or a probability distribution is assigned to it. Second, research questions and designs are based on prior knowledge, expressed as prior distributions. Finally, these prior distributions are updated by conditioning on new data through Bayes’ theorem, yielding a posterior distribution that is a compromise between prior knowledge and the data. This approach has a number of advantages, especially in social science. First, it gives researchers the probability of the parameter given the data, the inverse of frequentist inference (which yields the probability of the data given the parameter) and more appropriate for social scientific data and parameters. Second, Bayesian approaches excel at estimating parameters for complex data structures and functional forms, and they provide more information about these parameters than standard approaches. This is made possible by stochastic simulation techniques known as Markov chain Monte Carlo (MCMC). Third, Bayesian approaches allow previous estimates to be incorporated explicitly through the prior distribution, providing both a formal mechanism for building on earlier findings and a means of comparing potential results.
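The prior-to-posterior updating described above can be made concrete with a minimal sketch using the conjugate Beta-Binomial model (an illustrative choice, not one named in the text): the prior Beta(a, b) combined with k successes in n trials yields the posterior Beta(a + k, b + n - k), a compromise between prior knowledge and the data.

```python
# Minimal sketch of Bayesian updating via a conjugate Beta-Binomial model.
# Prior: theta ~ Beta(a, b); data: k successes in n trials;
# posterior: theta | data ~ Beta(a + k, b + n - k).

def beta_binomial_update(a, b, k, n):
    """Return posterior Beta parameters after observing k successes in n trials."""
    return a + k, b + (n - k)

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Weakly held prior belief of roughly even odds: Beta(2, 2), mean 0.5.
a0, b0 = 2, 2
# New data: 7 successes in 10 trials.
a1, b1 = beta_binomial_update(a0, b0, k=7, n=10)

print(beta_mean(a0, b0))  # prior mean: 0.5
print(beta_mean(a1, b1))  # posterior mean: 9/14 ≈ 0.643, pulled toward the data
```

The posterior mean lies between the prior mean (0.5) and the sample proportion (0.7), illustrating the "compromise between prior and data knowledge" the abstract describes.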
Bayes’ theorem is also used in machine learning, a subset of computer science that focuses on algorithms that learn from data to make predictions. One such algorithm is the naive Bayes classifier, which uses Bayes’ theorem to classify objects such as documents based on prior relationships. Bayesian networks can be seen as a generalization of the naive Bayes classifier that maps, estimates, and predicts relationships in a network, making them useful for more complicated prediction problems. Lastly, the theorem has even been used by qualitative social scientists as a formal mechanism for stating and evaluating beliefs and updating knowledge.
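A naive Bayes document classifier of the kind mentioned above can be sketched from scratch in a few lines. The toy documents and labels below are invented for illustration; the classifier applies Bayes' theorem under the "naive" assumption that words are conditionally independent given the class, with Laplace smoothing for unseen words.

```python
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (list_of_words, label). Returns the fitted model pieces."""
    class_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)  # per-class word frequencies
    vocab = set()
    for words, label in docs:
        word_counts[label].update(words)
        vocab.update(words)
    return class_counts, word_counts, vocab

def predict(model, words):
    """Pick the class maximizing log P(class) + sum log P(word | class)."""
    class_counts, word_counts, vocab = model
    total_docs = sum(class_counts.values())
    scores = {}
    for label, n_docs in class_counts.items():
        score = math.log(n_docs / total_docs)  # log prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in words:
            # Laplace smoothing: add 1 so unseen words get nonzero probability.
            score += math.log((word_counts[label][w] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

# Invented toy corpus for illustration.
docs = [
    ("crisis escalation war".split(), "conflict"),
    ("military threat war".split(), "conflict"),
    ("trade treaty cooperation".split(), "diplomacy"),
    ("treaty negotiation cooperation".split(), "diplomacy"),
]
model = train(docs)
print(predict(model, ["war", "threat"]))          # → conflict
print(predict(model, ["treaty", "cooperation"]))  # → diplomacy
```

Working in log space avoids numerical underflow when multiplying many small word probabilities, which is the standard implementation choice for this classifier.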