21–40 of 53 Results for: Quantitative Political Methodology

Article

Erika Forsberg and Louise Olsson

Prior research has found robust support for a relationship between gender inequality and civil war. These results all point in the same direction: countries with lower levels of gender equality are more likely to become involved in civil conflict, and to suffer more severe violence, than countries where women have a higher status. But what does gender inequality mean in this area of research? And how does research explain why we see this effect on civil war? Exploring these questions requires reviewing existing definitions and measurements of gender inequality, a concept with several dimensions. Several clusters of explanations show how gender inequality could be related to civil war and why more equal societies are better able to prevent violent conflict. The misconception that gender inequality primarily concerns the role of women obscures the fact that it speaks to much broader societal developments that play central roles in civil war.

Article

In 2005, political scientists claimed that parent-child similarities are driven not only by parenting, socialization, and social factors shared within the family, but also by genetic similarity. This claim upended a century of orthodoxy in political science. Many social scientists are uncomfortable with this concept, and this discomfort often stems from a multitude of misunderstandings. Claims about the genetics and heritability of political phenomena predate 2005, and the wave of studies over the decade that followed swept through political science and then died down as quickly as it came. The behavior genetic research agenda faces several challenges within political science, including (a) resistance to these ideas throughout the social sciences, (b) the difficulty scholars face in producing meaningful theoretical and empirical contributions, and (c) developments in the field of genetics and their (negative) impact on related scholarship within the study of politics.

Article

Brian J. Gaines and Benjamin R. Kantack

Although motivation undergirds virtually all aspects of political decision making, its influence is often unacknowledged, or taken for granted, in behavioral political science. Motivations are inevitably important in generic models of decision theory. In real-world politics, two crucially important venues for motivational effects are the decision of whether or not to vote, and the question of how (or whether) partisanship and other policy views color information collection, so that people choose first and then justify, rather than studying options before choosing. For researchers, the motivations of survey respondents and experimental subjects are deeply important, but they are only beginning to garner the attention they deserve.

Article

Evgeniia Iakhnis, Stefanie Neumeier, Anne Van Wijk, and Patrick James

Quantitative methodology in crisis studies is a topic of substantial scope. The principal rallying point for such research is the long-standing International Crisis Behavior (ICB) Project, which from 1975 onward has produced a comprehensive and heavily accessed data set for the study of conflict processes. A prehistory of crisis studies based on statistical methods, which identified connections between and among various conflict-related events, pointed increasingly toward the need for a program of research on escalation. The potential of quantitative methodology to contribute seriously to crisis studies has been realized along multiple dimensions by the ICB Project in particular. For example, quantitative methods have been applied productively to study the effects of both global and regional organizations, along with individual states, upon the process of crisis escalation. Current research in crisis studies is based on the premise that research designs so far have covered only one of multiple relevant stages regarding the process of escalation. This is where the concept of a “near crisis” becomes relevant: a near crisis entails perception of threat and finite time, but not an increased likelihood of military hostilities. Data analysis pertaining to multiple stages of escalation is at an early stage of development, but initial results are intriguing. A further critique of quantitative research begins with the observation that it is mostly state-centered and reductionist in nature. A key question emerges: How can the concept of crisis and associated data collection be revised to include a humanistic element that would entail new and potentially more enlightening configurations of independent and dependent variables?

Article

Kyle Beardsley, Patrick James, Jonathan Wilkenfeld, and Michael Brecher

Over the course of more than four decades the International Crisis Behavior (ICB) Project, a major and ongoing data-gathering enterprise in the social sciences, has compiled data that continues to be accessed heavily in scholarship on conflict processes. ICB holdings consist of full-length qualitative case studies, along with an expanding range of quantitative data sets. Founded in 1975, the ICB Project is among the most visible and influential within the discipline of International Relations (IR). A wide range of studies based either primarily or in part on the ICB’s concepts and data have accumulated and cover subjects that include the causes, processes, and consequences of crises. The breadth of ICB’s contribution has expanded over time to go beyond a purely state-centric approach to include crisis-related activities of transnational actors across a range of categories. ICB also offers depth through, for example, potential resolution of contemporary debates about mediation in crises on the basis of nuanced findings about long- versus short-term impact with regard to conflict resolution.

Article

Over the last decades, in many so-called Western countries, the social, political, and legal standing of lesbians, gay men, and bisexual and trans* individuals (henceforth, LGBT* individuals) has considerably improved, and concurrently, attitudes toward these groups have become more positive. Because blatantly prejudiced statements are now less socially accepted, negative attitudes toward LGBT* individuals (also referred to as antigay attitudes, sexual prejudice, or homonegativity) and toward their rights need to be measured in more subtle ways than before. At the same time, discrimination and brutal hate crimes toward LGBT* individuals still exist (e.g., the Orlando shooting, the torture of gay men in Chechnya). Attitudes are one of the best predictors of overt behavior. Thus, examining attitudes toward LGBT* individuals in an adequate way helps to predict discriminatory behavior, to identify underlying processes, and to develop interventions to reduce negative attitudes and thus, ultimately, hate crimes. The concept of attitudes is theoretically postulated to consist of three components (i.e., the cognitive, affective, and behavioral attitude components). Further, explicit and implicit attitude measures are distinguished. Explicit measures directly ask participants to state their opinions regarding the attitude object; they are thus transparent, they require awareness, and they are subject to social desirability bias. In contrast, implicit measures infer attitudes indirectly from observed behavior, typically from reaction times in different computer-assisted tasks; they are therefore less transparent, they do not require awareness, and they are less prone to socially desirable responding. With regard to explicit attitude measures, old-fashioned and modern forms of prejudice have been distinguished.
When it comes to measuring LGBT* attitudes, measures should differentiate between attitudes toward different sexual minorities (as well as their rights). So far, research has mostly focused on lesbians and gay men; however, there is increasing interest in attitudes toward bisexual and trans* individuals. Also, attitude measures need to be able to adequately capture attitudes of more or less prejudiced segments of society. To measure attitudes toward sexual minorities adequately, the attitude measure needs to fulfill several methodological criteria (i.e., to be psychometrically sound, which means being reliable and valid). In order to demonstrate the quality of an attitude measure, it is essential to know the relationship between scores on the measure and important variables that are known to be related to LGBT* attitudes. Different measures for LGBT* attitudes exist; which one is used should depend on the (research) purpose.

Article

Recognizing the causal leverage it affords, contemporary scholars of media effects commonly employ experimental methodology. For most of the 20th century, however, political scientists and communication scholars relied on observational data, particularly after the development of scientific survey methodology around the midpoint of the century. As the millennium approached, Iyengar and Kinder’s seminal News That Matters experiments ushered in an era of renewed interest in experimental methods. Political communication scholars have been particularly reliant on experiments, due to their advantages over observational studies in identifying media effects. Although what is meant by “media effects” has not always been clear or undisputed, scholars generally agree that the news media influence mass opinion and behavior through their agenda-setting, framing, and priming powers. Scholars have adopted techniques and practices for gauging these particular effects, including measuring the mediating role of affect (or emotion). Although experiments provide researchers with causal leverage, political communication scholars must consider challenges endemic to media-effects studies, including problems related to selective exposure. Various efforts to determine whether selective exposure occurs and whether it has consequences have come to different conclusions. These conflicting conclusions can be traced back to the different methodological choices scholars have made. Achieving experimental realism has been a particularly difficult challenge for selective exposure experiments. Nonetheless, there are steps media-effects scholars can take to bolster causal arguments in an era of high media choice. While the advent of social media has brought new challenges for media-effects experimentalists, it also offers new opportunities in the form of objective measures of media exposure and effects.

Article

The interdisciplinary field of migration studies is broadly interested in the causes, patterns, and consequences of migration. Much of this work, united under the umbrella of the “new economics of migration” research program, argues that personal networks within and across households drive a wide variety of migration-related actions. Findings from this micro-level research have been extremely valuable, but the approach has struggled to develop generalizable lessons and to aggregate into macro-level and meso-level insights. In addition, at the group, region, and country levels, existing work is often limited by considering only total migration inflows and/or outflows. This focus misses many critical features of migration. Using location networks, network measures such as preferential attachment, preferential disattachment, transitivity, betweenness centrality, and homophily provide valuable information about migration cascades and transit migration. Some insights from migration research tidily aggregate from personal networks up to location networks, whereas other insights uniquely originate from examining location networks.
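The network measures named in the abstract can be computed directly once migration corridors are represented as a graph. The following is a minimal, self-contained sketch on hypothetical flow data (locations A–E and all flow counts are invented for illustration); it identifies transit locations, something a totals-only design cannot do, and computes global transitivity with the standard triangles-over-triples formula.

```python
# Hedged sketch (toy, hypothetical flows): treating locations as nodes in a
# directed migration network instead of tallying only total in/outflows.
from collections import defaultdict
from itertools import combinations

flows = {  # (origin, destination) -> migrant count; illustrative only
    ("A", "B"): 120, ("C", "B"): 90, ("B", "D"): 300,
    ("A", "D"): 40, ("D", "E"): 15,
}

out_flow = defaultdict(int)
in_flow = defaultdict(int)
neighbors = defaultdict(set)  # undirected adjacency, for transitivity
for (o, d), n in flows.items():
    out_flow[o] += n
    in_flow[d] += n
    neighbors[o].add(d)
    neighbors[d].add(o)

# Transit locations appear as both receivers and senders -- information a
# design that considers only total inflows OR outflows would miss.
transit = {v for v in neighbors if in_flow[v] > 0 and out_flow[v] > 0}

# Global transitivity: closed triples / connected triples (each triangle is
# counted once per center node, per the standard definition).
triples = sum(len(list(combinations(neighbors[v], 2))) for v in neighbors)
closed = sum(
    1
    for v in neighbors
    for a, b in combinations(neighbors[v], 2)
    if b in neighbors[a]
)
transitivity = closed / triples if triples else 0.0

print(sorted(transit))           # ['B', 'D']: corridors pass through these
print(round(transitivity, 2))    # 0.43
```

The same adjacency structure would feed betweenness-centrality or homophily calculations; a graph library could replace the hand-rolled counts in a real analysis.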

Article

Populism is one of the most dynamic fields of comparative political research. Although its study began in earnest only in the late 1960s, it has since developed through four distinct waves of scholarship, each pertaining to distinct empirical phenomena and with specific methodological and theoretical priorities. Today, the field needs a comprehensive general theory that can capture the phenomenon specifically within the context of contemporary democracies. This, however, requires breaking away from recurring conceptual and methodological errors and, above all, reaching a consensus on a minimal definition of populism. All in all, the study of populism has been plagued by 10 drawbacks: (1) an unspecified empirical universe, (2) a lack of historical and cultural context specificity, (3) essentialism, (4) conceptual stretching, (5) an unclear negative pole, (6) degreeism, (7) defective observable-measurable indicators, (8) a neglect of micromechanisms, (9) poor data and inattention to crucial cases, and (10) normative indeterminacy. Most, if not all, of the foregoing methodological errors are cured if we define, and study, modern populism simply as “democratic illiberalism,” which also opens the door to understanding the malfunctioning and pathologies of modern-day liberal representative democracies.

Article

In the three decades since Jack Levy published his seminal review essay on the topic, there has been a great deal of quantitative research on the proposition that state leaders can use international conflict to enhance their political prospects at home. The findings of this work are frequently described as “mixed” or “inconsistent.” This characterization is superficially correct, but it is also misleading in some important respects. Focusing on two of Levy’s most important concerns about previous research, there has been substantial progress in our understanding of this phenomenon. First, as Levy suggests in his essay, researchers have elaborated a range of different mechanisms linking domestic political trouble with international conflict rather than a single diversionary argument. Processes creating diversionary incentives bear a family resemblance to one another but can have different behavioral implications. Four of them are (1) in-group/out-group dynamics, (2) agenda setting, (3) leader efforts to demonstrate competence in foreign policy, and (4) efforts to blame foreign leaders or perhaps domestic minorities for problems. In addition, researchers have identified some countervailing mechanisms that may inhibit state leaders’ ability to pursue diversionary strategies, the most important of which is the possibility that potential targets may strategically avoid conflict with leaders likely to behave aggressively. Second, research has identified scope conditions that limit the applicability of diversionary arguments, another of Levy’s concerns about the research he reviewed. Above all, diversionary uses of military force (though not other diversionary strategies) may be possible for only a narrow range of states. Though very powerful states may pursue such a strategy against a wide range of targets, the leaders of less powerful states may have this option only during fairly serious episodes of interstate hostility, such as rivalries and territorial disputes. 
A substantial amount of research has focused exclusively on the United States, a country that clearly has the capacity to pursue this strategy. While the findings of this work cannot be generalized to many other states, they have revealed some important nuances in the processes that create diversionary incentives. The extent to which these incentives hinge on highly specific political and institutional characteristics points to the difficulty of applying realistic diversionary arguments to a large sample of states. Research on smaller, more homogeneous samples or individual states is more promising, even though it will not produce an answer to the broad question of how prevalent diversionary behavior is. As with many broad questions about political phenomena, the only correct answer may be “it depends.” Diversionary foreign policy happens, but not in the same way in every instance and not in every state in the international system.

Article

Rupal N. Mehta and Rachel Elizabeth Whitlark

What will nuclear proliferation look like in the future? While the quest for nuclear weapons has largely quieted since the turn of the 21st century, states are still interested in acquiring nuclear technology. Nuclear latency, an earlier step on the proliferation pathway, here defined as an operational uranium enrichment or plutonium reprocessing capability, is increasingly likely to be the next phase of proliferation concern. The drivers of nuclear latency, namely security factors, including rivalries with neighboring adversaries and the existence of alliances, are especially consequential in an increasingly challenging geopolitical environment. Though poised to play a significant role in international politics moving forward, latency remains a core area of exploration and a subject of debate within the nuclear weapons literature writ large. While in many ways similar to nuclear weapons proliferation, the pursuit of nuclear latency has distinct features that merit further attention from scholars and policymakers alike.

Article

Logical models and statistical techniques have been used for measuring political and institutional variables, quantifying and explaining the relationships between them, testing theories, and evaluating institutional and policy alternatives. A number of cumulative and complementary findings refer to major institutional features of a political process of decision-making: from the size of the assembly to the territorial structure of the country, the electoral system, the number of parties in the assembly and in the government, the government’s duration, and the degree of policy instability. Mathematical equations based on sound theory are validated by empirical tests and can predict precise observations.
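As one concrete illustration of the kind of validated equation this abstract describes, consider the cube-root law of assembly sizes (associated with Rein Taagepera's work): a national assembly tends to have roughly as many seats as the cube root of the country's population. The abstract does not name specific equations, so choosing this particular law as the example is an assumption; the population figures below are illustrative round numbers.

```python
# Illustrative sketch: the cube-root law of assembly sizes, a well-known
# example of an empirically validated institutional equation.
def predicted_assembly_size(population: int) -> int:
    """Predicted number of assembly seats, approximately population ** (1/3)."""
    return round(population ** (1 / 3))

# Hypothetical round-number populations, not real country data.
for pop in (60_000_000, 330_000_000):
    print(pop, "->", predicted_assembly_size(pop))
# 60_000_000 -> 391 seats; 330_000_000 -> 691 seats
```

The appeal of such equations, as the abstract notes, is that a precise functional form can be tested against observed assembly sizes across many countries.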

Article

Sabine C. Carey, Neil J. Mitchell, and Adam Scharpf

Pro-government militias are a prominent feature of civil wars. Governments in Ukraine, Russia, Syria, and Sudan recruit irregular forces in their armed struggle against insurgents. The United States collaborated with Awakening groups to counter the insurgency in Iraq, just as colonizers used local armed groups to fight rebellions in their colonies. A wide and well-established cross-disciplinary literature on pro-government nonstate armed groups has generated a variety of research questions for scholars interested in conflict, political violence, and political stability: Does the presence of such groups indicate a new type of conflict? What are the dynamics that drive governments to align with informal armed groups and that make armed groups choose to side with the government? Given the risks entailed in surrendering a monopoly of violence, is there a turning point in a conflict when governments enlist these groups? How successful are these groups? Why do governments use these nonstate armed actors to shape foreign conflicts, whether as insurgents or counterinsurgents abroad? Are these nonstate armed actors always useful to governments or perhaps even an indicator of state failure? How do pro-government militias affect the safety and security of civilians? The enduring pattern of collaboration between governments and pro-government armed groups challenges conventional theory and the idea of an evolutionary process of the modern state consolidating the means of violence. Research on these groups and their consequences began with case studies, and these continue to yield valuable insights. More recently, survey work and cross-national quantitative research have contributed to our knowledge. This mix of methods is opening new lines of inquiry for research on insurgencies and the delivery of the core public good of effective security.

Article

Mathew V. Hibbing, Melissa N. Baker, and Kathryn A. Herzog

Since the early 2010s, political science has seen a rise in the use of physiological measures in order to inform theories about decision-making in politics. A commonly used physiological measure is skin conductance (electrodermal activity). Skin conductance measures the changes in levels of sweat in the eccrine glands, usually on the fingertips, in order to help inform how the body responds to stimuli. These changes result from the sympathetic nervous system (popularly known as the fight or flight system) responding to external stimuli. Due to the nature of physiological responses, skin conductance is especially useful when researchers hope to have good temporal resolution and make causal claims about a type of stimulus eliciting physiological arousal in individuals. Researchers interested in areas that involve emotion or general affect (e.g., campaign messages, political communication and advertising, information processing, and general political psychology) may be especially interested in integrating skin conductance into their methodological toolbox. Skin conductance is a particularly useful tool since its implicit and unconscious nature means that it avoids some of the pitfalls that can accompany self-report measures (e.g., social desirability bias and inability to accurately remember and report emotions). Future decision-making research will benefit from pairing traditional self-report measures with physiological measures such as skin conductance.

Article

Q methodology was introduced in 1935 and has evolved to become the most elaborate philosophical, conceptual, and technical means for the systematic study of subjectivity across an increasing array of human activities, most recently including decision making. Subjectivity is an inescapable dimension of all decision making, since we all have thoughts, perspectives, and preferences concerning the wide range of matters that come to our attention and that enter into consideration when choices have to be made among options, and Q methodology provides procedures and a rationale for clarifying and examining the various viewpoints at issue. The application of Q methodology commonly begins by accumulating the various comments in circulation concerning a topic and then reducing them to a smaller set for administration to selected participants, who then typically rank the statements in the Q sample from agree to disagree in the form of a Q sort. Q sorts are then correlated and factor analyzed, giving rise to a typology of persons who have ordered the statements in similar ways. As an illustration, Q methodology was administered to a diverse set of stakeholders concerned with the problems associated with the conservation and control of large carnivores in the Northern Rockies. Participants nominated a variety of possible solutions that each person then Q sorted from those solutions judged most effective to those judged most ineffective, the factor analysis of which revealed four separate perspectives that are compared and contrasted. A second study demonstrates how Q methodology can be applied to the examination of single cases by focusing on two members of a group contemplating how they might alter the governing structures and culture of their organization. The results are used to illustrate the quantum character of subjective behavior as well as the laws of subjectivity. Discussion focuses on the broader role of decisions in the social order.

Article

Qualitative Comparative Analysis (QCA) is a method, developed by the American social scientist Charles C. Ragin since the 1980s, that has since enjoyed great and ever-increasing success in research applications and teaching programs across various political science subdisciplines. It counts as a broadly recognized addition to the methodological spectrum of political science. QCA is based on set theory. Set theory models “if … then” hypotheses so that they can be interpreted as sufficient or necessary conditions. QCA differentiates between crisp sets, in which cases can only be full members or nonmembers, and fuzzy sets, which allow for degrees of membership. With fuzzy sets it is, for example, possible to distinguish highly developed democracies from less developed democracies that, nevertheless, are more democracies than not. This means that fuzzy sets account for differences in degree without giving up differences in kind. In the end, QCA produces configurational statements that acknowledge that conditions usually appear in conjunction and that there can be more than one conjunction that implies an outcome (equifinality). There is a strong emphasis on a case-oriented perspective. QCA is usually (but not exclusively) applied in y-centered research designs. A standardized algorithm has been developed and implemented in various software packages; it takes into account the complexity of the social world surrounding us, acknowledging that not every theoretically possible combination of explanatory factors also exists empirically. Parameters of fit, such as consistency and coverage, help to evaluate how well the chosen explanatory factors account for the outcome to be explained. There is also a range of graphical tools that help to illustrate the results of a QCA. Set theory goes well beyond its application in QCA, but QCA is certainly its most prominent variant.
There is a very lively QCA community that currently deals with the following aspects: the establishment of a code of standards for QCA applications; QCA as part of mixed-methods designs, such as combinations of QCA and statistical analyses, or a sequence of QCA and (comparative) case studies (via, e.g., process tracing); the inclusion of time aspects into QCA; Coincidence Analysis (CNA, which does not require an a priori decision about which factor is the outcome and which the conditions) as an alternative to the use of the Quine-McCluskey algorithm; the stability of results; software development; and the more general question of whether QCA development efforts should target research design or technical issues. From this, a methodological agenda can be derived that asks about the relationship between QCA and quantitative techniques, case study methods, and interpretive methods, but also calls for increased efforts toward a shared understanding of the mission of QCA.
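The parameters of fit mentioned in the abstract have simple set-theoretic definitions. For a fuzzy-set sufficiency claim "X → Y," consistency is Σ min(x, y) / Σ x and coverage is Σ min(x, y) / Σ y (the standard formulas from Ragin's work). A minimal sketch on invented membership scores:

```python
# Hedged sketch (toy fuzzy-set data): the standard QCA parameters of fit
# for a sufficiency claim "X -> Y".
def consistency(x, y):
    """Degree to which membership in X is a subset of membership in Y."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

def coverage(x, y):
    """Share of the outcome Y accounted for by the condition X."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(y)

# Hypothetical fuzzy membership scores for five cases (invented numbers).
x = [0.9, 0.8, 0.6, 0.2, 0.1]  # condition, e.g., "developed democracy"
y = [1.0, 0.7, 0.8, 0.4, 0.3]  # outcome

print(round(consistency(x, y), 2))  # 0.96: X is near-sufficient for Y
print(round(coverage(x, y), 2))     # 0.78
```

With crisp sets the same formulas apply, since memberships are restricted to 0 and 1; the min() then simply picks out cases inside the intersection.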

Article

Qualitative Comparative Analysis (QCA) was launched in the late 1980s by Charles Ragin, as a research approach bridging case-oriented and variable-oriented perspectives. It conceives cases as complex combinations of attributes (i.e., configurations), is designed to process multiple cases, and enables one to identify, through minimization algorithms, the core equifinal combinations of conditions leading to an outcome of interest. It systematizes the analysis in terms of necessity and sufficiency, models social reality in terms of set-theoretic relations, and provides powerful logical tools for complexity reduction. It initially came along with one technique, crisp-set QCA (csQCA), requiring dichotomized coding of data. As it has expanded, the QCA field has been enriched by new techniques such as multi-value QCA (mvQCA) and especially fuzzy-set QCA (fsQCA), both of which enable finer-grained calibration. It has also developed further with diverse extensions and more advanced designs, including mixed- and multimethod designs in which QCA is sequenced with focused case studies or with statistical analyses. QCA’s emphasis on causal complexity makes it well suited to addressing various types of objects and research questions touching upon political decision making—and indeed QCA has been applied in multiple related social scientific fields. While QCA can be exploited in different ways, it is most frequently used for theory evaluation purposes, with a streamlined protocol including a sequence of core operations and good practices. Several reliable software options are also available to implement the core of the QCA procedure. However, given QCA’s case-based foundation, much researcher input is still required at different stages. As it has further developed, QCA has been subject to fierce criticism, especially from a mainstream statistical perspective.
This has stimulated further innovations and refinements, in particular in terms of parameters of fit and robustness tests which also correspond to the growth of QCA applications in larger-n designs. Altogether the field has diversified and broadened, and different users may exploit QCA in various ways, from smaller-n case-oriented uses to larger-n more analytic uses, and following different epistemological positions regarding causal claims. This broader field can therefore be labeled as that of both “Configurational Comparative Methods” (CCMs) and “Set-Theoretic Methods” (STMs).

Article

Katelyn E. Stauffer and Diana Z. O'Brien

Quantitative methods are among the most useful, but also historically contentious, tools in feminist research. Despite the controversy that sometimes surrounds these methods, feminist scholars in political science have often drawn on them to examine questions related to gender and politics. Researchers have used quantitative methods to explore gender in political behavior, institutions, and policy, as well as gender bias in the discipline. Just as quantitative methods have aided the advancement of feminist political science, a feminist perspective likewise has implications for data production, measurement, and analysis. Yet, the continued underrepresentation of women in the methods community needs to be addressed, and greater dialogue between feminist researchers and quantitative methodologists is required.

Article

Micah Dillard and Jon C.W. Pevehouse

Scholarship in international relations has taken a more quantitative turn in the past four decades. The field of foreign policy analysis was arguably the forerunner in the development and application of quantitative methodologies in international relations. From public opinion surveys to events data to experimental methods, many of the earliest uses of quantitative methodologies can be found in foreign policy analysis. On substantive questions ranging from the causes of war to the dynamics of public opinion, the analysis of data quantitatively has informed numerous debates in foreign policy analysis and international relations. Emerging quantitative methods will be useful in future efforts to analyze foreign policy.

Article

Diana Kapiszewski, Lauren M. MacLean, and Benjamin L. Read

Generations of political scientists have set out for destinations near and far to pursue field research. Even in a digitally networked era, the researcher’s personal presence and engagement with the field context continue to be essential. Yet exactly what does fieldwork mean, what is it good for, and how can scholars make their time in the field as reflective and productive as possible? Thinking of field research in broad terms—as leaving one’s home institution to collect information, generate data, and/or develop insights that significantly inform one’s research—reveals that scholars of varying epistemological commitments, methodological bents, and substantive foci all engage in fieldwork. Moreover, they face similar challenges, engage in comparable practices, and even follow similar principles. Thus, while every scholar’s specific project is unique, we also have much to learn from each other. In preparing for and conducting field research, political scientists connect the high-level fundamentals of their research design with the practicalities of day-to-day inquiry. While in the field, they take advantage of the multiplicity of opportunities that the field setting provides and often triangulate by cross-checking among different perspectives or data sources. To a large extent, they do not regard initial research design decisions as final; instead, they iteratively update concepts, hypotheses, the research question itself, and other elements of their projects—carefully justifying these adaptations—as their fieldwork unfolds. Incorporating what they are learning in a dynamic and ongoing fashion, while also staying on task, requires both flexibility and discipline. 
Political scientists are increasingly writing about the challenges of special types of field environments (such as authoritarian regimes or conflict settings) and about issues of positionality that arise from their own particular identities interacting with those of the people they study or with whom they work. So too, they are grappling with what it means to conduct research in a way that aligns with their ethical commitments, and what the possibilities and limits of research transparency are in relation to fieldwork. In short, political scientists have joined other social scientists in undertaking critical reflection on what they do in the field—and this self-awareness is itself a hallmark of high-quality research.