11-20 of 54 Results for: Quantitative Political Methodology

Article

The International Crisis Behavior Project  

Kyle Beardsley, Patrick James, Jonathan Wilkenfeld, and Michael Brecher

Over the course of more than four decades, the International Crisis Behavior (ICB) Project, a major and ongoing data-gathering enterprise in the social sciences, has compiled data that continue to be accessed heavily in scholarship on conflict processes. ICB holdings consist of full-length qualitative case studies, along with an expanding range of quantitative data sets. Founded in 1975, the ICB Project is among the most visible and influential within the discipline of International Relations (IR). A wide range of studies based either primarily or in part on the ICB’s concepts and data have accumulated, covering subjects that include the causes, processes, and consequences of crises. The breadth of ICB’s contribution has expanded over time beyond a purely state-centric approach to include crisis-related activities of transnational actors across a range of categories. ICB also offers depth: its nuanced findings on the long- versus short-term impact of mediation in crises, for example, can help resolve contemporary debates about conflict resolution.

Article

Measuring Attitudes Toward LGBT Individuals: Theoretical and Practical Considerations  

Melanie C. Steffens and Sabine Preuß

Over recent decades, in many so-called Western countries, the social, political, and legal standing of lesbians, gay men, and bisexual and trans* individuals (henceforth, LGBT* individuals) has considerably improved, and concurrently, attitudes toward these groups have become more positive. Because people are aware that blatantly prejudiced statements are less socially accepted, negative attitudes toward LGBT* individuals (also referred to as antigay attitudes, sexual prejudice, or homonegativity) and toward their rights need to be measured in more subtle ways than previously. At the same time, discrimination and brutal hate crimes toward LGBT* individuals still exist (e.g., the Orlando shooting, the torture of gay men in Chechnya). Attitudes are one of the best predictors of overt behavior. Thus, examining attitudes toward LGBT* individuals in an adequate way helps to predict discriminatory behavior, to identify underlying processes, and to develop interventions that reduce negative attitudes and thus, ultimately, hate crimes. The concept of attitudes is theoretically postulated to consist of three components (i.e., the cognitive, affective, and behavioral attitude components). Further, explicit and implicit attitude measures are distinguished. Explicit measures directly ask participants to state their opinions regarding the attitude object; they are thus transparent, require awareness, and are subject to social desirability bias. In contrast, implicit measures infer attitudes indirectly from observed behavior, typically from reaction times in different computer-assisted tasks; they are therefore less transparent, do not require awareness, and are less prone to socially desirable responding. With regard to explicit attitude measures, old-fashioned and modern forms of prejudice have been distinguished. When it comes to measuring LGBT* attitudes, measures should differentiate between attitudes toward different sexual minorities (as well as toward their rights). So far, research has mostly focused on lesbians and gay men; however, there is increasing interest in attitudes toward bisexual and trans* individuals. Attitude measures also need to be able to capture adequately the attitudes of more and less prejudiced segments of society. To measure attitudes toward sexual minorities adequately, the attitude measure needs to fulfill several methodological criteria (i.e., to be psychometrically sound, which means being reliable and valid). To demonstrate the quality of an attitude measure, it is essential to know the relationship between scores on the measure and important variables known to be related to LGBT* attitudes. Different measures of LGBT* attitudes exist; which one is used should depend on the (research) purpose.
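
The abstract describes reaction-time-based implicit measures only in general terms; as a minimal illustrative sketch, the Python snippet below scores a hypothetical reaction-time task with a simplified IAT-style D-score (the latency difference between pairing blocks divided by the pooled standard deviation). The trimming thresholds, block labels, and latencies are assumptions chosen for illustration, not details taken from the article or from any specific published instrument.

```python
import statistics

def d_score(compatible_rts, incompatible_rts, low=300, high=3000):
    """Simplified IAT-style D-score: mean latency in the 'incompatible'
    pairing block minus mean latency in the 'compatible' block, divided by
    the pooled standard deviation of all retained trials. Latencies are in
    milliseconds; trials outside [low, high] are dropped (an assumed,
    simplified trimming rule)."""
    comp = [rt for rt in compatible_rts if low <= rt <= high]
    incomp = [rt for rt in incompatible_rts if low <= rt <= high]
    pooled_sd = statistics.stdev(comp + incomp)
    return (statistics.mean(incomp) - statistics.mean(comp)) / pooled_sd

# Hypothetical latencies: slower responses when a minority category shares a
# response key with negative words would yield a positive score, which is
# conventionally read as a more negative implicit attitude.
compatible = [612, 580, 655, 701, 590, 640]
incompatible = [720, 810, 765, 698, 830, 744]
print(round(d_score(compatible, incompatible), 2))
```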

Article

Attitudes Toward Homosexuality and LGBT People: Causal Attributions for Sexual Orientation  

Peter Hegarty

Social scientists have debated whether belief in a biological basis for sexual orientation engenders more positive attitudes toward gay men and lesbians. Belief in the biological theory has often been observed to be correlated with pro-lesbian/gay attitudes, and this gives some “weak” support for the hypothesis. There is far less “strong” evidence that biological beliefs have caused a noteworthy shift in heterosexist attitudes, or that they hold any essential promise of doing so. One reason for this divergence between the weak and strong hypotheses is that beliefs about causality are influenced by attitudes and group identities. Consequently, beliefs about a biological basis of sexual orientation have identity-expressive functions over and above their strictly logical causal implications for nature/nurture issues. Four other factors explain why the biological argument of the 1990s was intuitively appealing as a pro-gay tool, although there is no strong evidence that it had a substantial impact in making public opinion in the USA more pro-gay. These factors are that the biological argument (a) implied that sexuality is a discrete social category grounded in fundamental differences between people, (b) implied that sexual orientation categories are historically and culturally invariant, (c) implied that gender roles and stereotypes have a biological basis, and (d) framed homosexual development, not heterosexual development, as needing explanation. Understanding this literature is important and relevant for conceptualizing the relationship between biological attributions and social attitudes in domains beyond sexual orientation, such as in the more recent research on reducing transphobia and essentialist beliefs about gender.

Article

Qualitative Comparative Analysis: Discovering Core Combinations of Conditions in Political Decision Making  

Benoît Rihoux

Qualitative Comparative Analysis (QCA) was launched in the late 1980s by Charles Ragin as a research approach bridging case-oriented and variable-oriented perspectives. It conceives of cases as complex combinations of attributes (i.e., configurations), is designed to process multiple cases, and enables one to identify, through minimization algorithms, the core equifinal combinations of conditions leading to an outcome of interest. It systematizes the analysis in terms of necessity and sufficiency, models social reality in terms of set-theoretic relations, and provides powerful logical tools for complexity reduction. It initially came with a single technique, crisp-set QCA (csQCA), requiring dichotomized coding of data. As it has expanded, the QCA field has been enriched by new techniques such as multi-value QCA (mvQCA) and especially fuzzy-set QCA (fsQCA), both of which enable finer-grained calibration. It has also developed further with diverse extensions and more advanced designs, including mixed- and multimethod designs in which QCA is sequenced with focused case studies or with statistical analyses. QCA’s emphasis on causal complexity makes it well suited to address various types of objects and research questions touching upon political decision making, and indeed QCA has been applied in multiple related social scientific fields. While QCA can be exploited in different ways, it is most frequently used for theory evaluation purposes, with a streamlined protocol including a sequence of core operations and good practices. Several reliable software options are also available to implement the core of the QCA procedure. However, given QCA’s case-based foundation, much researcher input is still required at different stages. As it has developed further, QCA has been subject to fierce criticism, especially from a mainstream statistical perspective. This has stimulated further innovations and refinements, in particular in terms of parameters of fit and robustness tests, which also correspond to the growth of QCA applications in larger-n designs. Altogether the field has diversified and broadened, and different users may exploit QCA in various ways, from smaller-n case-oriented uses to larger-n more analytic uses, and following different epistemological positions regarding causal claims. This broader field can therefore be labeled as that of both “Configurational Comparative Methods” (CCMs) and “Set-Theoretic Methods” (STMs).
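
A minimal sketch of the set-theoretic measures mentioned above, under the standard fuzzy-set definitions used in the QCA literature: the consistency of a sufficiency claim is the sum of the case-wise minima of condition and outcome memberships divided by the sum of condition memberships, and coverage divides the same numerator by the sum of outcome memberships. The membership scores below are hypothetical, not data from any application discussed in the article.

```python
def sufficiency_consistency(x, y):
    """Consistency of 'X is sufficient for Y' for fuzzy-set memberships:
    sum of min(x_i, y_i) over cases, divided by the sum of x_i."""
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(x)

def sufficiency_coverage(x, y):
    """Coverage: how much of the outcome Y is accounted for by X, i.e.,
    sum of min(x_i, y_i) divided by the sum of y_i."""
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(y)

# Hypothetical calibrated memberships for six cases; X could itself be a
# combination of conditions (the case-wise minimum across them), Y the outcome.
x = [0.9, 0.7, 0.8, 0.2, 0.6, 0.1]
y = [1.0, 0.8, 0.6, 0.3, 0.9, 0.4]
print(round(sufficiency_consistency(x, y), 2),
      round(sufficiency_coverage(x, y), 2))
```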

Article

Expected Utility and Political Decision Making  

Jona Linde

Expected utility theory is widely used to formally model decisions in situations where outcomes are uncertain. As uncertainty is arguably commonplace in political decisions, being able to take that uncertainty into account is of great importance when building useful models and interpreting empirical results. Expected utility theory has provided possible explanations for a host of phenomena, from the failure of the median voter theorem to the making of vague campaign promises and the delegation of policymaking. A good expected utility model may provide alternative explanations for empirical phenomena and can structure reasoning about the effect of political actors’ goals, circumstances, and beliefs on their behavior. For example, expected utility theory shows that whether the median voter theorem can be expected to hold depends on candidates’ goals (office, policy, or vote seeking) and the nature of their uncertainty about voters. In this way expected utility theory can help empirical researchers derive hypotheses and guide them toward the data required to exclude alternative explanations. Expected utility theory has been especially successful in spatial voting models, but the range of topics to which it can be applied is far broader. Applications to pivotal voting or politicians’ redistribution decisions show this wider value. However, there is also a range of promising topics that have received ample attention from empirical researchers but have so far been largely ignored by theorists applying expected utility theory. Although expected utility theory has its limitations, more modern theories that build on the expected utility framework, such as prospect theory, can help overcome these limitations. Notably, these extensions rely on the same modeling techniques as expected utility theory and can similarly elucidate the mechanisms that may explain empirical phenomena. This structured way of thinking about behavior under uncertainty is the main benefit provided by both expected utility theory and its extensions.
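
Since pivotal voting is mentioned as one application, here is a minimal sketch under textbook assumptions rather than the article’s own model: the expected utility of turning out is the probability of casting a pivotal vote times the benefit of seeing one’s preferred outcome, minus the cost of voting, with abstention normalized to zero. The numbers are purely illustrative.

```python
def expected_utility_of_voting(p_pivotal, benefit, cost):
    """Expected utility of turning out under the standard pivotal-voter
    setup: p_pivotal * benefit - cost, with abstention normalized to 0."""
    return p_pivotal * benefit - cost

# Illustrative numbers: even a large benefit is swamped by a tiny pivot
# probability, so any noticeable cost makes abstention the EU-maximizing
# choice, which is why turnout is a puzzle for the basic model.
print(expected_utility_of_voting(p_pivotal=1e-6, benefit=10_000, cost=1.0))
```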

Article

Agent-Based Modeling in Political Decision Making  

Lin Qiu and Riyang Phang

Political systems involve citizens, voters, politicians, parties, legislatures, and governments. These political actors interact with each other and dynamically alter their strategies according to the results of their interactions. A major challenge in political science is to understand the dynamic interactions between political actors and extrapolate from the process of individual political decision making to collective outcomes. Agent-based modeling (ABM) offers a means to comprehend and theorize the nonlinear, recursive, and interactive political process. It views political systems as complex, self-organizing, self-reproducing, and adaptive systems consisting of large numbers of heterogeneous agents that follow a set of rules governing their interactions. It allows the specification of agent properties and rules governing agent interactions in a simulation to observe how micro-level processes generate macro-level phenomena. It forces researchers to make the assumptions surrounding a theory explicit, facilitates the discovery of extensions and boundary conditions of the modeled theory through what-if computational experiments, and helps researchers understand dynamic processes in the real world. ABM models have been built to address critical questions in political decision making, including why voter turnout remains high, how party coalitions form, how voters’ knowledge and emotion affect election outcomes, and how political attitudes change through a campaign. These models illustrate the use of ABM in explicating the assumptions and rules of theoretical frameworks, simulating repeated execution of these rules, and revealing emergent patterns and their boundary conditions. While ABM has limitations in external validity and robustness, it provides political scientists with a bottom-up approach to studying a complex system by clearly defining the behavior of various actors and generating theoretical insights on political phenomena.
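
As an illustration of the micro-to-macro logic described above, the sketch below implements a generic bounded-confidence opinion model, not any of the specific models the article reviews: agents repeatedly meet in random pairs and move their attitudes toward each other only when those attitudes are already close, and these simple individual rules generate macro-level opinion clusters. All parameter values are assumptions chosen for illustration.

```python
import random

def simulate(n_agents=100, steps=5000, bound=0.2, rate=0.5, seed=1):
    """Generic bounded-confidence opinion dynamics: at each step two randomly
    chosen agents compare attitudes (on a 0-1 scale) and move toward each
    other at `rate`, but only if they differ by less than `bound`."""
    random.seed(seed)
    attitudes = [random.random() for _ in range(n_agents)]
    for _ in range(steps):
        i, j = random.sample(range(n_agents), 2)
        gap = attitudes[j] - attitudes[i]
        if abs(gap) < bound:
            attitudes[i] += rate * gap
            attitudes[j] -= rate * gap
    return attitudes

# Bin the final attitudes into tenths of the scale; with these settings the
# initially uniform population typically collapses into a few clusters.
final = simulate()
bins = [0] * 10
for a in final:
    bins[min(int(a * 10), 9)] += 1
print(bins)
```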

Article

Decision Support Systems  

Sean B. Eom

A decision support system (DSS) is an interactive human–computer decision-making system that supports decision makers rather than replacing them, utilizing data and models. It addresses unstructured and semistructured problems with a focus on effectiveness rather than efficiency in decision processes. In the early 1970s, scholars in this field began to recognize the important roles that decision support systems play in supporting managers in their semistructured or unstructured decision-making activities. Over the past five decades, DSS has made progress toward becoming a solid academic field. Nevertheless, since the mid-1990s, the inability of DSS to fully satisfy the wide range of information needs of practitioners provided an impetus for a new breed of DSS, business intelligence systems (BIS). The academic discipline of DSS has undergone numerous changes in technological environments, including the adoption of data warehouses. Until the late 1990s, most textbooks referred to “decision support systems”; nowadays, many of them have replaced “decision support systems” with “business intelligence.” While DSS/BIS began in academia and were quickly adopted in business, in recent years these tools have moved into government and the academic field of public administration. In addition, modern political campaigns, especially at the national level, rely heavily on data analytics, including big data analytics. The first section of this article reviews the development of DSS as an academic discipline. The second section discusses BIS and their components (the data warehousing environment and the analytical environment). The final section introduces two emerging topics in DSS/BIS: big data analytics and cloud computing analytics. Before the era of big data, most data collected by business organizations could easily be managed by traditional relational database management systems with serial processing. Social networks, e-business networks, the Internet of Things (IoT), and many other wireless sensor networks now generate huge volumes of data every day. The challenge of big data has demanded a new business intelligence infrastructure with new tools (Hadoop clusters, the data warehousing environment, and the business analytical environment).

Article

Flood Damage Assessments: Theory and Evidence From the United States  

Laura Bakkensen and Logan Blair

Flooding remains one of the globe’s most devastating natural hazards and a leading driver of natural disaster losses across many countries, including the United States. As such, a rich and growing literature aims to better understand, model, and assess flood losses. Several major theoretical and empirical themes emerge from this literature. Fundamental to the flood damage assessment literature are definitions of flood damage, including a typology of flood damage such as direct and indirect losses. In addition, the literature theoretically and empirically assesses the major determinants of flood damage, including hydrological factors, the measurement of physical features in harm’s way, and protective activities such as flood risk mitigation and adaptation, all of which co-determine overall flood losses. From there, common methods to quantify flood damage take these factors as inputs, translating hydrological risk, exposure, and vulnerability into quantifiable flood loss estimates through a flood damage function; they include both ex ante expected loss assessments and ex post event-specific analyses. High-quality data are key across all model steps and can be found across a variety of sources. Early 21st-century advancements in spatial data and remote sensing push the literature forward. While the topics and themes apply more generally to flood damage across the globe, examples from the United States illustrate key topics. Understanding the main themes and insights in this important research area is critical for researchers, policy-makers, and practitioners to better understand, utilize, and extend the existing flood damage assessment literature in order to lessen or even prevent future tragedy.
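
As a concrete illustration of the flood damage function logic described above, the sketch below computes a simplified ex ante expected annual damage for a single structure: each hypothetical flood scenario is assigned an annual probability and a damage fraction from an assumed depth-damage relationship, and the probability-weighted damages are summed. A fuller assessment would integrate damage over the full exceedance-probability curve; all numbers here are illustrative assumptions, not estimates from the literature.

```python
def expected_annual_damage(scenarios, structure_value):
    """Simplified ex ante expected annual damage: sum over discrete flood
    scenarios of (annual probability of the scenario) x (damage fraction
    from an assumed depth-damage curve) x (value of the exposed structure)."""
    return sum(prob * frac * structure_value for prob, frac in scenarios)

# Hypothetical (annual probability, damage fraction) pairs: a frequent shallow
# flood, a moderate event, and a rare deep flood hitting a $250,000 structure.
scenarios = [(0.10, 0.05), (0.02, 0.20), (0.01, 0.40)]
print(expected_annual_damage(scenarios, structure_value=250_000))
```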

Article

Media-Effects Experiments in Political Decision Making  

Bryan Gervais

Recognizing the causal leverage it affords, contemporary scholars of media effects commonly employ experimental methodology. For most of the 20th century, however, political scientists and communication scholars relied on observational data, particularly after the development of scientific survey methodology around the midpoint of the century. As the millennium approached, Iyengar and Kinder’s seminal News That Matters experiments ushered in an era of renewed interest in experimental methods. Political communication scholars have been particularly reliant on experiments because of their advantages over observational studies in identifying media effects. Although what is meant by “media effects” has not always been clear or undisputed, scholars generally agree that the news media influences mass opinion and behavior through its agenda-setting, framing, and priming powers. Scholars have adopted techniques and practices for gauging the particular effects these powers have, including measuring the mediating role of affect (or emotion). Although experiments provide researchers with causal leverage, political communication scholars must consider challenges endemic to media-effects studies, including problems related to selective exposure. Various efforts to determine whether selective exposure occurs and whether it has consequences have come to different conclusions, and these conflicting conclusions can be traced back to the different methodological choices scholars have made. Achieving experimental realism has been a particularly difficult challenge for selective exposure experiments. Nonetheless, there are steps media-effects scholars can take to bolster causal arguments in an era of high media choice. While the advent of social media has brought new challenges for media-effects experimentalists, it also offers new opportunities in the form of objective measures of media exposure and effects.

Article

How Motivation Influences Political Decision Making  

Brian J. Gaines and Benjamin R. Kantack

Although motivation undergirds virtually all aspects of political decision making, its influence is often unacknowledged, or taken for granted, in behavioral political science. Motivations are inevitably important in generic models of decision theory. In real-world politics, two crucially important venues for motivational effects are the decision of whether or not to vote, and the way (or whether) partisanship and other policy views color information collection, so that people choose and then justify rather than study options before choosing. For researchers, the motivations of survey respondents and experimental subjects are deeply important but are only just beginning to garner the attention they deserve.