1-19 of 19 Results for:

  • Quantitative Political Methodology
  • Political Psychology

Article

Political systems involve citizens, voters, politicians, parties, legislatures, and governments. These political actors interact with each other and dynamically alter their strategies according to the results of their interactions. A major challenge in political science is to understand the dynamic interactions between political actors and extrapolate from the process of individual political decision making to collective outcomes. Agent-based modeling (ABM) offers a means to comprehend and theorize the nonlinear, recursive, and interactive political process. It views political systems as complex, self-organizing, self-reproducing, and adaptive systems consisting of large numbers of heterogeneous agents that follow a set of rules governing their interactions. It allows the specification of agent properties and of the rules governing agent interactions in a simulation, so that researchers can observe how micro-level processes generate macro-level phenomena. It forces researchers to make the assumptions surrounding a theory explicit, facilitates the discovery of extensions and boundary conditions of the modeled theory through what-if computational experiments, and helps researchers understand dynamic processes in the real world. Agent-based models have been built to address critical questions in political decision making, including why voter turnout remains high, how party coalitions form, how voters’ knowledge and emotion affect election outcomes, and how political attitudes change over the course of a campaign. These models illustrate the use of ABM in explicating the assumptions and rules of theoretical frameworks, simulating repeated execution of these rules, and revealing emergent patterns and their boundary conditions. While ABM has limitations in external validity and robustness, it provides political scientists with a bottom-up approach to studying a complex system: clearly defining the behavior of various actors and generating theoretical insights on political phenomena.
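The micro-to-macro logic described above can be made concrete with a minimal sketch. The reinforcement rule below (reinforce whichever action, voting or abstaining, preceded a win; inhibit whichever preceded a loss) and all parameters are invented for illustration, not a faithful implementation of any particular published turnout model.

```python
import random

# A minimal adaptive-turnout ABM sketch; rules and parameters are illustrative.
random.seed(1)

N, ROUNDS, LEARN = 500, 100, 0.05
# Each agent has a party and a propensity to vote.
agents = [{"party": random.choice("AB"), "p_vote": 0.5} for _ in range(N)]

for _ in range(ROUNDS):
    votes = {"A": 0, "B": 0}
    actions = []
    for agent in agents:
        voted = random.random() < agent["p_vote"]
        actions.append(voted)
        if voted:
            votes[agent["party"]] += 1
    winner = "A" if votes["A"] >= votes["B"] else "B"

    for agent, voted in zip(agents, actions):
        satisfied = agent["party"] == winner
        # Reinforce an action that preceded a win, inhibit one that
        # preceded a loss; satisfied == voted is True exactly when the
        # agent should become MORE likely to vote next time.
        if satisfied == voted:
            agent["p_vote"] += LEARN * (1.0 - agent["p_vote"])
        else:
            agent["p_vote"] -= LEARN * agent["p_vote"]

# Macro-level turnout propensity emerges from the micro-level rules.
mean_propensity = sum(agent["p_vote"] for agent in agents) / N
```

Running such a simulation repeatedly, and varying the learning rate or electorate size, is exactly the kind of what-if computational experiment through which ABM reveals boundary conditions of a theory.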

Article

Social scientists have debated whether belief in a biological basis for sexual orientation engenders more positive attitudes toward gay men and lesbians. Belief in the biological theory has often been observed to be correlated with pro-lesbian/gay attitudes, and this gives some “weak” support for the hypothesis. There is far less “strong” evidence that biological beliefs have caused a noteworthy shift in heterosexist attitudes, or that they hold any essential promise of doing so. One reason for this divergence between the weak and strong hypotheses is that beliefs about causality are influenced by attitudes and group identities. Consequently, beliefs about a biological basis of sexual orientation have identity-expressive functions over and above their strictly logical causal implications for nature/nurture issues. Four other factors explain why the biological argument of the 1990s was intuitively appealing as a pro-gay tool, although there is no strong evidence that it had a substantive impact in making public opinion in the USA more pro-gay. These factors are that the biological argument (a) implied that sexuality is a discrete social category grounded in fundamental differences between people, (b) implied that sexual orientation categories are historically and culturally invariant, (c) implied that gender roles and stereotypes have a biological basis, and (d) framed homosexual development, not heterosexual development, as needing explanation. Understanding this literature is important and relevant for conceptualizing the relationship between biological attributions and social attitudes in domains beyond sexual orientation, such as in the more recent research on reducing transphobia and essentialist beliefs about gender.

Article

A growing body of research uses computational models to study political decision making and behavior such as voter turnout, vote choice, party competition, social networks, and cooperation in social dilemmas. Advances in the computational modeling of political decision making are closely related to the idea of bounded rationality. In effect, models of full rationality can usually be analyzed by hand, but models of bounded rationality are complex and require computer-assisted analysis. Most computational models used in the literature are agent based, that is, they specify how decisions are made by autonomous, interacting computational objects called “agents.” However, an important distinction can be made between two classes of models based on the approaches they take: behavioral and information processing. Behavioral models specify relatively simple behavioral rules to relax the standard rationality assumption and investigate the system-level consequences of these rules in conjunction with deductive, game-theoretic analysis. In contrast, information-processing models specify the underlying information processes of decision making—the way political actors receive, store, retrieve, and use information to make judgments and choices—within the structural constraints on human cognition, and examine whether and how these processes produce the observed behavior in question at the individual or aggregate level. Compared to behavioral models, information-processing computational models are relatively rare, new to political scientists, and underexplored. However, by focusing on the underlying mental processes of decision making that must occur within the structural constraints on human cognition, they have the potential to provide a more general, psychologically realistic account of political decision making and behavior.

Article

Counterfactuals seek to alter some feature or event of the past and, by means of a chain of causal logic, show how the present might, or would, be different. Counterfactual inquiry—or control of counterfactual situations—is essential to any causal claim. More importantly, counterfactual thought experiments are essential to the construction of analytical frameworks. Policymakers routinely use them to identify problems, work their way through problems, and select responses. Good foreign-policy analysis must accordingly engage and employ counterfactuals. There are two generic types of counterfactuals: minimal-rewrite counterfactuals and miracle counterfactuals. Both are relevant when formulating propositions and probing contingency and causation, and a set of protocols governs the use of each kind toward these ends; these uses and protocols can be illustrated with historical examples. Policymakers invoke counterfactuals frequently, especially with regard to foreign policy, both to choose policies and to defend them to key constituencies. They use counterfactuals in a haphazard and unscientific manner, and it is important to learn more about how they think about and employ counterfactuals in order to understand foreign policy.

Article

The news media have been disrupted. Broadcasting has given way to narrowcasting, editorial control to control by “friends” and personalization algorithms, and a few reputable producers to millions with shallower reputations. Today, not only is there a much broader variety of news, but there is also more of it. The news is also always on. And it is available almost everywhere. Search costs have come crashing down, so much so that much of the world’s information is at our fingertips. Google anything and the chances are that there will be multiple pages of relevant results. Such a dramatic expansion of choice and access is generally considered a Pareto improvement. But the worry is that we have fashioned defeat from the bounty by choosing badly. The expansion in choice is blamed both for increasing the “knowledge gap,” the gap between how much the politically interested and politically disinterested know about politics, and for increasing partisan polarization. We reconsider the evidence for these claims. The claim about the media’s role in rising knowledge gaps needs no explaining, because knowledge gaps are not increasing. For polarization, the story is more nuanced. What evidence exists suggests that the effect is modest, but measuring the long-term effects of a rapidly changing media landscape is hard, and this difficulty may explain the modest results. As we also find, even describing trends in basic explanatory variables is hard. Current measures are beset with five broad problems. The first is conceptual error: for instance, people frequently equate a preference for information from partisan sources with a preference for congenial information. Second, survey measures of news consumption are heavily biased. Third, behavioral survey experimental measures are unreliable and inapt for learning how much information of a particular kind people consume in their real lives. Fourth, measures based on passive observation of behavior capture only a small (and likely biased) share of the total information people consume. Fifth, content is often coded crudely: broad judgments are made about coarse units, eliding important variation. These measurement issues impede our ability to determine the extent to which people choose badly and the consequences of doing so. Improving measures will do much to advance our ability to answer important questions.

Article

Expected utility theory is widely used to formally model decisions in situations where outcomes are uncertain. As uncertainty is arguably commonplace in political decisions, being able to take that uncertainty into account is of great importance when building useful models and interpreting empirical results. Expected utility theory has provided possible explanations for a host of phenomena, from the failure of the median voter theorem to the making of vague campaign promises and the delegation of policymaking. A good expected utility model may provide alternative explanations for empirical phenomena and can structure reasoning about the effect of political actors’ goals, circumstances, and beliefs on their behavior. For example, expected utility theory shows that whether the median voter theorem can be expected to hold or not depends on candidates’ goals (office, policy, or vote seeking), and the nature of their uncertainty about voters. In this way expected utility theory can help empirical researchers derive hypotheses and guide them towards the data required to exclude alternative explanations. Expected utility has been especially successful in spatial voting models, but the range of topics to which it can be applied is far broader. Applications to pivotal voting or politicians’ redistribution decisions show this wider value. However, there is also a range of promising topics that have received ample attention from empirical researchers, but that have so far been largely ignored by theorists applying expected utility theory. Although expected utility theory has its limitations, more modern theories that build on the expected utility framework, such as prospect theory, can help overcome these limitations. Notably these extensions rely on the same modeling techniques as expected utility theory and can similarly elucidate the mechanisms that may explain empirical phenomena. 
This structured way of thinking about behavior under uncertainty is the main benefit provided by both expected utility theory and its extensions.
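The role of uncertainty described above can be made concrete with a toy calculation; the candidates, positions, and probabilities below are invented for illustration. A voter with quadratic (concave, hence risk-averse) utility evaluates two candidates whose policy positions are uncertain:

```python
# Toy expected-utility calculation with invented numbers.
ideal = 0.0  # the voter's ideal point on a left-right scale

# Each candidate is a lottery over positions: a list of (position, probability).
candidates = {
    "A": [(-0.5, 0.5), (0.5, 0.5)],  # risky: mean position equals the ideal point
    "B": [(0.2, 1.0)],               # certain, but slightly off-ideal
}

def expected_utility(lottery, ideal):
    # Quadratic loss u(x) = -(x - ideal)^2, averaged over the lottery.
    return sum(p * -((x - ideal) ** 2) for x, p in lottery)

eus = {name: expected_utility(lottery, ideal) for name, lottery in candidates.items()}
best = max(eus, key=eus.get)
```

Here the certain candidate B (expected utility -0.04) beats the risky candidate A (-0.25) even though A's mean position sits exactly at the voter's ideal point: with concave utility, uncertainty itself is costly. This is the kind of mechanism-level reasoning, how goals, circumstances, and beliefs jointly produce behavior, that expected utility models make explicit.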

Article

Konstantinos V. Katsikopoulos

Polymath, and also political scientist, Herbert Simon dared to point out that the amounts of time, information, computation, and other resources required for maximizing utility far exceed what is possible when real people have to make real decisions in the real world. In psychology, there are two main approaches to studying actual human judgment and decision making—the heuristics-and-biases and the fast-and-frugal-heuristics research programs. A distinctive characteristic of the fast-and-frugal-heuristics program is that it specifies formal models of heuristics and attempts to determine when people use them and what performance they achieve. These models rely on a few pieces of information that are processed in computationally simple ways. The information and computation are within human reach, which means that people rely on information they have relatively easy access to and employ simple operations such as summing or comparing numbers. Research in the laboratory and in the wild has found that most people use fast and frugal heuristics most of the time if a decision must be made quickly, if information is expensive financially or cognitively to gather, or if a single attribute (or a few attributes) of the problem strongly points toward an option. The ways in which people switch between heuristics are studied in the framework of the adaptive toolbox. Work employing computer simulations and mathematical analyses has uncovered conditions under which fast and frugal heuristics achieve higher performance than benchmarks from statistics and machine learning, and vice versa. These conditions constitute the theory of ecological rationality. This theory suggests that fast and frugal heuristics perform better than complex optimization models if the available information is of low quality or scarce, or if there exist dominant options or attributes. The bias-variance decomposition of statistical prediction error, which is explained in layperson’s terms, underpins these claims.
Research on fast and frugal heuristics suggests a governance approach not based on nudging, but on boosting citizen competence.
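One widely studied formal model in this program is the take-the-best heuristic, which checks cues in order of validity and lets the first discriminating cue decide, ignoring everything else. The sketch below uses invented cue names, validities, and candidates purely for illustration:

```python
# A minimal take-the-best sketch; cue names and validities are invented.
cues = [  # (cue_name, validity), sorted from most to least valid
    ("incumbent", 0.80),
    ("endorsed_by_major_paper", 0.70),
    ("larger_campaign_budget", 0.60),
]

def take_the_best(option_a, option_b, cues):
    for name, _validity in cues:
        a, b = option_a[name], option_b[name]
        if a != b:                 # the first discriminating cue decides
            return "A" if a > b else "B"
    return None                    # no cue discriminates: guess

cand_a = {"incumbent": 0, "endorsed_by_major_paper": 1, "larger_campaign_budget": 0}
cand_b = {"incumbent": 0, "endorsed_by_major_paper": 0, "larger_campaign_budget": 1}

# Neither candidate is the incumbent, so the endorsement cue decides
# for A; the budget cue is never even consulted.
choice = take_the_best(cand_a, cand_b, cues)
```

The frugality is visible in the code: only a few comparisons are made, and search stops at the first discriminating cue, which is what makes the heuristic's performance depend on the structure of the environment (ecological rationality) rather than on exhaustive computation.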

Article

In 2005, political scientists claimed that parent-child similarities are driven not only by parenting, socialization, and shared social factors within the family, but also by genetic similarity. This claim upended a century of orthodoxy in political science. Many social scientists are uncomfortable with this concept, and this discomfort often stems from a multitude of misunderstandings. Claims about the genetics and heritability of political phenomena predate 2005, and the wave of studies over the decade that followed swept through political science and then died down as quickly as it came. The behavior genetic research agenda faces several challenges within political science, including (a) resistance to these ideas within all of the social sciences, (b) difficulties faced by scholars in the production of meaningful theoretical and empirical contributions, and (c) developments in the field of genetics and their (negative) impact on the related scholarship within the study of politics.

Article

Brian J. Gaines and Benjamin R. Kantack

Although motivation undergirds virtually all aspects of political decision making, its influence is often unacknowledged, or taken for granted, in behavioral political science. Motivations are inevitably important in generic models of decision theory. In real-world politics, two crucially important venues for motivational effects are the decision of whether or not to vote, and the question of how (or whether) partisanship and other policy views color information collection, such that people choose first and then justify, rather than studying options before choosing. For researchers, the motivations of survey respondents and experimental subjects are deeply important, but they are only just beginning to garner the attention they deserve.

Article

Over the last decades, in many so-called Western countries, the social, political, and legal standing of lesbians, gay men, and bisexual and trans* individuals (henceforth, LGBT* individuals) has considerably improved, and concurrently, attitudes toward these groups have become more positive. Consequently, people are aware that blatantly prejudiced statements are less socially accepted, and thus, negative attitudes toward LGBT* individuals (also referred to as antigay attitudes, sexual prejudice, or homonegativity) and their rights need to be measured in more subtle ways than previously. At the same time, discrimination and brutal hate crimes toward LGBT* individuals still exist (e.g., Orlando shooting, torture of gay men in Chechnya). Attitudes are one of the best predictors of overt behavior. Thus, examining attitudes toward LGBT* individuals in an adequate way helps to predict discriminatory behavior, to identify underlying processes, and to develop interventions to reduce negative attitudes and thus, ultimately, hate crimes. The concept of attitudes is theoretically postulated to consist of three components (i.e., the cognitive, affective, and behavioral attitude components). Further, explicit and implicit attitude measures are distinguished. Explicit measures directly ask participants to state their opinions regarding the attitude object and are thus transparent, they require awareness, and they are subject to social desirability. In contrast, implicit measures infer attitudes indirectly from observed behavior, typically from reaction times in different computer-assisted tasks; they are therefore less transparent, they do not require awareness, and they are less prone to socially desirable responding. With regard to explicit attitude measures, old-fashioned and modern forms of prejudice have been distinguished. 
When it comes to measuring LGBT* attitudes, measures should differentiate between attitudes toward different sexual minorities (as well as their rights). So far, research has mostly focused on lesbians and gay men; however, there is increasing interest in attitudes toward bisexual and trans* individuals. Also, attitude measures need to be able to adequately capture attitudes of more or less prejudiced segments of society. To measure attitudes toward sexual minorities adequately, the attitude measure needs to fulfill several methodological criteria (i.e., to be psychometrically sound, which means being reliable and valid). In order to demonstrate the quality of an attitude measure, it is essential to know the relationship between scores on the measure and important variables that are known to be related to LGBT* attitudes. Different measures for LGBT* attitudes exist; which one is used should depend on the (research) purpose.

Article

Recognizing the causal leverage it affords, contemporary scholars of media effects commonly rely on experimental methodology. For most of the 20th century, however, political scientists and communication scholars relied on observational data, particularly after the development of scientific survey methodology around the midpoint of the century. As the millennium approached, Iyengar and Kinder’s seminal News That Matters experiments ushered in an era of renewed interest in experimental methods. Political communication scholars have been particularly reliant on experiments, due to their advantages over observational studies in identifying media effects. Although what is meant by “media effects” has not always been clear or undisputed, scholars generally agree that the news media influence mass opinion and behavior through their agenda-setting, framing, and priming powers. Scholars have adopted techniques and practices for gauging the particular effects these powers have, including measuring the mediating role of affect (or emotion). Although experiments provide researchers with causal leverage, political communication scholars must consider challenges endemic to media-effects studies, including problems related to selective exposure. Various efforts to determine whether selective exposure occurs and whether it has consequences have come to different conclusions, and these conflicting conclusions can be traced back to the different methodological choices scholars have made. Achieving experimental realism has been a particularly difficult challenge for selective exposure experiments. Nonetheless, there are steps media-effects scholars can take to bolster causal arguments in an era of high media choice. While the advent of social media has brought new challenges for media-effects experimentalists, there are also new opportunities in the form of objective measures of media exposure and effects.

Article

Sabine C. Carey, Neil J. Mitchell, and Adam Scharpf

Pro-government militias are a prominent feature of civil wars. Governments in Ukraine, Russia, Syria, and Sudan recruit irregular forces in their armed struggle against insurgents. The United States collaborated with Awakening groups to counter the insurgency in Iraq, just as colonizers used local armed groups to fight rebellions in their colonies. A now quite wide and established cross-disciplinary literature on pro-government nonstate armed groups has generated a variety of research questions for scholars interested in conflict, political violence, and political stability: Does the presence of such groups indicate a new type of conflict? What are the dynamics that drive governments to align with informal armed groups and that make armed groups choose to side with the government? Given the risks entailed in surrendering a monopoly of violence, is there a turning point in a conflict when governments enlist these groups? How successful are these groups? Why do governments use these nonstate armed actors to shape foreign conflicts, whether as insurgents or counterinsurgents abroad? Are these nonstate armed actors always useful to governments or perhaps even an indicator of state failure? How do pro-government militias affect the safety and security of civilians? The enduring pattern of collaboration between governments and pro-government armed groups challenges conventional theory and the idea of an evolutionary process of the modern state consolidating the means of violence. Research on these groups and their consequences began with case studies, and these continue to yield valuable insights. More recently, survey work and cross-national quantitative research have contributed to our knowledge. This mix of methods is opening new lines of inquiry for research on insurgencies and the delivery of the core public good of effective security.

Article

Mathew V. Hibbing, Melissa N. Baker, and Kathryn A. Herzog

Since the early 2010s, political science has seen a rise in the use of physiological measures in order to inform theories about decision-making in politics. A commonly used physiological measure is skin conductance (electrodermal activity). Skin conductance measures the changes in levels of sweat in the eccrine glands, usually on the fingertips, in order to help inform how the body responds to stimuli. These changes result from the sympathetic nervous system (popularly known as the fight or flight system) responding to external stimuli. Due to the nature of physiological responses, skin conductance is especially useful when researchers hope to have good temporal resolution and make causal claims about a type of stimulus eliciting physiological arousal in individuals. Researchers interested in areas that involve emotion or general affect (e.g., campaign messages, political communication and advertising, information processing, and general political psychology) may be especially interested in integrating skin conductance into their methodological toolbox. Skin conductance is a particularly useful tool since its implicit and unconscious nature means that it avoids some of the pitfalls that can accompany self-report measures (e.g., social desirability bias and inability to accurately remember and report emotions). Future decision-making research will benefit from pairing traditional self-report measures with physiological measures such as skin conductance.

Article

Q methodology was introduced in 1935 and has evolved to become the most elaborate philosophical, conceptual, and technical means for the systematic study of subjectivity across an increasing array of human activities, most recently including decision making. Subjectivity is an inescapable dimension of all decision making since we all have thoughts, perspectives, and preferences concerning the wide range of matters that come to our attention and that enter into consideration when choices have to be made among options, and Q methodology provides procedures and a rationale for clarifying and examining the various viewpoints at issue. The application of Q methodology commonly begins by accumulating the various comments in circulation concerning a topic and then reducing them to a smaller set for administration to select participants, who then typically rank the statements in the Q sample from agree to disagree in the form of a Q sort. Q sorts are then correlated and factor analyzed, giving rise to a typology of persons who have ordered the statements in similar ways. As an illustration, Q methodology was administered to a diverse set of stakeholders concerned with the problems associated with the conservation and control of large carnivores in the Northern Rockies. Participants nominated a variety of possible solutions that each person then Q sorted from those solutions judged most effective to those judged most ineffective, the factor analysis of which revealed four separate perspectives that are compared and contrasted. A second study demonstrates how Q methodology can be applied to the examination of single cases by focusing on two members of a group contemplating how they might alter the governing structures and culture of their organization. The results are used to illustrate the quantum character of subjective behavior as well as the laws of subjectivity. Discussion focuses on the broader role of decisions in the social order.
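The correlate-then-factor-analyze pipeline described above can be sketched in a few lines. The participants and Q sorts below are invented, and a simple correlation threshold stands in for a real factor analysis, so this is an illustration of the logic only:

```python
from itertools import combinations

# Toy Q-study data (invented): each participant rank-orders nine
# statements from most disagree (-4) to most agree (+4).
sorts = {
    "p1": [-4, -3, -2, -1, 0, 1, 2, 3, 4],
    "p2": [-4, -2, -3, -1, 1, 0, 2, 4, 3],   # shares p1's viewpoint
    "p3": [0, 3, -3, 4, -4, 1, -1, 2, -2],   # a different viewpoint
    "p4": [0, 4, -3, 3, -4, 2, -1, 1, -2],   # shares p3's viewpoint
}

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Q methodology correlates persons, not items: this person-by-person
# matrix is what gets factor-analyzed to reveal shared viewpoints.
r = {(a, b): pearson(sorts[a], sorts[b]) for a, b in combinations(sorts, 2)}

# Crude stand-in for factor extraction: persons correlating above a
# threshold are treated as loading on the same factor (viewpoint).
similar = {pair for pair, rho in r.items() if rho > 0.7}
```

In an actual Q study the correlation matrix would be factor analyzed (e.g., centroid or principal components with rotation), and the resulting factor loadings, not a raw threshold, would define the typology of persons.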

Article

Christina Ladam, Ian Shapiro, and Anand Sokhey

As the most common form of voluntary association in America, houses of worship remain an unquestionably critical component of American civil society. Major approaches to studying religion and politics in the United States are described, and the authors present an argument for focusing more attention on the organizational experience provided by religious contexts: studying how individuals’ social networks intersect with their associational involvements (i.e., studying religion from an “interpersonal” perspective) may actually shed new light on intrapersonal, psychological constructs like identity and religiosity. Evidence is presented from two nationally representative data sets that suggests considerable variance in the degree to which individuals’ core social networks overlap with their houses of worship. This variance exists within and between individuals identifying with major religious traditions, and such networks are not characterized solely by agreement (as theories of self-selection might suggest).

Article

Damien Bol and Tom Verthé

People do not always vote for the party that they like the most. Sometimes they choose to vote for another one because they want to maximize their influence on the outcome of the election. This behavior driven by strategic considerations is often labeled “strategic voting.” It is opposed to “sincere voting,” which refers to the act of voting for one’s favorite party. Strategic voting can take different forms. It can consist of deserting a small party for a bigger one that has a better chance of forming the government or, on the contrary, of deserting a big party for a smaller one in order to send a signal to the political class. More importantly, the strategies employed by voters differ across electoral systems. The prevalence of government coalitions in proportional representation systems gives people different opportunities, or ways, to influence the electoral outcome with their vote. In total, the literature identifies four main forms of strategic voting. Some of them are specific to particular electoral systems; others apply to all.

Article

Wouter van Atteveldt, Kasper Welbers, and Mariken van der Velden

Analyzing political text can answer many pressing questions in political science, from understanding political ideology to mapping the effects of censorship in authoritarian states. This makes the study of political text and speech an important part of the political science methodological toolbox. The confluence of increasing availability of large digital text collections, plentiful computational power, and methodological innovations has led to many researchers adopting techniques of automatic text analysis for coding and analyzing textual data. In what is sometimes termed the “text as data” approach, texts are converted to a numerical representation, and various techniques such as dictionary analysis, automatic scaling, topic modeling, and machine learning are used to find patterns in and test hypotheses on these data. These methods all make certain assumptions and need to be validated to assess their fitness for any particular task and domain.
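The simplest of the techniques named above, dictionary analysis, can be sketched with stdlib Python. The two-entry sentiment dictionary and the sample texts below are invented; real dictionaries (e.g., for tone or policy topics) contain thousands of entries and, as the text stresses, must be validated for the task and domain at hand:

```python
import re
from collections import Counter

# Invented, tiny sentiment dictionary; purely illustrative.
positive = {"growth", "secure", "prosperity"}
negative = {"crisis", "failure", "threat"}

def tokenize(text):
    # The "text as data" step: convert text to a numerical representation
    # (here, a bag of word counts).
    return re.findall(r"[a-z']+", text.lower())

def dictionary_score(text):
    counts = Counter(tokenize(text))
    pos = sum(counts[w] for w in positive)
    neg = sum(counts[w] for w in negative)
    total = sum(counts.values())
    # Net tone, normalized by document length.
    return (pos - neg) / total if total else 0.0

speeches = [
    "Our plan brings growth and prosperity to every region.",
    "This policy is a failure that deepens the crisis.",
]
scores = [dictionary_score(s) for s in speeches]
```

Scaling, topic modeling, and supervised machine learning replace the fixed dictionary with estimated parameters, but they start from the same document-by-term representation, and all of them inherit the same obligation to validate the output against human judgment.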

Article

The “sunk costs fallacy” is a popular import into political science from organizational psychology and behavioral economics. The fallacy is classically defined as a situation in which decision makers escalate commitment to an apparently failing project in order to “recoup” the costs they have already sunk into it. The phenomenon is often framed as a good example of how real decision making departs from the assumption of forward-looking rationality that underpins traditional approaches to understanding politics. Researchers have proposed a number of different psychological drivers for the fallacy, such as cognitive dissonance reduction, and there is experimental and observational evidence that it accurately characterizes decision making in certain contexts. However, there is significant skepticism about the fallacy in many social sciences, with critics arguing that there are better forward-looking rational explanations for decisions apparently driven by a desire to recoup sunk costs, among them reputational concerns, option values, and agency problems. Critics have also noted that in practical situations sunk costs are informative both about decision makers’ intrinsic valuation of the issue and about the prospects for success, making it hard to discern a separate role for sunk costs empirically. To address these concerns, empirical researchers have employed a number of strategies, especially leveraging natural experiments in nonpolitical decision-making contexts such as sports and business, in order to isolate the effects of sunk costs per se from other considerations. In doing so, they have found mixed support for the fallacy. Research has also shown that the prevalence of the sunk costs fallacy may be moderated by a number of factors, including the locus of decision making, framing, and national context. These findings provide the basis for suggestions for future research.

Article

The field of political science is experiencing a new proliferation of experimental work, thanks to a growth in online experiments. Administering traditional experimental methods over the Internet allows for larger and more accessible samples, quick response times, and new methods for treating subjects and measuring outcomes. As we show in this chapter, a rapidly growing proportion of published experiments in political science take advantage of an array of sophisticated online tools. Indeed, during a relatively short period of time, political scientists have already made huge gains in the sophistication of what can be done with just a simple online survey experiment, particularly in realms of inquiry that have traditionally been logistically difficult to study. One such area is the important topic of social interaction. Whereas experimentalists once relied on resource- and labor-intensive face-to-face designs for manipulating social settings, creative online efforts and accessible platforms are making it increasingly easy for political scientists to study the influence of social settings and social interactions on political decision-making. In this chapter, we review the onset of online tools for carrying out experiments and turn our focus toward cost-effective and user-friendly strategies that online experiments offer to scholars who wish to understand political decision-making not only in isolated settings but also in the company of others. We review existing work and provide guidance on how scholars with even limited resources and technical skills can exploit online settings to better understand how social factors change the way individuals think about politicians, politics, and policies.