Students of public opinion tend to focus on how exposure to political media, such as news coverage and political advertisements, influences the political choices that people make. However, the expansion of news and entertainment choices on television and via the Internet makes the decisions that people make about what to consume from various media outlets a political choice in its own right. While the current-day hyperchoice media landscape opens new avenues of research, it also complicates how we should approach, conduct, and interpret this research. More choices mean a greater ability to choose media content based on one's political preferences, exacerbating the selection bias and endogeneity inherent in observational studies. Traditional randomized experiments offer compelling ways to obviate these challenges to making valid causal inferences, but at the cost of minimizing the role that agency plays in how people make media choices. Recent research modifies the traditional experimental design for studying media effects in ways that incorporate agency over media content. These modifications require researchers to weigh different trade-offs when choosing among design features, creating both advantages and disadvantages. Nonetheless, this emerging line of research offers a fresh perspective on how people's media choices shape their reactions to media content and their political decisions.
Kevin Arceneaux and Martin Johnson
Process tracing is a research method for tracing causal mechanisms using detailed, within-case empirical analysis of how a causal process plays out in an actual case. Process tracing can be used both for case studies that aim to gain a greater understanding of the causal dynamics that produced the outcome of a particular historical case and to shed light on generalizable causal mechanisms linking causes and outcomes within a population of causally similar cases. This article breaks down process tracing as a method into its three core components: theorization about causal mechanisms linking causes and outcomes; the analysis of the observable empirical manifestations of the operation of theorized mechanisms; and the complementary use of comparative methods to enable generalizations of findings from single case studies to other causally similar cases. Three distinct variants of process tracing are developed, illustrated by examples from the literature.
Thomas J. Leeper
Empirical media effects research involves associating two things: measures of media content or experience and measures of audience outcomes. Any quantitative evidence of correlation between media supply and audience response, combined with assumptions about temporal ordering and an absence of spuriousness, is taken as evidence of media effects. This seemingly straightforward exercise is burdened by three challenges: the measurement of the outcomes, the measurement of the media and individuals' exposure to it, and the tools and techniques for associating the two. While measuring the outcomes potentially affected by media is in many ways trivial (surveys, election outcomes, and online behavior provide numerous measurement devices), the other two aspects of studying the effects of media present nearly insurmountable difficulties short of ambitious experimentation. Rather than finding solutions to these challenges, much of the collective body of media effects research has focused on the effort to develop and apply survey-based measures of individual media exposure to use as the empirical basis for studying media effects. This effort to use survey-based media exposure measures to generate causal insight has ultimately distracted from the design of both causally credible methods and thicker descriptive research on the content and experience of media. Outside the laboratory, we understand media effects too little despite this considerable effort to measure exposure through survey questionnaires. The canonical approach for assessing such effects, namely using survey questions about individual media experiences to measure the putatively causal variable and correlating those measures with other measured outcomes, suffers from substantial limitations. Experimental, and sometimes quasi-experimental, methods provide decidedly superior causal inference about media effects and a uniquely fruitful path forward for insight into media and their effects.
At the same time, however, thicker forms of description than what is available from closed-ended survey questions hold promise for a richer understanding of the changing media landscape and changing audience experiences. Better causal inference and better description are co-equal paths forward in the search for real-world media effects.
It can be difficult for political scientists and economists to know when to use laboratory experiments in their research programs. There are longstanding concerns in economics and political science about the external invalidity of laboratory results. Making matters worse, a number of prominent academics recommend using field experiments instead of laboratory experiments to learn about human behavior, on the grounds that field experiments do not suffer from the external invalidity problems that plague laboratory experiments. The criticisms of laboratory experiments as externally invalid, however, overlook the many advantages that derive precisely from that external invalidity. Laboratory experiments are better than field experiments at examining hypothetical scenarios (e.g., When automated vehicles dominate the roadways, what principles do people want their automobiles to rely on?), at minimizing erroneous causal inferences (e.g., Did a treatment produce the reaction researchers are studying?), and at replicating and extending previous studies. Rather than abandoning laboratory experiments in favor of field experiments, political scientists and economists should embrace them when testing theoretically important but empirically unusual scenarios, tracing experimental processes, and reproducing and building on prior experiments.