Empirical media effects research involves associating two things: measures of media content or experience and measures of audience outcomes. Any quantitative evidence of correlation between media supply and audience response—combined with assumptions about temporal ordering and an absence of spuriousness—is taken as evidence of media effects. This seemingly straightforward exercise is burdened by three challenges: the measurement of the outcomes, the measurement of the media and individuals’ exposure to it, and the tools and techniques for associating the two. While measuring the outcomes potentially affected by media is in many ways trivial (surveys, election outcomes, and online behavior provide numerous measurement devices), the other two aspects of studying the effects of media present nearly insurmountable difficulties short of ambitious experimentation. Rather than find solutions to these challenges, much of the collective body of media effects research has focused on the effort to develop and apply survey-based measures of individual media exposure to use as the empirical basis for studying media effects. This effort to use survey-based media exposure measures to generate causal insight has ultimately distracted from the design of both causally credible methods and thicker descriptive research on the content and experience of media. Outside the laboratory, we understand media effects too little despite this considerable effort to measure exposure through survey questionnaires. The canonical approach for assessing such effects—namely, using survey questions about individual media experiences to measure the putatively causal variable and correlating those measures with other measured outcomes—suffers from substantial limitations. Experimental—and sometimes quasi-experimental—methods provide decidedly superior causal inference about media effects and a uniquely fruitful path forward for insight into media and their effects.
At the same time, however, thicker forms of description than are available from closed-ended survey questions hold promise to provide a richer understanding of the changing media landscape and changing audience experiences. Better causal inference and better description are co-equal paths forward in the search for real-world media effects.
Thomas J. Leeper
George Kwaku Ofosu
Political scientists are increasingly using experiments to study African politics. Experimental methods help scholars to overcome two central research challenges: potential bias in responses to survey questions (social desirability bias), and establishing the effect of X on Y (causality). Regarding survey response bias, experimental methods have been used to study sensitive topics such as ethnic favoritism, clientelism, corruption, and vote buying. In terms of causality, experiments have helped to estimate the effects of programs aimed at enhancing the quality of democracy or public service delivery. Identifying the causes of political behavior is critical to understanding the “nuts and bolts” of African politics. For policymakers, knowledge of what works to promote democratic accountability ensures the efficient allocation of scarce resources.
It can be difficult for political scientists and economists to know when to use laboratory experiments in their research programs. There are longstanding concerns in economics and political science about the external invalidity of laboratory results. Making matters worse, a number of prominent academics recommend using field experiments instead of laboratory experiments to learn about human behavior because field experiments do not have the same external invalidity problems that plague laboratory experiments. The criticisms of laboratory experiments as externally invalid, however, overlook the many advantages of laboratory experiments that derive from that very feature. Laboratory experiments are preferable to field experiments for examining hypothetical scenarios (e.g., When automated vehicles dominate the roadways, what principles do people want their automobiles to rely on?), for minimizing erroneous causal inferences (e.g., Did a treatment produce the reaction researchers are studying?), and for replicating and extending previous studies. Rather than abandoning laboratory experiments in favor of field experiments, political scientists and economists should embrace them when testing theoretically important but empirically unusual scenarios, tracing experimental processes, and reproducing and building on prior experiments.
Field experiments allow researchers on political behavior to test causal relationships between mobilization and a range of outcomes, in particular, voter turnout. These studies have rapidly increased in number since 2000, many assessing the impact of nonpartisan Get-Out-the-Vote (GOTV) campaigns. A more recent wave of experiments assesses ways of persuading voters to change their choice of party or alter their social and political attitudes. Many studies reveal positive impacts for these interventions, especially for GOTV. However, there are far fewer trials carried out outside the United States, which means it is hard to confirm external validity beyond the U.S. context, even though many comparative experiments reproduce U.S. findings. Current studies, both in the United States and elsewhere, are growing in methodological sophistication and are leveraging new ways of measuring political behavior and attitudes.