PRINTED FROM the OXFORD RESEARCH ENCYCLOPEDIA, POLITICS (oxfordre.com/politics). (c) Oxford University Press USA, 2019. All Rights Reserved. Personal use only; commercial use is strictly prohibited. Please see applicable Privacy Policy and Legal Notice (for details see Privacy Policy and Legal Notice).

date: 19 November 2019

The Search for Real-World Media Effects on Political Decision Making

Summary and Keywords

Empirical media effects research involves associating two things: measures of media content or experience and measures of audience outcomes. Any quantitative evidence of correlation between media supply and audience response—combined with assumptions about temporal ordering and an absence of spuriousness—is taken as evidence of media effects. This seemingly straightforward exercise is burdened by three challenges: the measurement of the outcomes, the measurement of the media and individuals’ exposure to it, and the tools and techniques for associating the two.

While measuring the outcomes potentially affected by media is in many ways trivial (surveys, election outcomes, and online behavior provide numerous measurement devices), the other two aspects of studying the effects of media present nearly insurmountable difficulties short of ambitious experimentation. Rather than find solutions to these challenges, much of the collective body of media effects research has focused on the effort to develop and apply survey-based measures of individual media exposure to use as the empirical basis for studying media effects. This effort to use survey-based media exposure measures to generate causal insight has ultimately distracted from the design of both causally credible methods and thicker descriptive research on the content and experience of media. Outside the laboratory, we understand media effects too little despite this considerable effort to measure exposure through survey questionnaires.

The canonical approach for assessing such effects, namely using survey questions about individual media experiences to measure the putatively causal variable and correlating those measures with other measured outcomes, suffers from substantial limitations. Experimental—and sometimes quasi-experimental—methods provide definitively superior causal inference about media effects and a uniquely fruitful path forward for insight into media and their effects. At the same time, however, thicker forms of description than are available from close-ended survey questions hold promise to give a richer understanding of the changing media landscape and changing audience experiences. Better causal inference and better description are co-equal paths forward in the search for real-world media effects.

Keywords: media effects, media, causal inference, experiments, field experiments, quasi-experiments, surveys, media exposure, political decision making

Introduction

The study of media effects entails the empirical association of two things: measures of media content or experience and measures of audience outcomes. Any quantitative evidence of correlation between the two—in tandem with assumptions about causal ordering and an absence of spuriousness—constitutes evidence of media effects. Social scientists are particularly interested in any such effects on the public’s perceptions of the social and political world, their knowledge or lack thereof about the same, their preferences over goods, candidates, or issues, and finally their behavior. The search for media effects takes many forms, and this article focuses on the search for those effects outside the confines of experimental laboratories, in the blooming, buzzing confusion of everyday life.

At stake in the search are three challenges: the measurement of the outcomes, the measurement of the media and individuals’ exposure to it, and the tools and techniques for associating the two. While measuring the outcomes potentially affected by media exposure is in many ways trivial (surveys, election outcomes, and online behavior provide numerous measurement devices), the latter two aspects of studying media effects present nearly insurmountable empirical difficulties short of ambitious experimental design. Despite these challenges, media effects research has been preoccupied for much of its history by an effort to develop and apply survey-based measures of individual media exposure that serve as the empirical basis for studying media effects. Despite Prior’s (2013) call to arms that “developing better measures of media exposure is a pressing goal” (p. 621), the effort to do so has been a largely failed exercise that has left social scientists with little credible insight into media effects outside laboratory settings—precisely those locations where such effects matter the most. The effort to use survey-based measures to generate causal insight into media effects has ultimately distracted from the design of both causally credible methods such as field experiments and thicker descriptive research on the content and experience of media. Outside the laboratory, we understand media effects too little despite considerable time and effort.

Laboratory experiments have demonstrated causal possibilities, but generalize weakly given the self-selected nature of media experiences (Arceneaux & Johnson, 2012; Gaines & Kuklinski, 2011; Leeper, 2017) and the arbitrary selection of treatments, outcomes, and samples in much experimental work (Druckman & Leeper, 2012b). Field experimental studies therefore present the best path forward for insights into media effects outside such settings because of their causal credibility and the advantage of true experiments—relative to so-called natural experiments (Sekhon & Titiunik, 2012)—at offering insight into anything beyond quirks of causality. But just as field experiments present an ideal path for obtaining credible and realistic insights into media effects, thick descriptive methods spanning the qualitative-quantitative divide present promising opportunities for studying media content and media experiences that are likely to generate far more useful insights than thin-descriptive survey measures of media exposure. Like the seminal use of in-depth interviews by Graber (1988), methods that go beyond “mere exposure” are vital for understanding the complexities of media experiences that might be the basis for “media effects.”

This article provides a discussion of the concept of “media effects” and the evidentiary standards necessary to establish that media have a causal effect on politically relevant outcomes. This includes the substantial limitations in the canonical approach for assessing such effects: namely, using survey questions about individual media experiences to measure the putatively causal variable and correlating those measures with other measured outcomes. Instead, experimental—and sometimes quasi-experimental—methods provide definitively superior causal inference about media effects. The article concludes with a discussion of how these methods—and others—might be fruitfully deployed moving forward.

What Are Media Effects and How Would We Know Them When We See Them?

Like any causal relationship, there are two ways to frame the question of media effects: either as a “backward causal question” emphasizing the role media variables—relative to many others—might play in the production of observed outcomes or as a “forward causal question” emphasizing how outcomes might differ across counterfactual values of media variables (Gelman & Imbens, 2013).1 The backward-looking approach takes outcomes as phenomena to be explained and seeks out explanations for what might have caused them, ultimately attempting to assess the absolute or relative size of the media contribution to those outcomes. These outcomes might be macro level like election outcomes or public discourse or they might be micro-level outcomes like individual beliefs, opinions, affect, cognition, physiology, or behaviors (Potter, 2011). To be less abstract, why did individuals vote for Donald Trump in the 2016 US presidential election? Why did Britain vote Leave in the 2016 referendum on European Union membership? “Media,” however defined and conceptualized, might be sought out as one among many possible causes of these outcomes. Media variables typically take the form of metrics of media content or metrics of individuals’ exposure to, attention to, or reception of said content.

The forward-looking approach instead takes media or a feature thereof to be a well-defined and perhaps manipulable variable that generates different outcomes across counterfactual values of the variable (Holland, 1986; Rubin, 1978). The outcomes of interest are the same, but the forward-looking approach attempts to reduce “media” as a concept to an isolatable event, experience, or exposure and assess how realized outcomes compare to counterfactual outcomes where media were different. For example, if the Hillary Clinton campaign had spent more on television advertising in swing states in the 2016 presidential election, would vote shares have been different? If media had covered the Leave campaign’s “£350 million per week for the NHS” claim differently, would vote intentions in the 2016 referendum have been different?

The phenomena and the causal relationships are the same, but the backward and forward framings of media effects steer attention to specific kinds of questions and specific kinds of research designs. In the backward-looking framing, research in search of media effects substantiates effects when variation in outcomes across variations in media variables persists once other explanations for that outcome variation have been considered and controlled for. In the forward-looking framing, research in search of media effects substantiates effects when variation in outcomes manifests in response to a real or approximate manipulation of a media variable. Backward causal questions are exploratory; forward causal questions are experiment-like. Whereas forward-looking questions generate a definitive statement about the direction(s) and size(s) of media influence on outcomes of interest, backward-looking questions lead only to further questions or new hypotheses.

Humans tend to think about causal effects—including those of media—in backward-looking terms, so we naturally gravitate toward trying to answer those questions directly. But as Gelman and Imbens (2013) argue, these questions never lead to clear answers because they ultimately generate correlational evidence influenced by unobserved or unobservable additional factors. In the words of Hovland (1959): “while the conceptualization of the survey researcher is often very valuable, his [sic] correlational research design leaves much to be desired” (p. 15). Instead, to understand media effects researchers need to transform backward-looking questions into forward-looking questions or treat the answers to backward-looking questions as exploratory steps that lead toward new forward-looking questions. A backward-looking question at best generates hypotheses about possible causes but does not rule out causes or definitively clarify the magnitude of causal effects because there can always be some other set of unobserved factors that explain away any observed patterns, or mask causality under apparent non-correlation. The researcher must identify and measure not just the media variable but all other potential causes.2

Despite the difficulty of answering backward-looking causal questions about media effects, researchers continue to search for explanations of outcomes by associating those outcomes with measures of media variables, controlling for a subset of other observable, measurable phenomena. Such analysis is facilitated by the frequent inclusion of coarse, unreliable, and error-prone survey-based measures of media exposure in nearly all election surveys and many public opinion polls. Despite the general, philosophical challenges to backward causal inference, much research continues to proceed from an assumption not only that backward-looking causal inference is possible but also that survey-based measures have any utility at all in causal inference. In a famous example, Bartels (1993) regresses various election-related, individual-level political outcomes on two American National Election Studies items measuring television viewing and daily newspaper readership, controlling for party identification, age, education, and race. The results suggest larger-than-anticipated effects once corrections for measurement error in the media measures are applied, providing an apparently substantial corrective to a field the author terms “one of the most notable embarrassments of modern social science” (p. 267).

Yet to answer forward-looking questions requires conceptualizations of and measures of media experiences that do not in any way resemble the survey-based measures in wide use in the late 20th and early 21st centuries. Rather than asking “how much did media matter in the last US presidential campaign?” and attempting to answer that question by regressing vote choice on a self-reported, survey-based measure of media exposure and some possible confounds, a more credible forward-looking approach attempts narrowly to understand whether and to what degree an isolatable media experience—such as viewing a debate, seeing a television advertisement, reading a particular news story—affected individual vote choice or aggregate election outcomes. Doing so requires both narrowness in research question but also attention to measurement of a specific event rather than abstract media experiences (Prior, 2007). For example, Fridkin, Kenny, Gershon, Shafer, and Woodall (2007) used randomized exposure to a presidential debate to understand what impact the debate had on a variety of outcomes. Rather than try to define and measure campaign effects broadly, they focus on an isolatable experience. Similarly, Albertson and Lawrence (2009) randomly encourage viewing of an educational television program to understand the effect of this specific event—rather than some abstract definition of television generally—on knowledge and attitudes. Before explaining why survey-based measures of media experience are particularly flawed for understanding media effects, it is important to see how these kinds of experimental approaches provide a straightforward design for answering forward causal questions but no method provides a straightforward design for answering backward causal questions about media effects.

Design Trumps Analysis in Studying Media Effects

A causal effect (of media) for an individual is conventionally understood as a difference between two or more potential outcomes that this individual might have expressed had they been exposed to varying values of a media variable (see Gerber & Green, 2012; Holland, 1986; Rosenbaum & Rubin, 1983). To take a canonical example, an individual’s opinion on whether to tolerate a rally by a hate group might be affected by different media portrayals of the issue (“framing”), such as coverage that emphasizes free speech considerations versus coverage that emphasizes public safety considerations (Nelson, Clawson, & Oxley, 1997). The individual-level causal effect, TE, is defined as the difference in individual i’s potential opinion, Y_{it}, when they are exposed to one or the other framing at a given point in time, t:

TE_{it} = Y_{it, FreeSpeech} − Y_{it, PublicSafety}

Defined in this way, these two media experiences constitute the complete set of possible media experiences available at time t.3 Because individuals can only experience one of these forms of media content, the individual-level media effect is unobservable due to the fundamental problem of causal inference (Holland, 1986).4 We can never know how media affect an individual without insight into unobservable counterfactuals where their contemporaneous media experiences are different.5

Causal inference, however, often proceeds from the idea that while individual-level effects are not observable, an average treatment effect across a population (or sample thereof) is observable and provides useful insight into the central tendency of the individual-level effect distribution. Because of the following equality:

ATE = E(TE_{t}) = E(Y_{t, FreeSpeech} − Y_{t, PublicSafety}) = E(Y_{t, FreeSpeech}) − E(Y_{t, PublicSafety})

we can reach inferences about the average effect of the free speech framing (relative to the public safety framing) by comparing average outcomes among individuals exposed to each type of content, provided we are willing to assume that these individuals receive content independent of the values they would take for Y_{t, FreeSpeech} and Y_{t, PublicSafety}. In an experimental setting we can assume this independence by design, because physical randomization of individuals to experiences operates without regard for each individual’s schedule of potential outcomes. In all other (observational) research designs, we can achieve that independence only conditionally, by statistically adjusting for factors that are theorized to influence both individuals’ exposure to particular media at a particular point in time and their potential outcomes. Fully identifying and measuring these other factors is daunting.
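The logic of this identity can be illustrated with a small simulation (a hypothetical sketch; the sample size, effect size, and selection rule below are invented for illustration and not drawn from any actual study):

```python
import random

random.seed(42)

# Hypothetical simulation: each of N individuals holds two potential opinions
# about tolerating the rally, one under each media frame; the individual-level
# effect TE is their difference. All numbers are invented for illustration.
N = 100_000
units = []
for _ in range(N):
    baseline = random.gauss(0, 1)   # latent disposition toward tolerance
    y_safety = baseline             # potential outcome under public safety frame
    y_speech = baseline + 0.5       # potential outcome under free speech frame
    units.append((baseline, y_speech, y_safety))

# True ATE = E(Y_speech) - E(Y_safety) = 0.5 by construction
true_ate = sum(ys - yp for _, ys, yp in units) / N

def diff_in_means(assignments):
    treated = [ys for (b, ys, yp), a in zip(units, assignments) if a == 1]
    control = [yp for (b, ys, yp), a in zip(units, assignments) if a == 0]
    return sum(treated) / len(treated) - sum(control) / len(control)

# Randomization: assignment ignores potential outcomes, so the difference
# in means is an unbiased estimate of the ATE.
random_assign = [random.randint(0, 1) for _ in range(N)]
est_random = diff_in_means(random_assign)

# Self-selection: tolerant individuals seek out the free speech frame, so
# the same estimator conflates the frame's effect with who chooses to see it.
selected_assign = [1 if b > 0 else 0 for b, _, _ in units]
est_selected = diff_in_means(selected_assign)

print(round(true_ate, 2), round(est_random, 2), round(est_selected, 2))
```

Under randomization the estimate lands near the true 0.5; under self-selection the identical estimator returns a figure several times too large, which is precisely the confounding that conditioning strategies must eliminate.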

This essential difference between how we draw causal inferences in experimental and observational research—in the former case by design and in the latter case only by careful and complete measurement of confounding factors—highlights why experimental studies are seen to provide a “gold standard.” However, this particularly favorable standing among alternative research designs is not absolute. Indeed, experiments taking place in laboratory settings—including some of the earliest experimental research in media effects by Iyengar and Kinder (1987)—are seen as particularly limited in value. Yet as Hovland (1959) argued decades ago, integrating observational (survey) and (laboratory) experimental methods “will require on the part of the experimentalist an awareness of the narrowness of the laboratory in interpreting the larger and more comprehensive effects of communication. It will require on the part of the survey researcher a greater awareness of the limitations of the correlational method as a basis for establishing causal relationships” (p. 14). Experimental designs for studying media effects provide the possibility of clear inference about average media effects, but they do not necessarily do so in a way that is consistently useful, and the reliance upon experimental manipulation limits the degree to which experimentation explains mediatized phenomena. Operating outside the laboratory and beyond the scope of survey-based measures of media exposure is likely to be particularly fruitful and is the focus of this article.

Strictly speaking, experiments (be they in a laboratory, survey, or field setting) provide insights into causal possibilities. Media effects experiments test whether a particular media variation can cause an outcome, within the implicit constraints of the sample, setting, and treatment used in the experiment (see Shadish, Cook, & Campbell, 2001, esp. ch. 13). Evidence that a given experience is, on average, effectual in a particular time and place for a particular set of individuals does not mean that the same results would be obtained elsewhere. Experimental evidence of media effects must always be read as “media can cause” not “media do cause.” But experiments are also limited by the prospective, forward-looking nature of the research design. Short of massive-scale, field-based interventions into everyday life, experiments also cannot generate inferences of the form “media did cause.” For example, Feezell (2017) demonstrates that social media can serve an agenda-setting function but not that they have in any particular instance outside the experimental context. Druckman, Levendusky, and McLain (2018) show that mediatized messages can further spread through interpersonal discussions but not that they have in any particular instance outside the experimental context. Searles, Fowler, Ridout, Strach, and Zuber (2017) show that male and female voices can be differentially effective in campaign advertising but not that they are in any particular instance outside the experimental context. Extrapolation beyond the experimental setting, sample, treatment, and outcome measures requires assumptions about or an explicit model for the transportability of the causal effects. This means experiments are typically powerless on their own to provide retrospective or historical insight and thus powerless to answer the kinds of backward-looking causal questions that social scientists frequently gravitate toward.

This limitation merits in-depth consideration given that media experiences, the effects of which researchers might desire to know, are not commonly randomly assigned (Arceneaux & Johnson, 2012; Bennett & Iyengar, 2008; Hovland, 1959; Leeper, 2017), nor do they consist of strictly captive exposure to forced stimuli (Druckman, Fein, & Leeper, 2012). Media content and audience exposure to that content are anything but random.6 Experiments thus provide “gold standard” causal insight into experiences, but only to the extent that the variation introduced by experimental control resembles the real-world variation in media experiences that researchers might desire to understand and that such experiences can feasibly be randomized.

At face value, then, observational methods of obtaining causal inference about media effects would seem to have some advantages over these narrow experimental approaches. For example, observational methods would allow a greater flexibility over the sample of individuals, settings, causes, and outcomes being studied given that the experiment-eligible populations of individuals, settings, causes, and outcomes are a non-representative subset of this hyperpopulation of interest. Similarly, observational methods may be deployed in service of retrospective questions that are impossible for prospective experimental techniques to answer. And observational methods rely upon naturalistic—rather than researcher-forced—variation in media experiences, minimizing concerns about the artifice of the experimental experience. But these apparent superiorities of observational approaches are frequently illusory. Single-study characteristics that imply generalizability, such as representative sampling of causes, outcomes, settings, and units, are only useful for learning about media effects to the extent those ostensibly more “general” research designs also offer causal identification. Frequently, they do not. Thus the oft-mentioned trade-off is not between internal validity and external validity but between clear identification of a possible causal effect and an alternative design that offers clarity about neither internal nor external validity.

An increasingly popular middle ground between experimental and observational media effects research is the so-called quasi-experiment or natural experiment (Shadish, Cook, & Campbell, 2001). Unlike experiments, quasi-experiments do not involve the active intervention of the researcher but instead analyze variation in outcomes across random or as-if-random interventions generated by other forces (such as temporal and geographical discontinuities, the random spread of technologies, weather patterns or geological factors, or lotteries administered for other purposes). Such designs might attempt to understand the direct effect of a randomized media phenomenon or to use randomization-like variation in something else to instrument for media coverage, access, or exposure. For example, researchers have studied how electoral outcomes vary geographically across areas affected early or late by the non-random but also not wholly systematic rollout of cable television, broadband Internet (Lelkes, Sood, & Iyengar, 2015), or Fox News (Clinton & Enamorado, 2014; DellaVigna & Kaplan, 2007). Other quasi-experimental approaches to media effects use discontinuities in radio or television signal strength in Russia (Enikolopov, Petrova, & Zhuravskaya, 2011) or Silvio Berlusconi’s Mediaset network in Italy (Durante, Pinotti, & Tesei, 2013), arbitrary channel positioning of Fox News across US cable providers (Martin & Yurukoglu, 2017), or the unintentional overlap of US competitive-state media markets into neighboring non-competitive districts (Huber & Arceneaux, 2007; Krasno & Green, 2008). A recent study of climate change messages took advantage of the existence of two cable TV systems in the same market, showing ads on one but not the other, then measuring attitudes toward global warming among subscribers of each system (Romero-Canyas et al., 2018).

Relying on strong assumptions about the randomness of these “natural” interventions, quasi-experiments provide an observational research design that generates more credible causal inference than traditional correlational designs, given that areas affected and unaffected by such interventions are considered to be similar, with the intervention occurring as-if-random (for methodological discussion, see Keele & Titiunik, 2014; Sovey & Green, 2011). The advantage of quasi-experiments over researcher-administered experiments is the ability to gain retrospective and historical insight into outcomes that might have been impacted by quasi-experimental intervention (see, for example, Voigtlaender & Voth, 2014). Even more so than experiments, however, they have a narrowness of scope—and thus a severe inferential localness—due to the rarity of naturally occurring randomization. True lotteries occur (e.g., with the Vietnam War draft lottery; Angrist, 1990; Erikson & Stoker, 2011), but most quasi-experiments leverage a strong assumption of randomness applied to a one-off media occurrence. Rather than being a middle ground between poorly identified observational methods and well-identified experimental methods, quasi-experiments, in the rare instances in which they occur, carry the strengths and limitations of both approaches: they are very useful in providing a retrospective answer to a forward-looking causal question, but they may cumulate poorly given the typical impossibility of replicating a given non-experimental intervention. But as we have seen, all causally oriented research suffers from some degree of localness.
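The inferential logic shared by many of these designs is that of instrumental variables: an as-if-random shock shifts exposure, and the outcome difference across the shock is scaled by the exposure difference. The following is a hypothetical sketch; the instrument, parameters, and data-generating process are invented for illustration:

```python
import random

random.seed(7)

# Hypothetical simulation of the quasi-experimental logic: an as-if-random
# binary instrument Z (say, a low channel position) shifts viewing D, while
# an unobserved taste U drives both viewing and the outcome Y. All parameter
# values are invented; the true effect of viewing on Y is set to 1.0.
N = 200_000
z, d, y = [], [], []
for _ in range(N):
    zi = random.randint(0, 1)      # instrument, as-if-randomly assigned
    ui = random.gauss(0, 1)        # unobserved confounder (taste for channel)
    p_view = 0.2 + 0.4 * zi + (0.2 if ui > 0 else 0.0)
    di = 1 if random.random() < p_view else 0
    yi = 1.0 * di + 1.5 * ui + random.gauss(0, 1)
    z.append(zi); d.append(di); y.append(yi)

def mean(xs):
    return sum(xs) / len(xs)

# Naive viewer/non-viewer comparison is confounded by taste U.
naive = (mean([yi for yi, di in zip(y, d) if di == 1])
         - mean([yi for yi, di in zip(y, d) if di == 0]))

# Wald/IV estimator: reduced-form effect of Z on Y, scaled by the
# first-stage effect of Z on D.
reduced_form = (mean([yi for yi, zi in zip(y, z) if zi == 1])
                - mean([yi for yi, zi in zip(y, z) if zi == 0]))
first_stage = (mean([di for di, zi in zip(d, z) if zi == 1])
               - mean([di for di, zi in zip(d, z) if zi == 0]))
wald = reduced_form / first_stage

print(round(naive, 2), round(wald, 2))
```

The naive comparison substantially overstates the viewing effect because taste drives both viewing and the outcome, while the Wald estimator recovers a value near the true 1.0; the credibility of real applications rests entirely on whether the instrument is genuinely as-if-random and affects the outcome only through exposure.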

Despite the promise, quasi-experiments provide useful but ultimately narrow historical insight into media effects. Experimental methods offer a superior alternative given the inherent repeatability of interventions (although not necessarily the settings in which they are randomized). Whereas both experiments and quasi-experiments offer a precise definition of a cause and precise statement of effects, observational methods for measuring media effects suffer from the lack of such precision because—despite decades of scholarly effort—we often cannot effectively conceptualize let alone measure variations in individual media experiences. If we do not know how to ensure that two media experiences differ in one and only one way, observational research leaves us with a “bundles of sticks” problem (Sen & Wasow, 2016) where we attribute differences in outcomes to “media” (vaguely defined and coarsely measured) without any clear insight into what part of “media” is producing effects. Observational methods for obtaining causal inference only achieve credibility when we can define, measure, and control for all other differences in experiences; this is something we cannot do. In essence, we learn nothing.

Measuring Media and Media Exposure

Such pessimism about observational approaches is warranted. While much social science research would seem to imply that researchers believe it is possible to measure media exposure using surveys, this is simply not the case. Though numerous scholars have advocated for improved measures of media exposure and attempted to expose the deficiencies or advantages of particular approaches (Althaus & Tewksbury, 2007; Bartels, 1993; Chaffee & Schleuder, 1986; de Vreese & Boomgaarden, 2006; Dilliplane, 2011; Dilliplane, Goldman, & Mutz, 2013; Eveland, Hayes, Shah, & Kwak, 2005; Eveland, Hutchens, & Shen, 2009; Eveland & Scheufele, 2000; Eveland, Seo, & Marton, 2002; Freedman & Goldstein, 1999; Garrett, Carnahan, & Lynch, 2013; Goldman, Mutz, & Dilliplane, 2013; Guess, 2014; Jerit et al., 2016; Price & Zaller, 1993; Prior, 2003, 2009a, 2009b, 2013; Slater, Goodall, & Hayes, 2009; Tewksbury, Althaus, & Hibbing, 2011), this collective effort at obtaining complete measures of media exposure is fundamentally flawed. This goes beyond the use of such measures in causal inference.

Consider, for example, a few common ways of measuring media exposure using survey self-reports. We might ask individuals to report whether they have been exposed to (or attentive to) a particular source, a particular medium, or a particular event. Alternatively, we might ask for a ranking of the degree of attention to certain sources (like CNN or Fox News) or news content (e.g., about a particular piece of legislation or world event). Alternatively, we might ask for degree of attentiveness or ratings of intensity of use of various media (like television or Internet news). Alternatively, we might ask for time-based or frequency-based measures that count the number of days, hours, or minutes spent with media. All these approaches might vary in their source-specificity from an abstract medium (e.g., television and newspapers) to specific sources (e.g., World News Tonight on ABC), and vary in their content-specificity from abstract topics (e.g., news about politics and international affairs) to specific facts (e.g., news of Donald Trump’s alleged affair with pornographic actress Stormy Daniels during the pregnancy of his third wife, Melania Trump). And each can vary in the granularity of time used to measure such exposure or to rank exposure to media alternatives: we might ask about typical behavior, behavior the previous week, behavior that day, or even hours, minutes, or seconds of time use. These measures tell us what people believe their media experiences are, but only coarsely and with substantial error. The public substantially over-report media attention for a variety of reasons (Prior, 2013), and efforts at improved measures (like those just alluded to) have not produced any degree of consensus on how to improve media use measures.

The challenges discussed in the literature are quite superficial, however. These tend to include measurement error, over-reporting, lack of over-time reliability, and social desirability biases. But if one wants to understand media effects apart from understanding media use, then a larger epistemological issue is whether it makes sense to talk about the effects of ill-defined causes. If we imbue an error-prone, biased survey-based measure of media attention with causal meaning, what kinds of claims can we generate? Consider, for example, a claim by Kull, Ramsay, and Lewis (2003) in their study of misperceptions related to the Iraq War that in a regression analysis of misperceptions controlling for demographics, “the respondent’s primary source of news is still a strong and significant factor; indeed, it was one of the most powerful factors predicting misperception” (p. 587). In other words, Fox News had the effect of leading the public to be misinformed about the Iraq War. Setting aside the low credibility of causal ordering in this design, what does the “Fox News effect” mean here?7

Rubin (1990) describes how causal inference in the potential outcomes framework requires a stable unit treatment value assumption (SUTVA). While most research focuses on the non-interference part of this assumption, SUTVA also requires that there is only one version or form of the treatment (see Sinclair, 2012). This treatment homogeneity is perhaps the most overlooked assumption of causal inference. If a person is characterized as having an identical value of treatment (e.g., FoxViewer = 1), then the assumption is that this person’s treatment is the same as another person coded the same way. Fox News is Fox News. But is it? There is reason to be skeptical. When someone broadly reports that she views Fox News, that measure says little about what stimuli—that is, what causes—that person actually encountered during such viewing. Two people reporting viewing Fox News report only “nominally identical” experiences that might in practice vary systematically (Rubin, 1990, p. 475). Did they see a particular claim about the Iraq War? Did they see particular on-screen visuals? Did they hear particular arguments about the war? These are components of media that laboratory-based media effects research typically finds to be causally relevant. The self-reported exposure to Fox News does not tell us any of this without assumptions about what that viewership entailed (e.g., in terms of timing and duration) and assumptions about what content that time-specific viewership offered (in terms of information, arguments, issue emphasis, visuals, etc.). Without the assumption that these experiences are homogeneous (when in fact viewing Fox News may mean different things to different people), it is impossible to draw causal insight from such self-reported exposure. If we want to understand the causal influence of media, we have to understand the particularistic media experiences, exposures, or non-exposures that would generate counterfactual outcomes.
Exposure self-reports gloss over this in the hope that a time- or source-based measure proxies for particularistic exposure, yet these measures—despite decades of effort—continue to be coarse, unreliable, and frequently invalid, and they regularly become outdated as media landscapes change. Goldman et al. (2013) argue that despite the difficulty of measuring exposure per se and of responding to such over-time changes, “measurement consists of the best one can do at any given point in history; we must make do with what is on offer” (p. 651). But there is no reason to settle when the entire enterprise is flawed at its foundation.
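The treatment-homogeneity problem can be made concrete with a toy simulation (purely illustrative; all quantities are hypothetical, not estimates from any real study). Suppose "viewing Fox News" actually bundles two different content experiences with different true effects, but a survey codes both as FoxViewer = 1. The estimated "Fox News effect" then depends entirely on the unobserved mix of content versions in the sample:

```python
import random

random.seed(1)

def simulate(n=100_000, p_version_a=0.5):
    """Simulate a naive difference in means when the nominal treatment
    ('viewer') masks two content versions with different true effects."""
    effect_a, effect_b = 2.0, 0.5  # hypothetical effects of the two versions
    treated, control = [], []
    for _ in range(n):
        baseline = random.gauss(0, 1)           # pre-treatment outcome
        if random.random() < 0.5:               # coded FoxViewer = 1
            # which version the viewer actually saw is unobserved
            effect = effect_a if random.random() < p_version_a else effect_b
            treated.append(baseline + effect)
        else:                                   # coded FoxViewer = 0
            control.append(baseline)
    return sum(treated) / len(treated) - sum(control) / len(control)

# The same coding scheme yields different 'effects' as the hidden
# mix of versions shifts (approx. 1.85 vs. 0.65 in expectation):
print(round(simulate(p_version_a=0.9), 2))
print(round(simulate(p_version_a=0.1), 2))
```

The estimand attached to FoxViewer = 1 is thus not a single well-defined causal quantity but a mixture whose composition the survey measure cannot reveal.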

The Challenges of Measuring Particularistic Media Experiences

Ultimately, what we mean by “media” and “media exposure” and what we can learn about the effects thereof are therefore tied up with the measures we can use to summarize the high dimensionality of “media” as a concept. Experimental and quasi-experimental approaches to studying media effects avoid these challenges by defining media effects narrowly and studying the effects of isolatable experiences. Observational approaches, however, typically define media effects more broadly in terms of outcome variation across degrees, amounts, or types of content exposure. This canonical approach poses two insurmountable problems. First, media experiences are infinitely complex, and the effort to measure those experiences reduces that complexity in seemingly useful but ultimately futile ways. Our sense of what variation there is in media experience is thus entirely defined by the granularity of our measurement devices. Finer granularity, rather than assuring depth of understanding, simply reveals the further complexity of these more granular slices of media experience. As new measures identify and attempt to resolve the deficiencies of previous approaches, researchers typically reveal not merely weaknesses in measures—like coarseness—but also inadequate conceptualization—like low dimensionality—of media experiences, in a way that continuously expands the relevant set of complexities that must thereafter be measured. Media effects research cannot progress if effects are defined in terms of such complex bundles of causes.
Second, though this ever-expanding complexity of conceptualization and measurement might be downplayed in order to obtain partial understandings of media experiences, the constant emergence of new forms of complexity reveals that media diets involve noncomparable experiences across modes, geographies, time periods, and persons. Simplifications of media complexity that are acceptable in one context therefore cannot be assumed acceptable elsewhere. These problems can be termed, respectively, the “complexity problem” and the “incommensurability problem” in media measurement.

First, to assess media effects, media as a cause must be reducible to a well-defined set of counterfactual experiences that can be measured with minimal bias and maximal precision. But media experiences are not so easily summarized unless experimental methods are deployed. Take, for instance, the hate rally framing example. In that design, media experience is defined narrowly as a single exposure to a single, well-defined message, mutually exclusive of exposure to another well-defined message, all else constant. While the broader media landscape is complicated by numerous mediums, numerous sources therein, and an overwhelming abundance of content, “media” becomes narrowly defined so that its effect(s) can be identified. The value of the forward-looking style of causal inference should be immediately obvious: randomization of these alternative experiences and measurement of any relevant outcomes give immediate and uncomplicated insight into the possible effect(s) of this message. Identifying that effect using observational data is far more challenging. What counts as exposure? Does it have to happen on a particular medium or channel? For how long? How often? How would we know if a given individual saw the message? Would we ask that person? What if the individual has forgotten? It might require a measure of time-specific exposure, plus content analysis of all possible channels or sources through which the information might have been transmitted. What is the universe of such sources? How would we find and categorize them all? Could we track the individual’s behavior online using a browser plug-in? What if the individual opts out? Could we use Nielsen boxes? Perhaps, if the individual is already impaneled. Even then, how do we know that person was actually attentive? What about radio exposure, or second-hand exposure via social media or interpersonal discussion? Does that count? If it does, how would we know if it occurred?
As media landscapes grow more diverse and sources more numerous, the complexity becomes overwhelming. Like a fractal diagram, the closer we look, the more there is to see. Whatever ruler we hold up to the world reveals that a more precise ruler might grant superior precision and ultimately a substantively distinct insight. And the answers to these questions about the seemingly infinite complexity of the media landscape only address the challenge of scoring individuals on whether or not they were exposed to a message; we have not even considered outcome measurement yet, or holding all else constant.

Researchers have acknowledged this complexity and responded to it by generating measures that capture previously ignored sources of complexity. As cable and satellite television emerged in the United States, surveys increasingly measured respondents’ access to, subscription to, and use of these sources. As media landscapes diversified from the 1990s onward and the Internet emerged as a key source of information, survey-based measures of media exposure were similarly updated to capture these new sources of landscape complexity. At the same time, the coarseness of self-reported exposure measures generated innovation in techniques aimed at better capturing complexity, like Nielsen boxes that record television viewing, radio listening devices, media journaling, and passive tracking of web usage. Yet all the research deploying these more granular measures reveals that survey-based measures of media exposure gloss over immense variation in individual media experiences facilitated by the fractionalized and segmented media landscape of the 21st century. That Internet users vary in their exposure to political content all the way from zero news stories per campaign to dozens shows that whatever simplifying value coarse, survey-based measures afford comes at the cost of meaningful causal inference. When we attempt to understand the causal effect of broadly defined “media” and measure the putatively causal factor using traditionally coarse measures, we lose insight into precisely the causally interesting within-medium variation. Immense, increasing, and fractal complexity means that “media” defined according to broad notions of “exposure” is an ill-conceived foundation for generating causal inference.

This complexity understandably leads to the kinds of oversimplifications that characterize the survey-based, observational media effects literature. Relying on time-use measures and coarse summaries of viewership reduces that complexity to continuous measures of time and artificially discrete measures of audience segmentation. As such, it becomes possible to quantitatively process measures of multiple media. Newspaper readership, television viewing, and radio listening, treated as measures of time, can simply be summed. Membership in distinct audiences can be handled with Boolean algebra to reduce complexity further into categories like “online news user” or “like-minded news viewer.” The audience for Fox in 1998 can be compared to the audience for Fox in 2018. Broadsheet readers in Norway can be compared to broadsheet readers in the United States.

This leads to the second insurmountable challenge. By reducing the complexity of the source experiences, comparability seems simple. But media experiences across sources, mediums, times, and geographies are fundamentally incommensurable. Reading the New York Times on September 10, 2001, is different from reading the New York Times on September 12, 2001. While the editorial line, overall ideology, and issue coverage might typically be stable day to day, the content is ultimately different. If we define media effects as differences in potential outcomes in response to well-defined differences in media experiences, then any effect of the New York Times today is definitionally distinct from any effect of the New York Times tomorrow. The treatment is heterogeneous, but we can pretend that it is not because the label for the two treatments is a constant, a result of vagueness rather than comparability. This false equivalence becomes even more obvious when we compare media experiences at the level of medium or of time units. An hour of television news may once have meant a relatively homogeneous experience, but that is no longer the case in any locale with more than one dominant news source. An hour of Internet use might once have conveyed a certain kind of experience, but commensurability across people in a specific context is diminished in even a modestly complex media landscape, let alone comparability across time and/or geography. Even comparisons of equivalent time spent on an identical media service, like Netflix or Twitter or Google News, suffer the same problem as interfaces are localized and algorithmically personalized. An hour of this and an hour of that is apples and oranges.

We cannot define, let alone speak of, the so-called effect of such experiences because they are not in fact a singular media experience. The issue is massive variation in the content to which individuals might be exposed even if they spend a similar amount of time on quite similar media outlets.8 Ultimately, we cannot learn about the effects of media unless we can precisely define what we mean by a particular media experience and have measurement tools capable of accurately and precisely determining whether a given individual had that experience. A possible response is that survey-based measures of media exposure combined with content analysis of media sources might allow for a reduction of complexity by tracing similar experiences across the apparent complexity of sources and exposure patterns. While sidestepping issues of incommensurability by focusing on a single dimension or feature of media, such approaches multiply rather than reduce apparent complexity by requiring not only precise and unbiased measures of exposure but also precise and unbiased measures of content. It may be that such a mixed-method approach will facilitate observational causal inference, because it at least reduces complexity and steers researchers toward the definition and measurement of a singular, potentially causal media experience, but more work is needed in this area.

Distinguishing Research Goals to Open Multiple Paths Forward

A pessimist might read this whole line of argument as a strong case against studying media and media effects at all. The intention, however, is quite different. Rather than abandoning the research goals of understanding the role of media in society and politics, and rather than abandoning observational methods entirely, social scientists must instead come to terms with the reality that multiple goals of research—and thus plural methods for obtaining those distinct goals—are co-equal. The goal of obtaining causal inference is best left to the methods most capable of credibly achieving it.9 Observational approaches are, by default, inappropriate tools for studying media effects without strong, typically unsatisfiable assumptions. Yet experiments are also limited to providing prospective insight into narrow causal possibilities. Experiments are thus typically deployed inappropriately if in service to any other research goal. And these other research goals are just as, if not more, important. Take, for instance, the goal of obtaining thorough descriptive insight into the abundant content of the ever-expanding media landscape and fine-grained characterizations of the media diets of human populations. This goal is best tackled with methods suited to the purpose; experimental techniques are not those methods.

Narrowly defining media effects research as an enterprise of causal inference, as is done here, is meant to highlight that one tool should be primarily deployed in service to that goal. Other closely aligned goals are well served by alternative approaches. This ideal might be challenging to realize, as the social sciences have hit a distinctly confirmatory moment in the history of methodology. The “credibility revolution” has meant that observational methods have almost disappeared from policy evaluation and political economy research in high-profile journals; the fields of political communication and public opinion, which were already heavily experimental, appear to have become even more so. Experiments are no mere fad, and the push for trial-like preregistration of analysis plans has pushed the social sciences into even more confirmatory ways of thinking about research methods and the goals of research. Even as qualitative methods have been prominently showcased in recent high-profile work on political behavior and political communication (e.g., Cramer, 2016; Nielsen, 2012), exploratory, inductive, and thick-descriptive research goals seem to have fallen out of favor in prominent disciplinary outlets. The dominance of the experimental approach has been good for confirmatory research, but it has diverted attention from the multiple, equally valid objectives of social science—that is, to describe and to explain.

Yet the changing media landscapes that have characterized the period from the 1980s (with the advent of cable television) through the 1990s (with the arrival of the Web) to the 2000s (with the emergence of web 2.0 technologies and social media) constitute a period more in demand of thick description than almost any in the history of social science. Nearly a century ago, the Princeton Radio Research Project and later the Bureau of Applied Social Research at Columbia aptly understood the societal changes that widely available radio and television would bring and studied the phenomenon in multiple ways, including panel surveys (e.g., Berelson, Lazarsfeld, & McPhee, 1954; Katz & Lazarsfeld, 1960; Lazarsfeld, 1940b), content analysis (Cantril, 1940), and ethnographic work (e.g., Lerner, 1958). Just as Lazarsfeld’s collaborators were pioneering methods of causal inference about media effects (e.g., Lazarsfeld, 1940a; Lazarsfeld & Fiske, 1938), they were also engaging in deep, descriptive, and exploratory research into media content and media experiences. Now, the set of media has evolved from mere print and broadcast to new digital media that mix textual and audiovisual experiences and is layered by phenomena like dual screening and the complexly mediated two-step flows of content through social media. Alongside this seismic shift in the diversity of mediums has come a massive escalation in the variety of media alternatives available to the public. The number of alternative mediums has increased, the number of specific sources has increased, and the sheer volume of content has increased. Describing these changes requires not only cataloging the content of each new outlet using the metrics of more traditional media but also changing the ways that media are conceptualized and measured.
Measures of whether people read a national newspaper or view the evening news are intended to capture some metric of engagement with national politics, given the consistent, measurable, and perhaps predictable content of such outlets. But the question of whether people view YouTube or read news on Facebook communicates almost nothing about what they have experienced. The measures of old media landscapes are inapplicable to new, richer, denser, and more complex media landscapes. Our descriptions thus grow ever thinner relative to the landscape they describe.

As the complexity of media increases and the rate of change therein accelerates, such thin description leaves ever-greater portions of the political and social world hidden from scholarly attention. Without a thick and thorough understanding of this complexity, the hypotheses tested by identification-oriented research will constitute a smaller and smaller portion of the interesting variation in media experiences. What can be done to provide more depth? Both quantitative and qualitative methods are the answer. Computational methods to gather and characterize media content, especially online, offer the prospect of characterizing media at a scale previously unimaginable, as well as of assessing the diversity of media content across times, platforms, geographies, and individuals. Web-tracking methods offer the possibility of studying media exposure with a very high degree of granularity, potentially separating exposure from attention and quantifying depth of engagement with particular sources, articles, or issues. Ethnographic, qualitative interviewing, and diary methods offer a quite distinct form of thick description. Just as the production of media has rapidly evolved and changed, necessitating sophisticated approaches to map and characterize the media landscape, the consumption of media necessitates in-depth insight into how citizens feel and think about media. A compelling example can be found in the use of in-depth interviews (Toff & Nielsen, 2018) to understand how seemingly inattentive segments of society understand their own learning processes. Cramer and Toff (2017) use similar methods to demonstrate that what the public considers to be politically consequential knowledge often differs from the type measured on political surveys. This form of evidence gathering is highly useful for hypothesis generation.
Although these kinds of inductive, qualitative methods—like their massive-scale quantitative analogues—will not credibly identify media effects, that is not their ambition. This kind of thick description and exploration is needed more than ever.
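As a minimal sketch of the kind of computational content description discussed above (using an invented toy corpus and hand-picked keyword lists, purely for illustration and not drawn from any real outlet), even simple term counting can begin to characterize how issue emphasis varies across sources:

```python
from collections import Counter
import re

# Hypothetical headlines by outlet (invented for illustration)
corpus = {
    "outlet_a": ["Economy slows as inflation bites",
                 "War coverage dominates the evening news"],
    "outlet_b": ["Celebrity wedding draws millions of viewers",
                 "Markets cheer as economy rebounds"],
}

# Hand-picked topic keywords (a crude stand-in for a dictionary or topic model)
topic_terms = {
    "economy": {"economy", "inflation", "markets"},
    "conflict": {"war"},
}

def topic_shares(headlines):
    """Share of all tokens in `headlines` matching each topic's keywords."""
    tokens = Counter(w for h in headlines
                     for w in re.findall(r"[a-z]+", h.lower()))
    total = sum(tokens.values())
    return {topic: sum(tokens[t] for t in terms) / total
            for topic, terms in topic_terms.items()}

for outlet, headlines in corpus.items():
    print(outlet, topic_shares(headlines))
```

Real applications would of course use far larger corpora and more sophisticated measurement models, but the output is descriptive by design: it characterizes content without claiming anything about its effects.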

Thus the effort here to make a strong case for limiting the study of media effects to experimental approaches must be read as part of a larger advocacy for a more pluralistic social science that is diverse not only in its methods but also in its questions. The idea that different methods suit different goals is familiar, but appeals to mixed methods are typically made with an implicit or explicit goal of triangulation: that is, arriving at similar answers from different methods (for some discussion, see Seawright, 2016). As comparative studies of observational and experimental methods quite consistently show, different methods applied even to identical research questions tend to diverge somewhat in their conclusions (Arceneaux, Gerber, & Green, 2006; LaLonde, 1986). Triangulation is a flawed way forward. Mixing methods to arrive at different answers to different questions, by contrast, will provide a richness of understanding of media and its effects that no method alone, and no method deployed in service to triangulation, can provide. Rather than debating whether different methods can triangulate on causal inferences (thus implicitly limiting research attention to confirmatory research), a more fruitful path is to be discriminating about methods while retaining plurality with respect to research questions.

Conclusion

This article has discussed what media effects are and how they might be studied outside a laboratory-experimental setting, focusing on two major challenges posed by the study of “real-world” media effects: the challenge of operationalizing media and the difficulty of credibly claiming linkages between media and outcomes of interest. Even so, experimental and quasi-experimental research designs have been effectively used to draw causal inferences about media effects. While the article might suggest a degree of fatalism about the media effects literature, all hope is not lost. Indeed, if readers take away one message from this article, it should be that the question of media effects is too important to be lost in dead ends. Greater reliance on survey and laboratory experiments can help us understand the mechanisms of media influence, heterogeneity in media effects, and the variety of possible effects media may have. Similarly, field experimental and quasi-experimental designs can be used to understand the direction and magnitude of media influence in real-world settings. Ultimately, though, we do not need survey-based measures of media exposure to understand media effects, so we should spend much less time, effort, and resources improving them. The opportunity cost is too great, and there is too much that we do not yet know.

Yet this is not to diminish observational research in service to closely aligned but qualitatively distinct research objectives. More than ever, exploratory, observational research is needed for its own sake, to serve the distinct goals of describing media and media experiences and generating theoretical insights that might be tested experimentally. Understanding media through these tools must be a complement to credible methods for obtaining the distinct goal of causal inference. Computational methods, in particular, offer the prospect of systematically characterizing textual media content at a scale previously unimaginable. Digital trace data provided by social media APIs (application programming interfaces) and web-tracking software promise to provide insight into individual-level online experiences that can hardly be understood using aggregated user counts, web traffic statistics, or the kinds of survey self-reports that have dominated media research (Barberá, Jost, Nagler, Tucker, & Bonneau, 2015; Lazer & Radford, 2017; Menchen-Trevino & Karr, 2012). At the same time, ethnographic approaches and interview-based methods seem capable of exposing how people think and talk about their experiences of media using language and concepts that cannot necessarily be captured by closed-ended time-use questions on survey questionnaires. Discourse analytic and content analysis methods can serve to critique understandings and interpretations of media content. These objectives are well served by methods other than experimentation.

But an even more difficult conclusion relates to what can be learned from experimental methods. While observational methods generally cannot, by definition, provide insight into the causal effects of media, and experimental techniques are uniquely capable in this respect, the reality is that experimental methods are also deficient. They can only provide insight into causal possibilities, and they only demonstrate or explain phenomena to the extent that they entail experiences, treatments, outcomes, and participants reflective of those of broad interest. They might generalize, but it is hard to know how far without extensive, multistudy programs of research. In the end, an individual experiment—regardless of the size and scope of the intervention or the number of participants involved—is never going to comprehensively and generalizably describe the effects of media. But that is an unattainable ideal that no single instance of any method can achieve. We should learn what we can from experiments—namely, about the possible effects of media—and similarly learn what we can from observational methods—namely, about patterns of media content and experience—all while acknowledging the fundamental limits to what is knowable and acknowledging that any understanding of media or its effects is prone to be immediately out of date.

Acknowledgments

This article benefited from discussion and feedback from participants at the University of Southern California in November 2017.

References

Albertson, B. L., & Lawrence, A. (2009). After the credits roll: The long-term effects of educational television on public knowledge and attitudes. American Politics Research, 37(2), 275–300.

Althaus, S. L., & Tewksbury, D. (2007). Toward a new generation of media use measures for the ANES report to the Board of Overseers (Technical report). American National Election Studies Board of Governors.

Angrist, J. D. (1990). Lifetime earnings and the Vietnam draft lottery: Evidence from Social Security administrative records. American Economic Review, 80(3), 313–336.

Arceneaux, K., Gerber, A. S., & Green, D. P. (2006). Comparing experimental and matching methods using a large-scale field experiment on voter mobilization. Political Analysis, 14(1), 37–62.

Arceneaux, K., & Johnson, M. (2012). Changing minds or changing channels? Media effects in the era of viewer choice. Chicago, IL: University of Chicago Press.

Barberá, P., Jost, J. T., Nagler, J., Tucker, J. A., & Bonneau, R. (2015). Tweeting from left to right: Is online political communication more than an echo chamber? Psychological Science, 26(10), 1531–1542.

Bartels, L. M. (1993). Messages received: The political impact of media exposure. American Political Science Review, 87(2), 267–285.

Bennett, W. L., & Iyengar, S. (2008). A new era of minimal effects? The changing foundations of political communication. Journal of Communication, 58, 707–731.

Berelson, B. R., Lazarsfeld, P. F., & McPhee, W. N. (1954). Voting: A study of opinion formation in a presidential campaign. Chicago, IL: University of Chicago Press.

Cantril, H. (1940). Gauging public opinion. Princeton, NJ: Princeton University Press.

Chaffee, S. H., & Schleuder, J. (1986). Measurement and effects of attention to media news. Human Communication Research, 13(1), 76–107.

Clinton, J. D., & Enamorado, T. (2014). The national news media’s effect on Congress: How Fox News affected elites in Congress. Journal of Politics, 76(4), 928–943.

Cramer, K. J. (2016). The politics of resentment. Chicago, IL: University of Chicago Press.

Cramer, K. J., & Toff, B. J. (2017). The fact of experience: Rethinking political knowledge and civic competence. Perspectives on Politics, 15(3), 754–770.

DellaVigna, S., & Kaplan, E. (2007). The Fox News effect: Media bias and voting. Quarterly Journal of Economics, 122(3), 1187–1234.

Dilliplane, S. (2011). All the news you want to hear: The impact of partisan news exposure on political participation. Public Opinion Quarterly, 75(2), 287–316.

Dilliplane, S., Goldman, S. K., & Mutz, D. C. (2013). Televised exposure to politics: New measures for a fragmented media environment. American Journal of Political Science, 57(1), 236–248.

Druckman, J. N., Fein, J., & Leeper, T. J. (2012). A source of bias in public opinion stability. American Political Science Review, 106(2), 430–454.

Druckman, J. N., & Leeper, T. J. (2012a). Is public opinion stable? Resolving the micro/macro disconnect in studies of public opinion. Daedalus, 141(4), 50–68.

Druckman, J. N., & Leeper, T. J. (2012b). Learning more from political communication experiments: Pretreatment and its effects. American Journal of Political Science, 56(4), 875–896.

Druckman, J. N., Levendusky, M. S., & McLain, A. (2018). No need to watch: How the effects of partisan media can spread via interpersonal discussions. American Journal of Political Science, 61(1), 99–112.

Durante, R., Pinotti, P., & Tesei, A. (2013). Voting alone? The political and cultural consequences of commercial TV. Paolo Baffi Centre Research Paper No. 2013-137.

Enikolopov, R., Petrova, M., & Zhuravskaya, E. (2011). Media and political persuasion: Evidence from Russia. American Economic Review, 101(7), 3253–3285.

Erikson, R. S., & Stoker, L. (2011). Caught in the draft: The effects of Vietnam draft lottery status on political attitudes. American Political Science Review, 105(2), 1–17.

Eveland, W. P., Hayes, A. F., Shah, D. V., & Kwak, N. (2005). Understanding the relationship between communication and political knowledge: A model comparison approach using panel data. Political Communication, 22(4), 423–446.

Eveland, W. P., Hutchens, M. J., & Shen, F. (2009). Exposure, attention, or “use” of news? Assessing aspects of the reliability and validity of a central concept in political communication research. Communication Methods and Measures, 3(4), 223–244.

Eveland, W. P., & Scheufele, D. A. (2000). Connecting news media use with gaps in knowledge and participation. Political Communication, 17(3), 215–237.

Eveland, W. P., Seo, M., & Marton, K. (2002). Learning from the news in campaign 2000: An experimental comparison of TV news, newspapers, and online news. Media Psychology, 4(4), 353–378.

Feezell, J. T. (2017). Agenda setting through social media: The importance of incidental news exposure and social filtering in the digital era. Political Research Quarterly, 71(2), 482–494.

Freedman, P., & Goldstein, K. (1999). Measuring media exposure and the effects of negative campaign ads. American Journal of Political Science, 43(4), 1189–1208.

Fridkin, K. L., Kenney, P. J., Gershon, S. A., Shafer, K., & Woodall, G. S. (2007). Capturing the power of a campaign event: The 2004 presidential debate in Tempe. Journal of Politics, 69(3), 770–785.

Gaines, B. J., & Kuklinski, J. H. (2011). Experimental estimation of heterogeneous treatment effects related to self-selection. American Journal of Political Science, 55(3), 724–736.

Garrett, R. K., Carnahan, D., & Lynch, E. K. (2013). A turn toward avoidance? Selective exposure to online political information, 2004–2008. Political Behavior, 35(1), 113–134.

Gelman, A., & Imbens, G. W. (2013). Why ask why? Forward causal inference and reverse causal questions. NBER Working Paper 19614.

Gerber, A. S., Gimpel, J. G., Green, D. P., & Shaw, D. R. (2011). How large and long-lasting are the persuasive effects of televised campaign ads? Results from a large scale randomized experiment. American Political Science Review, 105(1), 135–150.

Gerber, A. S., & Green, D. P. (2012). Field experiments: Design, analysis, and interpretation. New York, NY: W. W. Norton.

Goldman, S. K., Mutz, D. C., & Dilliplane, S. (2013). All virtue is relative: A response to Prior. Political Communication, 30(4), 635–653.

Graber, D. A. (1988). Processing the news: How people tame the information tide. New York, NY: Guilford Press.

Guess, A. M. (2014). Measure for measure: An experimental test of online political media exposure. Political Analysis, 23(1), 59–75.

Hayes, D., & Turgeon, M. (2009). A matter of distinction: Candidate polarization and information processing in election campaigns. American Politics Research, 38(1), 165–192.

Holland, P. W. (1986). Statistics and causal inference. Journal of the American Statistical Association, 81(396), 945–960.

Hovland, C. I. (1959). Reconciling conflicting results derived from experimental and survey studies of attitude change. American Psychologist, 14(1), 8–17.

Huber, G. A., & Arceneaux, K. (2007). Identifying the persuasive effects of presidential advertising. American Journal of Political Science, 51(4), 957–977.

Iyengar, S., & Kinder, D. R. (1987). News that matters: Television and American opinion. Chicago, IL: University of Chicago Press.

Jerit, J., Barabas, J., Pollock, W., Banducci, S., Stevens, D., & Schoonvelde, M. (2016). Manipulated vs. measured: Using an experimental benchmark to investigate the performance of self-reported media exposure. Communication Methods and Measures, 10(2–3), 99–114.

Katz, E., & Lazarsfeld, P. F. ([1960] 2005). Personal influence: The part played by people in the flow of mass communication. New Brunswick, NJ: Transaction.

Keele, L. J., & Titiunik, R. (2014). Geographic boundaries as regression discontinuities. Political Analysis, 23(1), 127–155.

King, G., Schneer, B., & White, W. (2017). How the news media activate public expression and influence national agendas. Science, 358(6364), 776–780.

Krasno, J. S., & Green, D. P. (2008). Do televised presidential ads increase voter turnout? Evidence from a natural experiment. Journal of Politics, 70(1), 245–261.

Kull, S., Ramsay, C., & Lewis, E. (2003). Misperceptions, the media, and the Iraq war. Political Science Quarterly, 118(4), 569–598.

LaLonde, R. J. (1986). Evaluating the econometric evaluations of training programs with experimental data. American Economic Review, 76(4), 604–620.

Lazarsfeld, P. F. (1940a). “Panel” studies. Public Opinion Quarterly, 4(1), 122–128.

Lazarsfeld, P. F. (1940b). Radio and the printed page. New York, NY: Duell, Sloan & Pierce.

Lazarsfeld, P. F., & Fiske, M. (1938). The “panel” as a new tool for measuring opinion. Public Opinion Quarterly, 2(4), 596–612.

Lazer, D., & Radford, J. (2017). Data ex machina: Introduction to big data. Annual Review of Sociology, 43(1), 19–39.

Leeper, T. J. (2017). How does treatment self-selection affect inferences about political communication? Journal of Experimental Political Science, 4(1), 21–33.Find this resource:

Lelkes, Y., Sood, G., & Iyengar, S. (2015). The hostile audience: The effect of access to broadband Internet on partisan affect. Unpublished manuscript, University of Amsterdam.Find this resource:

Lerner, D. (1958). The passing of traditional society: Modernizing the Middle East. Glencoe, IL: Free Press of Glencoe.Find this resource:

Martin, G. J., & Yurukoglu, A. (2017). Bias in cable news: Persuasion and polarization. American Economic Review, 107(9), 2565–2599.Find this resource:

Menchen-Trevino, E., & Karr, C. (2012). Researching real-world web use with Roxy: Collecting observational web data with informed consent. Journal of Information Technology & Politics, 9(3), 254–268.Find this resource:

Morgan, S. L., & Winship, C. (2015). Counterfactuals and causal inference: Methods and principles for social research. 2nd ed. New York, NY: Cambridge University Press.Find this resource:

Nelson, T. E., Clawson, R. A., & Oxley, Z. M. (1997). Media framing of a civil liberties conflict and its effect on tolerance. American Political Science Review, 91(3), 567–583.Find this resource:

Nielsen, R. K. (2012). Ground wars: Personalized communication in political campaigns. Princeton, NJ: Princeton University Press.Find this resource:

Paluck, E. Levy, Lagunes, P., Green, D. P., Vavreck, L., Peer, L., & Gomila, R. (2015). Does product placement change television viewers’ social behavior? PLOS ONE, 10(9), e0138610.Find this resource:

Panagopoulos, C., & Green, D. P. (2008). Field experiments testing the impact of radio advertisements on electoral competition. American Journal of Political Science, 52(1), 156–168.Find this resource:

Potter, W. J. (2011). Conceptualizing mass media effect. Journal of Communication, 61(5), 896–915.Find this resource:

Price, V., & Zaller, J. (1993). Who gets the news? Alternative measures of news reception and their implications for research. Public Opinion Quarterly, 57(2), 133–164.Find this resource:

Prior, M. (2003). Any good news in soft news? The impact of soft news preference on political knowledge. Political Communication, 20(2), 149–171.Find this resource:

Prior, M. (2007). Post-broadcast democracy: How media choice increases inequality in political involvement and polarizes elections. New York, NY: Cambridge University Press.Find this resource:

Prior, M. (2009a). The immensely inflated news audience: Assessing bias in self-reported news exposure. Public Opinion Quarterly, 73(1), 130–143.Find this resource:

Prior, M. (2009b). Improving media effects research through better measurement of news exposure. Journal of Politics, 71(3), 893–908.Find this resource:

Prior, M. (2013). The challenge of measuring media exposure: Reply to Dilliplane, Goldman, and Mutz. Political Communication, 30(4), 620–634.Find this resource:

Romero-Canyas, R., Larson-Konar, D., Redlawsk, D. P., Borie-Holtz, D., Gaby, K., Langer, S., & Schneider, B. (2018). Bringing the heat home: Television spots about local impacts reduce global warming denialism. Environmental Communication, 5(3), 1–21.Find this resource:

Rosenbaum, P. R., & Rubin, D. B. (1983). The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1), 41–55.Find this resource:

Rubin, D. B. (1978). Bayesian inference for causal effects: The role of randomization. Annals of Statistics, 6(1), 34–58.Find this resource:

Rubin, D. B. (1990). Comment: Neyman (1923) and causal inference in experiments and observational studies. Statistical Science, 5(4), 472–480.Find this resource:

Searles, K., Fowler, E. F., Ridout, T. N., Strach, P., & Zuber, K. (2017). The effects of men’s and women’s voices in political advertising. Journal of Political Marketing, 1–29.Find this resource:

Seawright, J. (2016). Multi-method social science: Combining qualitative and quantitative tools. Cambridge, U.K.: Cambridge University Press.Find this resource:

Sekhon, J. S., & Titiunik, R. (2012). When natural experiments are neither natural nor experiments. American Political Science Review, 106(1), 1–23.Find this resource:

Sen, M., & Wasow, O. (2016). Race as a bundle of sticks: Designs that estimate effects of seemingly immutable characteristics. Annual Review of Political Science, 19(1), 499–522.Find this resource:

Shadish, W. R., Cook, T. D., & Campbell, D. T. (2001). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton-Mifflin.Find this resource:

Sinclair, B. (2012). The social citizen: Peer networks and political behavior. Chicago, IL: University of Chicago Press.Find this resource:

Slater, M. D., Goodall, C. E., & Hayes, A. F. (2009). Self-reported news attention does assess differential processing of media content: An experiment on risk perceptions utilizing a random sample of U.S. local crime and accident news. Journal of Communication, 59(1), 117–134.Find this resource:

Slothuus, R. (2015). Assessing the influence of political parties on public opinion: The challenge from pretreatment effects. Political Communication, 33(2), 1–26.Find this resource:

Sovey, A. J., & Green, D. P. (2011). Instrumental variables estimation in political science: A readers’ guide. American Journal of Political Science, 55(1), 188–200.Find this resource:

Tewksbury, D., Althaus, S. L., & Hibbing, M. V. (2011). Estimating self-reported news exposure across and within typical days: Should surveys use more refined measures? Communication Methods and Measures, 5(4), 311–328.Find this resource:

Toff, B., & Nielsen, R. K. (2018). I just Google it”: Folk theories of distributed discovery. Journal of Communication, 68(3), 636–657.Find this resource:

Voigtlaender, N., & Voth, H.-J. (2014). Highway to Hitler. NBER Working Paper 20150.Find this resource:

de Vreese, C. H., & Boomgaarden, H. G. (2006). Media message flows and interpersonal communication: The conditional nature of effects on public opinion. Communication Research, 33(1), 19–37.Find this resource:

Notes:

(1.) The study of media can also view media as an outcome to be explained—either at the macro level from a supply-side perspective or at the micro level in terms of determinants of individual-level demand or exposure—but I set aside these questions for the purposes of this article.

(2.) A critique sometimes raised at this point is that even though observational methods risk being subject to unobserved confounding, they at least provide “externally valid” or “generalizable” insights. This is, however, a canard. Unless an observational method satisfies assumptions that enable causal inference, any supposed “effect” that is identified (e.g., via a regression coefficient) is not generalizable because it is not a valid causal effect estimate to begin with (Morgan & Winship, 2015).

(3.) Traditional potential outcomes notation typically omits time subscripts, which can be incorrectly interpreted to mean that within-person, over-time variations in a cause provide direct insight into causal effects. That can be true, but only under a strong assumption that potential outcomes are independent of earlier potential outcomes (in other words, that treatments are non-cumulative). In light of strong evidence of pretreatment dynamics (Druckman & Leeper, 2012a; Slothuus, 2015), it is not generally appropriate to make that assumption.
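The non-cumulative treatment assumption described in this note can be sketched in potential outcomes notation; the time-subscripted notation here is illustrative rather than drawn from the article itself:

```latex
% Non-cumulative treatments: individual i's period-t potential outcome
% depends only on the period-t treatment, not on the full treatment history.
Y_{it}(x_1, x_2, \ldots, x_t) = Y_{it}(x_t)
```

When this equality fails, for example because earlier exposure "pretreats" individuals, within-person, over-time comparisons no longer isolate the effect of the period-t treatment alone.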

(4.) We might define x differently, allowing for a variety of other experiences such as non-exposure or exposure to varying mixes of public safety and free speech content, but effects would be defined similarly: the causal effect of any particular kind of coverage would be a difference in individual potential outcomes relative to some specified alternative.
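The definition in this note can be written compactly in potential outcomes notation, where \(x\) and \(x'\) denote the two exposure conditions being compared (the symbols are illustrative):

```latex
% Individual-level media effect: the difference between the outcomes
% individual i would exhibit under exposure x versus alternative x'.
\tau_i = Y_i(x) - Y_i(x')
```

Only one of the two potential outcomes is ever observed for any given individual, which is why identifying assumptions are needed to estimate such effects from data.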

(5.) This is a strict definition of “media effect.” Researchers might also be interested in various descriptive quantities, such as individuals’ interpretations of media, reflections on media expressed in lay causal language, or subjective perceptions of first- or third-person media effects, but we should not consider those subjective interpretations to be media effects in the strict sense.

(6.) Of course, some media content actually is random. But short of published evaluations of such randomizations (e.g., Gerber, Gimpel, Green, & Shaw, 2011; King, Schneer, & White, 2017; Paluck et al., 2015; Panagopoulos & Green, 2008), it would be hard to know what is random and what is not.

(7.) There is no reason to believe, without further assumptions, that individuals attentive to different news sources had similar levels of misperceptions in the absence of any news exposure. Self-selection into media and effects of media are empirically indistinguishable in cross-sectional, observational research.

(8.) While researchers have classically distinguished mere exposure from more in-depth attention or reception of content, media effects can occur in the absence of more in-depth engagement and processing of content. What matters then is not what citizens can recall about their experience but the content of those experiences per se. Media effects are defined by stimuli or inputs, not recollection thereof.

(9.) Recognizing, of course, that what is considered the most credible method will no doubt change in the future.