Empirical media effects research involves associating two things: measures of media content or experience and measures of audience outcomes. Any quantitative evidence of correlation between media supply and audience response, combined with assumptions about temporal ordering and an absence of spuriousness, is taken as evidence of media effects. This seemingly straightforward exercise is burdened by three challenges: the measurement of the outcomes, the measurement of the media and individuals’ exposure to it, and the tools and techniques for associating the two. While measuring the outcomes potentially affected by media is in many ways trivial (surveys, election outcomes, and online behavior provide numerous measurement devices), the other two aspects of studying the effects of media present nearly insurmountable difficulties short of ambitious experimentation. Rather than find solutions to these challenges, much of the collective body of media effects research has focused on the effort to develop and apply survey-based measures of individual media exposure as the empirical basis for studying media effects. This effort to use survey-based media exposure measures to generate causal insight has ultimately distracted from the design of both causally credible methods and thicker descriptive research on the content and experience of media. Outside the laboratory, we understand media effects too little despite this considerable effort to measure exposure through survey questionnaires. The canonical approach for assessing such effects, namely using survey questions about individual media experiences to measure the putatively causal variable and correlating those measures with other measured outcomes, suffers from substantial limitations. Experimental, and sometimes quasi-experimental, methods provide decidedly superior causal inference about media effects and a uniquely fruitful path forward for insight into media and their effects.
At the same time, however, thicker forms of description than what is available from closed-ended survey questions hold promise to provide a richer understanding of the changing media landscape and changing audience experiences. Better causal inference and better description are co-equal paths forward in the search for real-world media effects.
Thomas J. Leeper
Storytelling is a pervasive practice across human history, one that some have argued is a fundamental part of human understanding. Storytelling and narratives are a distinctly human way of making sense of the world and its events, and they can serve as key tools for crisis and disaster studies and practice. They play a tremendously important role in planning, policy, education, the public sphere, advocacy, training, and community recovery. In the context of crises and disasters, stories are a means by which information is transmitted across generations, a key strategy for surviving non-routine and infrequent events. In fact, the field of disaster studies has long relied on narratives as primary source material: as a means of understanding individual experiences of phenomena, critiquing policies, and understanding the role of history in 21st-century levels of vulnerability. Over the past several decades, practitioners and educators in the field have sought to use stories and narratives more purposefully to build resilience and pass on tacit knowledge.