Scientific Uncertainty in Health and Risk Messaging
- Stephen Zehr, Department of Sociology, University of Southern Indiana
Summary
Expressions of scientific uncertainty are normal features of scientific articles and professional presentations. Journal articles typically include research questions at the beginning, probabilistic accounts of findings in the middle, and new research questions at the end. These uncertainty claims are used to construct clear boundaries between uncertain and certain scientific knowledge. Interesting questions emerge, however, when scientific uncertainty is communicated in occasions for public science (e.g., newspaper accounts of science, scientific expertise in political deliberations, science in stakeholder claims directed to the public, and so forth). Scientific uncertainty is especially important in the communication of environmental and health risks where public action is expected despite uncertain knowledge. Public science contexts are made more complex by the presence of multiple actors such as citizen-scientists, journalists, stakeholders, social movement actors, politicians, and so on who perform important functions in the communication and interpretation of scientific information and bring in diverse norms and values.
A past assumption among researchers was that scientists would deemphasize or ignore uncertainties in these situations to better match their claims with a public perception of science as an objective, truth-building institution. However, more recent research indicates variability in the likelihood that scientists communicate uncertainties and in the public reception and use of uncertainty claims. Many scientists still believe that scientific uncertainty will be misunderstood by the public and misused by interest groups involved with an issue, while others recognize a need to clearly translate what is known and not known.
Much social science analysis of scientific uncertainty in public science views it as a socially constructed phenomenon: it depends less upon a particular state of scientific research (what scientists are certain and uncertain of) and more upon contextual factors, the actors involved, and the meanings attached to scientific claims. Scientific uncertainty is often emergent in public science, both in the sense that the boundary between what is certain and uncertain can be managed and manipulated by powerful actors and in the sense that, as scientific knowledge confronts diverse public norms, values, local knowledges, and interests, new areas of uncertainty emerge. Scientific uncertainty may emerge as a consequence of social conflict rather than being its cause. In public science, scientific uncertainty can be interpreted as a normal state of affairs and, in the long run, may not be that detrimental to solving societal problems if it opens up new avenues and pathways for thinking about solutions. Of course, the presence of scientific uncertainty can also be used to legitimate inaction.
Subjects
- Communication and Technology
- Health and Risk Communication
- Organizational Communication
The Challenge of Communicating Scientific Uncertainty
Despite a still commonly accepted perception that science is a producer of truths about the world, scientists fully acknowledge, and often openly celebrate, science as also an uncertainty-producing activity. Successful scientists generate uncertainty by raising researchable questions. Answers to these questions may themselves generate further uncertainty when reported in probabilistic terms, as they often are, rather than as definitive statements about a fully certain state of the world. This tentativeness is expected of scientists. In a well-known commentary on science, Robert Merton identified "organized skepticism" as one of four norms of science (Merton, 1942). Scientists are expected to question currently accepted knowledge and remain skeptical about new claims to truth. They gain visibility and rewards within the scientific community at both ends of the scientific process: in generating interesting, researchable questions and in reporting research results that themselves often generate further questions.
The fact that uncertainty is endemic to successful science is less interesting, however, than questions about how scientific uncertainty is communicated, interpreted, and used both inside and outside of the scientific community. Is scientific uncertainty easily communicated? Do scientists deemphasize or ignore the importance of uncertainty when they communicate knowledge to non-scientists, especially in high-stakes arenas such as public or political deliberations involving issues such as climate change, impacts of genetically modified organisms (GMOs), the likelihood of an infectious disease becoming a pandemic, or the effectiveness of a treatment for a medical condition? How might the presence of scientific uncertainty be used by stakeholders who desire a certain outcome to a public controversy about a science-related matter such as the safety of an oil pipeline, health impacts of exposure to second-hand smoke, or the carcinogenicity of an herbicide?
Practices and standards for communicating and interpreting uncertainty vary significantly depending upon whether communication is intended for practicing scientists or for nonscientists, who may be less versed in scientific practice or may hold interests or ideologies that are supported and legitimated by either certain or uncertain science. Whereas practices for expressing uncertainty are mostly standardized within the scientific community and built into the socialization process, they are much less so in arenas where scientists communicate to nonscientists. Whereas the meaning attached to a confidence interval is quite clear in a scientific publication, it may be less so in "public science" (Turner, 1980) arenas, where confidence intervals are not understood, or in political deliberations where decisions need to be made. To complicate matters further, public science has become increasingly important to science in the late 20th and early 21st centuries in an era of "Mode II science" (Gibbons, Limoges, Nowotny, Schwartzman, Scott, & Trow, 1994) or "post-normal science" (Funtowicz & Ravetz, 1990), where the discovery of knowledge and its application to public problems are more closely coupled in time and space, and where decision stakes, systems uncertainties, and demands for public engagement and accountability are higher. The peer community for judging the quality of knowledge has been broadened as science is closely tied to public issues such as ozone depletion, climate change, use of GMOs, spread of viruses, and so on. How do scientists communicate uncertainty and ignorance in public contexts such as these, where science is still viewed by many as an objective and definitive truth-producing activity?
The literature on the communication of uncertainty and ignorance fluctuates between analyzing them as an objective vacuum in the science, where the question is whether and how to accurately and effectively represent the knowledge gap, and taking a constructionist approach (e.g., Stocking & Holstein, 1993; Zehr, 1999), where one brackets out and remains agnostic about a true state of scientific knowledge or non-knowledge. The former "realist" approach leads to research on the degree to which scientists and spokespeople for science (e.g., journalists) accurately represent uncertainties and their implications for resolving public issues and the perceived authority of science. The constructionist approach involves analysis of uncertainty or ignorance as contextually produced by scientists or spokespeople for science. Questions of accurately depicting a state of knowledge or non-knowledge are less relevant than analyzing how and why uncertainty is constructed as it is. The advantage of the constructionist approach is that it remains open to the idea, and in fact may anticipate, that scientific uncertainty or ignorance is emergent within public contexts. This point is developed below.
Studies of the construction of uncertainty and ignorance are sometimes referred to as the study of "agnotology," where their naturalness is questioned by looking into their causes and distribution (Pinto, 2015; Proctor, 2008; Stocking & Holstein, 1993). Uncertainty and ignorance become increasingly important as science becomes privatized and a state of knowledge or non-knowledge is considered useful for economic, political, or social reasons. Pinto (2015) observes that there are two broad categories of studies on the construction of uncertainty and ignorance. One set of studies focuses on the systemic construction of uncertainty or ignorance, where they are passively constructed as certain institutional mechanisms support the construction of some types of knowledge while suppressing others. For example, Frickel, Gibbon, Howard, Kempner, Ottinger, and Hess (2010) discuss examples of what they refer to as "undone science" (e.g., areas of untested soils for contaminants in New Orleans following Hurricane Katrina) created through institutionalized rules, decisions, and methodologies. The second category involves studies of the deliberate construction of uncertainty or ignorance (e.g., by the military, government, or private sector) to reinforce relations of power or for economic gain (Pinto, 2015). Oreskes and Conway's (2010) study of the manipulation of scientific uncertainty on the health risks of tobacco and on the existence of human-caused climate change serves as an example.
This review is organized around two arenas for the production and communication of scientific uncertainty. The first arena involves professional contexts such as scientific articles and professional meetings, where the communication of uncertainty is mostly standardized. The second arena involves the production and communication of scientific uncertainty in public science, where science is brought to non-scientists, often to address issues of public concern. These occasions are particularly important to scientists for their contributions to the resolution of pressing public problems, as well as for the maintenance of cognitive authority and the public and private patronage that goes with it. Several examples are discussed in this second arena.
Communication of Scientific Uncertainty in Professional Contexts
Moral Context for Uncertainty Claims
Expressions of uncertainty are a normative feature of scientific communication in scientific contexts such as the professional meeting or the peer-reviewed scientific article. Honesty in science is situated within the norms of disinterestedness and organized skepticism that Robert Merton (1942) identified as two of the four norms that comprise the ethos of science. Since the early stages of modern science, scientists have thrived under the moral imperative to tell the truth as they see it (Shapin, 1994, 2008). In early modern science, the social status of gentleman served as sufficient confirmation of trustworthiness. Over time, science transitioned to standardized methodologies and technologies whose universalizing properties leave truth-claims more open to broader scientific inspection. According to Shapin, however, science has never lost this moral foundation despite the transition. The scientific community still depends upon a strongly socialized norm of telling the truth when reporting research results. Strong sanctions in cases of misrepresentation in science both illustrate and reinforce this norm. Within this moral context, communicating uncertainty becomes part of what it means to speak as a scientist, so much so that other scientists may hesitate to accept truth-claims that are not accompanied by some hesitancy or qualification from the author that calls attention to gaps in knowledge and the need for further research. Such a speaker may be judged as overconfident, pretentious, lacking humility, or insufficiently socialized into the scientific community. Of course, communicating uncertainties also performs the important function of clarifying boundaries, as the speaker sees them, between what is known and not known.
Uncertainty/Ignorance Distinction
The concept of scientific uncertainty is often applied to a broad set of situations of non-knowledge. Smithson (1989) has usefully distinguished among ignorance, indeterminacy, and uncertainty in knowledge. With scientific ignorance, we don't know what we don't know, or what is popularly referred to as "unknown unknowns." With indeterminacy, we know what we don't know but have no parameters around what is not known. In the early stages of climate change science, scientists were confident that the world would warm but were indeterminate about how much. With scientific uncertainty, we know what we don't know and hold parameters around it. Thus, for instance, when scientists present a confidence interval around a predicted outcome, they are expressing scientific uncertainty. Within the scientific community, ignorance is rarely explored in formal communication, even though scientists acknowledge its presence; but there are relatively standardized ways of expressing indeterminacy and uncertainty. Where this distinction is especially consequential is in public science, where distinctions among the three become murkier in communication, and where uncertainty or ignorance in scientific knowledge is supplemented by "decision-making uncertainty" (Lehmkuhl & Peters, 2016).
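To make Smithson's distinction concrete, a minimal sketch follows. It is an illustration added here, not drawn from the cited literature, and the measurements and interval are invented; the point is that a confidence interval is precisely the kind of "parameters around what is not known" that marks uncertainty rather than indeterminacy.

```python
# A hedged, minimal sketch of uncertainty as parameterized non-knowledge.
# All numbers are hypothetical; the form of the claim is what matters.
import math
import statistics

measurements = [2.9, 3.4, 3.1, 2.7, 3.6, 3.2, 3.0, 3.3]  # invented readings

mean = statistics.mean(measurements)
sem = statistics.stdev(measurements) / math.sqrt(len(measurements))

# 95% interval via the normal approximation (z = 1.96); a t-multiplier
# would be more appropriate for n = 8, but the form is the point here.
low, high = mean - 1.96 * sem, mean + 1.96 * sem

# Uncertainty: "the true value lies in [low, high] with ~95% confidence."
# Indeterminacy would assert only a direction ("it will warm") with no
# such bounds; ignorance would not yet know to pose the question at all.
print(f"estimate: {mean:.2f}, 95% CI: [{low:.2f}, {high:.2f}]")
```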
Representation of Uncertainty in Scientific Communication
Rhetorically, scientific articles and presentations are typically constructed to open a window of uncertainty or indeterminacy early on, which is subsequently closed, at least partially, by their completion. Past research is procedurally reviewed so that it generates a linked but open space where research questions can be formulated. This open space is sometimes referred to as "specified ignorance" (Merton, 1987) or a "knowledge gap," indicating an area that is promising for research. Rarely are past researchers blamed for the unanswered questions. Rather, the cited literature is typically commended for what it has accomplished and for its creation of this new knowledge gap. The knowledge gap is generally framed as a domain of uncertainty, rather than indeterminacy, with its guiding parameters consisting of its placement within prior research. At that point in the article or presentation, readers or listeners are expected to have logically reached the same conclusion as the authors but, like the cited literature, remain faultless for not having already answered these questions. After all, some readers or listeners may themselves have conducted the research necessary to reach this new knowledge gap.
From there, scientific papers and presentations proceed through procedures used to collect and analyze data, presentation of data, and discussion of the data’s implications for the knowledge gap and research questions previously opened. Much of this discussion uses what Gilbert and Mulkay (1984) refer to as an “empiricist repertoire,” where claims appear to be emerging directly from the natural or social world itself rather than the contingencies of the laboratory or other socio-cultural factors. Further uncertainty is constructed in the discussion of results. Caveats or modalities may be added to knowledge-claims that place limits on their perceived definitiveness or factualness. Data are often represented in probability statements or confidence intervals that quantify the level of uncertainty or hesitancy that one should maintain about the connection between reported results and external reality.
At the end of the article or presentation, scientists may add further layers of uncertainty, and occasionally indeterminacy, by opening up new research questions that were unanswered by, but may have emerged from, the reported research. These questions are represented similarly to those at the end of the literature review section. In fact, they may be questions that the authors themselves are currently researching. But the conclusion section also may construct scientific indeterminacy when questions are posed that lack clear answerable pathways.
Throughout this style of presentation, scientists implicitly represent themselves as following a normative script where the truth is being told. This moral positioning is reinforced by the uncertainty claims, especially as data are presented, which present authors as humble and, more importantly, aware of the importance of truthfulness and of the boundary between knowledge and uncertainty. Readers and listeners may disagree with and eventually alter the authors' positioning of knowledge and uncertainty, but if the performance reasonably follows convention, the moral credibility of the authors as scientists is not challenged.
Social Basis for the Production of Ignorance or Uncertainty
The above discussion largely assumes that uncertainty is a naturally occurring gap within the normal course of scientific research. Scientists pose questions and remain uncertain until research results fill them in, and they represent this process accordingly in professional presentations and articles. Some recent research challenges this image of uncertainty by treating the gaps as socially produced since scientific practice is mediated by economic interests, organizational forms or practices, regulatory practices, or other social forces (e.g., Frickel et al., 2010; Frickel & Vincent, 2007; Kleinman & Suryanarayanan, 2012; McGoey, 2012; Suryanarayanan & Kleinman, 2013). This research builds upon an extensive body of science, technology, and society (STS) research recognizing the close relationship between societal context and knowledge production, both in the sense of the social shaping of scientific knowledge (e.g., Collins, 1985; Latour & Woolgar, 1979; Pickering, 1984; Pinch, 1981) and in the co-construction and ordering of knowledge and society (e.g., Jasanoff, 2004; Latour, 1987). The general starting point of research on knowledge gaps is the observation that scientists need to look somewhere when conducting research, thus making epistemic choices that simultaneously generate both knowledge and non-knowledge. These choices are socially influenced by external interests and values (e.g., MacKenzie, 1981; Pielke, 2007; Sarewitz, 1996) or by internal social factors such as the dominance of a theoretical framework, epistemic culture, or rhetorical style (e.g., Gilbert & Mulkay, 1984; Knorr Cetina, 1999; Pickering, 1984). Whereas in many domains of scientific research this social influence may be hard to detect or is less salient, in controversial areas of science it is made more visible.
Socially produced uncertainty or ignorance can take several forms (Böschen, Kastenhofer, Rust, Soentgen, & Wehling, 2010; Kleinman & Suryanarayanan, 2012). Some research may not get done because existing epistemic cultures direct research elsewhere; certain questions may not get raised, leading to misleading knowledge; or inconclusive knowledge may not be taken seriously. Kleinman and Suryanarayanan's (2012) study of honeybee colony collapse disorder illustrates how epistemic practices privileged some approaches to the study of pesticide effects on bees while ignoring others, leading to the construction of these three forms of ignorance. As they describe, studies of pesticide effects on bee mortality tended toward a treatment-response laboratory approach that isolated the effects of individual pesticides on individualized bees. Consequently, these studies ignored the real-world conditions of honeybee hives, which potentially face the interactive effects of multiple chemicals, pathogens, and parasites over generations. As a result of laboratory (and field) studies, a particular pesticide may be deemed non-lethal, but if examined in combination with other interactive effects over generations of bees it may contribute to colony collapse. The key idea here is that a particular socially shaped epistemic approach led to some questions not being raised and some research not getting done. Furthermore, as Kleinman and Suryanarayanan (2012) (see also Suryanarayanan & Kleinman, 2011) describe, studies addressing the toxicity of pesticides to bees exhibited a preference for a 95% confidence level, thus increasing the likelihood of, and preference for, a false negative over a false positive. Type II errors were preferred over Type I errors, potentially leading to a situation where inconclusive knowledge was not taken seriously. Similar biases were built into the U.S. Environmental Protection Agency's (EPA) decisions about resampling soil sediment following Hurricane Katrina. EPA's focus was primarily on sites where previous contamination had been detected, potentially generating knowledge about false positives, rather than on sites where previous contamination had not been detected, ignoring the possibility of false negatives (Frickel & Vincent, 2007).
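The statistical trade-off Kleinman and Suryanarayanan describe can be sketched in a short simulation. This is an added illustration, not their analysis; the effect size, sample sizes, and threshold are invented. It shows how holding false positives to roughly 5% makes missed real effects the common outcome when true effects are small.

```python
# A hedged sketch of the Type I / Type II trade-off: a stringent alpha
# (0.05, i.e., a "95% confidence level") keeps false positives rare but
# makes false negatives common when the true effect is small.
import random
import statistics

random.seed(1)

def study_detects_effect(true_effect, n=20):
    """Simulate one small study and test at alpha = 0.05 (z approximation)."""
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treated = [random.gauss(true_effect, 1.0) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = ((statistics.variance(control) + statistics.variance(treated)) / n) ** 0.5
    return abs(diff / se) > 1.96  # "statistically significant"

trials = 2000
detected = sum(study_detects_effect(true_effect=0.3) for _ in range(trials))

# A real but small harm is flagged in only a minority of studies; the
# rest are false negatives that can read as "no conclusive evidence."
print(f"real effect detected in {detected / trials:.0%} of simulated studies")
```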
The Frickel and Vincent (2007) study also draws attention to the limits of disciplinary and regulatory based testing for environmental contaminants. As the authors describe, scientific tests are snapshots, taken at a particular point in time, that generally ignore a history and a future. For example, following Hurricane Katrina, EPA tests for lead in soils showed no greater levels than those found in soils in major cities in the U.S. However, these tests ignored the different history of how the lead got there (automobile emissions versus lead paint) and bracketed out the question of different futures for people more and less likely to be exposed to it. Because of legal constraints and the regulatory framework, Frickel and Vincent also found that soil sampling occurred only on public property and isolated only those toxicities under regulation, thus generating uncertainty or ignorance about the ecologies residents faced in their daily lives.
As these studies imply, the social production of uncertainty and ignorance increasingly has relevance not just to scientists, who revel in the opening up of new research areas, but also to non-scientists who must make choices and live within situations of non-knowledge. Key questions then arise around the communication and travels of scientific uncertainty, indeterminacy, and ignorance as they leave the pages of scientific journals and other professional discussions. How do contexts for public science alter how scientists represent scientific uncertainty? How do different expectations and diverse audiences alter whether or how uncertainty gets constructed? How do public science arenas generate new forms of uncertainty and ignorance?
Scientific Uncertainty in Public Science
Public science cannot be interpreted or analyzed, as it once was, as merely the popularization of science, though this label may still be used occasionally as a rhetorical strategy for describing some forms (Hilgartner, 1990). Bensaude-Vincent (2009) observes that the concept of popularization is historically contingent, specific to 19th- and 20th-century developments that isolated science and constructed a "public" as passive spectators or users of its products. Bensaude-Vincent argues that this conceptualization of popularization is not relevant to the period prior to the 19th century, or to today. Rather, public science is commonly understood as another forum for doing science (Shinn & Whitley, 1985), one that co-produces social, political, economic, or other arrangements (Jasanoff, 2004). This does not mean that scientists necessarily hold positive attitudes about an activity that is increasingly considered part of their job. Scientists express concern about whether the public will sufficiently understand science, especially when called upon to judge science-based policy issues (Besley & Nisbet, 2013). Besley (2015) finds that scientists are more likely to engage in public science activity (measured here by online participation) when they feel they have sufficient time and efficacy and when they perceive that it is an important activity for science.
Increasingly, arenas for public science have expanded beyond traditional venues such as television, newspapers, the Internet, policy proceedings, courtrooms, and the literature of stakeholder organizations to include various forums for citizen science and public engagement. Within these forums, traditional one-directional communication between scientists and the public is normatively and empirically challenged (Irwin & Wynne, 1996; Wynne, 1995). The notion of "the public" is also challenged as we increasingly recognize diversity in the public sphere. This diversity moves beyond the distinction between proponents and opponents to include diversity along lines of gender, race, ethnicity, social class, urban/rural, inner city/suburban, young/old, stakeholder/non-stakeholder, and so on. Public science is really about multiple publics. The notion of categorical and reified publics in some sort of natural state is also challenged by the idea that publics are emergent through public issues, participatory devices, and public interventions (Chilvers & Kearnes, 2016). However, this understanding of publics, as reflected in the public understanding of science literature, has not necessarily been followed in national efforts to increase and improve public engagement with science and technology (Davies, 2013).
The instability in the notion of a "public" in public science also brings instability to "certain" or "uncertain" science. What might seem like certain science to scientists, once the uncertainties of laboratory science are managed and order is produced (e.g., Collins, 1985; Latour, 1987; Lynch, 1985), could easily transition to uncertain science as various public concerns, values, interests, and knowledges mediate its translation in public science. Scientific knowledge potentially becomes destabilized in the messiness, or even the orderliness, of public science. This does not necessarily mean that certain science is being purposely polluted to appear as uncertain science for political gain (e.g., Oreskes & Conway, 2010). Rather, to the extent that public science becomes more like public engagement, different norms, concerns, and local knowledges become relevant, potentially transforming stable and definitive scientific knowledge into new sets of questions and situations of relevance that may lead to uncertainties. These uncertainties could be analytically treated as an emergent black box, as stable as the original scientific knowledge claims, or as socially constructed and therefore contingent on the relevant local domain.
To Express or Not Express Scientific Uncertainty
One set of studies addresses the related questions of whether scientists do in fact tend to communicate scientific uncertainties in public science, the challenges of doing so, and the implications for public understanding of and respect for science. These studies start with the assumption that scientific uncertainty emerges within scientific activity, rather than in public contexts as discussed above. Scientific uncertainty is treated as an objective feature of science. The concern of this research is whether and how it is expressed in public domains. As discussed below, this starting assumption increasingly needs to be questioned and challenged, especially in areas of science closely associated with societal concerns such as environmental or health-related risks. However, the value of these studies lies in their connection to the moral basis for communicating uncertainty discussed above. Scientists prefer to tell the truth as they see it, including telling the truth about what they do not know or are uncertain of. So why might they choose not to, and what are the implications of transparency about uncertainties?
It is from this standpoint that a 2005 editorial in Nature poses the question of how scientists should address scientific uncertainty when communicating with journalists. Rather than hiding it, under the assumption that the public won't understand how science works, the editorial recommends being transparent about uncertainties as scientific evidence is presented. This might mean incorporating discrepant or minority views, potentially elevating maverick science, but the editorial goes on to recommend contextualizing such views within the body of research developed by a learned society. The editorial also advocates further development of Science Media Centres (now existing in several nations) to serve as intermediaries between science, journalists, and the public. The development of Science Media Centres assumes that journalists need help in their coverage of science beyond simply interviewing scientific experts. Not everyone agrees with this assumption. A participant observation study of an "experts and journalists workshop" found, for instance, that the participating journalists held a nuanced understanding of their relationship to scientific sources (Schneider, 2010). For example, they were aware that scientists varied their discussion of uncertainty across different contexts.
Scientists’ Views
Studies that have built upon this editorial have considered such research questions as scientists' views about communicating uncertainty, the public's perception of science and scientists when uncertainty is expressed, and the implications of scientific uncertainty for public knowledge and beliefs. Interview and survey data suggest that scientists are indeed concerned about public understanding of scientific uncertainty. In a study of food risk communication, Cope, Frewer, Houghton, Rowe, Fischer, and de Jonge (2010) find that scientific experts feel that uncertainties associated with food risks should not be communicated to the public because most people would misunderstand their implications. Food scientists also feel the public desires conclusive evidence, so expressions of uncertainty might undermine the authority of science (Frewer, Hunt, Brennan, Kuznesof, Ness, & Ritson, 2003). The researchers find that food scientists still hold a "deficit model" of the communication of science, where the public lacks information that scientists can provide one-directionally. Within this model, avoiding uncertainties might reinforce for scientists a desired social distance between expert and non-expert groups.
A concern among scientists for clarity and utility to publics may lead them to avoid scientific uncertainties, focusing instead on what they perceive as clear, simple, consensus-based knowledge addressing public concerns. Research by Bell (2006) on the Wentworth Group of Concerned Scientists in Australia provides a good example of this approach. The group was formed in 2002 as an expert voice of reason in response to public debate about "drought-proofing" the continent. As Bell reports, their manner of engagement with policy and the public involved simple scientific communication that kept uncertainty and scientific debate out of public view. This approach was perceived as pragmatic, focusing on solutions rather than problems, as the group successfully placed its consensus science within political agendas rather than confronting them. The group assumed that the public would not understand uncertainties and debates as a normal feature of science, interpreting them instead as signs of incompetent science or of vested interests, such as scientists being bought off or seeking research funding (Bell, 2006).
Scientists may wish to communicate uncertainties, but they may be concerned about how to do so in meaningful ways while avoiding the perceived misuse of uncertainty claims by interest groups or the public. This concern has been particularly salient in the case of climate change, where scientists fear that expressions of scientific uncertainty will be interpreted as a lack of consensus about the existence of human-caused global warming and will be used by interest groups to help defer political action. This concern exists even outside the United States, with its well-established opposition to climate change policy. In a survey of German climate scientists, for instance, Post (2016) finds that the more climate scientists are engaged with the media, the less they intend to identify uncertainties in climate change knowledge and the more they confirm a consensus position. One reason they express for avoiding uncertainties is that interest groups may use them for their own purposes. Even the Intergovernmental Panel on Climate Change (IPCC) has been criticized for failing to acknowledge areas of scientific uncertainty in its consensus documents. Although this criticism is not entirely accurate, Jones (2011) notes that official IPCC uncertainty guidance to authors is best suited to communicating uncertainty in the natural sciences and natural hazard risks and less suited to communicating uncertainty in social science findings, complex risks, and policy.
In some arenas, especially those involving human health, communicating uncertainties may pose an ethical dilemma for scientists and health professionals. In their review of research on communicating the results of genetics and neuroimaging studies to subjects, Morello-Frosch, Varshavsky, Liboiron, Brown, and Brody (2015) identify ethical challenges in uncertainty communication faced by environmental health scientists due to tensions between the right of study participants to know exposure levels or susceptibility to a disease and their capacity or right to act. For instance, research may indicate a link between a gene and a disease, but such traits may affect only a small portion of the population. How do researchers report uncertainties to research participants when there is elevated risk but limited ability to act? Morello-Frosch et al. (2015) find that research participants generally desire to receive these individual results, which sometimes serve as a motivating factor for enrolling in the studies. The authors' solution to this ethical dilemma involves addressing the issues related to uncertainty communication within the broader context of the study participants' community and reaching solutions there. A similar argument is developed by Fortun (2004), who notes that the goal of risk communication post-Bhopal has been to put out as much information as possible within a public right-to-know ethic, the assumption being that this information overcomes existing scientific uncertainties. Instead, Fortun (2004) suggests the development of a model of risk communication in which subjects are engaged with uncertainty rather than protected from it. The communication of uncertainty in health-risk information is complicated, however, in the sense that scientific uncertainty often crosses two or more scientific communities, such as epidemiology and toxicology (Driedger, Eyles, Elliott, & Cole, 2002). Thus, it may involve bridging knowledge and uncertainties from two disciplines, which is not easy to do in public science.
The Public's Views
How does the communication of uncertainty actually affect public perceptions of science? Research results are mixed. An experimental study that examined subjects' reactions to scientific uncertainty both within and across media reports found that it did not change beliefs about the nature of science, interest in science, or trust in scientists (Retzbach & Maier, 2015). Other research, however, indicates that public reactions to uncertainty vary from issue to issue and cross-nationally. In related food risk research, van Dijk et al. (2008) find that communication of scientific uncertainty about food risk management quality (FRMQ) had negative impacts on perceptions of FRMQ in the United Kingdom and Norway, positive impacts in Germany, and no significant impact in Greece. Uncertainty also led to an increased desire for information about regulatory enforcement. Expressions of scientific uncertainty in risk management cases like these might be interpreted as increased transparency or as scientific agencies not doing an adequate job of protecting public health. The type of issue under consideration also seems to matter. Jensen and Hurley (2010) find experimental evidence of differences in perceptions of scientists' credibility across issues. In their research, they gave subjects divergent accounts of the reintroduction of gray wolves to populated areas and of dioxin in sewage sludge. Scientific uncertainty about the safety of gray wolves lowered subjects' perception of scientists' credibility, but on the issue of dioxin in sewage sludge, uncertainty raised their credibility.
In summary, much research operates under the assumption that scientific uncertainty is a natural category within science, so the questions become whether scientists communicate uncertainty to the public and, if so, what impacts it has. This research indicates that scientists and publics have mixed perceptions about the implications of scientific uncertainty. The scientific community, on the one hand, still mostly holds a deficit model of public understanding of scientific uncertainty, assuming that people desire clear and pragmatic information and that detailed discussion of scientific uncertainty or debate would confuse the issue and open science up to misuse by interest groups. A decision whether to communicate uncertainties is also complicated in health-related cases by an ethic of the public's right to know. Research looking at public reception of scientific uncertainty, on the other hand, finds that, in general, it does not reduce perceptions of scientific credibility, but that the reaction varies cross-nationally and across issues. What is missing from many of these studies, however, is a broader analytical frame for how scientific uncertainty is constructed and played out in public science. This is where the review turns next.
Scientific Uncertainty as Contingent and Emergent in Public Science
A different approach to analyzing scientific uncertainty in public science involves treating it as more contingent and emergent rather than as a stable category emanating from science. In this approach, the boundary between certainty and uncertainty is highly contingent, open to redrawing and subversion as science engages with public concerns. As scientific knowledge (and non-knowledge, uncertainty, ignorance) confronts diverse values, norms, interests, local knowledges, power arrangements, and social movements, it is opened to new configurations of what is known and not known. Scientific knowledge and uncertainty are subject to their local construction and relevance.
Management of Scientific Uncertainty
One area of research from this standpoint focuses on how scientific uncertainty is managed in public science by those possessing sufficient power to do so. Early analyses assumed that scientists would desire to suppress existing scientific uncertainties in public science for fear that exposing them would diverge too far from popular images of science as a truth-producing machine and would potentially undermine the authority of science. These studies focused on how scientists softened, deemphasized, or ignored uncertainties in the knowledge-production process, presenting a more certain version of knowledge for public consumption (e.g., Gilbert & Mulkay, 1984; Nelkin, 1975; Pinch, 1981). In the same way that local uncertainties were transformed into more universal certainties in scientific articles and other professional contexts, early empirical evidence seemed to indicate that similar certainty-building activity occurred in public science.
This understanding was challenged by Campbell (1985) and others, who recognized that uncertainty could be carefully managed in public science and used to achieve particular goals. In examining the Mackenzie Valley Pipeline Inquiry, which addressed the social, economic, and environmental impacts of pipelines to carry natural gas in the Canadian western Arctic, Campbell noticed that experts on one side of the issue used claims about scientific uncertainty to support and legitimate their position. These claims were accepted as authoritative scientific statements in themselves despite the fact that their content alluded to a lack of scientific knowledge. Similarly, some scientists and representatives of the fossil fuel industry used scientific uncertainties for political advantage in policy deliberations over U.S. acid rain legislation in the 1980s (Zehr, 1994). The presence of scientific uncertainties, it was claimed, provided evidence that further scientific research was necessary before costly, and potentially unnecessary, sulfur dioxide and nitrogen oxide regulations were passed. These claims were persuasive in delaying acid rain legislation until the early 1990s. In a more recent analysis, Cordner (2015) describes how stakeholders in the controversy over the use of some chemicals as flame retardants in consumer products selectively used claims about scientific uncertainty (as well as more certain claims) to advantage their position politically.
One form of management involves transforming scientific ignorance or indeterminacy into scientific uncertainty, thus rendering it manageable. Drawing upon Smithson's (1989) distinction among ignorance, indeterminacy, and uncertainty discussed above, Shackley and Wynne (1996) show how deeper ignorance and indeterminacies in public climate change discussions during the 1990s were transformed into scientific uncertainties in knowledge that made climate change appear more manageable. Scientists in the 1990s were indeterminate about future levels of global warming. However, when a guess came forward of a range from 1.5 °C to 4.5 °C, it became, over time, more or less transformed into a confidence interval, with some accounts settling on a mean of 3.0 °C. Thus, what was essentially indeterminate in the scientific community suddenly had parameters in public science, making knowledge about future warming seem more manageable by and within the scientific community.
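The arithmetic behind that transformation is worth making explicit. The gloss below is added here, not drawn from Shackley and Wynne, and it assumes the 3.0 °C figure was read as the midpoint of the guessed range treated as a symmetric interval.

```latex
% A gloss on the transformation (an assumption added for illustration):
% an indeterminate guess of a range, once treated as a symmetric interval,
% acquires a "mean" by simple midpoint arithmetic,
\[
\Delta T \in [1.5\,^{\circ}\mathrm{C},\; 4.5\,^{\circ}\mathrm{C}]
\quad\Longrightarrow\quad
\overline{\Delta T} \approx \frac{1.5 + 4.5}{2} = 3.0\,^{\circ}\mathrm{C},
\]
% giving uncertainty-like parameters (a center and bounds) to what had
% been indeterminacy with no parameters at all.
```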
Explanations of the management of uncertainty in this line of research typically focus on the power structure surrounding a public issue. In this line of analysis, powerful agents shape knowledge and non-knowledge, providing resources, expertise, and so on to highlight distinct areas of scientific certainty and uncertainty. The storyline in this research is typically that industry or government develops one domain of knowledge and non-knowledge, while civil society actors engaged in citizen science challenge the boundary and, in some cases, successfully create new knowledge where none previously existed.
The management of scientific uncertainty and controversy in this way is not new, as illustrated by Brandt’s (2012) analysis of tobacco industry tactics in the 1950s. The industry hired the public relations firm Hill & Knowlton to counter scientific studies linking smoking to health effects. Rather than object to scientific research on the subject, Hill & Knowlton developed a strategy that called for more science rather than less, soliciting and supporting the views of skeptics of a causal relationship, and building and broadcasting scientific controversy. The strategy included sponsoring medical research, thus generating the perception that research on the topic was far from complete (Brandt, 2012; see also Oreskes & Conway, 2010).
Other examples abound. In an analysis of pharmaceutical controversies in general and the Merck Vioxx case in particular, McGoey (2009) introduces the concept of "capitalized uncertainty" to refer to a strategy of purposely exploiting scientific doubts over the risks of a commercial product or course of action for political or economic gain. Scientific uncertainty had value for those positioned to exploit it, both in the sense that it created a demand for solutions to the ambiguity and in the sense that, from an expert standpoint, uncertainty is unchallengeable, whereas certain knowledge is open to challenge. In these cases, McGoey (2009) finds that individuals needed social capital, in the form of degrees, positions, and the like, to successfully express "legitimate doubt."
In another study, Decoteau and Underman (2015) examine a court case called the Omnibus Autism Proceedings, in which 5,600 families of children diagnosed with autism filed claims with the National Vaccine Injury Compensation Program. The court selected test cases and consolidated them. Decoteau and Underman (2015) argue that the greater social capital possessed by the defendants effectively enabled them to police the boundary between scientific certainty and undone (uncertain) science, placing the uncertainty in genetic rather than environmental causes (one such environmental cause being vaccines). According to the defendants, scientific uncertainties did not lie within environmental causes of autism (i.e., scientists were certain that environment wasn't the cause) but in genetics, where it was assumed that the real causes lay. Decoteau and Underman (2015) imply in their analysis that potential environmental causes are under-researched.
In another case briefly discussed above, Frickel and Vincent (2007) argue that selective EPA sampling procedures following Hurricane Katrina led to undone science. Later, as court cases were settled and government support intervened, EPA's approach was challenged by civil society actors, leading to shifting boundaries between what was known and not known and undermining conclusions reached by the EPA about which areas of New Orleans were safe to occupy.
A key point to be drawn from these case studies is how the scientific certainty/uncertainty boundary shifts as an issue is brought into the domain of public science. In examining the details of a case, the analyst focuses on the activity of powerful agents, which leads to the dominance of a distinctly located boundary, and on how civil society activity challenges and decenters it. Scientific uncertainty within these analyses is treated as manageable and useful for particular stakeholders.
A question, and perhaps a challenge, for this line of research is whether researchers successfully remain agnostic about the ontological claims of both those possessing social capital (e.g., scientists, government, corporations) and those challenging the claims (e.g., social movement actors, public), and whether they desire to do so. A social constructivist approach to scientific knowledge has always emphasized that “knowledge could be otherwise,” which one observes in these cases as power arrangements are challenged (e.g., the Katrina case). Some of these studies, however, come closer to an alternative “knowledge is otherwise” (see Woolgar & Lezaun, 2015, for a discussion of this point), where the redrawing of the certainty/uncertainty boundary by civil society actors is given a privileged ontological position.
Scientific Uncertainty as Constitutive of Social Conflict
Analyses of the management of scientific uncertainty in public science assume an a priori state of uncertainty, which is then contingently managed and redrawn to achieve particular objectives. Social conflict may be generated through this process as powerful agents attempt to enforce their views of the certainty/uncertainty boundary and its implications. A different line of research emphasizes how diverse norms, values, and knowledges within public contexts generate scientific uncertainties that did not previously exist in the scientific community. In these analyses, scientific uncertainty is constitutive of social conflict rather than its cause. The scientific uncertainty that emerges cannot then be appropriately described as simply a scientific knowledge gap, but rather as a function of social and cultural factors. Scientific uncertainty becomes produced within a practical context of application (Levidow, 2001; Wynne, 1992) and can be theoretically explained as a normal feature of "post-normal science" (Funtowicz & Ravetz, 1990; Petersen, Cath, Hage, Kunseler, & van der Sluijs, 2011).
This view of "scientific" uncertainty in public science implies that generally accepted scientific knowledge is always potentially unsettled when assumptions are challenged or new domains of relevancy open up. Assumptions about the universality of science are called into question, resulting in instability that can be disconcerting for scientists who are challenged on grounds they thought were quite stable. These challenges may arise not from scientific misunderstanding or the dominance of a particular epistemic culture, though occasionally that might be the case, but as an emergent feature of the new social and cultural contexts in which science finds itself. Scientific certainty and uncertainty and social and cultural order get co-produced (Jasanoff, 2004). For example, in a case study of the potential health effects of mobile phone usage, Stilgoe (2007) reports how experts' assessment of knowledge and uncertainties about risks also constructed areas of public concern. Interviews with the U.K. National Radiological Protection Board (NRPB) revealed that their claims about the scientifically rational conclusions drawn about mobile phone safety and areas of uncertainty also rhetorically co-constructed a public that was prone to irrational conclusions due to its lack of understanding of the relevant science (Stilgoe, 2007). The NRPB retained scientific uncertainty within the domain of expertise while constructing a public that was "homogeneous, cognitively deficient, and passive, demanding reassurance rather than engagement" (Stilgoe, 2007, p. 56).
Other examples that illustrate this understanding of scientific uncertainty abound. Much research has addressed the different perceptions of risks in the United States and the European Union associated with the use of genetically modified organisms (GMOs) in agriculture (e.g., Priest, 2001). Different risk cultures exist across national contexts (Jasanoff, 2005), yielding different approaches to interpreting, analyzing, managing, and communicating risks, sometimes leading to a perception of scientific uncertainty in one context and scientific certainty in the other.
Levidow (2001) illustrates this point well in an analysis of the application of the precautionary principle in EU policy debates about GMOs. The traditional interpretation of the precautionary principle, dating back to the 1980s, is that it provides an effective policy tool under situations of scientific uncertainty. Its use in the development of the Montreal Protocol to address ozone depletion is the standard example and, in fact, the place where the tool and discourse got its start. However, as Levidow explains, in European debates on GMOs during the 1990s, applications of the precautionary principle actually opened up scientific uncertainty in areas where there was prior scientific certainty. Social conflict resulting from different visions of the future of agriculture among scientists, regulators, and different publics led to the questioning of assumptions used in scientific assessments of risk and pathways of potential harm and different norms regarding the vision of large-scale versus small-scale agriculture. As Levidow (2001) argues, scientific uncertainty about GMO safety was an outcome of this social conflict rather than a cause.
In another example, Stone (2011) discusses a case in rural India involving cotton growers. In an effort to improve production and the well-being of farmers, an international organization developed a project (the "e-Sagu" project) to bring scientific expertise and information and communication technologies to aid farmers. e-Sagu did not just provide information that farmers could adapt or reject, but also advice that tended to lean on a generalized philosophy. According to Stone's argument, cotton farmers had developed what he terms "agricultural skilling," consisting of a dynamic "performance" involving multiple and hybrid factors that combined technical, ecological, economic, and social dimensions. The generalized philosophy behind e-Sagu was not well adapted to the situational factors of daily farming. As a result, most of the farmers rejected the advice in favor of more traditional agricultural skilling (Stone, 2011). The relevance of the case lies in how scientific uncertainty emerged in the interplay between expert advice based on more universal principles and the diverse skills and knowledge of daily farming. Scientific certainties confronted the dynamics of local agricultural skills, generating social conflict that made scientific certainties look more uncertain.
Once generated, scientific uncertainties are flexibly used in some local contexts to accomplish what needs to be done with available resources and knowledge. In an ethnographic study of a hospital in Papua New Guinea, Street (2011) analyzes uncertainty as an emergent form of expertise that involves giving up control, understanding local medical practice that does not privilege biomedical knowledge, and working within contingencies imposed by technological limits and local knowledge. Unlike in Western medical practice, Street finds that uncertainty in medical diagnosis was a resource for opening up action within localized conditions of limited resources and lack of scientific authority. Uncertainties were communicated in medical charts that did not center on a clear diagnostic label. The chart did not serve the purpose of moving medical work toward diagnostic closure, but instead held biomedical knowledge still, opening up multiple pathways for action (Street, 2011). This example illustrates not only the emergence, but also the functionality, of scientific uncertainty in the sense of making biomedical knowledge more adaptable to local culture and contingencies.
Though it may be discomforting to acknowledge, given the scale and politics of the issue, similar analyses of scientific uncertainty related to global climate change are relevant (e.g., Hulme, 2009; Pielke, 2007; Rayner, 2010). Climate change scientists would be the first to acknowledge that many uncertainties remain about different dimensions of global climate change. The vast majority of mainstream scientists nevertheless agree, sometimes by vote, that the basic science of human-caused global warming and climate change is settled. It is certain science. However, as climate change science moves into the arena of public science, where it has been for some time, challenges to the settled nature of this science emerge. These challenges can be interpreted and analyzed merely as efforts of the fossil fuel industry, political conservatives, and their hired scientific experts to protect a traditional way of life and profits (e.g., McCright & Dunlap, 2000, 2003, 2010; Oreskes & Conway, 2010). That analytical approach reproduces and defends the scientific certainty boundary. Alternatively, one can explore the different values, norms, assumptions, and questions held by different publics, and by some scientists, that sometimes muddy the boundary between scientific certainty and uncertainty (Hulme, 2009). Uncertainties may emerge from questioning whether climate models sufficiently capture the physical and chemical complexities of the atmosphere (a question that receives traction in public contexts), whether global climate change is really of the order of significance of other global problems (e.g., Lomborg, 2001), whether global climate change is relevant to most people's day-to-day lives, or from other values that question human capability to engineer global environmental change (see Hulme, 2009, for elaboration). Questions are also raised about the integrity of climate change research and climate change scientists, calling attention to potential errors in key studies or the work of the IPCC and to the vested interests contained in funding streams directed to climate change research. However one personally perceives the current state of scientific knowledge on climate change, as with the case of risks associated with GMOs, the social, cultural, and political conflicts and economic interests in public contexts have destabilized scientific certainties and generated uncertainties.
The Normalcy of Scientific Uncertainty in Public Science
As the previous discussion implies, one can begin to conceptualize scientific uncertainty as a normal feature and outcome of public science. This section explores this sense of normalcy. Though there are some one-off episodes of public science (e.g., a specially arranged public engagement event, a public or political hearing on a more obscure topic, a special museum exhibit), most occasions involve a series of events and media coverage of a particular topic. The series may be orchestrated by scientists or, more often, by journalists, bloggers, social movement actors, and so on who mediate between scientists and publics or serve as scientific agents. The series of events occur over time and distance, have a history, and can be self-referential. Within this framework, scientific uncertainty becomes a normal feature as scientists protect their cognitive authority and careers through caveats and hesitancy in talk, and as accounts of science vary over time and distance. Uncertainty can emerge in areas of science that might appear quite certain within the scientific community.
Several dimensions can be identified. At an individual level, scientists may use caveats when describing the results of their research to avoid the perception that they are overconfident or unaware of a finding's limits. This practice is especially prevalent in highly salient research areas where public stakes are high. In public science these caveats become discursive modalities indicating some degree of hesitancy about the factualness of the claims (Gilbert & Mulkay, 1984).
Some scientists also prefer discussing their own current research in public science contexts rather than summarizing a particular state of science. Journalists, likewise, are more intrigued by new research projects because they add novelty to the news cycle. Journalists and scientists often connect new projects to ongoing public issues to highlight their significance (e.g., the heart risk of eating a certain type of food, the health effects of electromagnetic radiation), potentially generating a perception that the issue is unresolved. For example, during the 1990s, coverage of global climate change in the popular press was replete with stories focusing on new scientific research projects, all tied to their potential significance for resolving major questions about climate change (Zehr, 2000). Any one of those studies likely turned out to be inconsequential for the larger issue, but from readers' standpoints the stories served as indications that the larger questions remained unanswered. That is, the scientific community was uncertain.
Journalists may contribute to the construction of scientific uncertainty by following the norm of journalistic balancing, whereby different views of an issue receive attention (Boykoff & Boykoff, 2004, 2007; Dearing, 1995; Stocking, 1999; Zehr, 2000). From a journalistic standpoint, balancing indicates objectivity: the journalist is not taking a stand on a definitive account of scientific knowledge. From the reader's or viewer's standpoint, however, the different scientific views may be interpreted as scientific uncertainty. Much has been written about this practice in the global climate change case, though Boykoff (2007) notes a decline in the practice since 2002. In the contemporary media context, journalistic balancing may be less of an issue when one considers the ideological choices for news across television, newspapers, radio, and the web. Each source often draws upon its own science to legitimate a position. One public may be quite certain of its science on a particular issue, yet that view may be diametrically opposed to the view of another public. In addition, a “weight-of-evidence” reporting strategy may attenuate public perceptions of uncertainty, even while adhering to journalistic norms of presenting different truth claims (Kohl et al., 2016).
Scientific uncertainty may also be constructed across different scientists’ accounts even when each account individually appears certain. This may occur with journalistic balancing, of course, but also across incommensurable scientific accounts of an issue. In the global climate change case, for example, global warming predictions may differ considerably between climate modelers and atmospheric physicists, even though both sets of scientists may hold much certainty in their claims. Within the scientific community, such differences can often be understood and explained on epistemological grounds. In public science, on the other hand, science may be perceived as more unified, so these differing predictions appear indicative of uncertainty.
Television and other media shows oriented around science may generate suspense by building up scientific uncertainty that is often resolved later in the show. The media may create what Collins (1987) refers to as a “window of uncertainty,” used to mimic the scientific questioning process and pique the interest of viewers. Long-running science shows in the United States, like Nova, commonly use this strategy. A sense of scientific controversy may be generated when different scientists’ views are presented. Questions are raised, but they are carefully managed so that a clear scientific resolution lies on the horizon. When a past controversy is depicted, the questions are typically resolved by the end of the show; when the controversy is ongoing, a clear path to scientific resolution is usually laid out for the future. Scientific uncertainty is depicted in these shows as a normal feature of science rather than something that disrupts the orderly production of scientific knowledge.
Studies of the framing of science on television commonly identify frames that center on scientific uncertainty and controversy. For example, research on stories about molecular medicine on German television identifies four main frames, two of which focus on uncertainty: “scientific uncertainty and controversy” and “conflicting scientific evidence” (Ruhrmann, Guenther, & Kessler, 2015). Overall, only 25% of the programs examined in this study dealt with scientific certainty, while 63% dealt with uncertainty.
Conclusion: More Effective Forms of Communicating Scientific Uncertainty?
This article has explored some of the literature on how scientific uncertainty (and related forms of ignorance, non-knowledge, knowledge gaps, etc.) is constructed and communicated in scientific contexts and in public science. Communicative conventions dominate how uncertainty is represented in professional contexts such as a scientific article or a presentation to other scientists. In public science, however, scientific uncertainty may be less conventionally constructed, though it is open to being managed and used for particular purposes. Research on whether scientists willingly express uncertainties in public science indicates that they remain wary of doing so, fearing that uncertainty may be misunderstood or used by stakeholders to block needed action. This research also indicates, however, that the public generally desires this information, even in health-related circumstances where decision-making and action are encumbered. Communication of uncertainty also does not seem to alter public perceptions of scientific authority, though more research needs to be done on this question.
A different line of research explores scientific uncertainty as an emergent phenomenon in public science, even under conditions where scientists may be individually quite certain. Scientific uncertainty can be constructed and managed to serve vested interests and can become a normal outcome of the interaction between scientific knowledge and the values, norms, local knowledges, and interests of diverse publics. In this sense, scientific uncertainty can be an outcome of social conflict rather than a cause. This latter area of research raises normative questions about how to communicate scientific uncertainty more effectively and transparently. The scientific community is challenged to improve understandings of scientific uncertainties in public science domains where those uncertainties emerge from social conflict. Natural scientists tend to draw upon the language of mathematics for expressing scientific uncertainty, placing it in probabilistic terms. When communicating with non-scientists, however, they recognize the need to use other language but find it difficult to do so while maintaining scientific legitimacy (Landström, Hauxwell-Baldwin, Lorenzoni, & Rogers-Hayden, 2015). Most natural scientists also feel that policymakers and the public have a poor understanding of scientific uncertainty, sometimes interpreting it as an indication of ignorance (Landström et al., 2015).
Part of the problem may be located in the dominant view of scientific expertise as something that should close down debate, leading to rational policies and decisions. A solution may lie in an altered view of scientific expertise, in which scientists serve the purpose of what Stirling, Leach, and colleagues at the University of Sussex (e.g., Leach, Scoones, & Stirling, 2010) refer to as the “opening up” of scientific expertise, or what Pielke (2007) refers to as performing the role of “honest broker.” The general point is that scientists would be encouraged to proactively open up scientific uncertainties involving complex issues, recognizing their social, cultural, and economic dimensions, rather than closing them down or ignoring them. In Pielke’s analysis, the honest broker opens up policy alternatives, openly recognizing the scientific uncertainties built into each. In Stirling and Gee’s (2002) words, it involves advocating precaution in the face of uncertainties in scientific knowledge, becoming more humble about the ability of science to address public issues, recognizing more completely the complexities and indeterminacies of each policy path, and encouraging participation in decision making by the full range of interested and affected parties. What is lost with this approach are quick scientific solutions to public issues; a further cost, perhaps, is additional erosion of the public myth of science as an authoritative and definitive knowledge system. What might be gained is more honesty about uncertainties and ignorance in public science, potentially undercutting the effectiveness of some interest groups’ strategy of claiming, conspiratorially or not, that scientists are hiding uncertainty and ignorance and silencing maverick views.
Further Reading
- Chilvers, J., & Kearnes, M. (Eds.). (2016). Remaking participation: Science, environment and emergent publics. New York: Routledge.
- Davies, S. R. (2013). Constituting public engagement: Meanings and genealogies of PEST in two U.K. studies. Science Communication, 35, 687–707.
- Frickel, S., Gibbon, S., Howard, J., Kempner, J., Ottinger, G., & Hess, D. J. (2010). Undone science: Charting social movements and civil society challenges to research agenda setting. Science, Technology, & Human Values, 35, 444–473.
- Friedman, S. M., Dunwoody, S., & Rogers, C. L. (Eds.). (1999). Communicating uncertainty: Media coverage of new and controversial science. Mahwah, NJ: Lawrence Erlbaum.
- Gross, M., & McGoey, L. (Eds.). (2015). Routledge international handbook of ignorance studies. New York: Routledge.
- Hess, D. J. (2016). Undone science: Social movements, mobilized publics, and industrial transitions. Cambridge, MA: MIT Press.
- McGoey, L. (Ed.). (2014). An introduction to the sociology of ignorance: Essays on the limits of knowing. New York: Routledge.
- Peters, H. P., & Dunwoody, S. (Eds.). (2016). Scientific uncertainty in the media [Special issue]. Public Understanding of Science, 25, 893–1013.
References
- Bell, S. (2006). Concerned scientists, pragmatic politics, and Australia’s green drought. Science and Public Policy, 33, 561–570.
- Bensaude-Vincent, B. (2009). A historical perspective on science and its “others.” Isis, 100, 359–368.
- Besley, J. C. (2015). What do scientists think about the public and does it matter to their online engagement? Science and Public Policy, 42, 201–214.
- Besley, J. C., & Nisbet, M. C. (2013). How scientists view the public, the media, and the political process. Public Understanding of Science, 22, 644–659.
- Böschen, S., Kastenhofer, K., Rust, I., Soentgen, J., & Wehling, P. (2010). Scientific nonknowledge and its political dynamics: The cases of agri-biotechnology and mobile phoning. Science, Technology, & Human Values, 35, 783–811.
- Boykoff, M. T. (2007). Flogging a dead norm? Newspaper coverage of anthropogenic climate change in the United States and United Kingdom from 2003 to 2006. Area, 39, 470–481.
- Boykoff, M. T., & Boykoff, J. (2004). Balance as bias: Global warming and the US prestige press. Global Environmental Change, 14, 125–136.
- Boykoff, M. T., & Boykoff, J. (2007). Climate change and journalistic norms: A case study of US mass-media coverage. Geoforum, 38, 1190–1204.
- Brandt, A. M. (2012). Inventing conflicts of interest: A history of tobacco industry tactics. American Journal of Public Health, 102, 63–71.
- Campbell, B. L. (1985). Uncertainty as symbolic action in disputes among experts. Social Studies of Science, 15, 429–453.
- Chilvers, J., & Kearnes, M. (2016). Science, democracy, and emergent publics. In J. Chilvers & M. Kearnes (Eds.), Remaking participation: Science, environment, and emergent publics. New York: Routledge.
- Collins, H. M. (1985). Changing order: Replication and induction in scientific practice. Newbury Park, CA: SAGE.
- Collins, H. M. (1987). Certainty and the public understanding of science: Science on television. Social Studies of Science, 17, 689–713.
- Cope, S., Frewer, L. J., Houghton, J., Rowe, G., Fischer, A. R. H., & de Jonge, J. (2010). Consumer perceptions of best practice in food risk communication and management: Implications for risk analysis policy. Food Policy, 35, 349–357.
- Cordner, A. (2015). Strategic science translation and environmental controversies. Science, Technology, & Human Values, 40, 915–938.
- Davies, S. R. (2013). Constituting public engagement: Meanings and genealogies of PEST in two U.K. studies. Science Communication, 35, 687–707.
- Dearing, J. W. (1995). Newspaper coverage of maverick science: Creating controversy through balancing. Public Understanding of Science, 4, 341–361.
- Decoteau, C. L., & Underman, K. (2015). Adjudicating non-knowledge in the Omnibus Autism Proceedings. Social Studies of Science, 45, 471–500.
- Driedger, S. M., Eyles, J., Elliott, S. D., & Cole, D. C. (2002). Constructing scientific authorities: Issue framing of chlorinated disinfection byproducts in public health. Risk Analysis, 22, 789–802.
- Fortun, K. (2004). From Bhopal to the informating of environmentalism: Risk communication in historical perspective. Osiris, 19, 283–296.
- Frewer, L. J., Hunt, S., Brennan, M., Kuznesof, S., Ness, M., & Ritson, C. (2003). The views of scientific experts on how the public conceptualize uncertainty. Journal of Risk Research, 6, 75–85.
- Frickel, S., Gibbon, S., Howard, J., Kempner, J., Ottinger, G., & Hess, D. J. (2010). Undone science: Charting social movements and civil society challenges to research agenda setting. Science, Technology, & Human Values, 35, 444–473.
- Frickel, S., & Vincent, M. B. (2007). Katrina, contamination, and the unintended organization of ignorance. Technology in Society, 29, 181–188.
- Funtowicz, S. O., & Ravetz, J. R. (1990). Uncertainty and quality in science for policy. Dordrecht, Netherlands: Kluwer Academic.
- Gibbons, M., Limoges, C., Nowotny, H., Schwartzman, S., Scott, P., & Trow, M. (1994). The new production of knowledge: The dynamics of science and research in contemporary societies. Thousand Oaks, CA: SAGE.
- Gilbert, G. N., & Mulkay, M. (1984). Opening Pandora’s box. Cambridge, U.K.: Cambridge University Press.
- Hilgartner, S. (1990). The dominant view of popularization: Conceptual problems, political uses. Social Studies of Science, 20, 519–539.
- Hulme, M. (2009). Why we disagree about climate change: Understanding controversy, inaction, and opportunity. New York: Cambridge University Press.
- Irwin, A., & Wynne, B. (1996). Misunderstanding science? The public reconstruction of science and technology. Cambridge, U.K.: Cambridge University Press.
- Jasanoff, S. (Ed.). (2004). States of knowledge: The co-production of science and social order. New York: Routledge.
- Jasanoff, S. (2005). Designs on nature: Science and democracy in Europe and the United States. Princeton, NJ: Princeton University Press.
- Jensen, J. D., & Hurley, R. J. (2010). Conflicting stories about public scientific controversies: Effects of news coverage and divergence on scientists’ credibility. Public Understanding of Science, 21, 689–704.
- Jones, R. N. (2011). The latest iteration of IPCC uncertainty guidance: An author perspective. Climatic Change, 108, 733–743.
- Kleinman, D. L., & Suryanarayanan, S. (2012). Dying bees and the social production of ignorance. Science, Technology, & Human Values, 38, 492–517.
- Knorr Cetina, K. (1999). Epistemic cultures: How the sciences make knowledge. Cambridge, MA: Harvard University Press.
- Kohl, P. A., Kim, S. Y., Peng, Y., Akin, H., Koh, E. J., Howell, A., et al. (2016). The influence of weight-of-evidence strategies on audience perceptions of (un)certainty when media cover contested science. Public Understanding of Science, 25, 976–991.
- Landström, C., Hauxwell-Baldwin, R., Lorenzoni, I., & Rogers-Hayden, T. (2015). The (mis)understanding of scientific uncertainty? How experts view policy-makers, the media and publics. Science as Culture, 24, 276–298.
- Latour, B. (1987). Science in action. Cambridge, MA: Harvard University Press.
- Latour, B., & Woolgar, S. (1979). Laboratory life: The social construction of scientific facts. Beverly Hills, CA: SAGE.
- Leach, M., Scoones, I., & Stirling, A. (2010). Dynamic sustainabilities: Technology, environment, social justice. London: Earthscan.
- Lehmkuhl, M., & Peters, H. P. (2016). Constructing (un-)certainty: An exploration of journalistic decision-making in the reporting of neuroscience. Public Understanding of Science, 25, 909–926.
- Levidow, L. (2001). Precautionary uncertainty: Regulating GM crops in Europe. Social Studies of Science, 31, 842–874.
- Lomborg, B. (2001). The skeptical environmentalist. Cambridge, U.K.: Cambridge University Press.
- Lynch, M. (1985). Art and artifact in laboratory science: A study of shop work and shop talk in a research laboratory. London: Routledge & Kegan Paul.
- MacKenzie, D. A. (1981). Statistics in Britain, 1865–1930: The social construction of scientific knowledge. Edinburgh: Edinburgh University Press.
- McCright, A. M., & Dunlap, R. E. (2000). Challenging global warming as a social problem. Social Problems, 47, 499–522.
- McCright, A. M., & Dunlap, R. E. (2003). Defeating Kyoto: The conservative movement’s impact on US climate change policy. Social Problems, 50, 348–373.
- McCright, A. M., & Dunlap, R. E. (2010). Anti-reflexivity: The American conservative movement’s success in undermining climate science and policy. Theory, Culture & Society, 27, 100–133.
- McGoey, L. (2009). Pharmaceutical controversies and the performative value of uncertainty. Science as Culture, 18, 151–164.
- McGoey, L. (2012). Strategic unknowns: Towards a sociology of ignorance. Economy and Society, 41, 1–16.
- Merton, R. K. (1942). Science and technology in a democratic order. Journal of Legal and Political Sociology, 1, 115–126.
- Merton, R. K. (1987). Three fragments from a sociologist’s notebook: Establishing the phenomenon, specified ignorance, and strategic research materials. Annual Review of Sociology, 13, 1–28.
- Morello-Frosch, R., Varshavsky, J., Liboiron, M., Brown, P., & Brody, J. G. (2015). Communicating results in post-Belmont era biomonitoring studies: Lessons from genetics and neuroimaging research. Environmental Research, 136, 363–372.
- Nature. (2005). Responding to uncertainty [Editorial]. Nature, 437, 1.
- Nelkin, D. (1975). The political impact of technical expertise. Social Studies of Science, 5, 35–54.
- Oreskes, N., & Conway, E. M. (2010). Merchants of doubt. New York: Bloomsbury Press.
- Petersen, A. C., Cath, A., Hage, M., Kunseler, E., & van der Sluijs, J. P. (2011). Post-normal science practice at the Netherlands Environmental Assessment Agency. Science, Technology, & Human Values, 36, 362–388.
- Pickering, A. (1984). Constructing quarks: A sociological history of particle physics. Chicago: University of Chicago Press.
- Pielke, R., Jr. (2007). The honest broker: Making sense of science in policy and politics. New York: Cambridge University Press.
- Pinch, T. (1981). The sun-set: The presentation of certainty in scientific life. Social Studies of Science, 11, 131–158.
- Pinto, M. F. (2015). Tensions in agnotology: Normativity in the studies of commercially driven ignorance. Social Studies of Science, 45, 294–315.
- Post, S. (2016). Communicating science in public controversies: Strategic considerations of the German climate scientists. Public Understanding of Science, 25, 61–70.
- Priest, S. H. (2001). A grain of truth: The media, the public, and biotechnology. Lanham, MD: Rowman & Littlefield.
- Proctor, R. N. (2008). Agnotology: A missing term to describe the cultural production of ignorance (and its study). In R. N. Proctor & L. Schiebinger (Eds.), Agnotology: The making and unmaking of ignorance. Stanford, CA: Stanford University Press.
- Rayner, S. (2010). How to eat an elephant: A bottom-up approach to climate policy. Climate Policy, 10, 615–621.
- Retzbach, A., & Maier, M. (2015). Communicating scientific uncertainty: Media effects on public engagement with science. Communication Research, 42, 429–456.
- Ruhrmann, G., Guenther, L., & Kessler, S. H. (2015). Frames of scientific evidence: How journalists represent the (un)certainty of molecular medicine in science television programs. Public Understanding of Science, 24, 681–696.
- Sarewitz, D. (1996). Frontiers of illusion. Philadelphia: Temple University Press.
- Schneider, J. (2010). Making spaces for the “nuances of truth”: Communication and uncertainty at an environmental journalists’ workshop. Science Communication, 32, 171–201.
- Shackley, S., & Wynne, B. (1996). Representing uncertainty in global climate change science and policy: Boundary-ordering devices and authority. Science, Technology, & Human Values, 21, 275–302.
- Shapin, S. (1994). A social history of truth: Civility and science in seventeenth-century England. Chicago: University of Chicago Press.
- Shapin, S. (2008). The scientific life: A moral history of a late modern vocation. Chicago: University of Chicago Press.
- Shinn, T., & Whitley, R. (Eds.). (1985). Expository science: Forms and functions of popularization. Sociology of the sciences yearbook. Boston: Reidel.
- Smithson, M. (1989). Ignorance and uncertainty: Emerging paradigms. New York: Springer-Verlag.
- Stilgoe, J. (2007). The (co-)production of public uncertainty: UK scientific advice on mobile phone health risks. Public Understanding of Science, 16, 45–61.
- Stirling, A., & Gee, D. (2002). Science, precaution, and practice. Public Health Reports, 117 (November–December), 521–533.
- Stocking, S. H. (1999). How journalists deal with scientific uncertainty. In S. M. Friedman, S. Dunwoody, & C. L. Rogers (Eds.), Communicating uncertainty: Media coverage of new and controversial science. Mahwah, NJ: Lawrence Erlbaum.
- Stocking, S. H., & Holstein, L. W. (1993). Constructing and reconstructing scientific ignorance: Ignorance claims in science and journalism. Knowledge: Creation, Diffusion, Utilization, 15, 186–210.
- Stone, G. D. (2011). Contradictions in the last mile: Suicide, culture, and e-agriculture in rural India. Science, Technology, & Human Values, 36, 759–790.
- Street, A. (2011). Artefacts of not-knowing: The medical record, the diagnosis, and the production of uncertainty in Papua New Guinean biomedicine. Social Studies of Science, 41, 815–834.
- Suryanarayanan, S., & Kleinman, D. L. (2011). Disappearing bees and reluctant regulators. Issues in Science and Technology, 27, 31–36.
- Suryanarayanan, S., & Kleinman, D. L. (2013). Be(e)coming experts: The controversy over insecticides in the honey bee colony collapse disorder. Social Studies of Science, 43, 215–240.
- Turner, F. M. (1980). Public science in Britain, 1880–1919. Isis, 71, 589–608.
- van Dijk, H., Houghton, J., van Kleef, E., van der Lans, I., Rowe, G., & Frewer, L. (2008). Consumer responses to communication about food risk management. Appetite, 50, 340–352.
- Woolgar, S., & Lezaun, J. (2015). Missing the (question) mark? What is a turn to ontology? Social Studies of Science, 45, 462–467.
- Wynne, B. (1992). Uncertainty and environmental learning: Reconceiving science and policy in the preventive paradigm. Global Environmental Change, 2, 111–127.
- Wynne, B. (1995). Public understanding of science. In S. Jasanoff, G. E. Markle, J. C. Petersen, & T. Pinch (Eds.), Handbook of science and technology studies. Thousand Oaks, CA: SAGE.
- Zehr, S. C. (1994). The centrality of scientists and the translation of interests in the U.S. acid rain controversy. Canadian Review of Sociology and Anthropology, 31, 325–353.
- Zehr, S. C. (1999). Scientists’ representations of uncertainty. In S. M. Friedman, S. Dunwoody, & C. L. Rogers (Eds.), Communicating uncertainty: Media coverage of new and controversial science. Mahwah, NJ: Lawrence Erlbaum.
- Zehr, S. C. (2000). Public representations of scientific uncertainty about global climate change. Public Understanding of Science, 9, 85–103.
Related Articles
- Science and Communication
- The Politics of Scientific Knowledge
- Types of Explanations in Health and Risk Messaging
- Using Quotations in Health and Risk Message Design
- Health and Risk Policymaking, the Precautionary Principle, and Policy Advocacy
- Conflicting Information and Message Competition in Health and Risk Messaging