Conflicting Information and Message Competition in Health and Risk Messaging
Summary and Keywords
Clinicians, medical and public health researchers, and communication scholars alike have long been concerned about the effects of conflicting health messages in the broader public information environment. Not only have these messages been referred to in many ways (e.g., “competing,” “contradictory,” “inconsistent,” “mixed,” “divergent”), but they have been conceptualized in distinct ways as well—perhaps because they have been the subject of study across health, science, and political communication domains. Regardless of specific terminology and definitions, the concerns have been consistent throughout: conflicting health messages exist in the broader environment, they are noticed by the public, and they impact public understanding and health behavior. Yet until recently, the scientific evidence base to substantiate these concerns has been remarkably thin. In the past few years, there has been a growing body of rigorous empirical research documenting the prevalence of conflicting health messages in the media environment. There is also increasing evidence that people perceive conflict and controversy about several health topics, including nutrition and cancer screening. Although historically most studies have stopped short of systematically capturing exposure to conflicting health messages—which is the all-important first step in demonstrating effects—there have been some recent efforts here. Taken together, a set of qualitative (focus group) and quantitative (observational survey and experimental) studies, guided by diverse theoretical frameworks, now provides compelling evidence that there are adverse outcomes of exposure to conflicting health information. The origins of such information vary, but understanding epidemiology and the nature of scientific discovery—as well as how science and health news is produced and understood by the public—helps to shed light on how conflicting health messages arise. 
As evidence of the effects of conflicting messages accumulates, it is important to consider not just the implications of such messages for health and risk communication, but also whether and how we can intervene to address the effects of exposure to message conflict.
Conceptualizing Conflicting, Contradictory, or Competing Health Messages
Communication scholars have long been concerned about the influence of conflicting, contradictory, or competing information on the public. Historically, academic attention to this concern has been concentrated in the subfield of political communication. The study of competing information flows can be traced to voting studies in the 1940s. Lazarsfeld, Berelson, and Gaudet described “cross-pressures” as “conflicts or inconsistencies … which influence vote decision” (1968, p. 53). This research referred to cross-pressures or competing information flows in the interpersonal environment; such cross-cutting social networks remain an important area of study, with studies on the effects of heterogeneous social environments on outcomes including political participation and engagement (e.g., Mutz, 2002; Huckfeldt, Mendez, & Osborn, 2004) and partisan commitment (e.g., Knoke, 1990). Additionally, increasing attention is being devoted to competing message flows in a mediated environment—whether about political candidates in election campaigns (e.g., Nir & Druckman, 2008; Dalton, Beck, & Huckfeldt, 1998; Zaller, 1996) or policy issues such as war (e.g., Zaller, 1992)—and the effects of such information flows on public opinion or other outcomes. Yet despite this foundational work in the political communication space, only recently have health communication scholars begun conducting rigorous theory-driven research on conflicting information and its effects.
Conflicting health messages, which have also been referred to as “competing,” “contradictory,” “inconsistent,” “mixed,” and “divergent” messages, have been conceptualized in several ways. The varied terminology and distinct conceptualizations may be due, at least in part, to the fact that these messages have been the subject of study across health, science, and political communication domains. First, conflicting messages have been conceptualized as messages that are two-sided. Historically, this conceptualization has been the most dominant, although scholars have not always explicitly described these messages using terms such as conflicting or contradictory. Two-sided messages have been defined as messages that provide both supporting (or positive) and opposing (or negative) information about a particular issue, in contrast to one-sided messages, which provide only a single point of view (often supporting information only; O’Keefe, 1999). For example, Nan and Daily (2015) examined the effects of mixed online information about the human papillomavirus (HPV) vaccine—specifically, information found in user-generated blogs—on perceived vaccine efficacy and safety. Participants were randomly assigned either to a no-blog control or two-blog treatment (one positive and one negative blog featuring opposing views about the vaccine). In another recent example, Chang (2013, 2015) randomly assigned Taiwanese participants to read either a one-sided news story that discussed positive research findings about or positive health outcomes associated with a food or supplement (e.g., tofu, vitamin B6, milk), or a two-sided story that provided both positive and negative findings or outcomes associated with the food or supplement.
Conceptualizing conflicting health messages as those that are two-sided is perhaps most comparable to the conceptualization of competing frames. Competitive framing is a growing area of research within political and science communication that examines how divergent perspectives on an issue may influence public opinion (Borah, 2011; Chong & Druckman, 2007; Niederdeppe, Gollust, & Barry, 2014; Nisbet, Hart, Myers, & Ellithorpe, 2013; Van Klingeren, Boomgaarden, & De Vreese, 2017; Wise & Brewer, 2010). In their seminal article calling for increased study of competitive framing effects, Chong and Druckman (2007) refer to competitive frames—which are either “dual (exposure to both frames in equal quantities)” or “asymmetric (exposure to both frames in unequal quantities)”—as two-sided, in contrast to asymmetric one-sided frames, which involve exposure to only one frame (p. 103). Yet similar to many two-sided health message studies, typically competitive framing effects studies do not explicitly describe these frames as offering conflicting or contradictory information.
Although defining message conflict in terms of sidedness is not inaccurate, there are nuances that might not be captured in this overarching definition. Nagler (2010) laid out three possible conceptualizations of conflicting health messages (Figure 1) and, in so doing, highlighted some of these nuances and distinctions across potential definitions. First, conflicting messages could be defined as those that provide information about two distinct behaviors and their effects on the same outcome (Figure 1A). For example, someone might come across a message that links a first behavior (like running five times a week for 30 minutes) to improved heart health. Subsequently, he or she might be exposed to a different message, this one linking a second behavior (like walking five times a week for 60 minutes) to improved heart health. The first message emphasizes that running yields greater cardiac benefits than other forms of cardiovascular exercise; the second makes the same argument for walking. Running and walking are certainly related behaviors—they both fall under the larger behavioral category of exercise—but they are nonetheless distinct. If someone were exposed to both messages, he or she might wonder whether running or walking is preferable, if the goal is optimizing heart health.
Second, conflicting messages could be conceptualized as those that offer information about the same behavior producing two distinct outcomes (Figure 1B). Here, someone might read or hear different messages about wine consumption (a single behavior) being linked to both heart health (outcome #1) and increased breast cancer risk (outcome #2). This conceptualization is not altogether distinct from the two-sided definition advanced in other studies—certainly, in the wine example provided, one would be exposed to both positive (supporting) and negative (opposing) information about the health effects of wine—but it provides greater specificity. It suggests that people might be confronted by decisional conflict: the information itself might not conflict (in fact, wine and other forms of alcohol have legitimately been linked with both heart health and cancer risk), but one might perceive conflict and therefore question whether he or she ought to drink wine and, if so, how much. To date, this conceptualization of conflicting health messages has guided several studies, which have focused on conflicting or contradictory nutrition information (Lee, Nagler, & Wang, 2017; Nagler & Hornik, 2012; Nagler, 2014).
The scenario in Figure 1A includes messages about competing behaviors and their effects on a single outcome; in contrast, the scenario in Figure 1B includes messages about a single behavior producing two distinct outcomes. Last, Figure 1C presents a third possible scenario: messages that provide competing claims about a particular behavior resulting in a particular health outcome. Assume the behavior in question is milk consumption. Claim #1 might say that drinking organic milk is better for you than non-organic milk because it reduces the risk of cancer, while claim #2 might advocate drinking non-organic milk over organic milk, arguing that there is no increased risk of cancer. This sort of message might be encountered most often in food-related advertising (i.e., organic and non-organic milk producers vying for consumers). However, it is also consistent with a recent definition of conflicting information proposed by Carpenter and colleagues (2016): “two or more health-related propositions [statements or assertions about a health-related issue] that are logically inconsistent with one another” (p. 1175). Whereas the conceptualization offered in Figure 1B reflects decisional conflict, Carpenter et al.’s conceptualization reflects informational conflict. Here, people are confronted by two or more distinct propositions that they cannot simultaneously engage in or believe. For example, faced with ongoing expert disagreement about the age at which women should begin mammography screening for breast cancer and the frequency with which they should be screened, a woman cannot decide to initiate screening at age 40, age 45, and age 50. These are logically inconsistent (and thus conflicting) recommendations (or claims, per Figure 1C) issued by different clinical and professional organizations. A woman, in conversations with her provider, must choose which recommendation to follow.
Future research will need to provide greater clarity on these distinct conceptualizations and, perhaps more important, assess whether the effects of exposure to conflicting messages vary across conceptualizations.
It is worth noting that, across conceptual definitions, there are two ways in which conflicting information can take shape: either as “messages about contradiction” or as “contradictory messages” (Nagler, 2010). “Messages about contradiction” refer to discrete units of content that contain contradictory or conflicting information. For example, drawing on the wine example used to illustrate Figure 1B, a message about contradiction would involve a journalist describing both the positive and negative health effects of wine consumption in a single news story. A content analysis of nutrition media messages showed that journalists often underscore such contradiction for the reader by using specific terms (e.g., “conflicting findings,” “flip-flop advice”) and, at times, tongue-in-cheek editorializing (e.g., “How often do we read that what was once ‘bad’ is good again?,” “Is this yet another one of those eggs-are-good-for-you-eggs-are-bad-for-you routines?”) (Nagler, 2010, pp. 79–80).
In contrast, “contradictory messages” refers to units of content that feature information or claims that do not specifically underscore contradiction, but which, when part of a broader media diet, could lead the audience to infer contradiction. Using the wine example, one news story might report on the health benefits of wine consumption, whereas another story might report on the risks of wine consumption; neither story presents both the risks and benefits, nor do they underscore contradiction for the audience. Rather, contradiction must be inferred by someone who sees the two (or more) stories. Content analyses have shown that such messages exist in the news media across a range of health topics. In the nutrition context, Greiner, Smith, and Guallar (2010) identified news stories that discussed the health benefits or health risks of fish consumption. They found that both types of stories were prevalent, although risk messages outweighed benefit messages four to one. In the tobacco space, Durrant and colleagues (2003) found that while some Australian news stories featured a positive slant toward tobacco control, other stories contained a negative slant; members of this team found similar results in U.S. news coverage (Smith, Wakefield, & Edsall, 2006). Someone who is exposed to both positive and negative news coverage of tobacco control could perceive conflicting or competing claims about tobacco control objectives, which could, in turn, influence subsequent outcomes (e.g., support for tobacco control policy). Ultimately, discrete messages about contradiction are likely to be less prevalent than contradictory messages, but they could be more powerful, because they do not depend on the audience coming across competing information in different news stories or advertisements and inferring conflict (Nagler, 2010).
Origins of Conflicting Information in the Health Domain
Before assessing the prevalence and effects of conflicting messages, it is useful first to understand how these messages arise. Considering epidemiology and the nature of scientific discovery is central to understanding the origins of conflicting messages; so, too, is considering research in science communication and public understanding of science.
A primary objective of epidemiology is to identify the etiology or cause of disease, including a disease’s associated risk factors. Epidemiologists use a number of study designs to assess disease etiology, and these designs have distinct implications for causal inference—and, in turn, help us to understand how seemingly conflicting health messages can arise. For example, case-control studies and cohort studies, both observational studies, are central epidemiologic approaches. Sample selection is the central difference between the two study designs. In case-control studies, subjects with disease (cases) are compared to those without the disease (controls). In cohort studies, subjects who are exposed are compared with subjects who are not exposed. Just as the study designs differ, so, too, do the outcome measurements. In case-control studies, an odds ratio is calculated—the odds of exposure among cases relative to the odds of exposure among controls—whereas in cohort studies, the incidence of the disease in exposed people is compared to the incidence of disease in non-exposed people (or the odds that the disease will develop in exposed people is compared to the odds it will develop in non-exposed people). Case-control studies are useful when the disease is rare, and this study design requires relatively few subjects; however, recall bias is a limitation. Cohort studies necessitate relatively large samples that are followed over time and are impractical for rare diseases. Because the exposure precedes the outcome, temporal order is easier to establish than it is for case-control studies.
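The distinction between the two measures can be made concrete with a small worked example. The 2×2 table below is a minimal sketch with invented numbers, not data from any study discussed here:

```python
# Hypothetical 2x2 table (made-up counts, for illustration only):
#
#                exposed   unexposed
#   disease         a=40       b=10
#   no disease      c=60       d=90

a, b, c, d = 40, 10, 60, 90

# Case-control logic: compare the odds of exposure among cases (a/b)
# to the odds of exposure among controls (c/d).
odds_ratio = (a / b) / (c / d)  # algebraically equal to (a*d) / (b*c)

# Cohort logic: compare disease incidence among the exposed (a/(a+c))
# to incidence among the unexposed (b/(b+d)).
relative_risk = (a / (a + c)) / (b / (b + d))

print(f"odds ratio:    {odds_ratio:.1f}")     # 6.0
print(f"relative risk: {relative_risk:.1f}")  # 4.0
```

Note that the two measures diverge here (6.0 versus 4.0) because the disease is common in this hypothetical sample; they converge only when the disease is rare. This is one small way in which two sound studies of the same question can report different-looking numbers.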
An inherent limitation of case-control and cohort study designs is the inability to establish causality, which experiments address through random assignment. Thus, to assess the efficacy of preventive and therapeutic measures, scientists use the randomized controlled trial. Observational and experimental study designs have their strengths and limitations. Both have their place in medical research, and triangulation of research findings is important here just as it is in social science. Ultimately, though, there are often discrepancies between observational and experimental study results—discrepancies that can arise for several reasons. For example, observational studies cannot completely account for spuriousness. Potential confounders are statistically adjusted for, but the threat of unmeasured confounders persists. In addition, observational studies and experimental studies may ask slightly different research questions, and thus may contribute to ostensibly contradictory findings. Last, accurate measurement is challenging, particularly for some exposures or risk factors. A useful example is the difference between measuring cigarette smoking exposure versus nutritional exposures. Self-reported data on smoking exposure have proved to be fairly accurate. Individuals are able to provide detailed accounts of the number of cigarettes smoked per day, what brand they smoke, and changes in their smoking patterns. In contrast, nutritional exposures are much more difficult to measure because one’s diet involves an interrelated set of exposures. In addition, people are often unaware of the content of the food they consume, and their dietary behavior can change subtly over time.
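The threat of unmeasured confounding can be illustrated with a toy simulation (a sketch with invented probabilities and variable names, not a model of any real exposure). Here the exposure has no causal effect on disease at all, yet the observational risk ratio is well above 1 because a confounder drives both exposure and disease; randomly assigning the exposure, as a trial would, removes the spurious association:

```python
import random

random.seed(0)
N = 100_000

def disease(u):
    # Disease risk depends only on the confounder U, not on the exposure.
    return random.random() < (0.30 if u else 0.10)

# Observational data: people with U = 1 are also far more likely to be exposed.
obs = []
for _ in range(N):
    u = random.random() < 0.5
    exposed = random.random() < (0.8 if u else 0.2)
    obs.append((exposed, disease(u)))

# Randomized data: exposure assigned by coin flip, breaking the U-exposure link.
rct = []
for _ in range(N):
    u = random.random() < 0.5
    exposed = random.random() < 0.5
    rct.append((exposed, disease(u)))

def risk(data, exposed_flag):
    outcomes = [y for e, y in data if e == exposed_flag]
    return sum(outcomes) / len(outcomes)

# Observational ratio is inflated (roughly 1.9); randomized ratio is near 1.0.
print(f"observational risk ratio: {risk(obs, True) / risk(obs, False):.2f}")
print(f"randomized risk ratio:    {risk(rct, True) / risk(rct, False):.2f}")
```

Statistical adjustment can correct for U only if U is measured; an unmeasured confounder leaves the observational estimate biased, which is one reason a later trial can appear to "contradict" earlier observational findings.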
One can therefore imagine how contradictory findings might arise: a rigorous randomized trial that makes stronger causal claims may reach different conclusions from earlier observational work; different studies may vary in how they measure exposure (and some may be more prone to measurement error than others); and studies may seem to be asking the same question, but a nuanced reading of the study designs indicates otherwise. Importantly, epidemiologists, other scientists, and clinicians are aware of these issues, and thus they are typically well-equipped to make sense of seemingly conflicting findings. Specifically, the medical community bases its judgments on the “totality of the data” available across study designs (Taubes, 2007, para. 18). Sir Austin Bradford Hill (Hill, 1971) and the U.S. Preventive Services Task Force (USPSTF, 1989), among others, have established criteria for causal inference in epidemiology. Some epidemiologists have argued that applying these criteria to specific nutritional epidemiologic questions, for example, would help to clarify findings that seem contradictory (Kushi, 1999).
While researchers and clinicians understand the nature of scientific discovery, the public may be less able to reconcile seemingly conflicting study results and recommendations. This may stem from two factors: (1) inadequacies in how health and science information is communicated to the public and (2) the public’s limited understanding of scientific research. First, both clinicians and communication scholars have been critical of media coverage of science and health. In various medical journal editorials, clinical researchers have raised concerns about how journalists interpret research for the public (e.g., Angell & Kassirer, 1994, p. 189; Schwartz & Woloshin, 2004). However, some have acknowledged that the medical community should play a more active role in communicating scientific information to the public (e.g., Shuchman & Wilkes, 1997; Dentzer, 2009). Journalists often are informed of new scientific findings through press releases from research institutions, which generally do not contextualize the particular study in a broader research area or underscore the limitations of the study. These omissions occur even though scientific studies include limitations or caveat sections for this exact purpose (Rier, 1999).
What the medical community may consider to be inaccurate reporting of research findings is, in part, a byproduct of the normative influences that dominate the fields of medicine and journalism (Nelkin, 1996). Scientists consider research to be newsworthy once its reliability has been established through replication and endorsement by experts in the field. In contrast, “new” findings, even when tentative, inherently fit journalists’ definition of “newsworthiness” (Nelkin, 1996). Competitive pressures, such as competition for the front page, or even practical concerns, such as space limitations, may lead to exaggeration of research findings’ importance or the exclusion of methodological and contextual information. Another normative difference lies in objectivity and the need for balance. Journalists are trained to provide more than one viewpoint on any given issue, but this can prove problematic in public health (Southwell, Reynolds & Fowlie, 2013). Specifically, there may be two sides to a story—for example, claims for and against an autism–vaccine link—but providing these sides with equal weight, even though scientific consensus is squarely on one side, can be misleading. Dixon and Clarke (2013) found that “falsely balanced” reporting on the autism–vaccine controversy adversely affected public perceptions of vaccine safety and vaccination intentions.
Science communication scholars also have been critical of journalistic practice for its failure to provide sufficient context for study results (although there is some academic debate about the appropriate roles and responsibilities of science and health journalists in public communication; Amend & Secko, 2012; Hallin & Briggs, 2015). Uncertainty is inherent in scientific research (Friedman, Dunwoody, & Rogers, 1999), and the accumulation of scientific knowledge hinges on replication. Yet researchers have found that science news coverage is rarely hedged, often omitting key methodological and contextual information (e.g., Pellechia, 1997; Evans, Krippendorff, Yoon, Posluszny, & Thomas, 1990; Tankard & Ryan, 1974; Nelkin, 1995). In failing to provide the necessary context for study findings, news coverage may make research seem more certain than it is (Stocking, 1999). Moreover, several scholars have examined the potential effects of providing context (e.g., Jensen, 2008; Corbett & Durfee, 2004). For example, Jensen (2008) showed that failing to hedge may have adverse effects on public perceptions of trustworthiness of journalists and scientists alike. The failure to provide context—together with incidents of inaccuracy (Tankard & Ryan, 1974) and sensationalism (e.g., Glynn, 1985; Glynn & Tims, 1982)—may make it difficult for the public to make sense of conflicting findings. That said, there are important structural constraints that can give rise to such failures. For example, health journalists increasingly face tight deadline pressures, which can force them to rely on a single source (Southwell, Reynolds, & Fowlie, 2013). A majority of health journalists do not have a background in health or life sciences (Viswanath et al., 2008) and thus rely on sources—including researchers and institutional press releases—for methodological and contextual information.
To the extent that journalists are unable to connect with the researcher who led a study, or are forced to rely on an inadequate press release, there may be omissions or inaccuracies in reporting (Woloshin, Schwartz, Casella, Kennedy, & Larson, 2009; Brechman, Lee & Cappella, 2009).
Even if journalists provided greater context when reporting scientific research, the public’s understanding of scientific research, though improving, may not be sufficient to digest this information. The proportion of U.S. adults who have at least a “minimal level of understanding of the meaning of scientific study” grew from 12% in 1957 to 29% in 2007. Data from 1993–2007 showed that the proportion of Americans who had a basic understanding of experimentation increased from 22% to 50% (Miller, 2010). However, “minimal levels of understanding” and “basic understanding” do not necessarily mean an adequate level of knowledge to accurately interpret scientific research information. Perhaps demonstrating this lack of understanding are the results from a recent Pew study, which showed that the public perceives little scientific consensus on topics that are largely agreed upon by scientists. Sixty-seven percent of those surveyed believe that scientists do not have a clear understanding of the health effects of genetically modified organisms (GMOs), 52% believe scientists are divided on “the Big Bang” theory, and 37% believe scientists do not agree on climate change (Pew Research Center, 2015).
Some communication scholars have pointed out that the necessary tools to accurately interpret scientific findings may be out of reach for the general public. In a content analysis of news coverage of science over a 30-year period, Pellechia (1997) found that even her trained coders had difficulty recognizing methodological descriptions of research design. Such issues led her to question “whether such descriptions would be recognized by the average reader who has received no such training” (Pellechia, 1997, p. 60). Additionally, qualitative interviews with epidemiologists led Rier (1999) to conclude that researchers often use the limitations and caveats sections in journal articles to protect against the inappropriate use of their research by nonscientists (including the media). However, the epidemiologists believed that these caveats were largely ineffective here, because they felt the media and public were unable to understand the content of these sections.
Such data underscore that simply providing greater context in media coverage of science and health (e.g., explicitly stating that the current study conflicts with earlier results because of X, Y, and Z reasons) will only be helpful to the extent that the public understands what these inconsistencies mean. If this level of comprehension does not exist, then the cumulative message might still be one of conflict and contradiction. This line of thinking also assumes that the public views the entire article. In fact, many may read only the headline and lede or, if they do read the full article, they may recall only the lede. Thus, even if more context were provided, and even if readers were to understand this information, they might never encounter or process it. Ultimately, then, providing more context in science and medical reporting may or may not be a panacea.
Evidence That Conflicting Health Messages Exist
The aforementioned medical journal editorials, in which clinical researchers are critical of media coverage of science and health, often rest on a set of fundamental assumptions: conflicting health messages exist in the news media, they are noticed by the public, and they impact public understanding and health behavior. For many years, these assumptions rested mostly on imperfect data, grounded in anecdotal accounts or market research findings. Currently, there is a growing body of rigorous empirical research that is documenting the prevalence of conflicting health messages in the media environment. There is also increasing evidence that people perceive conflict and controversy about certain health topics. However, most studies have stopped short of systematically capturing exposure to conflicting health messages—which is the all-important first step in demonstrating effects.
To date, several content analyses have found evidence of conflicting health messages in the media. In general, researchers have found that the mainstream media are more likely to cover scientific results that either support or contradict existing knowledge, rather than findings that neither overturn nor reinforce current medical knowledge (Stryker, 2002). Not surprisingly, then, this orientation can give rise to conflicting messages in the media environment. Many of these messages are cancer-related, and thus several studies have focused on this topic in particular. In their study of cancer information published in highly circulating magazines in the United States and Canada, Clarke and Everest (2006) note that “contradictions and confusion and a consequent sense of uncertainty are evident both within and between articles” (p. 2596). These include, for example, articles that note there is no evidence for the efficacy of breast self-exams, but then later state that women often find breast lumps on their own. Common features in these cancer-related content analyses include discussing a new study on cancer and how it conflicts with previous research, but failing to contextualize the conflicting results (e.g., goals of the studies, sample size); offering advice that is inconsistent with screening guidelines; or lacking clear behavioral directives in light of new findings (Houn et al., 1995; Fowler & Gollust, 2015; Nagler, Fowler, & Gollust, 2015; Niederdeppe, Lee et al., 2014; Smith, Kromm, & Klassen, 2010).
Other content analyses have looked at topics including fish consumption and male eating disorders. Greiner, Smith, and Guallar (2010) identified news stories about the health benefits and risks of fish consumption and found that risk messages outweighed benefit messages four to one. They also found that 26% of stories “conveyed a sense of uncertainty about the benefits and/or risks of fish consumption,” although whether this included mentions of contradictory or conflicting findings and recommendations is not clear (Greiner, Smith, & Guallar, 2010, p. 1791). In a content analysis of news coverage of eating disorders, Sweeting and colleagues (2015) found evidence of conflicting statistics in news coverage of male eating disorder rates. Many articles either did not mention male incidence rates at all, or inaccurately reported male incidence rates (i.e., lower rates than reported in medical journal articles).
Overall, these content analyses support the contention that conflicting health information exists in the media environment. However, many of these studies do not fully explicate their sampling procedures, and some used qualitative methods to analyze media texts, making it difficult to generalize findings. Additionally, while many of the researchers used “conflicting” or “contradictory” information as descriptors to interpret their results, most did not explicitly operationalize or code for either of these constructs. Nagler (2010), in unpublished dissertation work, specifically operationalized and analyzed conflicting nutrition messages in the news media, but there have been few if any similar a priori efforts. The lack of systematic investigation prohibits any formal conclusions about the volume of conflicting information in the media environment; further systematic content analytic work is needed.
As far as public perceptions of conflicting health messages are concerned, most studies to date have focused on conflicting messages in the context of cancer screening, particularly in the context of mammography. During the 1990s and 2000s, three distinct controversies about mammography emerged. The first two, in 1993 and 1997, involved changes in the recommended age of screening initiation and the frequency with which screening should occur. These changes received media coverage, and researchers were interested in whether the conflicting recommendations created confusion about mammography screening and lowered screening intentions or behaviors. For example, one study asked nearly 1,000 Washington State women: “[Have you] ever received conflicting information about either the age at which [women] should begin having regular mammograms or how often women should get regular mammograms or both?” (Taplin, Urban, Taylor & Savarino, 1997, p. 90). Conflicting recommendations were perceived by 49% of women, but the sources of such information are not known. Perceived conflict and mammography use were not associated. Another study drew on a random sample of 1,300 women from 2,165 Blue Cross/Blue Shield of North Carolina members and explored whether women were confused about mammography screening guidelines and whether they were on schedule in their screening. Thirty percent of participants agreed with the statement that “there is so much different information about how often women should have mammograms that [I am] confused” (Rimer, Halabi, Strigo, Crawford, & Lipkus, 1999, p. 513). In addition, 35% indicated being off schedule with mammography screening, and confusion was a significant predictor of this outcome.
The third mammography controversy occurred in 2001, when The Lancet published a meta-analysis arguing that there was insufficient evidence to make screening recommendations for women of any age. Meissner, Rimer, Davis, Eisner, and Siegler (2004) surveyed over 700 women about their awareness of the controversy and their level of confusion. Women were asked: “What, if anything, have you heard about mammograms lately—either from the news or from talking to others?” Eight response options were provided, including whether mammograms are effective and safe, whether they are controversial, and whether mammography recommendations keep changing. Multiple responses were allowed, and roughly one-third of women reported hearing something related to the controversy; that said, the broader question was not designed specifically to capture exposure to conflicting messages about mammography. Additionally, although 22% of participants reported confusion about mammograms, the authors did not attempt to link this confusion to awareness of the controversy.
More recently, researchers have studied awareness of a 2009 mammography controversy, when the U.S. Preventive Services Task Force (USPSTF) recommended against routine screening for women aged 40–49. Several studies showed that women perceived conflict and controversy following the announcement (Squiers et al., 2011; Kiviniemi & Hay, 2012), and there were some reports of confusion about and backlash toward screening recommendations (Squiers et al., 2011; Kiviniemi & Hay, 2012; Davidson, Liao, & Magee, 2011). That said, there is some evidence that underserved women may be less aware of expert disagreement over screening recommendations (Allen et al., 2013; Nagler, Lueck, & Gray, 2016).
Other studies on mammography and gender-neutral cancers (e.g., colon, skin, lung) have attempted to measure perceptions of conflicting information, though these studies were not conducted following a specific cancer controversy. For example, in the mammography context, one study asked participants to compare their attitudes with those of a referent: “Another woman said she didn’t want a mammogram because she felt too confused about the contradictory recommendations she had read or heard about having a mammogram. Would you say this woman is …” (Han, Kobrin et al., 2007, p. 459). To complete this sentence, participants were given a five-point Likert scale ranging from “just like you” to “not at all like you.” Just over 10% of the 3,700 respondents reported high ambiguity or confusion about screening (i.e., answering “just like you”). Another study measured perceived ambiguity about cancer prevention recommendations by asking participants to agree or disagree with the following statement: “There are so many different recommendations about preventing [colon/skin/lung] cancer, it is hard to know which one to follow” (Han, Moser, & Klein, 2007, p. 324). Of the 4,070 participants, 54% reported high ambiguity about colon cancer prevention recommendations, 44.7% about skin cancer recommendations, and 44.3% about lung cancer recommendations.
Taken together, this set of studies indicates public confusion about or ambiguity toward cancer prevention and screening recommendations, and in some cases participants perceived conflict in recommendations. However, none of these studies systematically captured media or interpersonal exposure to conflicting messages about cancer prevention or screening—a central first step if the goal is to assess the effects of exposure to conflicting health messages. Similar measurement constraints are found in early work studying nutrition information in the media. Several papers in the mid-1990s and early 2000s found evidence that participants believed the media to be a source of confusion, that they expected nutrition experts to change their minds in the next five years about which foods are healthful, and that they had changed their diet habits because of conflicting nutrition information (ADA, 1995; Food Marketing Institute and Prevention Magazine, 1997; International Food Information Council [IFIC], 2002; “Study: Americans ‘Flip-Flop’ Over Confusing Nutrition Findings,” 2000). These studies have some methodological issues, particularly in the sampling of participants, but they nonetheless suggest perceived conflict and confusion about nutrition research and recommendations. Again, though, exposure was not directly measured in these studies.
More recently, several focus group studies found themes of inconsistent, contradictory, or conflicting nutrition information, and respondents largely attributed this information to the media (e.g., Boyington, Schoster, Martin, Shreffler, & Callahan, 2009; Dorey & McCool, 2009; Dye & Cason, 2005). Another nutrition study examined the effects of “inconsistent and confusing diet and health messages,” but it did not actually measure exposure to any such messages (Patterson, Satia, Kristal, Neuhouser, & Drewnowski, 2001, p. 38). The authors hypothesized that conflicting messages would lead individuals to discount them entirely, resulting in “nutrition backlash,” or “negative feelings about dietary recommendations” (Patterson et al., 2001, p. 38). Yet exposure itself was never measured. Rather, the authors measured nutrition backlash and found only some evidence for it; they did, however, find that backlash was associated cross-sectionally with “less healthful diets” (e.g., less fruit and vegetable consumption; Patterson et al., 2001, p. 40).
Eventually, researchers began directly assessing media or interpersonal exposure, although at first these efforts focused on general, health, and cancer-related media exposure. The goal was to see whether such exposure was linked to outcomes including perceived ambiguity about cancer prevention recommendations, cancer fatalism, and cancer information overload (Han et al., 2009; Niederdeppe, Fowler, Goldstein, & Pribble, 2010; Niederdeppe, Lee et al., 2014). Niederdeppe et al. (2010) argue that cancer news coverage’s overemphasis on new research findings and omission of follow-up details may engender fatalistic beliefs, while Han et al. (2009) and Niederdeppe, Lee et al. (2014) argue that conflicting health information in particular exists in cancer and health news coverage. That said, these studies stop short of actually measuring the prevalence of conflicting information in health or cancer news coverage, and they do not assess exposure to this content specifically.
To date, only a handful of studies have directly measured self-reported exposure to conflicting health information. Nagler and Hornik (2012) tested four potential measures against a set of validity criteria and found that asking participants how much conflicting or contradictory information they had heard from the media (including television, radio, newspapers, magazines, and the Internet) in the past year about specific nutrition-related topics (i.e., red wine or other alcohol, fish, coffee, and vitamins or supplements) performed consistently better than three other candidate measures (Nagler & Hornik, 2012; Nagler, 2014). Tan, Lee, and Bigman (2015) adapted this measure to capture exposure to contradictory information about e-cigarettes. Their measure asks about specific media sources (e.g., online news, social media, television) and also assesses several interpersonal information sources (e.g., family, friends, doctor). Using these measures, Tan et al. (2015) found that the main sources of conflicting or contradictory information were “television, print newspaper or magazines, and interpersonal sources (family, friends, or co-workers)” (p. 270). It is worth noting that these approaches to measuring media exposure to conflicting health messages rely on self-report, which leaves them vulnerable to response bias and inaccurate recall.
Evidence for the Effects of Conflicting Health Messages
On balance, the evidence base documenting the existence of conflicting health messages is more substantial than the evidence base documenting their effects. That said, it is sometimes difficult to disentangle the two when reviewing past research. For example, the aforementioned mammography studies during the 1990s and early 2000s sometimes conflated exposure with outcomes (e.g., perceptions of conflict and confusion). In general, however, the past few years have witnessed increased direct attention to the effects question, with scholars using distinct theoretical and methodological approaches to assess potential consequences of exposure to conflicting health messages.
Some of these studies have used qualitative methods, enabling researchers to understand how the public makes sense of conflicting and often controversial health messages. In the cancer screening context, Allen and colleagues (Allen et al., 2013) conducted eight focus groups with diverse women aged 40 to 50 to assess women’s awareness of and reactions to the 2009 U.S. Preventive Services Task Force (USPSTF) change in mammography screening recommendations. They found evidence of disbelief and confusion, as well as mistrust of the reasons driving the change in recommendations, with many women suspicious that the change was a cost-cutting measure. More recently, Nagler, Lueck, and Gray (2016) examined immigrant women’s perceptions of mammography controversy. Rather than focusing on one incident, like the 2009 USPSTF change in recommendations, they were interested in awareness of and reactions to mammography controversy more generally. This broader view took into account the fact that, for more than 30 years, there has been substantial expert disagreement over the age at which and the frequency with which women should be screened for breast cancer. Moreover, although prior studies found that women in the general population perceive conflict and controversy about mammography guidelines (e.g., Kiviniemi & Hay, 2012; Meissner, Rimer, Davis, Eisner, & Siegler, 2004; Squiers et al., 2011; Taplin et al., 1997), little is known about whether underserved women notice this information and, if so, how they react to it—a pressing question, given persistent cancer disparities and the potential for communication inequalities (defined as differences across social groups in the ability to access, attend to, process, retain, and act on health information; Viswanath et al., 2012). Allen et al. (2013) examined these questions using a purposive sample of predominantly English-speaking Caucasian and African American women. Nagler et al. (2016) focused specifically on immigrant women aged 35–55 from three communities—Somali, Hmong, and Latina—and, across six community-engaged focus groups, found little awareness of mammography controversy. Most women reported high intentions to be screened, even after learning about the controversy; however, in contrast to the Allen et al. findings, there was little evidence of confusion or mistrust.
Similar studies have been conducted in the nutrition context. In a focus group study exploring women’s perceptions of contradictory media messages about fish consumption, women were first shown articles that contained conflicting information about fish—specifically, information on the health benefits (e.g., omega-3 fatty acids) and risks (e.g., mercury exposure) of consumption—and then asked to comment on the articles and their intentions to consume fish (Vardeman & Aldoory, 2008). Across the six groups, women reported that they found conflicting information about fish consumption to be confusing, and some exhibited feelings similar to nutrition backlash (e.g., one woman noted that “everything is bad for you these days”; Vardeman & Aldoory, 2008, p. 286).
Quantitative studies of the effects of exposure to conflicting messages have used both experimental and observational survey designs. First, several experimental studies have found that exposure to two-sided or conflicting information produces negative cognitive and, in some cases, behavioral effects. In these studies, participants were deliberately exposed to news stories or other message-based stimuli that provided conflicting information about topics including nutrition (Chang, 2013, 2015), the HPV vaccine (Nan & Daily, 2015), vaccines and autism (Dixon & Clarke, 2012), mammography and prostate-specific antigen (PSA) testing (Marshall & Comello, 2016), and scientific controversies such as dioxin regulation (Jensen & Hurley, 2012). Results showed that exposure to conflicting messages produced greater uncertainty about the specific stimulus topic or about health research in general; more negative attitudes toward health research; decreased self-efficacy and response efficacy; and lower perceptions of the credibility of news and scientists. Moreover, there is some evidence that the effects of exposure to conflicting information may be more pronounced among certain subgroups. For example, Chang (2013) found that, compared to Taiwanese women, Taiwanese men experienced more negative cognitive and behavioral responses when exposed to news stories containing conflicting nutrition information. Nan and Daily (2015) found evidence of biased processing: when exposed to conflicting information about the HPV vaccine, those who believed strongly in vaccination in general perceived the HPV vaccine to be more effective, whereas those with weaker general vaccination beliefs perceived the HPV vaccine to be slightly less effective. These biased processing effects were most pronounced among participants who were higher in “need for closure” (the desire for a firm answer and discomfort with ambiguity; Kruglanski & Webster, 1996).
Observational survey studies also contribute to the evidence base. One study linked self-reported exposure to conflicting information about nutrition with outcomes such as public confusion and decreased trust in nutrition recommendations (Nagler, 2014). Lee and colleagues subsequently replicated Nagler’s (2014) study using a three-wave longitudinal survey design, enabling stronger causal claims about the observed associations; the cross-sectional and longitudinal results were consistent with one another (Lee, Nagler, & Wang, 2017). In a set of studies on conflicting medication information, researchers showed that exposure to conflicting information was associated with lower medication adherence (Carpenter, Elstad, Blalock, & DeVellis, 2014; Carpenter et al., 2010) and lower medication use during pregnancy (Hämeen-Anttila et al., 2014), as well as increased anxiety among pregnant women (Hämeen-Anttila et al., 2014). There is also some evidence that exposure to conflict could drive behavioral responses. For example, Weeks, Friedenberg, Southwell, and Slater (2012) found that media exposure to the 2009 USPSTF mammography screening controversy strongly predicted information seeking about mammography. Gibson and colleagues (2016) found that, amid conflicting recommendations about PSA testing, information seeking may negatively influence prostate cancer screening decisions.
To date, a handful of studies have considered whether exposure to conflicting messages might influence subsequent, unrelated behaviors about which there is scientific consensus, a phenomenon that has been referred to as “carryover” or “spillover” effects. Theoretically, such carryover effects could be explained via excitation transfer (Zillmann, 1983) and priming (Roskos-Ewoldsen, Roskos-Ewoldsen, & Dillman Carpentier, 2009). As Nagler (2014) argued, to the extent that backlash is a form of negative affect, it might extend to other health recommendations about which little conflict or controversy exists, building over time via priming with each subsequent exposure to conflicting information. The data show mixed results. Nagler (2014) found that exposure to conflicting information about wine, fish, coffee, and vitamins or supplements was associated with confusion and decreased trust in nutrition recommendations—and these cognitions were, in turn, associated with lower intentions to engage in two behaviors about which there is little conflict: fruit and vegetable consumption and exercise. In a longitudinal study of the effects of exposure to contradictory nutrition information, Lee and colleagues (2017) found additional evidence of carryover effects on fruit and vegetable consumption. In contrast, Gollust and colleagues (2010) found that while exposure to medical and political disagreement about the HPV vaccine produced less public support for HPV vaccine mandates, this effect did not spill over to decrease support for immunizations in general.
Several studies have offered a theoretical rationale for why exposure to conflicting messages might lead to cognitive, affective, and behavioral effects. One explanation derives from decision theory and, more specifically, the concept of ambiguity. Decision theorist Daniel Ellsberg argued that a particularly important condition under which ambiguity may be high is “where there is conflicting opinion and evidence” (1961, p. 659). Thus, conflicting information exposure may give rise to perceived ambiguity, and this state of uncertainty is uncomfortable for many (though not all) people. Such discomfort has been called “ambiguity aversion” (Han, Reeve, Moser, & Klein, 2009; Han et al., 2014). In several studies, Han and colleagues demonstrated that such aversion can take the form of negative beliefs toward the subject of the ambiguity. For example, when people perceived ambiguity about cancer prevention recommendations, many interpreted those recommendations negatively—specifically, by reporting lower cancer preventability beliefs (Han, Moser, & Klein, 2007; Han, Kobrin et al., 2007). In the nutrition context, Nagler (2014) found that when people reported perceived ambiguity or confusion about nutrition recommendations, many also evaluated these recommendations pessimistically by reporting more negative beliefs about such recommendations and research (i.e., nutrition backlash).
Uncertainty management theory has also informed studies on the effects of conflicting information exposure. Decision theorists have generally described ambiguity as an undesirable state (although not universally, as some people are indifferent to ambiguity; Camerer & Weber, 1992; Han, 2013). In contrast, Brashers (2001) has argued that uncertainty is complex, insofar as it may be avoided or welcomed, depending on one’s needs and situational cues. For example, consistent with decision theory, uncertainty management theory suggests that uncertainty can contribute to negative states, such as backlash or cancer fatalism (Jensen et al., 2011). On the other hand, uncertainty might also contribute to positive states. A primary example in the health domain has been the effects of hedging in health news coverage—in other words, discussing limitations, an indicator of scientific uncertainty—which has been linked with positive outcomes, such as increased perceived credibility of both scientists and journalists (Jensen, 2008). While uncertainty exposure might produce positive effects, conflicting information exposure may be quite different, as there is evidence that exposure to such information has the opposite effect on perceived source credibility (Chang, 2015; Jensen & Hurley, 2012).
Implications of Conflicting Health Messages for Health and Risk Communication
The growing literature on conflicting health messages suggests not only that this information is prevalent in the media environment, but also that it can have deleterious effects on a range of cognitive, affective, and even behavioral outcomes. As evidence continues to accumulate, it is important to consider not just the implications of such messages for health and risk communication, but also whether and how we can intervene to address the effects of exposure to message conflict.
A central question is whether conflicting health messages in the broader public information environment could undermine the effects of strategic health communication messages. Strategic messages do not occur in a vacuum, whether they take the form of traditional media campaigns or are a component of community-based or policy-level interventions. When people are exposed to campaigns that promote, for example, healthy eating or HPV vaccination, they process these messages within a broader information landscape that is increasingly characterized by conflicting and often controversial information about myriad health topics—sometimes about the very topics featured in the campaigns. It is therefore important to ask not just what the effects of exposure to conflicting messages are, but how this exposure might affect receptivity to subsequent, unrelated strategic health messages, particularly about topics for which the evidence is clear and consistent. Whether exposure to conflicting health messages actually undermines the effectiveness of media campaigns and other strategic messages is an empirical question, but if such effects are documented, it may be necessary to address conflict directly in strategic messages. For example, the next fruit and vegetable consumption or skin cancer prevention campaign may need to explicitly acknowledge the presence of conflicting or competing health information in the broader environment if campaign designers are to limit counter-arguing and promote message effectiveness.
A separate but equally important question is whether journalists and research institutions will work to stem the tide of conflicting messages in the public information environment. Watchdog organizations like Health News Review—which monitors and evaluates health news coverage—are working to improve the quality of health reporting by documenting shortcomings and issuing recommendations for improvement (Schwitzer, 2014). This includes calling on journalists to provide greater methodological and contextual information in their science and health reporting. It might also involve being more cognizant of sourcing. For example, in covering the HPV vaccine, what happens when a journalist cites not only a medical expert or researcher but also includes a quote from a politician? Researchers have raised concerns about what happens when health issues become politicized (Fowler & Gollust, 2015; Nagler, Fowler, & Gollust, 2015), and this, too, has direct implications for health journalism. Furthermore, there have been calls for more responsible dissemination practices by research institutions (Woloshin, Schwartz, Casella, Kennedy, & Larson, 2009). Press releases issued by such institutions can be premature, as they sometimes highlight preliminary research, and scholars have observed quality problems with these releases—for example, they often omit key facts and fail to acknowledge important limitations (Woloshin et al., 2009; Brechman, Lee, & Cappella, 2009). Curtailing incomplete press releases about preliminary findings may slow the barrage of news stories that increases opportunities for exposure to conflicting study results or recommendations.
To summarize, there are several likely points of intervention to address conflicting health messages. Science and health reporting could provide greater methodological and contextual information, and research institutions could become better gatekeepers, publicizing research findings only once there is a strong evidence base and ensuring that all necessary facts and limitations are clearly stated. Researchers also could assume greater responsibility in communicating findings to the public, whether by answering journalists’ calls for interviews or by using social media to responsibly communicate their work to policymakers and the public (Grande et al., 2014). In addition, continued efforts to improve science education and research literacy in the United States should help people to better understand methodological and contextual information in the media. No one of these efforts is likely to be a solution on its own, but taken together, journalists, researchers, and research and educational institutions may be able to reduce—or, at the very least, help the public to negotiate and understand—conflicting health messages in the public information environment. In the meantime, communication researchers and practitioners need to be aware that these messages exist, across many health topics and platforms, and have the potential to undermine the success of public health communication strategies.
Allen, J. D., Bluethmann, S. M., Sheets, M., Opdyke, K. M., Gates-Ferris, K., Hurlbert, M., & Harden, E. (2013). Women’s responses to changes in US Preventive Task Force’s mammography screening guidelines: Results of focus groups with ethnically diverse women. BMC Public Health, 13(1), 1.Find this resource:
Amend, E., & Secko, D. M. (2012). In the face of critique: A metasynthesis of the experiences of journalists covering health and science. Science Communication, 34(2), 241–282.Find this resource:
American Dietetic Association. (1995). Nutrition trends survey, 1995. Chicago: American Dietetic Association.Find this resource:
Angell, M., & Kassirer, J. P. (1994). Clinical research: What should the public believe? New England Journal of Medicine, 331, 189–190.Find this resource:
Borah, P. (2011). Conceptual issues in framing theory: A systematic examination of a decade’s literature. Journal of Communication, 61, 246–263.Find this resource:
Boyington, J. E. A., Schoster, B., Martin, K. R., Shreffler, J., & Callahan, L. F. (2009). Perceptions of individual and community environmental influences on fruit and vegetable intake, North Carolina, 2004. Preventing Chronic Disease, 6.Find this resource:
Brashers, D. E. (2001). Communication and uncertainty management. Journal of Communication, 51(3), 477–497.Find this resource:
Brechman, J., Lee, C. J., & Cappella, J. N. (2009). Lost in translation? A comparison of cancer-genetics reporting in the press release and its subsequent coverage in the press. Science Communication, 30(4), 453–474.Find this resource:
Camerer, C., & Weber, M. (1992). Recent developments in modeling preferences: Uncertainty and ambiguity. Journal of Risk and Uncertainty, 5, 325–370.Find this resource:
Carpenter, C. M., Geryk, L. L., Chen, A. T., Nagler, R. H., Dieckmann, N. F., & Han, P. K. (2016). Conflicting health information: A critical research need. Health Expectations, 19(6), 1173–1182.Find this resource:
Carpenter, D. M., DeVellis, R. F., Fisher, E. B., DeVellis, B. M., Hogan, S. L., & Jordan, J. M. (2010). The effect of conflicting medication information and physician support on medication adherence for chronically ill patients. Patient Education and Counseling, 81(2), 169–176.Find this resource:
Carpenter, D. M., Elstad, E. A., Blalock, S. J., & DeVellis, R. F. (2014). Conflicting medication information: Prevalence, sources, and relationship to medication adherence. Journal of Health Communication, 19, 67–81.Find this resource:
Chang, C. (2013). Men’s and women’s responses to two-sided health news coverage: A moderated mediation model. Journal of Health Communication, 18, 1326–1344.Find this resource:
Chang, C. (2015). Motivated processing: How people perceive news covering novel or contradictory health research findings. Science Communication, 37, 602–634.Find this resource:
Chong, D., & Druckman, J. N. (2007). A theory of framing and opinion formation in competitive elite environments. Journal of Communication, 57, 99–118.Find this resource:
Clarke, J. N., & Everest, M. M. (2006). Cancer in the mass print media: Fear, uncertainty and the medical model. Social Science & Medicine, 62(10), 2591–2600.Find this resource:
Corbett, J. B., & Durfee, J. L. (2004). Testing public (un)certainty of science: Media representations of global warming. Science Communication, 26(2), 129–151.Find this resource:
Dalton, R. J., Beck, P. A., & Huckfeldt, R. (1998). Partisan cues and the media: Information flows in the 1992 presidential election. American Political Science Review, 92(1), 111–126.Find this resource:
Davidson, A. S., Liao, X., & Magee, B. D. (2011). Attitudes of women in their forties toward the 2009 USPSTF mammogram guidelines: A randomized trial on the effects of media exposure. American Journal of Obstetrics and Gynecology, 205(1), 30-e1.Find this resource:
Dentzer, S. (2009). Communicating medical news: Pitfalls of health care journalism. New England Journal of Medicine, 360, 1–3.Find this resource:
Dixon, G. N., & Clarke, C. E. (2012). Heightening uncertainty around certain science: Media coverage, false balance, and the autism-vaccine controversy. Science Communication, 35, 358–382.Find this resource:
Dixon, G. N., & Clarke, C. E. (2013). The effect of falsely balanced reporting of the autism–vaccine controversy on vaccine safety perceptions and behavioral intentions. Health Education Research, 28(2), 352–359.Find this resource:
Dorey, E., & McCool, J. (2009). The role the media in influencing children’s nutritional perceptions. Qualitative Health Research, 18, 645–654.Find this resource:
Durrant, R., Wakefield, M., McLeod, K., Smith, K. C., & Chapman, S. (2003). Tobacco in the news: An analysis of newspaper coverage of tobacco issues in Australia, 2001. Tobacco Control, 12(Suppl. 2), ii75–ii81.Find this resource:
Dye, C. J., & Cason, K. L. (2005). Perceptions of older, low-income women about increasing intake of fruits and vegetables. Journal of Nutrition for the Elderly, 25, 21–41.Find this resource:
Ellsberg, D. (1961). Risk, ambiguity, and the savage axioms. Quarterly Journal of Economics, 75, 643–669.Find this resource:
Evans, W. A., Krippendorf, M., Yoon, J. H., Posluszny, P., & Thomas, S. (1990). Science in the prestige and national tabloid presses. Social Science Quarterly, 71, 105–117.Find this resource:
Food Marketing Institute and Prevention Magazine Report. (1997). Shopping for health. Washington, DC, and Emmaus, PA: Food Marketing Institute and Prevention Magazine.Find this resource:
Fowler, E. R., & Gollust, S. E. (2015). The content and effect of politicized health controversies. ANNALS of the American Academy of Political and Social Science, 658(1), 155–171.Find this resource:
Friedman, S., Dunwoody, S., & Rogers, C. (Eds.). (1999). Communicating uncertainty: Media coverage of new and controversial science. Mahwah, NJ: Lawrence Erlbaum Associates.Find this resource:
Funk, C., & Rainie, L. (2015, January 29). Public and scientists’ views on science and society. Pew Research Center.
Gibson, L., Tan, A. S. L., Freres, D., Lewis, N., Martinez, L., & Hornik, R. C. (2016). Nonmedical information seeking amid conflicting health information: Negative and positive effects on prostate cancer screening. Health Communication, 31(4), 417–424.
Glynn, C. J. (1985). Science reporters and their editors judge sensationalism. Newspaper Research Journal, 6, 69–74.
Glynn, C. J., & Tims, A. R. (1982). Sensationalism in science news: A case study. Journalism Quarterly, 59, 126–131.
Gollust, S. E., Dempsey, A. F., Lantz, P. M., Ubel, P. A., & Fowler, E. F. (2010). Controversy undermines support for state mandates on the human papillomavirus vaccine. Health Affairs, 29, 2041–2046.
Grande, D., Gollust, S. E., Pany, M., Seymour, J., Goss, A., Kilaru, A., & Meisel, Z. (2014). Translating research for health policy: Researchers’ perceptions and use of social media. Health Affairs, 33(7), 1278–1285.
Greiner, A., Smith, K. C., & Guallar, E. (2010). Something fishy? News media presentation of complex health issues related to fish consumption guidelines. Public Health Nutrition, 13(11), 1786–1794.
Hallin, D. C., & Briggs, C. L. (2015). Transcending the medical/media opposition in research on news coverage of health and medicine. Media, Culture & Society, 37(1), 85–100.
Hämeen-Anttila, K., Nordeng, H., Kokki, E., Jyrkkä, J., Lupattelli, A., Vainio, K., & Enlund, H. (2014). Multiple information sources and consequences of conflicting information about medicine use during pregnancy: A multinational Internet-based survey. Journal of Medical Internet Research, 16(2), e60.
Han, P. K. J. (2013). Conceptual, methodological, and ethical problems in communicating uncertainty in clinical evidence. Medical Care Research and Review, 70(1), 14S–36S.
Han, P. K. J., Klein, W. M., Lehman, T. C., Massett, H., Lee, S. C., & Freedman, A. N. (2009). Laypersons’ responses to the communication of uncertainty regarding cancer risk estimates. Medical Decision Making, 29(3), 391–403.
Han, P. K. J., Kobrin, S. C., Klein, W. M. P., Davis, W. W., Stefanek, M., & Taplin, S. H. (2007). Perceived ambiguity about screening mammography recommendations: Association with future mammography uptake and perceptions. Cancer Epidemiology Biomarkers & Prevention, 16(3), 458–466.
Han, P. K. J., Moser, R. P., & Klein, W. M. (2007). Perceived ambiguity about cancer prevention recommendations: Associations with cancer-related perceptions and behaviours in a US population survey. Health Expectations, 10, 321–336.
Han, P. K. J., Moser, R. P., Klein, W. M. P., Beckjord, E. B., Dunlavy, A. C., & Hesse, B. W. (2009). Predictors of perceived ambiguity about cancer prevention recommendations: Sociodemographic factors and mass media exposures. Health Communication, 24, 764–772.
Han, P. K. J., Reeve, B. B., Moser, R. P., & Klein, W. M. P. (2009). Aversion to ambiguity regarding medical tests and treatments: Measurement, prevalence, and relationship to sociodemographic factors. Journal of Health Communication, 14(6), 556–572.
Han, P. K. J., Williams, A. E., Haskins, A., Gutheil, C., Lucas, F. L., Klein, W. M., & Mazor, K. M. (2014). Individual differences in aversion to ambiguity regarding medical tests and treatments: Association with cancer screening cognitions. Cancer Epidemiology Biomarkers & Prevention, 23(12), 2916–2923.
Hill, A. B. (1971). Principles of medical statistics (9th ed.). New York: Oxford University Press.
Houn, F., Bober, M. A., Huerta, E. E., Hursting, S. D., Lemon, S., & Weed, D. L. (1995). The association between alcohol and breast cancer: Popular press coverage of research. American Journal of Public Health, 85, 1082–1086.
Huckfeldt, R., Mendez, J. M., & Osborn, T. (2004). Disagreement, ambivalence, and engagement: The political consequences of heterogeneous networks. Political Psychology, 25(1), 65–95.
International Food Information Council. (2002). How consumers feel about food and nutrition messages.
Jensen, J. D. (2008). Scientific uncertainty in news coverage of cancer research: Effects of hedging on scientists’ and journalists’ credibility. Human Communication Research, 34(3), 347–369.
Jensen, J. D., Carcioppolo, N., King, A. J., Bernat, J. K., Davis, L., Yale, R., & Smith, J. (2011). Including limitations in news coverage of cancer research: Effects of news hedging on fatalism, medical skepticism, patient trust, and backlash. Journal of Health Communication, 16(5), 486–503.
Jensen, J. D., & Hurley, R. J. (2012). Conflicting stories about public scientific controversies: Effects of news convergence and divergence on scientists’ credibility. Public Understanding of Science, 21, 689–704.
Kiviniemi, M. T., & Hay, J. L. (2012). Awareness of the 2009 US Preventive Services Task Force recommended changes in mammography screening guidelines, accuracy of awareness, sources of knowledge about recommendations, and attitudes about updated screening guidelines in women ages 40–49 and 50+. BMC Public Health, 12(1), 1.
Knoke, D. (1990). Political networks: The structured perspective. New York: Cambridge University Press.
Kruglanski, A. W., & Webster, D. M. (1996). Motivated closing of the mind: Seizing and freezing. Psychological Review, 103(2), 263–283.
Kushi, L. H. (1999). Vitamin E and heart disease: A case study. American Journal of Clinical Nutrition, 69(Suppl.), 1322S–1329S.
Lazarsfeld, P. F., Berelson, B., & Gaudet, H. (1968). The people’s choice (3d ed.). New York: Columbia University Press. First edition, 1944, Duell, Sloan, and Pearce.
Lee, C. J., Nagler, R. H., & Wang, N. (2017). Source-specific exposure to contradictory nutrition information: Documenting prevalence and effects on adverse cognitive and behavioral outcomes. Health Communication, 1–9.
Marshall, L. H., & Comello, M. L. G. (2016, August). Stymied by a wealth of health information: How viewing conflicting information online diminishes efficacy. Paper presented at the Association for Education in Journalism and Mass Communication, Minneapolis, MN.
Meissner, H. I., Rimer, B. K., Davis, W. W., Eisner, E. J., & Siegler, I. C. (2004). Another round in the mammography controversy. Journal of Women’s Health, 12, 261–276.
Miller, J. D. (2004). Public understanding of, and attitudes toward, scientific research: What we know and what we need to know. Public Understanding of Science, 13, 273–294.
Miller, J. D. (2010). Civic scientific literacy: The role of the media in the electronic era. Science and the Media, 44–63.
Mutz, D. C. (2002). The consequences of cross-cutting networks for political participation. American Journal of Political Science, 46(4), 838–855.
Nagler, R. H. (2010). Steady diet of confusion: Contradictory nutrition messages in the public information environment. Unpublished doctoral dissertation, Annenberg School for Communication, University of Pennsylvania, Philadelphia, PA.
Nagler, R. H. (2014). Adverse outcomes associated with media exposure to contradictory nutrition messages. Journal of Health Communication, 19, 24–40.
Nagler, R. H., Fowler, E. F., & Gollust, S. E. (2015). Covering controversy: What are the implications for women’s health? Women’s Health Issues, 25(4), 318–321.
Nagler, R. H., & Hornik, R. C. (2012). Measuring media exposure to contradictory health information: A comparative analysis of four potential measures. Communication Methods and Measures, 6, 56–75.
Nagler, R. H., Lueck, J. A., & Gray, L. S. (2016). Awareness of and reactions to mammography controversy among immigrant women. Health Expectations, 1–10.
Nan, X., & Daily, K. (2015). Biased assimilation and need for closure: Examining the effects of mixed blogs on vaccine-related beliefs. Journal of Health Communication, 20, 462–471.
Nelkin, D. (1995). Selling science: How the press covers science and technology. New York: W. H. Freeman.
Nelkin, D. (1996). An uneasy relationship: The tensions between medicine and the media. Lancet, 347, 1600–1603.
Niederdeppe, J., Fowler, E. F., Goldstein, K., & Pribble, J. (2010). Does local television news coverage cultivate fatalistic beliefs about cancer prevention? Journal of Communication, 60, 230–253.
Niederdeppe, J., Gollust, S. E., & Barry, C. L. (2014). Inoculation in competitive framing: Examining message effects on policy preferences. Public Opinion Quarterly, 78, 634–655.
Niederdeppe, J., Lee, T., Robbins, R., Kim, H. K., Kresovich, A., Kirshenblat, D., et al. (2014). Content and effects of news stories about uncertain cancer causes and preventive behaviors. Health Communication, 29, 332–346.
Nir, L., & Druckman, J. N. (2008). Campaign mixed-message flows and timing of vote decision. International Journal of Public Opinion Research, 20(3), 326–346.
Nisbet, E. C., Hart, P. S., Myers, T., & Ellithorpe, M. (2013). Attitude change in competitive framing environments? Open-/closed-mindedness, framing effects, and climate change. Journal of Communication, 63, 766–785.
O’Keefe, D. J. (1999). How to handle opposing arguments in persuasive messages: A meta-analytic review of the effects of one-sided and two-sided messages. Annals of the International Communication Association, 22(1), 209–249.
Patterson, R. E., Satia, J. A., Kristal, A. R., Neuhouser, M. L., & Drewnowski, A. (2001). Is there a consumer backlash against the diet and health message? Journal of the American Dietetic Association, 101, 37–41.
Pellechia, M. G. (1997). Trends in science coverage: A content analysis of three U.S. newspapers. Public Understanding of Science, 6, 49–68.
Pew Research Center. (2015, January 29). Public and scientists’ views on science and society. Available online at www.pewresearch.org.
Rier, D. A. (1999). The versatile “caveat” section of an epidemiology paper: Managing public and private risk. Science Communication, 21, 3–37.
Rimer, B. K., Halabi, S., Strigo, T. S., Crawford, Y., & Lipkus, I. M. (1999). Confusion about mammography: Prevalence and consequences. Journal of Women’s Health & Gender-Based Medicine, 8, 509–520.
Roskos-Ewoldsen, D. R., Roskos-Ewoldsen, B., & Dillman Carpentier, F. (2009). Media priming: An updated synthesis. In J. Bryant & M. B. Oliver (Eds.), Media effects: Advances in theory and research (3d ed.) (pp. 74–93). New York: Taylor & Francis.
Schwartz, L. M., & Woloshin, S. (2004). The media matter: A call for straightforward medical reporting. Annals of Internal Medicine, 140, 226–228.
Schwitzer, G. (2014). A guide to reading health care news stories. JAMA Internal Medicine, 174(7), 1183–1186.
Shuchman, M., & Wilkes, M. S. (1997). Medical scientists and health news reporting: A case of miscommunication. Annals of Internal Medicine, 126, 976–982.
Smith, K. C., Kromm, E. E., & Klassen, A. C. (2010). Print news coverage of cancer: What prevention messages are conveyed when screening is newsworthy? Cancer Epidemiology, 34, 434–441.
Smith, K. C., Wakefield, M., & Edsall, E. (2006). The good news about smoking: How do U.S. newspapers cover tobacco issues? Journal of Public Health Policy, 27(2), 166–181.
Southwell, B. G., Reynolds, B. J., & Fowlie, K. (2013). Communication, media relations and infectious disease surveillance. In N. M’ikanatha, H. de Valk, R. Lynfield, & C. Van Beneden (Eds.), Infectious disease surveillance (2d ed.) (pp. 607–617). Oxford: John Wiley.
Squires, L. B., Holden, D. J., Dolina, S. E., Kim, A. E., Bann, C. M., & Renaud, J. M. (2011). The public’s response to the U.S. Preventive Services Task Force’s 2009 recommendations on mammography screening. American Journal of Preventive Medicine, 40(5), 497–504.
Stocking, S. H. (1999). How journalists deal with scientific uncertainty. In S. Friedman, S. Dunwoody, & C. Rogers (Eds.), Communicating uncertainty: Media coverage of new and controversial science (pp. 23–42). Mahwah, NJ: Lawrence Erlbaum Associates.
Stryker, J. E. (2002). Reporting medical information: Effects of press releases and newsworthiness on medical journal articles’ visibility in the news media. Preventive Medicine, 35, 519–530.
Study: Americans “Flip-Flop” Over Confusing Nutrition Findings. (2000, May). SeniorJournal.com.
Sweeting, H., Walker, L., MacLean, A., Patterson, C., Raisanen, U., & Hunt, K. (2015). Prevalence of eating disorders in males: A review reported in academic research and UK mass media. International Journal of Men’s Health, 14(2).
Tan, A. S. L., Lee, C. J., & Bigman, C. A. (2015). Public support for selected e-cigarette regulations and associations with overall information exposure and contradictory information exposure about e-cigarettes: Findings from a national survey of U.S. adults. Preventive Medicine, 81, 268–274.
Tankard, J. W., & Ryan, M. (1974). News source perceptions of accuracy of science coverage. Journalism Quarterly, 51, 219–225.
Taplin, S. H., Urban, N., Taylor, V. M., & Savarino, J. (1997). Conflicting national recommendations and the use of screening mammography: Does the physician’s recommendation matter? Journal of the American Board of Family Practice, 10, 88–95.
Taubes, G. (2007, September 16). Do we really know what makes us healthy? New York Times Magazine.
U.S. Preventive Services Task Force. (1989). Guide to clinical preventive services: An assessment of the effectiveness of 169 interventions. Baltimore, MD: Williams & Wilkins.
Van Klingeren, M., Boomgaarden, H. G., & De Vreese, C. H. (2017). Will conflict tear us apart? The effects of conflict and valenced media messages on polarizing attitudes toward EU immigration and border control. Public Opinion Quarterly, 1–21.
Vardeman, J. E., & Aldoory, L. (2008). A qualitative study of how women make meaning of contradictory media messages about the risks of eating fish. Health Communication, 23, 282–291.
Viswanath, K., Blake, K. D., Meissner, H. I., Saiontz, N. G., Mull, C., Freeman, C. S., et al. (2008). Occupational practices and the making of health news: A national survey of U.S. health and medical science journalists. Journal of Health Communication, 13(8), 759–777.
Viswanath, K., Nagler, R. H., Bigman-Galimore, C. A., McCauley, M. P., Jung, M., & Ramanadhan, S. (2012). The communications revolution and health inequalities in the 21st century: Implications for cancer control. Cancer Epidemiology, Biomarkers, & Prevention, 21, 1701–1708.
Weeks, B. E., Friedenberg, L. M., Southwell, B. G., & Slater, J. S. (2012). Behavioral consequences of conflict-oriented health news coverage: The 2009 mammography guideline controversy and online information seeking. Health Communication, 27(2), 158–166.
Wise, D., & Brewer, P. R. (2010). Competing frames for a public health issue and their effects on public opinion. Mass Communication and Society, 13, 435–457.
Woloshin, S., Schwartz, L. M., Casella, S. L., Kennedy, A. T., & Larson, R. J. (2009). Press releases by academic medical centers: Not so academic? Annals of Internal Medicine, 150(9), 613–618.
Zaller, J. (1992). The nature and origins of mass opinion. New York: Cambridge University Press.
Zaller, J. (1996). The myth of massive media impact revived: New support for a discredited idea. In D. C. Mutz, P. M. Sniderman, & R. A. Brody (Eds.), Political persuasion and attitude change (pp. 17–78). Ann Arbor: University of Michigan Press.
Zillmann, D. (1983). Transfer of excitation in emotional behavior. In R. E. Petty & J. T. Cacioppo (Eds.), Social psychophysiology: A sourcebook (pp. 215–240). New York: Guilford.