
Understanding the Risk Communication Puzzle for Natural Hazards and Disasters

  • Emma E. H. Doyle, Massey University
  • Julia S. Becker, Massey University

Summary

Risk communication has been studied in diverse contexts and from a range of disciplinary perspectives, including psychology, health, media studies, visualization studies, the public understanding of science, and social science. Such diversity creates a puzzle of recommendations for addressing the many challenges of communicating risk before, during, and after natural hazard events and disasters.

The history and evolution of risk communication across these diverse contexts is reviewed, followed by a discussion of risk communication particular to natural hazards and disasters. Example models of risk communication in the disaster and natural hazard context are outlined, followed by examples of studies into disaster risk communication from Aotearoa New Zealand and key best-practice principles for communicating risk in these contexts. Considerations are also provided on how science and risk communication can work together more effectively in the future in the natural hazard and disaster space.

Such considerations include the importance of scientists, risk managers, and officials communicating to meet the needs of diverse decision-makers and understanding how those needs evolve during a crisis across changing time demands and forecast horizons. To acquire a better understanding of such needs, participatory approaches to risk assessment and communication offer the greatest potential for developing risk communication that is useful, useable, and used. Through partnerships forged at the problem-formulation stage, risk assessors and communicators can gain an understanding of the science that needs to be developed to meet decision needs, while communities and decision-makers can develop a greater understanding of the limitations of the science and risk assessment, leading to stronger and more trusting relationships.

It is critically important to evaluate these partnership programs because of the challenges that can arise (such as resourcing and trust), particularly given that risk communication often occurs in an environment subject to power imbalances arising from social structures and the sociopolitical landscape. Too little attention is also often paid to evaluating the risk communication products themselves, which is problematic because what we think is being communicated may unintentionally mislead due to formatting and display choices. By working in partnership with affected communities to develop decision-relevant communication products using evidence-based product design, work can be done toward communicating risk in the most effective, and ethical, way.

Subjects

  • Risk Assessment
  • Risk Communication and Warnings
  • Risk Management

Introduction

Communication plays a vital role through all phases of the natural hazard and disaster management cycle of reduction, readiness, response, and recovery.1 However, understandings of what we refer to when we use the word communication vary across contexts, disciplines, and individuals (e.g., Covello, 1992; Fischhoff, 2014; Heine, 1988; Luhmann, 1992; McBride, 2017; Rowe & Frewer, 2005; Scheufele, 2014). Examples range from interorganizational and interagency communication processes and protocols for information exchange during response (Doyle & Paton, 2017; Owen et al., 2013) to hazard-monitoring telecommunication standards to enhance warning systems (WMO, 2018), the delivery of risk and science communication via public education programs to build individual and community resilience to future events, the dissemination of risk warnings (Kelman & Glantz, 2014), and the role of boundary organizations such as the Intergovernmental Panel on Climate Change (IPCC) that bridge communications between the extensive science community, policy-makers, and the public (Beck & Mahony, 2018). Further complexities arise when we consider how the meanings of the phrases risk communication, crisis communication, and risk and crisis communication vary across different sectors of society.

In a disaster and natural hazards context, these phrases usually imply the communication of risk prior to an event and during the crisis itself, via emergency response and critical warnings, to improve life-safety outcomes and inform independent judgments (see, e.g., Eiser et al., 2012). However, risk and crisis communication in an organizational context can refer to the risk to the organization or business and the need to mitigate damage to the credibility of an organization (e.g., see review in Morgan et al., 2001). Mitigating such damage might range from developing an information campaign to address perceived distrust in a product through to crisis communication when errors have occurred.2 Thus, in this latter context, risk and crisis communication can be seen as “typically manipulative, designed to sell unsuspecting recipients on the communicator’s political agenda” (Morgan et al., 2001, p. 8), and distinct from communications designed to be solely informative (Eiser et al., 2012).

Given the wide range of diverse organizations and agencies involved in preparedness for, and recovery from, disasters and natural hazard events, some of whom may have no prior experience of disaster or emergency management, these contrasting examples indicate the issues that can arise if communicators do not clarify their use of the different terminologies used in disaster management, including what risk communication is and the different motivations behind related communication goals. However, while the motivations may differ, many similar principles apply across these contexts regarding the effectiveness of such communication, such as understanding the perspectives, needs, and specific concerns of communities; partnering with communities and decision-makers; and communicating honestly and openly. Next, the definitions of risk are briefly reviewed, before proceeding in Part 1 with a review of the history of risk communication and in Part 2 with a discussion of the particular context of disaster risk communication.

What is risk? The Oxford English Dictionary defines risk both as a noun, “(Exposure to) the possibility of loss, injury, or other adverse or unwelcome circumstance; a chance or situation involving such a possibility” (Oxford English Dictionary, n.d.), and as a verb, “To endanger; to expose to the possibility of injury, death, or loss; to put at risk.” Meanwhile, the UNISDR defines risk as “the combination of the probability of an event and its negative consequences” (UN, 2009, p. 25), but it highlights how the popular usage of the word “risk” emphasizes the concept of chance, or possibility, for a particular “cause, place and period” (i.e., risk can be conflated with likelihood), much like the OED definition. Considering legislative examples, risk is defined as “likelihood + consequence + hazard” by the New Zealand Civil Defence Emergency Management Act (Civil Defence Emergency Management Act, 2002), while the International Organization for Standardization (ISO) defines risk as the “effect of uncertainty on objectives” (ISO 31000:2018; ISO, n.d.), and the U.K. government states that “risk refers to uncertainty of outcome, whether positive opportunity or negative threat, of actions and events. It is the combination of likelihood and impact, including perceived importance” (Cabinet, 2002, p. 7). Such variation in understanding of risk across disciplines is reflected in further definitions of risk for business, environmental, health, security, insurance, occupational, and other contexts, and in their evolution over time.

When defining risk in a disaster context, it is important to understand that risk is conceptualized as a function of not just the hazard posed, but also the exposure of individuals, communities, and assets to that hazard, as well as the vulnerability, or inability of individuals or a community to resist or respond to the hazard (Siwar & Islam, 2012; Wisner et al., 2003). Vulnerability thus encompasses the physical, social, economic, and environmental factors that contribute to increasing disaster risk, and risk communication consequently exists in a context that is influenced by current and historical socioeconomic and political conditions (Tierney, 2014; Wisner et al., 2013).
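In the disaster risk reduction literature, this conceptualization is often summarized by a schematic pseudo-equation (a heuristic shorthand widely used in the field, rather than a formula drawn from the works cited above):

Risk = Hazard × Exposure × Vulnerability

The multiplicative form conveys that risk tends toward zero if any one component is removed (e.g., a severe hazard poses little disaster risk where no people or assets are exposed), and some formulations additionally divide by capacity or coping ability to emphasize that risk decreases as a community’s capacity to manage the hazard increases.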

What is risk communication? The U.S. National Research Council defines risk communication as

an interactive process of exchange of information and opinion among individuals, groups, and institutions. It involves multiple messages about the nature of risk and other messages, not strictly about risk, that express concerns, opinions, or reaction to risk messages or to legal or institutional arrangements for risk management (as defined in NOAA, 2016, p. 8).

In the field of health, risk communication has been described as a “process of exchanging information among interested parties about the nature, magnitude, significance, or control of a risk” (Covello, 1992, p. 359), while for disasters, it “aims to prevent and mitigate harm from disasters, prepare the population before a disaster, disseminate information during disasters and aid subsequent recovery” (Bradley et al., 2014, p. 2). As stated by the World Bank, risk communication “shapes people’s perceptions of risk and influences their actions with respect to disaster preparedness and disaster response” (Shaw et al., 2012, p. 3).

Reviewing all the definitions of risk and risk communication is beyond the scope of this article, but it is clear from this brief selection that risk communication is a dynamic, holistic process extending beyond single products to the broader system within which we develop and exchange knowledge and awareness to make decisions, individually and collectively, about risk. As discussed herein, effective risk communication has moved beyond fine-tuning messages for delivery to a single “audience” toward a multifaceted approach with community and audience needs at the heart of both the design of the risk communication and the recognition of the risk itself.

Why is effective risk communication so important? We need only look at a range of case studies across disaster history to identify how diverse failures within this holistic risk communication system of education, partnerships, warnings, technology, and individual, political, and collective decisions have resulted in gross losses of life, income, and infrastructure. An early example is the loss of an estimated 23,000 lives in the 1985 Nevado del Ruiz volcanic eruption due to eruption-induced lahars. After a year of persistent volcanic and seismic activity, tragically the “emergency management system failed to avert disaster” (Voight, 1996, p. 720). The high fatality rate has been attributed to communication failures (García & Mendez-Fajury, 2017; Voight, 1996) and the need for more effective hazard management practices (see extensive chronological review in Voight, 1996). Voight (1996) proposed that contributing factors included the government’s fear of false alarm and the economic or political cost of early evacuation, as well as issues with “institutional skepticism” and a loss of credibility as awareness of the volcano hazard grew. The warning and response failures are identified by Voight as arising from “the limitations of prediction/detection, the under-advised and inadequately prepared local authorities, the unprepared populace, the refusal to accept a possible false alarm, and the lack of will to act on the uncertain information available” (Voight, 1996, p. 719; see also Voight et al., 2013; see also García & Mendez-Fajury, 2017, for a thorough account of the context leading up to this event and communication efforts since).

However, this disaster was not just a failure of interagency communication and warning; it also arose from the underlying cultural, social, economic, and political drivers (see, e.g., Tierney, 2014) that culminated in towns being situated within the lahar path. While Voight (1996; see also Voight et al., 2013) highlights the importance of developing more effective communication of risk and uncertainty to prevent such delays in decision-making during a crisis, and the dynamic nature of that communication process, this brief example shows the importance of developing effective risk communication not just for warnings but through community partnerships around risk awareness and decision-making. The aim is thus to reduce communities’ exposure and vulnerability prior to such a natural hazard event. As stated by García and Mendez-Fajury (2017), “In 1985 the Colombian population had no knowledge about previous volcanic disasters and thus there was no experience in risk communication. In consequence, despite the efforts of the scientific community to create risk awareness, the reaction was not effective” (p. 341). While a number of information campaigns were developed before the eruption (hazard maps, dissemination of scientific findings), only about 8,000 people survived by evacuating to identified safe meeting points, and it is unclear how many of these evacuated because of the prior information campaigns. We interpret García and Mendez-Fajury’s reference to “no experience” as referring to the lack of previous community experience of official volcanic advice and the poor community engagement in the design of those communications.

Since this disaster, a number of community-based initiatives involving a partnership approach to risk and communication have been developed across Colombia to address volcanic risk. These include the development in 2007, during a period of unrest at the Nevado del Huila volcano, of risk scenario maps by the local indigenous community, in collaboration with a range of government agencies, which were verified (and extended) by the national geological agency INGEOMINAS. When the 2008 lahar occurred, this “long term multi-agency, interdisciplinary and participatory work” (García & Mendez-Fajury, 2017, p. 344) prior to the event was credited with a toll of “only” 10 deaths and is a clear example of the evolution and advancement of risk communication practice for a known hazard.

Similar risk communication issues, as well as the need for a more holistic understanding of risk communication, can be identified across a range of other events. In the well-known L’Aquila 2009 earthquake and subsequent legal proceedings, the treatment and communication of scientific uncertainty in isolation from ethical, political, and societal concerns has been suggested as a key contributor to the controversies, conflicts, and disaster that arose (Benessia & de Marchi, 2017). While not classified as a “physical” natural hazard, the global Covid-19 pandemic crisis of the 2020s also offers many examples of international risk communication issues that are best solved through partnerships with communities, where an understanding of the socioeconomic issues of those communities leads to more effective risk communication and mitigation (see also Blake et al., 2017).

These introductory examples and definitions demonstrate how developing effective risk communication for natural hazards and disasters has moved from enhancing the design and delivery of a one-way product to considering, via participatory two-way design, the complexities of and influences upon the landscape within which communication occurs and decisions are made (e.g., Gurabardhi et al., 2004). We review the history and evolution of risk communication in Part 1, outlining its shift from a dissemination of risk messages toward two-way communication, a shift also echoed in the evolution of science communication. This is followed by a discussion of risk communication for disasters and natural hazards in Part 2, which outlines some example models for risk communication, as well as examples of studies into disaster risk communication, and summarizes some key principles for best practice in communicating risk in these contexts. We conclude by considering how science and risk communication can work together more effectively in the natural hazard and disaster space.

Part 1: The History and Landscape of Risk Communication

To present an extensive review of risk communication would be beyond the scope of this article. Indeed, an April 2021 search for risk communication in Web of Science returned nearly 8,300 articles (all fields), while a search for risk communication and review in titles returned 52 articles (e.g., Gurabardhi et al., 2004; Kellens et al., 2013; Visschers et al., 2009), demonstrating that the topic has been comprehensively reviewed and is continually evolving. Thus, a brief summary of the evolution of some of the key principles of risk communication is presented in this section, to inform the identification of key principles in Part 2.

Risk communication has garnered extensive research attention across the wide range of physical hazards, from flood warnings (e.g., see Kellens et al. [2013] for a review of communication of flood risks and Intrieri et al. [2020] for an operational framework for flood risk communication) to the communication of risks associated with climate change (e.g., MacIntyre et al., 2019; Sterman, 2011). Research has been undertaken in the volcanic context, for example, on communication for volcanic preparedness (e.g., volcanic hazard communication using maps, Haynes et al. [2007]; the risk perception of young people on Vesuvius, Carlino et al. [2008]; and influences of evacuation decisions at Pacaya volcano, Guatemala, Lechner & Rouleau [2019]). Risk communication research has also explored earthquake preparedness (e.g., communicating public earthquake risk information, Mileti [1993]; visualizing seismic risk and uncertainty, Bostrom et al. [2008]) and the cascading hazards that can arise from such events (e.g., the challenges of risk communication following the Fukushima nuclear incident; Robertson & Pengilley, 2012).

Across these, risk communication often has two core goals: first, to inform about general risk and enhance preparedness and mitigation, and second, to enhance response to a specific event (e.g., through warnings; Mileti, 1993). Risk communication has also been extensively studied in various health and safety fields, from injury prevention (e.g., Austin & Fischhoff, 2012) to risky behaviors, including seatbelt use (Makoul et al., 2006; Starr, 2002), smoking (see McComas, 2006; Slovic et al., 2004), and food risk (e.g., Lofstedt, 2006). Other contexts include environmental risks (Frewer, 2004; Kuhn, 2000), wildfire protection and preparedness (e.g., Rohrmann, 1995; Zaksek & Arvai, 2004), and general health and safety and product warnings (e.g., Wogalter et al., 2002). More recently, the Covid-19 crisis has seen an explosion of research and systematic reviews into effective risk communication for early-phase protective behaviors and later-phase vaccine uptake (e.g., the role of values in communication, Hooker & Leask [2020]; longitudinal risk communication, Sutton et al. [2020]).

There have been a vast range of comprehensive reviews of risk communication research (e.g., Árvai & Rivers, 2013; Bier, 2001; Bradley et al., 2014; Frewer, 2004; Goerlandt et al., 2020; Gurabardhi et al., 2004, 2005; Heath & O’Hair, 2012; Kellens et al., 2013; Kondo et al., 2019; Lofstedt, 2010; Sellnow et al., 2009). Many of these reviews build on the classic early work of Fischhoff (1995), Leiss (1996), Keeney and Von Winterfeldt (1986), Fisher (1991), Rohrmann (1992), Sandman (1993), and Gutteling and Wiegman (1996), and they highlight the multidisciplinary nature of the field, including cognitive and social psychology, communication science, political science, organizational fields, and natural and life sciences (Gurabardhi et al., 2005).

Early Practice: Dissemination of Risk for Natural Hazards

In their review of the three phases of risk communication, Leiss (1996) states that the term risk communication was first coined in the literature in 1984, when the practice evolved from calculating and quantifying risk to a focus on persuasive communication. However, Lofstedt (2010) highlights that the practice of risk communication actually finds its roots in the early risk perception work of Gilbert White in the 1940s, via his seminal paper “Human Adjustment to Floods” (Macdonald et al., 2012; White, 1945). This paper investigated the various factors creating flood “problems” and influencing communication and individual adjustment to living with floods, including social impacts, emergency measures, flood protection and structural adjustments, land use, and response through relief and insurance. White’s paper was highly influential: it highlighted how human encroachment into floodplains contributes to disasters, and it was an early advocate for forms of flood management and mitigation that go beyond engineering to consider public policy and social and human factors (see review in Macdonald et al., 2012). This, combined with the risk perception work of Fischhoff and Slovic in the 1970s (Fischhoff et al., 1978; Slovic, 1987), helped form the basis for risk communication work that understood that public perception of risk varies due to differing degrees of “control, catastrophic potential, and familiarity” (Lofstedt, 2010, p. 91).

In the 1970s, communication efforts thus focused on changing public views about a risk and were generally rooted in the motivation of enhancing technology acceptance (Frewer, 2004). This centered on the “deficit model” of communication (see reviews in Frewer, 2004; Vidal, 2018), which viewed the public as lacking experts’ understanding of risk and science and aimed to rectify those knowledge gaps, built on the premise that the public was “ignorant of the scientific ‘truth’ about risk and probability” (Frewer, 2004, p. 392). By the late 1980s, these objectives had broadened to encompass elements aiming to understand an audience’s perspective to better inform their decisions. For example, Keeney and von Winterfeldt (1986) identified a risk communicator’s objectives as including (pp. 420–421):

1. To better educate the public about risks, risk analysis, and risk management

2. To better inform the public about specific risks and actions taken to alleviate them

3. To encourage personal risk reduction measures

4. To improve the understanding of public values and concerns

5. To increase mutual trust and credibility

6. To resolve conflicts and controversy

In contrast, Rohrmann (1992) identifies that risk communication has three core goals within which the risk actors, audiences, channels, and situations vary. These are to (p. 171):

  • Advance/change knowledge and attitudes

  • Modify risk-relevant behavior

  • Facilitate cooperative conflict resolution

Thus, by the early 1990s, a spectrum of risk communication objectives could be identified. Fisher (1991) outlined that the different perspectives on the communication of risk range from “one-way communication” that seeks to inform an audience, telling them “what has been decided or done” and “what to do,” to “empowering the audience” at the other end: involving two-way dialogue, an understanding of audience concerns, the inclusion of those concerns in the risk assessment itself, and helping the audience interpret results to inform their decisions. Thus, Fisher (1991) recommends that at the outset of any risk communication activity, a communicator must first clarify whether their goal is to (a) alert an audience to a risk they lack awareness of or (b) help people seek information to make more informed decisions (as echoed by Siegrist, 2014). Through this, they highlight that while risk estimates can be hard to understand and research must address those presentation issues, other risk dimensions also affect perceptions and need greater understanding. These include the influence of the source of information (formal vs. informal), the style of presentation framing (gains vs. losses), personal background, and the dimensions of the risk: whether it is human-made, involuntary, or unfamiliar; involves a dreaded disease; or is borne by people other than those who benefit. Thus, risk judgments and responses to risk communication go beyond the technical and scientific measures of risk within a message and are influenced by wider individual and social factors.

The work of Fisher (1991) reflects an era where the philosophy and motivations of risk communication started to shift from this one-way to two-way approach. As reviewed by Gurabardhi et al. (2004), the early years of risk communication work (since the 1980s) focused on the determinants of public risk assessments, finding that they are based upon subjective risk characteristics rather than objective risk indicators. The goal was to understand audience behaviors in relation to risk communication and develop more effective risk communication through models and experiments, in order to “align the risk perception of the public with that of the risk experts” (p. 325). This was borne out of motivations to reduce fear of new technology and public resistance to it (e.g., nuclear). A range of different approaches were thus adopted to meet this goal and enhance this communication, and as summarized by Gurabardhi et al. (2004, p. 324), these included the psychometric paradigm, cultural risk theory, the mental models approach, attitude-behavior models, and the stress-coping paradigm (see, e.g., Baum et al., 1983; Bostrom et al., 1994; Dake, 1992; Slovic, 2000).

However, the practice of risk communication has since evolved as the importance of integrating and understanding values in the risk communication process was acknowledged, particularly where risks could become politically contested or disputed (Frewer, 2004; Gurabardhi et al., 2004, 2005; Kasperson, 1986). As outlined by Frewer (2004), there was thus a gradual shift from risk education in the 1970s toward risk consultation in the early 2000s, with a focus on restoring public trust in risk management through more extensive public consultation and participation in risk management and other science and technology issues (see also Rowe & Frewer, 2005). Those transitional communication efforts thus focused on (a) information transfer, persuasion, or dialogue; (b) one-way or two-way flows of information; (c) risk decision-making processes; and (d) the importance of empowering different stakeholders to enhance trust (see reviews in Chess, 2001; Covello & Sandman, 2001; Fischhoff, 1995; Gurabardhi et al., 2005; Leiss, 1996).

Shifting to Two-Way Communication

This shift from a focus on one-way top-down communication products to two-way consultative communication processes has been identified by a number of other authors over the years, most notably Fischhoff (1995) and Leiss (1996). In 1995, Fischhoff outlined in his seminal narrative text “Risk Perception and Communication Unplugged: Twenty Years of Process” that risk communicators and researchers evolve through a series of developmental stages as they build their skill and awareness of effective communication practice. These include (Fischhoff, 1995, p. 138):

1) all we have to do is get the numbers right;

2) all we have to do is tell them the numbers;

3) all we have to do is explain what we mean by the numbers;

4) all we have to do is show them that they’ve accepted similar risks in the past;

5) all we have to do is show them that it’s a good deal for them;

6) all we have to do is treat them nicely; and

7) all we have to do is make them partners.

As stated by Morgan et al. (2001), progression from Stage 1 to 6 increasingly acknowledges the recipients as individuals “with complex concerns” (p. 9), but it is limited by the assumption that the risk communication message content is determined by the communicator. By Stage 7, the recipients and the public are seen as partners who co-create the message.

This is similar to Leiss’s (1996) three phases of risk communication. In Phase I (1975–1984), the focus of research and practice emphasized risk itself. It aimed to build capacity to manage risks, with a strong focus on high levels of detail, quantitative expressions of risk, and particularly comparative risk estimates (see also Fischhoff, 1995). However, over time, weaknesses emerged, including an “arrogance of technical expertise” (Leiss, 1996, p. 88), where public perceptions of risk were treated as false understandings in contrast to the “true” account of reality. This phase is described as a “model of message transfer” by Renn (2014) and equates to Fischhoff’s developmental Stages 1 and 2; it is particularly challenging in situations of uncertainty where simple yes/no decisions are not appropriate.

In Phase II (1985–1994; Leiss, 1996), the focus of research and practice was on communication, where risk statements were regarded as “acts of persuasive communication” (p. 88). Thus, work centered on identifying characteristics of successful communication, such as source credibility, message clarity, and effective use of channels, with a focus on needs and perceived reality. This is described as the “model of shared understanding about objective threats” by Renn (2014) and can be represented in Fischhoff’s (1995) developmental Stages 3 to 6. Leiss (1996) states that this was a radical break in research and practice from Phase I, with a focus on using messages to persuade the listener of the “correctness of a point of view” (p. 89). Accordingly, it drew extensively on marketing and focused on enhancing trust and credibility. However, Leiss highlights that during this period, there was also an acknowledgment that too much manipulation or persuasion is potentially dangerous and can undermine trust and credibility (see also McBride [2017] for a thorough review of persuasion in risk communication). In this phase, risk communicators thus primarily aimed to communicate honestly and effectively by understanding public perceptions, rather than “touting the superiority of their assessments” (p. 90).

Finally, in Phase III (from 1994; Leiss, 1996), the focus shifted to the social context and to identifying how to carry out sound risk communication given the pervasive lack of trust in many risk issues. The goal is thus to build trust in a wider context of consensus building and meaningful stakeholder interaction, and Leiss (1996) recommends the formation of a code of good risk communication practice to achieve this. This phase is described as the “model of mutual constructing of meaning” by Renn (2014) and is equivalent to Fischhoff’s (1995) developmental Stage 7.

Leiss’s (1996) three phases and Fischhoff’s (1995) seven developmental stages both echo Fisher’s (1991) spectrum of risk communication objectives appropriate to different contexts, ranging from one-way communication informing audiences to two-way dialogue empowering them. Importantly, both Leiss and Fischhoff highlight that the earlier phases and stages do not become irrelevant; rather, they become incorporated into the best practice of the later phase or stage.

Risk Communication After 1996

The works of Leiss (1996) and Fischhoff (1995) have been highly cited within the risk communication literature (466 and 1,613 citations, respectively, as of April 5, 2021; Google Scholar). They mark a turning point in risk communication research and practice. As reviewed by Frewer (2004), researchers came to realize that the public remained highly skeptical even of the best of the earlier communication efforts, as social factors were not integrated into risk management policies and processes themselves. By increasing the transparency of risk management practices, as well as public consultation and participation in risk management policy, risk communication practices improved. However, the increase in transparency enabled uncertainties and variabilities in the risk to be more closely scrutinized by the public, demanding more effective communication of them within this wider communication process. Overall, this shift in practice represented a “re-orientating” (Frewer, 2004, p. 392) of risk communication practice toward a citizen focus (rather than an expert knowledge focus).

These qualitative descriptions of the evolution of risk communication practice from one-way to two-way approaches have been systematically advanced in a quantitative review of 349 peer-reviewed articles on risk communication published between 1988 and 2000 by Gurabardhi et al. (2004, 2005). They identified that this shift in best practice is reflected in a gradual decrease in articles referring to a one-way flow of risk communication and a corresponding increase in two-way communication studies, with an increasing emphasis on stakeholder participation in risk decisions (Gurabardhi et al., 2005). They describe this as a shift from a technical to a democratic perspective (see also Rowan, 1994), which emphasizes the interaction between partners. Interestingly, while the risk communication flow changed from one-way to two-way over that time period, the risk communication strategy was not seen to change (where strategy refers to technical information, described by data sheets, quantitative risk assessments, or comparative risk estimates; persuasion, described by message clarity, risk perception, and education; or dialogue, described by the characteristics of the discourse).

Gurabardhi et al. (2005) define the technical perspective as being based on the premise that decisions should be left in the hands of the experts and relevant scientists, as well as the view that involving the public will result in poorer decisions (Löfstedt & Perri, 2008; Rowe & Frewer, 2005). The focus is thus on rationality, efficiency, and expertise, where communication is seen as informative and involves educational dissemination and transmission of information. The democratic view, however, focuses on justice and fairness and prioritizes the codetermination with citizens of the decisions that impact them (see Fiorino [1990], cited in Gurabardhi et al. [2005]). Thus, communication involves a more constructive dialogue among all stakeholders as they seek to reach an effective collaborative risk decision. In this process, subjective, experiential, social, and cultural values are acknowledged and can be key drivers of the discussions.

Despite the evidence found in Gurabardhi et al.’s (2005) review of an increasingly democratic approach to communication, a number of researchers have critiqued participatory dialogues as the ultimate solution to risk management and communication. These researchers highlight that risk management, and thus effective collaborative dialogues and participatory approaches to risk, must consider not just the different social and cultural values and worldviews present (Scandlyn et al., 2013) but also the sociopolitical arena and history within which they occur (Mileti, 1999; Wisner et al., 2003; see also Tierney, 2014, chap. 3). This includes developing an understanding of the sociopolitical influences on an individual’s or a community’s response to, or trust in, a risk communication campaign, as well as barriers to involvement in any participatory risk management endeavors (discussed further in the section “Debates, Challenges, and the Future of Risk Communication Since 2010”).

Through a review of the literature from 1996 to 2005, McComas (2006) identified that risk communication had shifted to focus on a number of core themes: social trust, the social amplification of risk, and the affect heuristic; influences of risk in mass media; mental models and their role in developing content; risk comparisons, narratives, and visuals in the production of risk messages; and severity, social norms, and efficacy. McComas thus highlights that the era of risk communication predicted by scholars a decade before (e.g., Leiss, 1996) was present by the early 2000s, as communicators had moved from the “narrow vision of risk communication as persuasion and toward a broader understanding of the psychological, cultural, and social influences on risk communication” (McComas, 2006, p. 84).

Goerlandt et al. (2020) recently conducted a similarly structured and systematic review of risk communication research via a “scientometric” analysis,3 identifying its growth from psychology and social science investigations of public health and environmental concerns into risk communication research in emergency and disaster contexts. The field is identified as originating from a practice-oriented need to communicate about industrial (environmental) contamination and public health, with subsequent research exploring wider societal concerns such as nuclear power, epidemics, and natural disasters. The development of risk communication models or theories and of risk communication efficacy represents a further research cluster, albeit quite “disjoint” from the others (p. 26). They thus identify a number of key challenges and themes, including the function of risk communication, assessing communication product or program effectiveness, the use of probabilistic information, the use of risk maps, and ethical aspects of risk communication. Two dominant narrative topics were identified across the 1,196 reviewed articles: one relating to societal risk governance and one addressing medical risk communication (between practitioner and patient or family). The former focuses on communication between different societal stakeholders where values and objectives play a role, while the latter centers on interpersonal communication and trust in a medical setting, and on interventions and individual decision-making.4 They also identified geographic and cultural splits in the research, which is dominated by Western countries (primarily the United States), followed by a growing field in China, South Korea, and Brazil. However, they identified very little risk communication research in other South American, Eastern European, African, Middle Eastern, Asian, and Oceanian countries, which they attribute to the different governance structures and the role of risk communication within them. They thus caution that this cultural bias means theories, models, and conceptual frameworks for risk communication must be utilized with care to account for different “social traditions, world views, or knowledge systems” (p. 24), discussed further next.

Debates, Challenges, and the Future of Risk Communication Since 2010

While there has been a shift in risk communication priorities toward two-way, democratic, or empowering approaches that relate to societal risk governance, the definitions of risk communication in the earliest work suggest that this was part of its original motivation. For example, the U.S. National Research Council (NRC) stated in 1989 (as quoted in Lofstedt, 2010, p. 91):

Risk communication is an interactive process of exchange of information and opinion among individuals, groups, and institutions. It involves multiple messages about the nature of risk and other messages, not strictly about risk, that express concerns, opinions or reactions to risk messages or to legal and institutional arrangements for risk management.

(NRC, 1989, p. 21)

Reflecting on the literature as a whole, Lofstedt (2010) thus identifies a list of core theoretical considerations similar to the themes of McComas (2006), including:

1. The importance of psychometric and contextual factors, such as perceived controllability, catastrophic potential, degree of voluntariness, and whether the hazard is natural or technological, all of which affect risk perception.

2. The communication of uncertainty and the role of transparency, and how these can increase public trust and help people make informed choices, although this is not without critique (Lofstedt, 2003) due to associated challenges and perceptions of that uncertainty (see, e.g., reviews in Doyle et al., 2019; Doyle, Khan, et al., 2014; Doyle, McClure, Paton, et al., 2014; Eiser et al., 2012; Sword-Daniels et al., 2018).

3. The social amplification of risk due to psychological, social, institutional, and cultural processes.

4. How the stigmatization of a hazard can result in that hazard garnering more media and public attention, affecting any related risk communication.

5. The role of trust in the authorities responsible for the information, as well as in the message content.

Similarly, reviewing the literature through the lens of decision analysis, French and Maule (2010) identified the psychometric paradigm as a core risk communication focus. They highlight that qualitative threat aspects such as dread can influence perception of risks; that heuristics and biases play a key role in message interpretation, judgments, and decision-making; and that response to a message can be influenced by the framing of the message itself (e.g., Kahneman & Tversky, 1979; Tversky & Kahneman, 1981). In addition, they identified a research focus around the social perspective and the role of culture (Douglas, 2013; Thompson et al., 1990, as cited in French & Maule, 2010), which determines how groups perceive and act in the face of risk; the role of trust; and the social amplification of risk (Barnett & Breakwell, 2003; Kasperson et al., 1992, 2003, as cited in French & Maule, 2010), including the role of the media in amplifying public concern (as also identified by McComas, 2006).

Thus, while there was an important shift in focus from one-way to two-way communications at the turn of the century, since 2010, this has been significantly advanced by incorporating other contextual factors, such as the psychometric factors discussed, as well as social, institutional, and cultural processes; stigmatization; uncertainty; trust; politics; and the management of disruptive disinformation. These considerations have advanced our understanding of risk communication effectiveness, but the implementation of this knowledge into practice has remained challenged by the limitations of resources, time, and effective knowledge transfer from researchers to communication practitioners (see also Doyle, Becker, et al., 2015).

Indeed, French and Maule (2010) identify that in a move toward more social perspectives around risk and decision-making, the fundamental question, “What information do people want?” still needs to be addressed. Risk communication within these dialogues must meet the needs of users and extend beyond technical risk assessment content to include issues such as exposure, consequences, controllability, “other people’s experience with the risk” (p. 9), responsibility for negative consequences, and potential advantages. Based upon a review of the literature, they thus identify three core bodies of work that seek to identify perspectives and information desires: (a) the Mental Models Approach (MMA) (discussed further in the section “Models for Disaster Risk Communication”), which seeks to identify how mental models of external reality and of potential actions differ between the lay public and scientific “experts” (Morgan et al., 2002); (b) methods for communicating statistical risk information, given issues with misinterpretations of probabilities and quantitative assessments (Gigerenzer et al., 2005; see also Doyle, McClure, et al., 2020; Doyle, McClure, Johnston, et al., 2014); and (c) cognitive mapping, problem and issue structuring methods, and soft modeling (e.g., French et al., 2005; Mingers & Rosenhead, 2004; see also Atman et al., 1994; Bruine De Bruin & Bostrom, 2013; Davies, 2011; Wood et al., 2012).

Reflecting on this broad history of risk communication, Kasperson (2014a) thus proposed a set of principles for the future of risk communication practice, as follows (pp. 1237–1238):

1. Risk communication programs need to be more sustained over time, better funded, and more ambitious in the goals adopted and the outcomes sought.

2. The scope of risk communication should be broadened to internalize conflicting issues of concern, and decision-makers should deepen their analysis to address the embedding of risk issues in value and lifestyle structures.

3. If uncertainties are large and deeply embedded, more communication will be needed, particularly that regarding those uncertainties that really matter in risk terms and not the full catalogue of uncertainties that scientists uncover. Attention will also be needed to identify which uncertainties can and cannot be reduced over time and within what time frames.

4. In situations where high social distrust prevails, and this is increasingly common, a thorough revamping of the goals, structure, and conduct of risk communication will be needed.

However, a wide range of responses to these principles highlight how research and practice were (at the time) already demonstrating them (Bostrom, 2014; Kasperson, 2014b; Renn, 2014; Siegrist, 2014). For example, Bostrom (2014) outlined the growth of evidence-driven design, the implementation of new engagement strategies, and efforts to effectively communicate uncertainty. Renn (2014) illustrated how Leiss’s (1996) Phase III of risk communication (termed the “model of mutual construction of meaning” by Renn, 2014) demonstrated an inclusive approach, in line with Kasperson’s (2014a) principles, as far back as 1996.

However, Renn (2014) also highlights that the deficit model is still used in some sectors, such as industry and government, where it can be used to both trivialize and dramatize risks. Similarly, Siegrist (2014) notes that there is a wide range of opinions regarding what risk communication really is (citing Árvai & Rivers, 2013), ranging from personal safety to technology acceptance, some of which may be more successful than others (and may have a different focus, e.g., decreasing or increasing acceptance of technology depending on nongovernmental organization vs. government goals). Siegrist thus highlights some challenges of Kasperson’s (2014a) principles, including (a) that it may not always be in the public’s best interest to trust government, and thus erosion of trust is not always a bad thing, and (b) that good risk communication should aim to help people make better informed decisions rather than judging whether their decision, outcome, or action is good or bad. This requires evidence-based risk communication, including effective communication of the uncertainties, trade-offs, and risks needed to enable these informed decisions (see also Goerlandt et al., 2020; Renn, 2014).

In response to Kasperson’s (2014a) principles, Bostrom (2014) highlighted the need to apply analytical tools to synthesize the large body of risk research and to progress toward more robust, replicable findings; the need to ensure risk communication practice is based upon stronger behavioral science foundations; and the need for risk communication research to integrate all disciplines and application domains, as well as engage the full diversity of participants. We reemphasize the importance of these here, particularly given the continued exponential increase in risk communication research since this 2014 debate. Reflecting on the importance of multidisciplinary perspectives on communicating and managing risk, this becomes particularly important when considering participatory approaches to risk, which must be aware of sociopolitical, economic, and cultural influences and history. These factors may both produce risk and affect engagement in the management of risk. As outlined in our discussion of the definition of risk, risk communication can often occur in an environment subject to power imbalances (Marlowe et al., 2018; Scandlyn et al., 2013) and is not always “politically neutral” (see also Mileti, 1999; Tierney, 2014; Wisner et al., 2003). Accordingly, the associated disaster governance and risk management process is influenced by “globalization, world-system dynamics, social inequality, and sociodemographic trends” (Tierney, 2012, p. 341). Unfortunately, as stated by Tierney, research has disproportionately focused on the perception, communication, and management of risk compared to understanding where risks come from. Researchers and communicators must understand both the social origins and social production of risk (Kasperson et al., 1988; Renn, 1991; Thomas et al., 2009; Tierney, 2014; Wisner et al., 2003) and how these relate to individual and community vulnerability, resilience, and capacity to take action or participate (Blake et al., 2017), as well as variations in demographics that influence the power relations present (Marlowe et al., 2018).

A further challenge arises in the communication of risk, and of the science related to this risk, when individuals or organizations try to obfuscate the message and undermine the official or accepted science to serve ulterior motives. For example, researchers have highlighted the role of the tobacco industry (and some scientists) in undermining smoking health risk messages (Oreskes & Conway, 2010), and similar behaviors have been identified in the alcohol industry’s influence on scientific policy advice (McCambridge & Mialon, 2018). This has also been a well-known challenge in communicating climate change science (Oreskes & Conway, 2010), with a range of bodies working to undermine scientific messages that may impact their agendas.

Since the mid-2010s, this challenge has also included the growth of a “post-truth” era (Van der Linden & Löfstedt, 2019) and the rise of disinformation formed out of political motivations and “post-truth politics” or a “post-truth society” (Harsin, 2018). As stated by Harsin (2018), post-truth is not simply that citizens or politicians no longer respect truth, or accept as true what they believe or feel, but rather that post-truth is “a breakdown of social trust” (p. 1). This breakdown of trust particularly concerns the formerly major “institutional truth-teller,” the news media, and has been hastened by the growth of mass communication technologies and social media. The post-truth environment has since stoked the global antivaccine protest movement during the management of the Covid-19 crisis, with international influences affecting individual trust in domestic government science advice due to the “viral” capability of social media (Peters et al., 2020). Managing risk communication in a post-truth era is a subject of extensive ongoing research that is beyond the scope of this article, but we direct readers to the U.S. National Academies of Sciences, Engineering, and Medicine’s (2017) advice on this topic, which includes: recommending a systems approach to science communication that recognizes it exists within a larger network of information and influences; advocating honest, bidirectional dialogue to achieve meaningful engagement; and understanding the appropriate timing for science communication, considering social media’s impact, particularly for contentious issues.

Parallels With the Evolution of Science Communication

The previous sections identified some synergies between science and risk communication and outlined how risk communication strategies have shifted from risk education in the 1970s, represented by “one-way,” “informing,” or “technical” approaches, to risk consultation from the early 2000s, represented by “two-way,” “empowering,” or “democratic” approaches (Chess, 2001; Covello & Sandman, 2001; Fischhoff, 1995; Fisher, 1991; Gurabardhi et al., 2004; Leiss, 1996; Rowan, 1994). The synergies of these approaches are summarized in Table 1, which depicts the evolution of different risk communication models over time. Interestingly, the “deficit model” characteristic of the earlier approaches to communication (Hilgartner, 1990; Vidal, 2018) is also found in the history of science communication, the practice of which has evolved along similar lines to risk communication. This deficit model persists in science communication (Simis et al., 2016), disaster risk management (Cook & Zurita, 2019), and climate change communication (Suldovsky, 2017), where a perceived knowledge deficit is driven by assumptions about who the public are, as well as by scientists’ lack of formal training in public communication (Simis et al., 2016). Unfortunately, the adoption of this knowledge deficit model puts restrictions and boundaries on adequate public participation and engagement, which “only fail[s] to empower publics” (Cook & Zurita, 2019, p. 56).

Reviews into the public understanding of science illustrate how it has similarly progressed from a knowledge focus, assessing people’s “science literacy,” to an attitude focus, assessing beliefs, and more recently to a focus on engagement with science through studies into “science and society” (Bauer et al., 2007; Miller, 2001). The earlier science literacy movements (see also Prewitt [1983] and Wynne [1996], as cited in Bauer et al., 2007) were initially motivated by a definition of “literacy” bounded by knowledge of facts, understanding of scientific methods, and a rejection of superstitious beliefs. There was an assumption of either a public deficit of knowledge about science and its “facts,” which science communication or education aimed to fix, or a public deficit in appropriate attitudes about science, which education programs aimed to correct. However, critics of these earlier definitions argued that the key to understanding science is its process and not its facts (Bauer et al., 2007; Sturgis & Allum, 2004), and thus any assessment of science “literacy” would need to include an awareness of the processes and concerns of science, including uncertainty, peer review, scientific controversies, replication, institutions, and science politics.

In the subsequent public understanding of science movement, the focus shifted to public attitudes about science (Bauer et al., 2007; Bodmer, 1987), while still also measuring knowledge. However, knowledge was acknowledged to be on a spectrum (rather than “literate”/“not literate”), and considerations included the relationship between knowledge and attitudes (Durant et al., 1989). This period was motivated by a desire to either educate or “seduce” the public (Bauer et al., 2007) and has also been labeled “public engagement with science” (Seakins & Hobson, 2017). However, critics of this approach argued that it was more important to understand knowledge in community and personal contexts and account for life’s concerns (Bauer et al., 2007; Ziman, 1991). Further, as highlighted by Seakins and Hobson (2017), this approach cannot sufficiently address a range of communication challenges, such as the public assuming scientists are in a “factual” search for truth, resulting in the public then dismissing scientific claims if they perceive the scientists themselves are not convinced (see also Dillon & Tinsley, 2016; Doyle et al., 2019; Rabinovich & Morton, 2012).

Accordingly, in the most recent science and society approach, Bauer et al. (2007) highlight that the aim is to fix the relationship between science and the public through activities such as citizen science and action research, and to enhance mutual understanding across these groups (Bidwell, 2009; see also Law et al., 2017). Thus, the focus has shifted to a “deficit of the technical experts” themselves (Bauer et al., 2007, p. 85), which instead questions the implicit and explicit views scientific experts hold of the public. Through this, the goal is to identify how the experts’ deficits and assumptions have impacted the trust within relationships between scientists and the public. In this paradigm, many researchers are involved in “action research,” where research and intervention are blurred (Bauer et al., 2007; Cassidy & Maule, 2012; Glicken, 2000; Linnerooth-Bayer et al., 2016; Ross et al., 2015), and there is an additional aim to improve confidence and trust.

However, as with risk communication, science communication and public engagement are influenced by their sociopolitical landscape and history, as well as by related power dynamics (Marlowe et al., 2018; Tierney, 2014). These dynamics can also impact the science-to-policy interface (Balog-Way et al., 2020; Pidgeon, 2020; Woods, 2019). Woods (2019) suggests that better quality evidence synthesis is required to cut through the “post-truth” noise at this interface and highlights that a more structured public dialogue is needed so that science can play a more robust and transparent role. Similarly, Pidgeon (2020) offers a number of recommendations for public engagement, including providing balanced information and policy framings, providing deliberative spaces for a variety of engagement and reflection, and using varied methods to elicit broader values.

Balog-Way et al. (2020) highlight that researchers should not just focus on enhancing evidence delivery for practitioners, but that they also have a responsibility to keep pace with the rapidly changing policy environment. This requires that researchers focus on understanding policy relevance and develop an understanding of the distribution of decision-making control within the policy process. In addition, they must build core relationship and collaboration skills in order to work effectively in a two-way engagement style with policy-makers (see also Oliver et al., 2014). The goal of these efforts is to ensure that science and risk research become more policy relevant. Balog-Way et al. (2020) point to boundary organizations as a way to help facilitate the connection and relationship building needed between researchers and practitioners to achieve these goals. The IPCC is an example of such a boundary organization (Beck & Mahony, 2018), integrating diverse climate change science and helping to provide a single source of trusted, coordinated evidence for decision-makers.

The most recent studies in science communication through the lens of the science and society approach have identified a number of important lessons that can also be applied to disaster risk communication. These include how an individual’s values and their group identities can be stronger predictors of their risk judgments than scientific literacy and numeracy (Frewer, 2004; Kahan et al., 2012; Sturgis & Allum, 2004; Ziman, 1991). Thus, effective risk communication may need to be framed in terms of group values and identity, not simply via quantitative descriptions, due to different perceptions and tolerance levels of uncertainty (Deitrick & Wentz, 2015; Doyle et al., 2019; Rabinovich & Morton, 2012). Other lessons can be drawn from a range of science communication themes, including the role of participatory science (e.g., Bidwell, 2009; Evers et al., 2016; Glicken, 2000), effective science-to-policy mechanisms (e.g., Karl et al., 2007; Scolobig & Pelling, 2016; Spruijt et al., 2014), citizen science (Law et al., 2017; Orchiston et al., 2016), the challenge of uncertainty (Doyle et al., 2019; Khan et al., 2017), misinformation and science (Lewandowsky et al., 2012, 2017), scientific storytelling (Dahlstrom, 2014; Kahan, 2015), and the need to understand the influence of politics and power dynamics, social structure, and agency (Halpern & O’Rourke, 2020; Marlowe et al., 2018; Scandlyn et al., 2013), which includes valuing and understanding the importance of other knowledge forms (see also the postnormal science approach; Funtowicz & Ravetz, 2003). Recommendations from some of these arenas are highlighted in section “Example Principles for Best Practice.”

Table 1. The Spectrum of Risk and Science Communication Approaches, as Identified in the Literature

Part 2: Advancing Disaster Risk Communication

Across the history of risk communication research, a number of risk communication models focused on natural hazards and disasters have also been proposed and developed, and a selection of example models is presented next.

Models for Disaster Risk Communication

The Risk Information Seeking and Processing (RISP) model (Griffin et al., 1999; Yang et al., 2014) aims to "disentangle the social, psychological, and communicative factors that drive risk information seeking" (Yang et al., 2014, p. 20) and has been applied across different risk contexts, from communicating preventative action (e.g., Griffin et al., 1999) to disaster recovery (e.g., Shi et al., 2020). The focus of this model is to understand how people's risk perceptions influence the way they seek and manage or act upon information. Rather than focusing on the crafting of messages or providing a framework for managing risk communication, this model focuses on individual perceptions of, and interactions with, information. Core elements include individual characteristics (e.g., past experience, attitudes, demographics); perceived hazard characteristics (risk judgment) and how these relate to affective response; and how both influence motivation (such as norms and the sufficiency of information), all contributing negatively or positively to information-seeking behavior (seeking/avoiding) while being influenced by beliefs about information channels and information-gathering capacity (Griffin et al., 1999). Yang et al. (2014) highlight that a simplified version of the RISP model (focusing in particular on norms and current knowledge) may enable it to be applied to communication settings beyond risk.

The Mental Models of Risk Approach (MMRA) utilizes the concept of "mental models," which are sets of simplified causal beliefs and values that people use to interpret events (Bostrom, 2017; Johnson-Laird, 2010; Jones et al., 2011). They can represent an individual's understanding of a system, and such systems can include physical phenomena (e.g., a volcanic eruption), an organizational communication network (e.g., emergency management response communication), a warning system (e.g., flood warnings), and how we produce knowledge, including cultural, philosophical, sociopolitical, educational, and organizational influences (e.g., Bostrom et al., 1992; Greca & Moreira, 2000; Morgan et al., 2001). Mental models have been used as a tool to enhance science and risk communication across a range of disciplines and contexts (Abel et al., 1998), including how models are formed through science education (e.g., Hogan & Maglienti, 2001), how scientists develop causal investigations and hypotheses (e.g., Brewer, 2001), the models people construct of phenomena such as climate change (e.g., Sterman & Sweeney, 2007), and how models of phenomena can inform risk and hazard communication (e.g., hurricane forecasts; Morss et al., 2016). The Carnegie Mellon Mental Models of Risk Approach (Morgan et al., 2001) utilizes mental models to enhance risk communications by eliciting expert and lay models of a phenomenon or communication system and comparing these to help design and enhance communication products or processes. These models are elicited either directly or indirectly (LaMere et al., 2020) through interviews, focus groups, and surveys. Critically, this process includes evaluation of the efficacy of the new communication (see also Bruine De Bruin & Bostrom, 2013). The approach can thus be used to enhance the one-way dissemination of public communication or to improve warning systems through two-way communication in design. To address the critique that this method can privilege the expert model over the public's, researchers such as Cassidy and Maule (2012) and Barnett and Breakwell (2003) have adapted these approaches to enable social representations of risk knowledge to be incorporated, bringing them more in line with the science and society and democratic risk communication approaches discussed in the section "Parallels With the Evolution of Science Communication" and illustrated in Table 1 (e.g., Bauer et al., 2007; Fischhoff, 1995; Fisher, 1991; Gurabardhi et al., 2005; Miller, 2001; Rowan, 1994).

Sellnow's IDEA model (Sellnow et al., 2017; Sellnow & Sellnow, 2019) is grounded in experiential learning theory. Four key components make up this model: whether people see the relevance of a risk to themselves personally (internalization); whether people can access the information via various channels (distribution); whether people understand a risk and think it is credible (explanation); and whether people take actions to protect themselves against the risk (action). This practice model aims to provide a framework for understanding some of the key components of the risk communication process. The authors suggest that such a model may be useful in designing effective messages to enhance the uptake of self-protective actions during times of crisis. Examples of the model's application include earthquake early warning (Sellnow et al., 2019) and Ebola (Sellnow-Richmond et al., 2018).

Community Engagement Theory (CET) takes a community resilience approach to risk communication for disaster preparedness (Paton, 2019). The theory was developed in response to the observation that traditional risk communication approaches were not effective in motivating people to prepare for disasters (as discussed in Part 1). Rather, the CET suggests that a holistic approach to preparedness should be taken, incorporating both risk communication and broader community development concepts. Such a holistic approach should not only motivate preparedness actions but also enable people to respond to events (e.g., respond to information that is communicated) and recover effectively (Becker et al., 2015). The theory suggests that as well as communicating the risks and consequences of hazard events and the benefits of preparing, attention should also be paid to other societal attributes related to resilience, such as participation, collective efficacy, social capital, empowerment, and trust (Paton, 2019; Paton & Buergelt, 2019). Inclusion of these societal attributes is not straightforward, however, and context is key. For example, power dynamics will affect participatory processes (see Part 1 and Marlowe et al., 2018), and trust can be difficult to build and easily lost depending on social and political decisions that may be unrelated to the hazards and risks faced (Paton, 2008). Given its focus, this model incorporates two-way communication, as discussed in Part 1, and has been applied in settings with multiple hazard risks, such as Hawke's Bay in Aotearoa New Zealand, which is exposed to earthquake, tsunami, volcanic, storm, and other hazards (Becker et al., 2015; Becker, Paton, Johnston, et al., 2013; Becker, Paton, & McBride, 2013). The CET's strong theoretical basis has been reinforced by its utility in practice.

The Crisis and Emergency Risk Communication (CERC) model is a practice-based model developed as a tool to enhance public health communications in emergency situations (Reynolds et al., 2002) and has since been used by the U.S. Centers for Disease Control and Prevention (Reynolds & Seeger, 2005; Veil et al., 2008). The model brings together many risk and crisis communication principles within a unifying five-stage framework and aims to merge lessons from across health, risk, crisis, and disaster communication. The five core stages span the communication cycle of an event, from preevent risk, through in-event crisis, to postevent recovery communication: (a) precrisis communication and education campaigns (risk messages, warnings, preparations); (b) initial event rapid communication to the public and affected groups (uncertainty reduction, self-efficacy, reassurance, empathy); (c) maintenance communication, including the correction of any misunderstandings (ongoing uncertainty reduction, self-efficacy, reassurance); (d) resolution (updates, discussion about cause, new understandings of risk, reinforcement of a positive image); and (e) evaluation (discussion of the adequacy of the response, consensus about lessons and new understandings of risk, links to precrisis activities). This practice model has been used extensively in different international contexts to address recent Covid-19 communication challenges (see, e.g., Emergency Preparedness and Response [accessed July 27, 2021], Malik et al. [2021], and Ow Yong et al. [2020]).

A diverse and extensive range of other models exists for risk communication (e.g., Campbell et al., 2020; Intrieri et al., 2020; Silver, 2019; Smillie & Blissett, 2010), including many tailored to natural hazard and disaster contexts. Similarly, while not a theoretical model, the World Meteorological Organization's (WMO, 2018) Multi-Hazard Early Warning Systems checklist of actions provides structured, evidence-based recommendations for effective warnings. To review all models and recommendations is beyond the scope of this article. However, the examples provided each show the dynamic nature of risk communication and the need to consider the phase of an event, the many distribution channels, and the influences on communication of individual, organizational, social, and cultural factors. The evolution of communication channels and the wealth of information available to individuals due to the growth of social media and the Internet add further complexity to these considerations. Thus, message-centered approaches to risk communication offer potential solutions (Árvai & Rivers, 2013; Sellnow et al., 2009), as they encourage individuals and organizations to generate, collect, and evaluate multiple risk messages from a range of perspectives and to base decisions on converging information.

Risk Communication Examples From Aotearoa, New Zealand

Following on from the models presented, some specific situations where communication of natural hazard and disaster risk is utilized are now explored, drawing on examples from research in Aotearoa, New Zealand. These include communication for longer-term risks before and after an event (with the goal of encouraging preparedness or recovery from disasters) and shorter-term risks (focused on immediate life-safety responses to warnings before, during, or immediately after an event). Communication is often aimed at raising awareness about a risk, linked with the provision of advice to motivate people to take the correct protective action in response, whether that be to prepare, respond (e.g., evacuate), or recover. Figure 1 illustrates a conceptual risk communication cycle for disasters and natural hazard risk, with examples of activities along the spectrum of available time.

Figure 1. The risk communication cycle for disasters and natural hazard risk, illustrating which types of example activities are suitable depending upon the time available for (two-way) communication (see section "Risk Communication Examples From Aotearoa, New Zealand").

Longer-Term Risk (Reduction, Readiness, and Recovery)

Longer-term communication of risk in times of quiescence, or when hazards have a slow onset, can be challenging. During periods of inactivity (e.g., for events such as earthquakes, tsunami, or volcanic eruptions), people often do not find such issues salient when they are not directly impacted and may not take notice of the risk or take any protective action. Additionally, people's beliefs and biases, for example, optimism bias, negative outcome expectancy, and fatalism (Johnston et al., 2013; Paton & McClure, 2013), might act to reduce the importance they place on a hazard and limit their preparedness or resilience actions. For example, following the 2010–2011 Canterbury earthquake sequence, heightened awareness of risk was found in locations close to the earthquakes during the 2 years following the event, but this was not replicated in locations not impacted by the event (McClure et al., 2011, 2016; McClure, Johnston, et al., 2015). This meant risk communication needed to be tailored differently across geographic regions following the earthquakes.

In addition to periods of quiescence, long-term disruption might arise from climate change or from hazards such as volcanic unrest. In the climate context, where people often discount impacts as being far into the future, there is acknowledgment that one-way communication alone is unlikely to promote adaptation and mitigation actions. However, researchers also highlight that improvements are possible: communication should not focus solely on the risks and consequences of climate change but should emphasize key actions to take and the efficacy of those actions (Crosman et al., 2019; Milfont, 2012). For volcanic unrest, risk communication needs to evolve over time as volcanic circumstances change. For example, following the 1995–1996 Ruapehu eruptions, the threat of a breakout lahar from the dammed crater lake led to 10 years of evolution in communication about hazards, risks, and actions to take before, during, and after the lahar, an example of which can be seen in Figure 2. Informational communication about the nature of the hazard and risks was used early on and evolved into response-focused communication closer to the lahar event, which ultimately led to a successful response (Becker et al., 2017). Additionally, some risks are seasonal, such as weather-related hazards or bushfire, adding the need to consider the best time for communicating risk. In these circumstances, in addition to general communication, risks and protective actions should be highlighted before the start of each "season" (Prior & Paton, 2008).

Figure 2. A sign communicating the risk of flash flooding from a dam break lahar from Ruapehu Crater Lake following the 1995–1996 eruptions.

(Photo: Julia Becker)

Like the other communication research reviewed in Part 1, work in the hazards and disaster space has highlighted that communication requires not only one-way efforts to raise awareness but also two-way communication to explore solutions to prepare for that risk (e.g., Paton, 2013). Efforts such as workshops, running scenarios, and undertaking exercises have been found to be extremely effective in engaging people about the risks posed by future events and empowering them to develop and implement protective solutions (Doyle, Paton, et al., 2015; Orchiston et al., 2018). As part of that process, agencies can take on a facilitation role to help initiate the two-way conversation and potentially resource any resulting solutions (Doyle, Becker, et al., 2015). The ShakeOut earthquake drill, for example, facilitated by governments, agencies, and educational facilities, has been a successful two-way communication initiative that has led to people improving their knowledge of earthquakes and protective actions and taking the correct protective action (e.g., Drop, Cover, Hold) in a real earthquake (Johnson, Johnston, et al., 2014; McBride et al., 2019; Vinnell et al., 2020). Communication challenges have been experienced for ShakeOut, however, in terms of conveying ShakeOut knowledge and actions to more vulnerable or at-risk groups (McBride et al., 2019). Analysis of the ShakeOut drill highlighted that groups with limited mobility or with dependent responsibilities (e.g., for younger children and the elderly) had difficulty participating (McBride et al., 2019). To address these barriers to participation, McBride et al. (2019) suggest alternative messages should be crafted for these groups. As well as the ShakeOut drill being embraced by educational facilities such as schools, other educational initiatives can also be employed to promote two-way communication. In New Zealand, for example, the National Emergency Management Agency provides a free resource called "What's the Plan Stan" to support schools, teachers, and students to work together to develop knowledge and skills to prepare for emergencies (Johnson, Ronan, et al., 2014a; National Emergency Management Agency [NEMA], n.d.b).

In another example of the benefits of two-way communication, Kwok et al. (2018) explored a bottom-up approach to developing resilience in Wellington, New Zealand, and San Francisco, California. They found that face-to-face discussion of what was salient to local people was imperative to understanding how to develop resilience for those communities, highlighting the value of a two-way communication approach. As discussed in the section "Parallels With the Evolution of Science Communication," citizen science also offers opportunities for two-way communication. While some citizen science projects are based simply on citizens contributing passively to a project (e.g., via crowdsourced data), other projects are driven by citizens themselves and involve more participation and collaboration (Haklay, 2013). This participation and collaboration provides the opportunity for two-way communication, which in the hazards space could constitute having conversations about hazards and risk, as well as making plans on what to do about the risk. An example of such a project occurred in Orewa, Auckland, Aotearoa New Zealand, where a service group began a citizen science activity about tsunami readiness that led to the collection of readiness data and a tsunami evacuation exercise (Doyle, Lambie, et al., 2020). Both activities reached a wide public and school audience and were an effective means of communicating risk and promoting action.

Forecasts play a part in communicating long-term risk, as well as having utility in the shorter-term risk and response context (Becker, Potter, McBride, et al., 2020). Forecasts are commonly used for shorter-term weather phenomena but extend to climate projections and geological hazards such as earthquakes (e.g., Operational Earthquake Forecasts [OEF], or time-dependent probabilities of future earthquakes; Jordan et al., 2014). Forecasts can be communicated in many ways, including probability statistics for certain timeframes, text explanations, and formats such as maps, tables, graphs, and scenarios (Becker et al., 2019a; Michael et al., 2020; Thompson et al., 2015). Hazard-focused research has found that while probabilities are useful, they are often more readily interpreted by those with more technical backgrounds (Becker et al., 2019a; McClure, Doyle, et al., 2015), in line with what has been discussed in Part 1. Beneficial ways of presenting probabilities have been identified: for example, people like to see ranges of numbers, and the wording should support effective interpretation, such as using consistent terms alongside each probability range (Doyle & Potter, 2015) and using terms like "within" to reduce people's tendency to focus on the latter end of a time-window forecast (Becker, Potter, McBride, et al., 2020; Doyle et al., 2011; Doyle, McClure, Johnston, et al., 2014). Less technical audiences may benefit from the presentation of scenarios that present information such as probabilities but also provide context in the form of a narrative to help them navigate the risk information (Becker et al., 2019b; Becker, Potter, McBride, et al., 2020). Similarly, uncertainty presents a particular challenge for effective communication, and thus various associated communication guidelines are being continuously developed, as discussed further in the section "Example Principles for Best Practice."
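To make these presentation recommendations concrete, the sketch below pairs a numeric probability range with a consistent verbal term and uses "within" wording for the time window. It is a minimal illustration only: the range boundaries, verbal labels, and message template are assumptions made for this example, not the calibrated scheme of GeoNet, the IPCC, or any other agency.

```python
# Illustrative sketch: pairing probability ranges with consistent verbal
# terms and "within"-style time-window wording in a forecast statement.
# The boundaries and labels below are assumptions for illustration only.

# (lower %, upper %, verbal term) -- ordered and contiguous
PROBABILITY_TERMS = [
    (0, 2, "very unlikely"),
    (2, 10, "unlikely"),
    (10, 40, "possible"),
    (40, 70, "likely"),
    (70, 100, "very likely"),
]

def verbal_term(probability_pct: float) -> str:
    """Return the verbal term consistently paired with this probability."""
    for lower, upper, term in PROBABILITY_TERMS:
        if lower <= probability_pct <= upper:
            return term  # first matching band wins at shared boundaries
    raise ValueError("probability must be between 0 and 100")

def forecast_statement(low_pct: float, high_pct: float,
                       event: str, window_days: int) -> str:
    """Format a forecast with a probability range, a consistent verbal
    term, and 'within' wording for the time window."""
    term = verbal_term(high_pct)  # anchor the term to the upper bound
    return (f"It is {term} ({low_pct:.0f}%-{high_pct:.0f}% chance) that "
            f"{event} will occur within the next {window_days} days.")

print(forecast_statement(5, 10, "a magnitude 6.0 or greater aftershock", 7))
```

Pairing the same verbal term with the same numeric band every time a forecast is issued is one way to operationalize the recommendation above that consistent terms accompany each probability range.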

Research on forecasts and other types of disaster risk communication aligns with previous findings (Chess, 2001; Covello & Sandman, 2001; Fischhoff, 1995; Fisher, 1991; Gurabardhi et al., 2004; Leiss, 1996; Rowan, 1994) that highlight diverse audience needs (in terms of what risk is communicated, how, and at what level). Risk information should thus be developed and communicated based upon those needs (e.g., Hudson-Doyle & Johnston, 2018; Keeney & von Winterfeldt, 1986; Wein et al., 2016). While effective risk communication often focuses on the public, communication efforts must also address the needs of audiences such as emergency managers, planners, and policy-makers, all variously involved with longer-term governance decision-making (Crawford et al., 2019). The risk communication needs of these groups can differ considerably depending not just on their roles and tasks but also on their individual motivations and social influences, such as whether they are in an elected or appointed role. Importantly, risk communication must also recognize the needs (and leadership) of indigenous groups, who will have their own worldview on what constitutes "risk" and their own approaches to talking about and dealing with risk (Kenney & Phibbs, 2021; Phibbs et al., 2015). Information needs will vary over time as disasters shift from predisaster timeframes into the response and recovery phases; consequently, risk communication should evolve to match the change in identified needs, as found during the Canterbury earthquake sequence (Becker et al., 2019b). This sequence was initiated by the Mw 7.1 Darfield earthquake in 2010 (example damage from which can be seen in Figure 3) and over the subsequent 18 months included four major earthquakes, over 10,000 recorded aftershocks, and the closure and demolition of most of Christchurch's central business district (Potter et al., 2015).

Figure 3. Damage from the Darfield earthquake of September 4, 2010, which marked the start of the Canterbury earthquake sequence. As the sequence unfolded over time, communication needs evolved too.

(Photo: Julia Becker)

Shorter-Term Risk and Response (Warnings and Response)

Risk communication over shorter timeframes can constitute aspects such as forecasts and warnings, as well as the communication of risk immediately after an event. Forecasts are essential communication tools in a number of risk contexts, led by the weather forecasting domain, which uses them to convey the likelihood of different types of weather, including sun, rain, wind, hail and snowstorms, cyclones, and tornadoes. There has been a wealth of research on the best ways of communicating weather forecasts, and, as for other hazards, a range of communication formats and messages is required to meet the needs of different audiences. Communication can be challenging due to the uncertainties inherent in forecasts, the uncertainty that people themselves apply to the forecast they receive (Morss et al., 2008), and the difficulty that some have in interpreting forecasts, including interpreting statistics such as probabilities (Grounds & Joslyn, 2018; Handmer & Proudley, 2007). That said, research also shows a public desire for such information and highlights the benefits if it is framed appropriately. For example, Grounds and Joslyn (2018) found that people's access to numbers (such as probabilities) led to them making better decisions in response to a forecast. Where severe weather is forecast, warnings comprise a part of that communication. Traditionally, these have been based on phenomena, such as wind speeds and snowfall depths (Potter et al., 2018). However, warnings are improved when impact-based information is added alongside the weather phenomena. This helps people understand what the impacts and consequences of a particular event might be, guiding their preparations and supporting better decision-making (Potter et al., 2018, 2021; Taylor et al., 2019; Weyrich et al., 2018).

In addition to weather, warnings can be applied to a number of other hazards. At the very short-fuse end of the warning spectrum is Earthquake Early Warning (EEW), where technology in some countries can provide a few seconds to tens of seconds of warning, in which time people can take protective action (Allen & Melgar, 2019; McBride & Ball, 2022). Other hazards have various warning lead-in times (Kelman & Glantz, 2014). For example, floods may occur as near-instantaneous flash flooding or provide days of warning, producing devastating damage, as shown in Figure 4; a local tsunami may arrive within 10 minutes, a regional tsunami within 3 hours, and a distant tsunami within 12 hours; and volcanic eruptions may come with no warning, days of warning, or many years of warning. Consequently, communication needs will vary across these contexts, ranging from signage to support immediate evacuation through to wider education campaigns, as illustrated in Figure 5 (Doyle, Becker, et al., 2015; Doyle, Lambie, et al., 2020). In an EEW context, for example, because of the extremely short timeframe, warnings need to be received quickly (e.g., via a mobile phone) and carry a short, directive message (e.g., EEW, Drop, Cover, Hold) that can be acted on promptly (Becker, Potter, Vinnell, et al., 2020; McBride & Ball, 2022). In contrast, hazards with longer warning lead-in times allow the use of a wider range of channels and messages and more time to take protective actions (see also Sellnow et al., 2017).

Figure 4. Results of flash flooding following the 2017 Edgecumbe floods in Aotearoa, New Zealand, after a stopbank breached, giving residents only minutes’ warning of impending floodwaters.

(Photo: Julia Becker)

Figure 5. Tsunami evacuation signage in Tapu Te Ranga | Island Bay, a southern suburb of Te Whanganui-a-Tara | Wellington, in Aotearoa New Zealand.

Lindell and Perry (2012) have examined the important components of risk communication for warnings across a variety of hazard contexts. They found that while receiving warning information is the essential first step in knowing there is a hazard to act on, a number of other technical, environmental, and social factors also contribute to people taking protective action. These include the source of the information, channel preference, and the message itself. Information sources should be trusted agencies if people are to pay the greatest attention to the message (García & Fearnley, 2012). Channels should be multiple, and messages should be clear and consistent, with directions on what people should do (Mileti & Sorensen, 1990) and links to further information if available. It can be challenging when there is little space to write a message (such as on mobile phones), but researchers have developed recommendations for short messages (Bean et al., 2014; Potter et al., 2021; Wood et al., 2015). Important content for a short message includes the source (agency), hazard characteristics and location, impacts, suggested actions, where/who the warning applies to, timings (of warning issue/of response), and a link to further information (Potter et al., 2021).
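As an illustration of how those content elements might be assembled and checked against channel constraints, the sketch below builds a short warning from the elements listed by Potter et al. (2021). The field names, character limit, and example values are hypothetical assumptions made for this example, not a format prescribed by Potter et al. (2021) or any alerting agency.

```python
# Illustrative sketch: assembling a short warning from the content elements
# identified by Potter et al. (2021). Field names, the character limit, and
# the example values are hypothetical and for illustration only.

from dataclasses import dataclass

@dataclass
class ShortWarning:
    source: str      # issuing agency
    hazard: str      # hazard characteristics and location
    impacts: str     # expected impacts
    action: str      # suggested protective action
    applies_to: str  # where/who the warning applies to
    timing: str      # timing of warning issue and/or required response
    more_info: str   # link to further information

    def render(self, limit: int = 360) -> str:
        """Concatenate the elements in a fixed order, checking length
        because channels such as cell broadcast constrain message size."""
        text = (f"{self.source}: {self.hazard}. {self.impacts}. "
                f"{self.action}. Applies to {self.applies_to}. "
                f"{self.timing}. More info: {self.more_info}")
        if len(text) > limit:
            raise ValueError(f"message is {len(text)} chars; limit is {limit}")
        return text

# Hypothetical example content
msg = ShortWarning(
    source="Civil Defence",  # hypothetical issuer
    hazard="Tsunami threat following a strong offshore earthquake",
    impacts="Strong currents and dangerous waves possible at the coast",
    action="Move immediately inland or to high ground",
    applies_to="all east coast beaches and marine areas",
    timing="Issued 2:15 p.m.; act now",
    more_info="example.org/alerts",  # placeholder URL
)
print(msg.render())
```

Keeping each element as a separate field makes it straightforward to audit a draft message against the checklist of required content before it is issued.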

Lindell and Perry (2012) also suggest that people will look to the environment to assess the situation (e.g., is it raining, does the river look full?) and will rely on the social cues of others' behavior to inform their own, also described by some researchers as "milling" (Wood et al., 2018). Also important in the process of interpreting and acting upon communicated messages are people's perceptions of the threat, perceptions of protective actions, and perceptions of other people's beliefs. The interaction of these factors will influence people's eventual protective action decisions, including whether they look for further information, undertake action, or focus on emotions rather than practical actions. Findings from the warnings response literature align with the risk communication literature in terms of sources, channels, messaging, and external influences on the process, although the warnings context is bound by particular timeframes in which the communication and response have to take place.

As technology has developed, it has had an associated impact on the communication of risk for warnings. Traditional channels for communication have constituted radio, television, telephone trees, door knocking, and public alerts (e.g., via voice or siren), such as that designed into the lahar warning system at Ruapehu volcano (Becker et al., 2017; Leonard et al., 2008), or planned for multichannel tsunami alerting (NEMA, n.d.a). With the advance of technology, however, communication can be delivered directly to personal mobile phones via cell broadcast or texts, or accessed over the Internet through web interfaces or mobile phone apps. For example, in the United States of America, Wireless Emergency Alert cell broadcasts are used for warning about a variety of hazards, and in Aotearoa, New Zealand, the Emergency Mobile Alert (EMA) system is used. The EMA was used to warn local communities about the potential tsunami risk following several earthquakes that occurred near the East Cape of the North Island on March 5, 2021 (Vinnell et al., 2022), and has been used to inform about changes to Covid-19 alert levels and stay-at-home orders. The development of new technologies and the associated explosion of available warning applications over recent years highlight the challenge of delivering reliable information and of ensuring users have accurate, consistent, and appropriate information within systems and apps that they continue to use beyond their first interaction (Tan, Harrison, et al., 2020; Tan, Prasanna, et al., 2020a, 2020b).

Social media platforms (such as Facebook, Twitter, or Instagram) also constitute rapid ways of communicating information about risks and warning people about impending events. Challenges for communicating about disasters in the age of social media include the speed at which information is exchanged, the difficulty of balancing one-way with two-way information, the availability of trained agency staff to contribute to social media information, the coordination of information, and ethical considerations in the use of such information (Lovari & Bowen, 2020). Additionally, these platforms can provide a way for the public to quickly self-organize their activities and communicate with each other, often without input from official agencies (Tan et al., 2017). The vast array of social media and Internet-based sources means that official and agency information may be only one of dozens or hundreds of channels of communication. Thus, the specter of rumors, myths, or deliberate misinformation tactics can arise (see, e.g., Krause et al., 2020; Lu, 2020). It is therefore imperative that official agencies have a preexisting trusted presence on social media, providing timely communication before events so that they are turned to during and after a crisis. Such trust can be built through humor as well as official messaging (McBride & Ball, 2022). Ideally, such trusted communication will be incorporated by self-organized groups into their own internal communication (see also Paton, 2008). These trusted communication channels and communicators are important not just for public communication but also for interagency response communication, as responders identify who to communicate with as well as what and when to communicate. This is most effective when personnel have shared mental models across agencies, enabling the implicit, anticipated supply of appropriate and needed information rather than dependence upon explicit requests (Doyle & Paton, 2017; Owen et al., 2013).

Example Principles for Best Practice

Bradley et al. (2014) undertook a systematic review of disaster risk communication intervention studies across four stages of the disaster cycle, identifying several different types of communication: face-to-face group participation and education, board games, early warning alerts, telephone interventions, computer games, multichannel information campaigns, school-based intensive education programs, and media campaigns. Through their review, they found little "robust evidence of the effectiveness of risk communication for disaster knowledge, behavior and health outcomes in the response and recovery phases of disasters" (p. 20).

Despite the wide range of identified communication methods employed for different circumstances, as is evident in the general literature on risk communication, there is still a predominant focus on one-way communication styles (Bradley et al., 2014), countering the recommendations emerging from the wider risk communication field (see Part 1). Additionally, Bradley et al. (2014) noted that some suggested interventions had the potential to be harmful, resulting in fewer long-term protective behaviors or less development of appropriate knowledge.

The material reviewed so far demonstrates the varied and complex nature of risk communication along the spectrum of one-way/technical to two-way/democratic approaches (Table 1). Across this body of work, a number of recommendations and principles for best practice can be identified. These extend the strategic goals outlined in Part 1 into practical guidelines that enhance disaster risk communication. It is beyond the scope of this chapter to review them all, but particular highlights include:

1.

First understanding the audiences’ structural understanding of risk such that communication of a risk analysis sits within this individual and societal structure (Keeney & von Winterfeldt, 1986).

2.

Identifying the information that is relevant to risk communication by identifying people’s “actual concerns,” and the related social environment surrounding the risk communication (Frewer, 2004), to ensure it meets an audience’s decision needs and concerns (Hudson-Doyle & Johnston, 2018).

3.

Identifying the goals of the risk communication while moving toward empowering the audience, including evaluating how this is actually being achieved (Fisher, 1991).

4.

Understanding and developing trust prior to any assessment of risk, codevelopment of two-way communication systems, or design of a one-way style of communication product (e.g., Bier, 2001; Frewer, 2004; Lofstedt, 2003; McComas, 2006). A lack of trust can be addressed by considering deliberative techniques (to address perceived unfairness), technocratic measures (to address perceived incompetence), and rational risk strategies (to address perceived inefficiency) (Lofstedt, 2003), as well as other relationship-building approaches (see review in Doyle, McClure, Paton, et al., 2014).

5.

Being aware of the different goals of a risk communication (such as building trust, raising awareness, education, reaching agreement, and motivating action; Rowan, 1994) and designing communication strategies (and measures of success) for these different goals, such as vivid messages for awareness and stakeholder participation for agreement on action (Bier, 2001).

6.

Reflecting on the "characteristic of the risk," given its impact upon concerns and perspectives (Lofstedt, 2010), including whether the risk arises from a natural or technical hazard, whether exposure to it is voluntary or involuntary, whether it impacts vulnerable groups (e.g., children), whether it is stigmatized, whether it will be amplified or attenuated by the media, and the levels of public trust in the communicator, regulator, industry, or agency.

7.

Recognizing that a risk communication's scope may be constrained by legal requirements, institutional policies, and audience characteristics, each of which should be adequately identified at the outset in a "planning phase" that considers legal requirements or organizational policies, the purpose of the communication (see also Fisher, 1991), the appropriate strategy for that purpose, the characteristics of the audience (including receptivity and concerns, beyond knowledge and beliefs), and the information sources the audience uses (Bier, 2001).

8.

Acknowledging the vital role of evaluation of products and programs (Bier, 2001; Bostrom et al., 2008; Doyle et al., 2019; Frewer, 2004; McComas, 2006; Rohrmann, 1992). Evaluation is made particularly challenging by the difficulty of identifying what counts as "effective" or "successful," given a lack of criteria for evaluation (McComas, 2006). Nevertheless, the efficacy of different message formats requires adequate pilot testing (Bier, 2001) and must consider decision-making effectiveness (Bostrom et al., 2008).

Considering this need for evaluation, as reviewed by Hudson-Doyle and Johnston (2018), it is not just the products but the process that needs evaluation, following program evaluation standards (Johnson, Ronan, et al., 2014b). For the products themselves, the communication tools, techniques, languages, symbols, images, and maps should be empirically tested wherever possible (Bostrom et al., 2008; Briggs et al., 2012; Mastrandrea et al., 2010; Moss, 2011). Unfortunately, as discussed by Doyle et al. (2019) in the context of uncertainty communication, this is not always the case, which can cause erroneous decision-making (see also Benke et al., 2011; Bonneau et al., 2014; Boukhelifa & Duke, 2009; Brus & Svobodova, 2012; Deitrick & Wentz, 2015; Hope & Hunter, 2007; Tak et al., 2015). If one focuses on preference and visual aesthetics alone and does not empirically test the decision-making effectiveness of the products, the communication may result in interpretations and actions that are significantly different from those intended.

These lessons are supplemented by reflecting back in more detail on some recommendations identified through research into disaster risk communication in an Aotearoa, New Zealand, context (see section "Risk Communication Examples From Aotearoa, New Zealand"). Analysis of the communication and interpretation of aftershock information during the Canterbury earthquake sequence identified a number of factors that influenced people's responses to aftershock information (Becker et al., 2019a; Wein et al., 2016).5 These have implications for wider risk communication in a crisis; seven factors that influence responses to aftershock information should be considered in design: (a) the accessibility of information; (b) the audience's prior knowledge and experience of earthquakes; (c) the audience personalizing the communicated information to their circumstances to inform their decisions, particularly around impacts to personal safety or their formal job roles; (d) the role of experience; (e) emotions and feelings (including psychosocial well-being); (f) credibility and trust; and (g) external influences and changes in needs over time. Thus, eight recommendations were proposed (Becker et al., 2019b), which are adapted for general natural hazard contexts as follows:

1.

Develop a communication strategy prior to a crisis or hazard event.

2.

Prior to a crisis or hazard event, provide education and training about potential hazardous phenomena or risks that may occur.

3.

Allow for flexibility in communication.

4.

Provide a diversity of information for different audiences.

5.

Understand and account for influences on how people interpret and respond to information about hazardous phenomena and other scientific or technical information (including forecasts of future outcomes).

6.

Ensure information is situated within a context relevant to the audience.

7.

Inject empathy into communication, of particular importance when communicating with affected communities.

8.

Ensure interagency coordination around communication, for example, coordinating geoscience, emergency management, and mental health messaging ahead of time and practicing this coordinated communication during moderate events, scenario planning, and exercises (see also Wein et al., 2016).

A wide range of recommendations also exists regarding the communication of uncertain information: the use of probabilities; what and how to communicate when there is a diversity of expert opinion, incomplete or conflicting information, or uncertainty in the data, knowledge, or understanding; and uncertainty in the communication and response expectations about who to communicate to, as well as what that communication should be. These are extensively reviewed in Hudson-Doyle and Johnston (2018), Doyle and Paton (2017), Doyle et al. (2019), Eiser et al. (2012), Khan et al. (2017), and Sword-Daniels et al. (2018). They draw on associated research and guidelines from organizations such as the World Meteorological Organization (Gill et al., 2008), the IPCC (Budescu et al., 2009; Mastrandrea et al., 2010; Moss & Schneider, 2000; Patt & Dessai, 2005), and Aotearoa's geological monitoring agency GeoNet (Doyle & Potter, 2015). They include a range of other technical lessons, such as the format of messages, issues with risk comparisons, and ways to represent individual uncertainties linguistically or visually (e.g., Bier, 2001; Elith et al., 2002; Kloprogge et al., 2007; Retchless, 2014; van der Bles et al., 2020).

These lessons will not be repeated further here, and those reviews can be consulted directly for more information. However, highlights particularly relevant here from the review by Doyle et al. (2019) include:

-

Ethically, the focus of the communication of uncertainty should be on decision-maker centeredness, which is flexible and matches their uncertainty needs and tolerance, and is best achieved through participatory or two-way type dialogues (see also Kasperson, 2014a).

-

It is important to understand the social and organizational context and capabilities of decision-makers, which can impact their interpretations of, and needs for, uncertainty information and the evolution of those needs across time demands and forecast horizons.

-

There is a need to communicate more than just the scientific and technical uncertainty, but also the "social history of uncertainty," and to solicit social science expertise in communication; this can include communicating explicitly the potential value-ladenness of assumptions in a risk or model assessment (see also Kloprogge et al., 2011; Moss, 2011; Patt, 2007).

-

Any formalized communication strategy should be accompanied by exercises, simulations, and education programs with both the decision-makers and the public to help facilitate a greater understanding of the complexities inherent in these uncertain forecasts.

Across the risk, science, and disaster communication literature, the benefit of a participatory approach to risk (from the assessment through to the communication phase) resonates (French & Maule, 2010; see also Part 1 and Table 1). An extensive literature exists on participatory approaches, coproduction, knowledge exchange, and engagement (e.g., Clark et al., 2016; Linnerooth-Bayer et al., 2016; Page et al., 2016; Scolobig & Pelling, 2016). As reviewed by Doyle et al. (2019), adopting a participatory approach to developing communication protocols builds two-way relationships between scientists and policy-makers. This requires the prioritization of decision-makers' needs and resources, such that they "know when they need to invest in the time and resources to take part in a participatory process, and when they do not" (Patt, 2009, p. 246). Benefits of adopting a participatory approach and the coproduction of knowledge include making science information more credible, salient, and legitimate (Page et al., 2016; Patt & Dessai, 2005); generating information and advice that is useful, useable, and used (Aitsi-Selmi et al., 2016; Rovins et al., 2014); and enabling a plurality of perspectives to be incorporated (Sword-Daniels et al., 2018). Participation can also enhance relationships and decisions if legitimate differences in values and worldviews are respected (Linnerooth-Bayer et al., 2016). However, as discussed, researchers must be cognizant of the power structures and imbalances that can impact the effectiveness of these participatory processes (Halpern & O'Rourke, 2020; Marlowe et al., 2018).

Such participatory approaches to preevent two-way communication can help develop a mutual understanding of communication needs prior to a crisis. However, this can be a challenging, long, and resource-intensive process (Scolobig & Pelling, 2016) that can be enhanced through joint fact-finding techniques (Barton et al., 2020; Karl et al., 2007; Schenk, 2016) and other group decision-making, engagement, and scenario tools (see review in Doyle & Paton, 2017). Doyle et al. (2019) also highlight that for such engagement and participatory approaches to work, a code of practice and professional guidelines must be developed to encompass the translational discourse, considering funding, leadership, and ethical standards that can vary significantly between disciplines. This should accommodate the five ethical principles for communicating science under uncertainty: (a) honesty, (b) precision, (c) audience relevance, (d) process transparency, and (e) specification of uncertainty about conclusions (Keohane et al., 2014). To these can be added a need to support decision-makers to increase their uncertainty tolerance.

Conclusions

As outlined in this chapter, the study and practice of risk communication has evolved to meet a variety of diverse contexts and goals. Across these studies, however, the importance of understanding and meeting decision-makers', users', or audiences' needs is identified as a fundamental principle of effective risk communication. If time and resources permit, this would ideally involve a partnership approach, where the identification of who is being communicated with, what needs to be communicated, and how is led by the final decision-maker (whether public or agency) long before a rapid risk communication is needed. This prior collaboration also helps to move risk communication toward an approach where the initial risk assessment itself is coidentified and codeveloped (as outlined in Figure 1). It is most effective when the social and political context is adequately considered, such that social factors are integrated into risk management policies and processes themselves and communities are active and equal partners in the risk management within which risk communication occurs.

However, the resources to meet this need are often not available; the timeframes for cocreating suitable communication ahead of when it is required may be too short, or the various agencies and members of the public may have too many competing demands, or other more pressing needs, that restrict their ability (or their desire) to cocreate communication or plans for communication. In this situation, how does an agency or organization develop the most effective rapid risk communication, particularly when there has been no prior relationship through which to develop a shared understanding? As outlined herein, this is an ongoing challenge to which the solution is not clear. However, adopting a "two-way" empowering worldview that considers and listens to audience needs as a central philosophy may help guide effective communication even when only "one-way" dissemination is actually possible. This may involve identifying previous successes and failures of communication within that context, looking to established evidence-based (audience-evaluated and "decision-tested") communication products and strategies from analogous contexts, and rapidly identifying as far as possible the key concerns of the affected communities through avenues such as key spokespeople and community leaders or through a rapid analysis of social media or equivalent. Identifying ways to bring spokespeople or community leaders into the communication design space, even if only for a rapid appraisal, may also help ensure communication is as effective and relevant as it can be under the circumstances. Involving such community leaders is vital to build trust between risk scientists and the public, particularly in the era of viral post-truth disinformation (noting that not all communities are geographically defined). However, the societal structures that contribute to risk and vulnerability not only can be a barrier to individuals and communities taking risk mitigation actions but can also impact the success of public engagement if power imbalances are not understood.

We conclude this chapter by asking how science and risk communication can work together more effectively in the natural hazard and disaster space. Table 1 outlines some of the similarities in their evolution of practice, and a number of key texts and researchers cross between the two disciplines. There is a pressing need to identify more clearly how cross-cutting principles and lessons can be used to strengthen both science and risk communication. For example, when considering risk communication and its role in longer-term decision-making, what principles from the study of science-to-policy should inform "risk-to-policy"? At an individual and community decision-making level, this is particularly important when uncertainty is high, where people's understanding of the uncertainty inherent in the science may affect the actions they take upon the risk communication that science has directly informed. Thus, we need to understand more clearly how perceptions, public understanding, and mistrust of science influence perceptions of related risk communication and information. This includes not just the influences upon "one-way" disseminated communication but, crucially, also the influence these perceptions of science (and scientists) may have upon the relationship building required to develop the partnerships fundamental to "two-way" democratic approaches to risk communication. This is a pressing issue for further research, particularly as the science of climate change forecasting becomes more integrated into our short- and longer-term natural hazard risk assessments, forecasts, warnings, and communication practice.

Finally, we highlight that the focus of this chapter was a review of the history of risk communication for natural hazards and disasters, with examples from an Aotearoa New Zealand context. It is by no means an exhaustive review and is also limited by its focus on Western studies and approaches to science and risk, as well as its restriction to English-language texts. As noted in other disciplines, such as psychology, much of the research we review is conducted with participants from WEIRD societies (Western, Educated, Industrialized, Rich, and Democratic; Jones, 2010). When combined with the restriction to the English language, this results in a cultural bias in the reviewed studies' findings, as well as a bias in our recommendations. This is best addressed in future research by conducting locally relevant disaster research that is designed and led by local researchers (Das, 2022). Further, the chapter's aim was not to review the technical aspects of effective communication (e.g., framing, visualizations, words, graphs, mapping, or numbers), as these are extensively reviewed elsewhere.

Acknowledgments

The authors thank our many disaster and risk communication colleagues and friends who have helped shape our thinking over the years and whose influence on that thinking has no doubt shaped this manuscript too. Both authors are supported by the National Science Challenges: Resilience to Nature’s Challenges|Kia manawaroa—Ngā Ākina o Te Ao Tūroa 2019–2024 and (partially) supported by QuakeCoRE|Te Hiranga Rū—Aotearoa NZ Centre for Earthquake Resilience 2021, a New Zealand Tertiary Education Commission–funded center. This is QuakeCoRE publication number 0695.

Notes

• 1. Note this is often referred to as "mitigation, preparedness, response and recovery" in international contexts, such as in the United States of America.

  • 2. See also review in Reynolds and Seeger (2005) for a comparison of risk and crisis communication.

  • 3. A scientometric analysis is a literature review technique that utilizes bibliometric data to map the development of a scientific domain, considering factors such as citations and knowledge structures.

  • 4. Goerlandt et al. (2020) state, however, that “using other terms such as ‘crisis communication’, ‘emergency communication’, or ‘disaster communication’ will almost certainly lead to detecting other patterns and trends” (p. 25).

  • 5. The Canterbury earthquake sequence was initiated with the 2010 Mw 7.1 Darfield earthquake and included the fatal 2011 Mw 6.2 Christchurch earthquake and a number of severe aftershocks (> Mw 5.7) for many years (Potter et al., 2015).