D. van Niekerk, G.J. Wentink, and L.B. Shoroma
Disaster and natural hazard governance has become a significant policy and legislative focus in South Africa since the early 1990s. Born out of necessity from a dysfunctional apartheid system, the new emphasis on disaster risk reduction in the democratic dispensation also ushered in a new era in the management of natural hazards and their associated risks and vulnerabilities. With its policy and legal framework widely cited as international best practice, South Africa has led the way in natural hazard governance in sub-Saharan Africa as well as in much of the developing world. This article describes various practices in natural hazard governance in South Africa. Particular attention is given to the disaster risks of the country as well as to the various natural hazards that drive this risk profile. Statutory and legislative aspects are discussed through a multisectoral approach, and a number of case studies illustrate the application of natural hazard governance in South Africa. Certain remaining challenges faced by the South African government are highlighted, such as a lack of political will at the local government level, deficits in risk governance, difficulties in resource allocation, weak intergovernmental relations, and a need for enhanced community participation, ownership, and decision making.
Mihir Bhatt, Kelsey Gleason, and Ronak B. Patel
South Asia faces a range of natural hazards, including floods, droughts, cyclones, earthquakes, landslides, and tsunamis. Rapid and unplanned urbanization, environmental degradation, climate change, and socioeconomic conditions are increasing citizens’ exposure to and risk from natural hazards and resulting in more frequent, intense, and costly disasters. Although governments and the international community are investing in disaster risk reduction, natural hazard governance in South Asian countries remains weak and often comes under review only when a major natural disaster strikes. Natural hazard governance is an emerging concept, and many countries in South Asia present a challenging hazard governance context.
Kanako Iuchi, Yasuhito Jibiki, Renato Solidum Jr., and Ramon Santiago
Located in the Pacific Ring of Fire and the typhoon belt, the Philippines is one of the most hazard-prone countries in the world. The country faces many types of natural hazards, including geophysical disturbances such as earthquakes and volcanic eruptions, meteorological and hydrological events such as typhoons and floods, and slow-onset disasters such as droughts. Together with rapid population growth and urbanization, large-scale natural phenomena have resulted in unprecedented scales of devastation. In the early 21st century alone, the country experienced some of the most destructive and costly disasters in its history, including Typhoon Yolanda (2013), Typhoon Pablo (2012), and the Bohol Earthquake (2013).
Recurrent natural disasters have prompted the Philippine government to develop disaster risk reduction and management (DRRM) strategies to better prepare for, respond to, and recover from natural disasters, as well as to become more resilient in the face of them. Since the early 1940s, the governing structure has undergone several revisions through legal and institutional arrangements. Historical natural disasters and seismic risks have affected and continue to threaten the National Capital Region (NCR) and the surrounding administrative areas; these were key factors in advancing DRRM laws and regulations, as well as in restructuring the governing bodies. The current DRRM structure was instituted under Republic Act No. 10121 (RA 10121) in 2010 and was implemented to shift from responsive to proactive governance by better engaging local government units (LGUs), communities, and the private sector to reduce long-term disaster risk. The Act established a National Disaster Risk Reduction and Management Council (NDRRMC) to develop strategies that manage and reduce risk.
Typhoon Yolanda in 2013 was the most significant test of this revised governance structure and related strategies. The typhoon revealed drawbacks of the current council-led governing structure in advancing resilience. Salient topics include how to respond better to disaster realities, how to coordinate efficiently among relevant agencies, and how to be more inclusive of relevant actors. Together with other issues, such as how DRRM should coexist with climate change efforts, a thorough examination of RA 10121 by the national government and DRRM advocates is underway. Some of the most important discourse to date focuses on ways to institute a powerful governing body, with administrative and financial authority, that enables more efficient DRRM. The hope is that by instituting a governing system that can thoroughly lead all phases of preparedness, mitigation, response, and recovery, the country can withstand future, and likely more frequent, mega-disasters.
H. P. Gülkan
This article examines the current outlook in disaster risk management in Turkey in its historic context. Policies, legislation, and specific responsive actions culminated in 2009 in the formation of a nationwide Disaster and Emergency Management Authority.
P. Patrick Leahy
Society expects to have a safe environment in which to live, prosper, and sustain future generations. Generally, when we think of threats to our well-being, we think of human-induced causes such as overexploitation of water resources, contamination, and soil loss, to name just a few. However, natural hazards, which are not easily avoided or controllable (or, in many cases, predictable in the short term), have profound influences on our safety, economic security, social development, and political stability, as well as every individual’s overall well-being.
Natural hazards are all related to the processes that drive our planet. Indeed, the Earth would not be a functioning ecosystem without the dynamic processes that shape our planet’s landscapes over geologic time. Natural hazards (or geohazards, as they are sometimes called) include such events as earthquakes, volcanic eruptions, landslides and ground collapse, tsunamis, floods and droughts, geomagnetic storms, and coastal storms.
A key aspect of these natural hazards involves understanding and mitigating their impacts, which requires that the geoscientist take a four-pronged approach. It must include a fundamental understanding of the processes that cause the hazard, an assessment of the hazard, monitoring to observe any changes in conditions that can be used to determine the status of a potentially hazardous event, and, perhaps most important, delivery of information to a broader community so it can evaluate the need for action.
A fundamental understanding of processes often requires a research effort that typically is the focus of academic and government researchers. Fundamental questions may include: (a) What triggers an earthquake, and why do some events escalate to a great magnitude while most remain small? (b) What processes are responsible for triggering a landslide? (c) Can we predict the severity of an impending volcanic eruption? (d) Can we predict an impending drought or flood? (e) Can we determine the height of a storm surge or the track of a coastal storm well in advance of landfall so that the impact can be mitigated?
Any effective hazard management system must strive to increase resilience. The only way to gain resilience is to learn from past events and to decrease risk. Successfully increasing resilience requires strong hazard identification programs with adequate monitoring and research components, and very robust mechanisms that deliver timely, accurate, and appropriate hazard information to a broad audience that will use the information in a wide variety of ways to meet their specific goals.
Snow- and ice-related hazardous processes threaten society in tropical to high-latitude mountain areas worldwide and at highly variable time scales. On the one hand, small snow avalanches are recorded in high numbers every winter. On the other hand, glacial lake outburst floods (GLOFs) or large-scale volcano–ice interactions occur less frequently but may evolve into destructive process chains resulting in major disasters. These extreme examples document the huge field of types, magnitudes, and frequencies of snow- and ice-related hazardous processes.
Mountain societies have learned to cope with natural hazards for centuries, guided by personal experiences and oral and written tradition. Historical records are today still important as a basis to mitigate snow- and ice-related hazards. They are complemented by a broad array of observation and modeling techniques. These techniques differ among themselves with regard to (1) the type of process under investigation and (2) the scale and purpose of investigation. Multi-scale monitoring and warning systems for snow avalanches are in operation in densely populated mid-latitude mountain areas. They build on meteorological and snow profile data in combination with a large pool of expert knowledge.
In contrast, ice-related processes such as ice- or rock-ice avalanches, GLOFs, or associated process chains cause damage less frequently in space and time, so that societies are less well adapted. Even though the hazard sources are often far from the society—making field observation challenging—flows travelling for tens of kilometers sometimes impact populated areas. These hazards are strongly influenced by climate change–induced glacier and permafrost dynamics. On the regional or national scale, the evolution of such hazards has to be monitored at short intervals through aerial and satellite imagery and terrain data, employing geographic information systems (GIS). Known hazardous situations have to be monitored in the field.
Physical models—applied either in the laboratory or at real-world sites—are employed to explore the mobility of hazardous processes. Since the 1950s, however, computer models have increasingly gained importance in exploring possible travel distances, impact areas, velocities, and impact forces of events. While simple empirical-statistical approaches are used at broad scales in combination with GIS, advanced numerical models are applied to analyze specific case studies. However, the input parameters for these models are uncertain, so that (1) the model results have to be validated against observations and (2) appropriate strategies for dealing with the uncertainties have to be applied before the model results are used for hazard zoning or for dimensioning protective structures. Due to rapid atmospheric warming and related changes in the cryosphere, hazard situations beyond historical experience are expected to become increasingly relevant in the future. Scenario-based modeling of complex systems and process chains therefore represents an emerging research direction.
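The empirical-statistical runout approach mentioned above can be sketched briefly. A classic example is the alpha-beta family of models, which regresses the runout angle of extreme avalanches against a terrain angle; the coefficients below are illustrative assumptions for a notional region, not values fitted to any real avalanche record:

```python
import math

# Sketch of an alpha-beta style empirical runout model.
# A and B are HYPOTHETICAL regression coefficients (illustration only);
# in practice they are fitted to regional records of extreme avalanches.
A, B = 0.96, -1.4  # slope and intercept, in degrees

def runout_angle(beta_deg: float) -> float:
    """Predict the runout angle alpha (degrees) from the terrain beta angle,
    i.e., the average slope from the starting zone to a reference point."""
    return A * beta_deg + B

def runout_distance(drop_height_m: float, alpha_deg: float) -> float:
    """Horizontal runout distance implied by a vertical drop and alpha angle."""
    return drop_height_m / math.tan(math.radians(alpha_deg))

# Example: a path with beta = 30 degrees and a 1,000 m vertical drop.
alpha = runout_angle(30.0)              # 27.4 degrees with these coefficients
reach = runout_distance(1000.0, alpha)  # horizontal reach in meters
```

In practice such regressions are embedded in GIS workflows, and their predictions are compared against mapped historical runouts, consistent with the validation step described above.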
Josh Greenberg and T. Joseph Scanlon
Media have always played an important role in times of emergency and disaster. Undersea cables, international news agencies, the press, radio and television, and, most recently, digital and mobile technologies—all have played myriad and complex roles in supporting emergency response and notification, and in helping constitute a shared experience that can be important to social mobilization and community formation. The geographical location of disasters and the identities of victims, the increasingly visual nature of disaster events, and the ubiquitous nature of media in our lives, all shape and influence which kinds of emergencies attract global media and public attention, and how we come to understand them.
Globalization has compressed time and space such that a whole range of disasters—from natural events (cyclones, earthquakes, and hurricanes) to industrial accidents and terrorist attacks—appear on our television and mobile screens with almost daily frequency. There is nothing inherent about these events that gives them meaning—they occur in a real, material world; but for many of us, our experience of these events is shaped and determined in large part by our interactions with media industries, institutions, and technologies. Understanding the media’s construction of these events as disasters provides important insight into the nature of disaster mitigation, response, and recovery.
People not only want to be safe from natural hazards; they also want to feel they are safe. Sometimes these two desires pull in different directions, and when they do, this slows the journey to greater physical adaptation and resilience.
All people want to feel safe—especially in their own homes. In fact, although not always a place of actual safety, in many cultures “home” is nonetheless idealized as a place of security and repose. The feeling of having a safe home is one part of what is termed ontological security: freedom from existential doubts and the ability to believe that life will continue in much the same way as it always has, without threat to familiar assumptions about time, space, identity, and well-being. By threatening our homes, floods, earthquakes, and similar events disrupt ontological security: they destroy the possessions that support our sense of who we are; they fracture the social structures that provide us with everyday needs such as friendship, play, and affection; they disrupt the routines that give our lives a sense of predictability; and they challenge the myth of our immortality. Such events, therefore, not only cause physical injury and loss; by damaging ontological security, they also cause emotional distress and jeopardize long-term mental health.
However, ontological security is undermined not only by the occurrence of hazard events but also by their anticipation. This affects people’s willingness to take steps that would reduce hazard vulnerability. Those who are confident that they can eliminate their exposure to a hazard will usually do so. More commonly, however, the available options come with uncertainty and social/psychological risks: often, the available options only reduce vulnerability, and sometimes people doubt the effectiveness of these options or their ability to choose and implement appropriate measures. In these circumstances, the risk to ontological security that is implied by action can have greater influence than the potential benefits. For example, although installing a floodgate might reduce a business’s flood vulnerability, the business owner might feel that its presence would act as an everyday reminder that the business, and the income derived from it, are not secure. Similarly, bolting furniture to the walls of a home might reduce injuries in the next earthquake, but householders might also anticipate that it would remind them that there is a continual threat to their home. Both of these circumstances describe situations in which the anticipation of future feelings can tap into less conscious anxieties about ontological security.
The manner in which people anticipate impacts on ontological security has several implications for preparedness. For example, it suggests that hazard warnings will be counterproductive if they are not accompanied by suggestions of easy, reliable ways of eliminating risk. It also suggests that adaptation measures should be designed so that they do not heighten everyday awareness of the hazard.
How big, how often, and where from? This is almost a mantra for researchers trying to understand tsunami hazard and risk. What we do know is that events such as the 2004 Indian Ocean Tsunami (2004 IOT) caught scientists by surprise, largely because there was no “research memory” of past events for that region, and as such, there was no hazard awareness, no planning, no risk assessment, and no disaster risk reduction. Forewarned is forearmed, but to be in that position, we have to be able to understand the evidence left behind by past events—palaeotsunamis—and to have at least some inkling of what generated them.
While the 2004 IOT was a devastating wake-up call for science, we need to bear in mind that palaeotsunami research was still in its infancy at the time. What we now see is still a comparatively new discipline that is practiced worldwide, but as the “new kid on the block,” there are still many unknowns. What we do know is that in many cases, there is clear evidence of multiple palaeotsunamis generated by a variety of source mechanisms. There is a suite of proxy data—a toolbox, if you will—that can be used to identify a palaeotsunami deposit in the sedimentary record. Things are never quite as simple as they sound, though, and there are strong divisions within the research community as to whether one can really differentiate between a palaeotsunami and a palaeostorm deposit, and whether proxies as such are the way to go. As the discipline matures, though, many of these issues are being resolved, and indeed we have now arrived at a point where we have the potential to detect “invisible deposits” left by palaeotsunamis after they have run out of sediment to deposit as they move inland. As such, we are on the brink of being able to better understand the full extent of inundation by past events, a valuable tool in gauging the magnitude of palaeotsunamis.
Palaeotsunami research is multidisciplinary, and as such, it is a melting pot of different scientific perspectives, which leads to rapid innovations. Basically, whatever is associated with modern events may be reflected in prehistory. Also, palaeotsunamis are often part of a landscape response pushed beyond an environmental threshold from which it will never fully recover, but that leaves indelible markers for us to read. In some cases, we do not even need to find a palaeotsunami deposit to know that one happened.
Warren S. Eller and Michael S. Pennington
Assessment is a necessary and critical component in process improvement. Moreover, there is a strong public expectation that because governance is a public good, it will incorporate demonstrably equitable and efficient processes. As a central tenet of New Public Management (NPM), a widely accepted approach to increasing the efficiency of public sector performance through the introduction of “business” practices, performance assessment has helped improve governance in general. However, employing assessment practices has been problematic at best in the realm of hazards preparedness and response. Notably, the fragmented nature of governance in the disaster response network, which spans levels of government and the public and private sectors, is not conducive to holistic evaluation. Similarly, the lack of clear goals, available funding, and trained evaluation personnel severely inhibits the ability to comprehensively assess performance in the management of natural hazards. Effective assessment in this area, that is, evaluation that will significantly enhance hazard and vulnerability management in terms of mitigation, preparedness, and response, requires several distinct steps for effective implementation. The first is understanding the dimensions of the natural hazards governance community and the assessment process. These are: (1) identifying the purpose of the review (formative, intended to improve processes, or summative, intended as a final examination of processes); (2) identifying clear and concise goals for the program and ensuring these goals are consistent with federal, state, and local policy; and (3) identifying the underlying fragmentation between sectors, levels of governance, and disaster phases in the governance system. Based on these dimensions, the most effective assessments will be those that are incorporated within or developed from the actual governance system.
Lukas U. Arenson and Matthias Jakob
Mountain environments, home to about 12% of the global population and covering nearly a quarter of the global land surface, create hazardous conditions for various infrastructures. The economic and ecological importance of these environments for tourism, transportation, hydropower generation, or natural resource extraction requires that direct and indirect interactions between infrastructures and geohazards be evaluated. Construction of infrastructure in mountain permafrost environments can change the ground thermal regime, affect gravity-driven processes, impact the strength of ice-rich foundations, or result in permafrost aggradation via natural convection. The severity of impact, and whether permafrost will degrade or aggrade in response to the construction, is a function of numerous parameters including climate change, which needs to be considered when evaluating changes in existing geohazards or the formation of new ones. The main challenge relates to the uncertainties associated with projections of medium- (decadal) and long-term (century-scale) climate change. A fundamental understanding of the various processes at play and good knowledge of the foundation conditions are required to ascertain that infrastructure in permafrost environments functions as intended. Many of the tools required for identifying geohazards in periglacial environments, and appropriate risk management strategies, are already available.
Permafrost, or perennially frozen ground, and the processes linked to the water phase change in ground-pore media are sources of specific dangers to infrastructure and economic activity in cold mountainous regions. Additionally, conventional natural hazards (such as earthquakes, floods, and landslides) assume special characteristics in permafrost territories.
Permafrost hazards are created under two conditions. The first is a location with ice-bounded or water-saturated ground, in which the large amount of ice leads to potentially intensive processes of surface settlement or frost heaving. The second is linked with external, natural, and human-made disturbances that change the heat-exchange conditions. The places where ice-bounded ground meets areas that are subject to effective disturbances are the focus of hazard mapping and risk evaluation.
The fundamentals of geohazard evaluation and geohazard mapping in permafrost regions were originally developed by Gunnar Beskow, Vladimir Kudryavtsev, Troy Péwé, Oscar Ferrians, Jerry Brown, and other American, European, and Soviet authors from the 1940s to the 1980s.
Modern knowledge of permafrost hazards was significantly enriched by the publication of the Russian book Permafrost Hazards, part of the six-volume series Natural Hazards in Russia (2000). The book describes, analyzes, and evaluates permafrost-related hazards and includes methods for their modeling and mapping.
Work on permafrost hazard evaluation continued simultaneously in different countries with the active support of the International Permafrost Association. Prominent contributions during this new period of investigation were published by Drozdov, Clarke, Kääb, Pavlov, Koff, and several other thematic groups of researchers. The importance of joint international work became evident. The international project RiskNat: A Cross-Border European Project Taking into Account Permafrost-Related Hazards represented a new development in this direction.
The intensive economic development in China presented new challenges for linear transportation routes and hydrologic infrastructures. A study of active fault lines and geological hazards along the Golmud–Lhasa Railway across the Tibetan plateau is a good example of the achievements by Chinese scientists.
The method for evaluating permafrost hazards is based on survey data, monitoring data, and modeling results. The survey data reflect current environmental conditions and are usually shown on a permafrost map. The monitoring data help reveal the current tendencies of permafrost evolution in different landscapes and regions. The modeling data provide a permafrost forecast that takes climate change and its impact on humans into account.
The International Conference on Permafrost in 2016, held in Potsdam, Germany, demonstrated new horizons of conventional and special permafrost mapping in offshore and continental areas. Permafrost hazards touch large and diverse aspects of human life. It is necessary to expand the approach to this problem beyond geology to include geography, biology, the social sciences, engineering, and other spheres of competence in order to synthesize local and regional information. The relevance of this branch of science grows as climate change and the increasing number of natural disasters are taken into account.
Anna Bozza, Domenico Asprone, and Gaetano Manfredi
In the early 21st century, achieving the sustainability of urban environments while coping with increasingly frequent natural disasters is a very ambitious challenge for contemporary communities. In this context, urban resilience is a comprehensive objective that communities can pursue to ensure future sustainable cities able to cope with the risks to which they are exposed.
Researchers have developed different definitions of resilience as this concept has been applied to diverse topics and issues in recent decades. Essentially, resilience is defined as the capability of a system to withstand major unexpected events and recover in a functional and efficient manner. When dealing with urban environments, the efficiency of the recovery can be related to multiple aspects, many of which are often hard to control. Mainly, it is quantified in terms of the restoration of the urban economy, population, and built form (Davoudi et al., 2012). In this article, engineering resilience is defined in relation to cities’ capability to remain sustainable during an extreme event while reconfiguring their physical form. In this view, a city is resilient if it remains sustainable in the occurrence of a hazardous event.
Accordingly, in an urban context, a wide range of nonhomogeneous factors and intrinsic dynamics have to be accounted for, which requires a multi-scale approach, from the single-building level to the urban and, ultimately, the global environmental scale. As a consequence, cities can be understood as physical systems assessed through engineering metrics. Hence, the physical dimension represents a starting point from which to approach resilience. When shifting the focus from the single structure to the city scale, human behavior is revealed to be a critical factor, because social actors behave and make choices every day in an unpredictable and unorganized manner, which affects city functioning. This urban complexity can be addressed through an ecosystem theory approach, which accounts for the interrelations between physical and human components.
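The recovery efficiency described above is often made concrete through the widely cited "resilience triangle" idea (Bruneau et al.): system functionality drops after an event and gradually recovers, and the area between full functionality and the recovery curve measures resilience loss. A minimal sketch, with a made-up recovery curve purely for illustration:

```python
def resilience_loss(q, dt=1.0):
    """Trapezoidal integral of (1 - Q(t)) over the recovery horizon,
    where q is a sequence of functionality fractions in [0, 1] sampled
    every dt time units. Smaller loss means a more resilient system."""
    loss = 0.0
    for a, b in zip(q, q[1:]):
        loss += ((1.0 - a) + (1.0 - b)) / 2.0 * dt
    return loss

# Hypothetical example: functionality drops to 40% at the event, then
# recovers to 100% over five time steps.
q = [1.0, 0.4, 0.55, 0.7, 0.85, 1.0]
loss = resilience_loss(q)  # area of the "resilience triangle"
```

A faster recovery curve (larger Q values earlier) shrinks the integral, which is one way the multi-scale engineering metrics mentioned above can be compared across reconstruction strategies.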
Abdelghani Meslem and Dominik H. Lang
In the fields of earthquake engineering and seismic risk reduction, the term “physical vulnerability” defines the component that translates the relationship between seismic shaking intensity, dynamic structural response (physical damage), and cost of repair for a particular class of buildings or infrastructure facilities. The concept of physical vulnerability started with the development of the earthquake damage and loss assessment discipline in the early 1980s, which aimed at predicting the consequences of earthquake shaking for an individual building or a portfolio of buildings. In general, physical vulnerability has become one of the main key components used as model input data by agencies when developing prevention and mitigation actions, code provisions, and guidelines. The same may apply to the insurance and reinsurance industry in developing catastrophe models (also known as CAT models).
Since the late 1990s, a blossoming of methodologies and procedures for modelling and measuring physical vulnerability can be observed, ranging from empirical to basic and more advanced analytical approaches. These methods differ in terms of level of complexity, calculation effort (in evaluating the seismic demand, structural response, and damage), and the modelling assumptions adopted in the development process. One challenge often encountered at this stage is that some of these assumptions may strongly degrade the reliability and accuracy of the resulting physical vulnerability models, hence introducing important uncertainties in estimating and predicting the inherent risk (i.e., estimated damage and losses).
Other challenges that are commonly encountered when developing physical vulnerability models are the paucity of exposure information and the lack of knowledge due to either technical or nontechnical problems, such as inventory data that would allow for accurate building stock modeling, or economic data that would allow for a better conversion from damage to monetary losses. Hence, these physical vulnerability models will carry different types of intrinsic uncertainties of both aleatory and epistemic character. To come up with appropriate predictions on expected damage and losses of an individual asset (e.g., a building) or a class of assets (e.g., a building typology class, a group of buildings), reliable physical vulnerability models have to be generated considering all these peculiarities and the associated intrinsic uncertainties at each stage of the development process.
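Physical vulnerability models of the kind described above are commonly expressed as lognormal fragility curves, giving the probability of reaching or exceeding a damage state at a given shaking intensity. A minimal sketch with hypothetical median and dispersion values (not drawn from any real building class):

```python
import math
from statistics import NormalDist

def fragility(im: float, theta: float, beta: float) -> float:
    """P(damage state >= ds | IM = im) for a lognormal fragility curve.
    theta is the median capacity (same units as im); beta is the
    lognormal dispersion, where uncertainties are typically lumped."""
    return NormalDist().cdf(math.log(im / theta) / beta)

# Hypothetical curve for a notional building class:
# median capacity theta = 0.3 (e.g., g of spectral acceleration),
# dispersion beta = 0.6.
p_at_median = fragility(0.3, 0.3, 0.6)  # 0.5 by construction at the median
p_stronger = fragility(0.6, 0.3, 0.6)   # higher shaking, higher probability
```

Widening beta flattens the curve, which is how the aleatory and epistemic uncertainties discussed above propagate into damage and loss estimates.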
James C. Schwab
Planning systems are essentially a layer of guidance or legal requirements that sit atop plans of any type at any governmental level at or below the source of that guidance. In the case of natural hazard risk reduction, they involve rules or laws dealing with plans to reduce loss of life or property from such events. In much of the world, this is either unexplored territory or the frontier of public planning; very little of what exists in this realm predates the 1980s, although one can find earlier roots of the public discussion behind such systems.
That said, the evolution of such systems in the 21st century has been fairly rapid, at least in those nations with the resources and technical capacity to pursue the subject. Driven largely by substantial increases in disaster losses and growing concern about the worldwide impacts of climate change, research, technology, and lessons from practice have grown apace. However, that progress has been uneven and subject to inequities in resources and governmental capacity.
Thomas A. Birkland
Natural disasters pose important problems for societies and governments. Governments are charged with making policies to protect public safety. Large disasters, then, can reveal problems in government policies designed to protect the public from the effects of such disasters. Large disasters can serve as focusing events, a term used to describe large, sudden, rare, and harmful events that gain a lot of attention from the public and from policy makers. Such disasters highlight problems and, as the public policy literature suggests, open windows of opportunity for policy change. However, as a review of United States disaster policy from 1950 through 2015 shows, change in disaster policy is often, but not always, driven by major disasters that act as focusing events. But the accumulation of experience from such disasters can lead to learning, which can be useful if later, even more damaging and attention-grabbing events arise.
Rapid urbanization and growing populations have put tremendous pressure on limited global housing stocks. As the frequency of disasters has increased, with devastating impacts on this limited stock of housing, the discourse on post-disaster housing recovery has evolved in several ways. Prior to the 1970s, the field was largely understudied, and there was only a narrow understanding of how households and communities rebuilt their homes after a catastrophic event and of the effectiveness of the housing recovery policies and programs designed to assist them. Early debates on post-disaster housing recovery centered on the cultural and technological appropriateness of housing recovery programs. The focus on materials, technology, and climate missed the larger socioeconomic and political complexities of housing recovery. Since then, the field has come a long way: current theoretical and policy debates focus on the effects of governance structures, funding practices, the consequences of public and private interventions, and the socioeconomic and institutional arrangements that affect housing recovery outcomes.
A number of critical issues shape long-term post-disaster housing recovery processes and outcomes, especially in urban contexts. These include the role of the government in post-disaster housing recovery, the governance practices that drive recovery processes and outcomes, the challenges of paying for post-disaster housing repair and reconstruction, the disconnect between planning for rebuilding and planning for housing recovery, and the mismatch between existing policy programs and housing needs after a catastrophic event—particularly for affordable housing recovery. Moreover, as housing losses after disasters continue to increase, and as the funding available to rebuild housing stocks shrinks, it has become increasingly important to craft post-disaster housing recovery policy and programs that apply limited resources in the most efficient and impactful ways. Creating housing recovery programs through a needs-based approach, instead of one based solely on loss, could more effectively focus limited resources on those who might need them most. Such an approach would be broad based and proportional, as it would address the housing recovery of a wide range of groups according to their needs, including low-income renters, long-term leaseholders, and residents of informal settlements and manufactured homes, as well as those with preexisting resources such as owner-occupant housing.
Natural disasters have increased dramatically in the twenty-first century. An estimated 217 million people are affected by natural disasters each year. Recent disasters, both nationally and globally, provide insight into how the degree of destruction and number of fatalities can negatively affect survivors. Cultural, political, and geographic factors may increase risk of trauma and negative mental health outcomes. Understanding these risks is critical to helping survivors recover in the aftermath of disasters. Different disasters pose different risks, and some communities are chronically affected. How to support these communities psychologically in the face of ongoing threats of destruction is an important question.
Recent years have also seen major advances in technology that provide new and innovative ways to manage disasters. Technological strategies can be harnessed to better serve the interests of disaster-affected communities. For example, warning times for disasters have increased because of better instrumentation and the ability to send messages sooner to communities that may be in the path of a disaster. These increased warning times may allow for psychological preparation before a disaster that can support positive mental health outcomes in recovery. Demands for evidence-based mental health interventions require an understanding of best practices in disaster response, challenges to past relief efforts, and the strategies and factors that can enhance effective future efforts.
Scott E. Robinson and Warren S. Eller
Natural hazards governance calls upon a diverse array of actors. The focus of most research—and most media coverage—has long been on governmental actors. Indeed, natural hazards governance relies on a complex arrangement of actors connected across the local, state, and national levels. Local organizations are the initial point of contact and face emerging threats first. If an event exceeds the capacity of local organizations to respond, the governance system escalates the problem by expanding the participants to include state-level and, eventually, national-level actors. Natural hazards governance seeks to smooth and rationalize this process of escalation and expansion. Recent research has expanded this view to include nongovernmental actors such as charitable organizations, religious institutions, and even private businesses. While charitable organizations have long been part of natural hazards governance, a broader range of charities, religious institutions, and private-sector companies has recently become more important to practice and scholarship. In many ways, the governance of these nongovernmental organizations resembles that of the governmental system, with its emphasis on escalation, expansion, and functional differentiation. Given the inclusion of so diverse a group of cooperating organizations, natural hazards governance faces notable challenges of communication, authority, and reliability.
Andrea Sarzynski and Paolo Cavaliere
Public participation in environmental management, and more specifically in hazard mitigation planning, has received much attention from scholars and practitioners. A shift in perspective now sees the public as a fundamental player in decision making rather than simply as the final recipient of a policy decision. Including the public in hazard mitigation planning brings widespread benefits. First, communities gain awareness of the risks they live with, making participation an opportunity to empower communities and improve their resilience. Second, supported by a collaborative participation process, emergency managers and planners can achieve the ultimate goal of strong mitigation plans.
Although public participation is highly desired as an instrument to improve hazard mitigation planning, appropriate participation techniques are context dependent, and some trade-offs exist in the process design (such as between representativeness and consensus building). Designing participation processes requires careful planning and an all-around consideration of the representativeness of stakeholders, timing, objectives, knowledge, and, ultimately, the goals to be achieved. Assessing participation also requires more consistent methods to facilitate policy learning from diverse experiences. New decision-support tools may be necessary to gain widespread participation from laypersons lacking technical knowledge of hazards and risks.