Agent-Based Modeling of Flood Insurance Futures
Agent-based models have facilitated greater understanding of flood insurance futures, and will continue to advance this field as modeling technology develops further. As the pressures of climate change increase and global populations grow, the insurance industry will be required to adapt to a less predictable operating environment. Complicating the future of flood insurance is the role flood insurance plays within a state, as well as how insurers affect the interests of other stakeholders, such as mortgage providers, property developers, and householders. As such, flood insurance is inextricably linked with the politics, economy, and social welfare of a state, and can be considered part of a complex system of changing environments and diverse stakeholders. Agent-based models are capable of modeling complex systems and, as such, have utility for flood insurance systems. These models can be considered a platform in which the actions of autonomous agents, both individuals and collectives, are simulated. Cellular automata are the lowest level of an agent-based model and are discrete and abstract computational systems. These automata, which operate within a local and/or universal environment, can be programmed with characteristics of stakeholders and can act independently or interact collectively. As a result, agent-based models can capture the complexities of a multi-stakeholder environment displaying diverse behavior and, concurrently, can cater for the changing flood environment. Agent-based models of flood insurance futures have primarily been developed for predictive purposes, such as understanding the impact of introducing policy instruments. However, researchers have approached these questions in varied ways; some have focused on recreating consumer behavior and psychology, while others have sought to recreate agent interactions within a flood environment.
The opportunities for agent-based models are likely to become more pronounced as online data becomes more readily available and artificial intelligence technology supports model development.
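The kind of agent-based setup described above can be illustrated with a minimal sketch: hypothetical householder agents decide each year whether to buy insurance, with recent flood experience raising their perceived risk. The agent rules, class names, and all parameters below are illustrative assumptions, not a published model.

```python
import random

random.seed(42)

class Householder:
    """A hypothetical householder agent; attributes and rules are illustrative."""
    def __init__(self, risk_aversion):
        self.risk_aversion = risk_aversion   # baseline propensity to insure, 0..1
        self.flooded_recently = False
        self.insured = False

    def decide(self, premium_affordable):
        # Recent flood experience raises perceived risk (availability heuristic).
        perceived = min(1.0, self.risk_aversion + (0.4 if self.flooded_recently else 0.0))
        self.insured = premium_affordable and random.random() < perceived

def simulate(n_agents=1000, years=20, flood_prob=0.1):
    """Simulate annual insurance uptake in a population of householder agents."""
    agents = [Householder(random.uniform(0.0, 0.5)) for _ in range(n_agents)]
    uptake = []
    for _ in range(years):
        flood_year = random.random() < flood_prob
        for a in agents:
            a.decide(premium_affordable=True)
            # A flood affects roughly 30% of agents in a flood year (assumed).
            a.flooded_recently = flood_year and random.random() < 0.3
        uptake.append(sum(a.insured for a in agents) / n_agents)
    return uptake

uptake = simulate()
print(f"insurance uptake, year 1 vs year 20: {uptake[0]:.2f} -> {uptake[-1]:.2f}")
```

Even this toy model reproduces a pattern reported in the literature: uptake spikes after flood years and decays as memories fade, which is the kind of emergent collective behavior such models are built to study.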
Assessment and Adaptation to Climate Change-Related Flood Risks
Brenden Jongman, Hessel C. Winsemius, Stuart A. Fraser, Sanne Muis, and Philip J. Ward
The flooding of rivers and coastlines is the most frequent and damaging of all natural hazards. Between 1980 and 2016, total direct damages exceeded $1.6 trillion, and at least 225,000 people lost their lives. Recent events causing major economic losses include the 2011 river flooding in Thailand ($40 billion) and the 2012 coastal floods in the United States caused by Hurricane Sandy (over $50 billion). Flooding also triggers great humanitarian challenges. The 2015 Malawi floods were the worst in the country’s history and were followed by food shortage across large parts of the country. Flood losses are increasing rapidly in some world regions, driven by economic development in floodplains and increases in the frequency of extreme precipitation events and in global sea level due to climate change. The largest increase in flood losses is seen in low-income countries, where population growth is rapid and many cities are expanding quickly. At the same time, evidence shows that adaptation to flood risk is already happening, and a large proportion of losses can be contained successfully by effective risk management strategies. Such risk management strategies may include floodplain zoning, construction and maintenance of flood defenses, reforestation of land draining into rivers, and use of early warning systems. To reduce risk effectively, it is important to know the location and impact of potential floods under current and future social and environmental conditions. In a risk assessment, models can be used to map the flow of water over land after an intense rainfall event or storm surge (the hazard). Modeled for many different potential events, this provides estimates of potential inundation depth in flood-prone areas. Such maps can be constructed for various scenarios of climate change based on specific changes in rainfall, temperature, and sea level.
To assess the impact of the modeled hazard (e.g., cost of damage or lives lost), the potential exposure (including buildings, population, and infrastructure) must be mapped using land-use and population density data and construction information. Population growth and urban expansion can be simulated by increasing the density or extent of the urban area in the model. The effects of floods on people and different types of buildings and infrastructure are determined using a vulnerability function. This indicates the damage expected to occur to a structure or group of people as a function of flood intensity (e.g., inundation depth and flow velocity). Potential adaptation measures such as land-use change or new flood defenses can be included in the model in order to understand how effective they may be in reducing flood risk. This way, risk assessments can demonstrate the possible approaches available to policymakers to build a less risky future.
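The hazard-exposure-vulnerability chain described above can be sketched numerically. The depth-damage (vulnerability) curve, the scenario probabilities, and the exposure value below are illustrative assumptions; expected annual damage is obtained by integrating losses over exceedance probability with the trapezoidal rule, a common convention in flood risk assessment.

```python
def depth_damage(depth_m, max_damage):
    """Illustrative depth-damage curve: the damaged fraction rises linearly
    with inundation depth and saturates at ~3 m (an assumed shape)."""
    fraction = min(1.0, depth_m / 3.0)
    return fraction * max_damage

# Hypothetical hazard scenarios: (annual exceedance probability, inundation depth in m)
scenarios = [(0.1, 0.5), (0.01, 1.5), (0.001, 3.0)]
exposure = 200_000  # assumed replacement value of one building

# Expected annual damage (EAD): trapezoidal integration of loss over
# exceedance probability.
losses = [depth_damage(d, exposure) for _, d in scenarios]
probs = [p for p, _ in scenarios]
ead = sum(0.5 * (losses[i] + losses[i + 1]) * (probs[i] - probs[i + 1])
          for i in range(len(scenarios) - 1))
print(f"expected annual damage: ${ead:,.0f}")
```

Adaptation measures enter the same calculation naturally: a new flood defense lowers the exceedance probabilities, and floodproofing flattens the depth-damage curve, so their benefit can be expressed as the reduction in expected annual damage.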
Debris-Flow Risk Assessment
Matthias Jakob, Kris Holm, and Scott McDougall
Debris flows are among the most destructive landslide processes worldwide, given their ubiquity in mountainous areas occupied by human settlements or industrial facilities. Given their episodic nature, these hazards are often un- or under-recognized. The three fundamental components of debris-flow risk assessment are frequency-magnitude analysis, numerical scenario modeling, and consequence analysis to estimate the severity of damage and loss. Recent advances in frequency-magnitude analysis take advantage of new methods to estimate the age of deposits and the size of past and potential future events. Nevertheless, creating reliable frequency-magnitude relationships is often challenged by the practical limitations of investigating and statistically analyzing past debris-flow events, whose records are often discontinuous as well as temporally and spatially censored. To estimate flow runout and destructive potential, several models are used worldwide. Simple empirical models have been developed based on statistical geometric correlations, and two-dimensional and three-dimensional numerical models are commercially available. Quantitative risk assessment (QRA) methods for assessing public safety were developed for the nuclear industry in the 1970s and have been applied to landslide risk in Hong Kong since 1998. Debris-flow risk analyses estimate the likelihood of a variety of consequences. Quantitative approaches involve predicting the annual probability of loss of life to individuals or groups and estimating annualized economic losses. Recent progress in quantitative debris-flow risk analysis includes improved methods to characterize elements at risk within a GIS environment and to estimate their vulnerability to impact. Improvements have also been made in how these risks are communicated to decision makers and stakeholders, including graphic display on conventional and interactive online maps.
Substantial limitations remain, including the practical impossibility of estimating every direct and indirect risk associated with debris flows and a shortage of data for estimating vulnerability to debris-flow impact. Despite these limitations, quantitative debris-flow risk assessment is becoming a preferred framework for decision makers in some jurisdictions, allowing them to compare risks against defined risk tolerance thresholds, support decisions to reduce risk, and quantify the residual risk that remains after risk reduction measures are implemented.
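The annual individual risk referred to above is conventionally computed in landslide QRA as a product of conditional probabilities: the annual probability of the event, the probability the flow reaches the element at risk, the probability a person is present, and the vulnerability of that person if impacted, summed over scenarios. A minimal sketch, with all numbers illustrative assumptions rather than measured values:

```python
# Hypothetical QRA for one dwelling below a debris-flow fan.
scenarios = [
    # (annual probability of event, probability the flow reaches the dwelling,
    #  vulnerability of an occupant if impacted)
    (1 / 100,  0.5, 0.3),   # frequent, smaller event
    (1 / 1000, 0.9, 0.7),   # rare, larger event
]
p_temporal = 0.8  # assumed fraction of time an occupant is present

# Annual individual risk: sum over scenarios of
# P(event) x P(spatial impact) x P(temporal presence) x vulnerability
pdi = sum(p_ev * p_sp * p_temporal * v for p_ev, p_sp, v in scenarios)
print(f"annual probability of loss of life: {pdi:.2e}")
```

A decision maker would then compare this figure against a jurisdiction's tolerance threshold (often on the order of 1e-4 per year for existing development), which is exactly the comparison the text describes.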
Evolution of Strategic Flood Risk Management in Support of Social Justice, Ecosystem Health, and Resilience
Throughout history, flood management practice has evolved in response to flood events. This heuristic approach has yielded some important incremental shifts in both policy and planning (from the need to plan at a catchment scale to the recognition that flooding arises from multiple sources and that defenses, no matter how reliable, fail). Progress, however, has been painfully slow and sporadic, but a new, more strategic, approach is now emerging. A strategic approach does not, however, simply sustain an acceptable level of flood defense. Strategic Flood Risk Management (SFRM) is an approach that relies upon an adaptable portfolio of measures and policies to deliver outcomes that are socially just (when assessed against egalitarian, utilitarian, and Rawlsian principles), contribute positively to ecosystem services, and promote resilience. In doing so, SFRM offers a practical policy and planning framework to transform our understanding of risk and move toward a flood-resilient society. A strategic approach to flood management involves much more than simply reducing the chance of damage through the provision of “strong” structures and recognizes adaptive management as much more than simply “wait and see.” SFRM is inherently risk based and implemented through a continuous process of review and adaptation that seeks to actively manage future uncertainty, a characteristic that sets it apart from the linear flood defense planning paradigm based upon a more certain view of the future. In doing so, SFRM accepts that there is no silver bullet for flood problems and that people and economies cannot always be protected from flooding. It accepts that flooding is an important ecosystem function and that its contribution to flood risk management is a legitimate ecosystem service.
Perhaps most importantly, however, SFRM enables the inherent conflicts as well as opportunities that characterize flood management choices to be openly debated, priorities to be set, and difficult investment choices to be made.
Flood Risk Analysis
Floods affect more people worldwide than any other natural hazard. Flood risk results from the interplay of a range of processes. For river floods, these are the flood-triggering processes in the atmosphere, runoff generation in the catchment, flood waves traveling through the river network, possibly flood defense failure, and finally, inundation and damage processes in the flooded areas. In addition, ripple effects, such as regional or even global supply chain disruptions, may occur. Effective and efficient flood risk management requires understanding and quantifying the flood risk and its possible future developments. Hence, risk analysis is a key element of flood risk management. Risk assessments can be structured according to three questions: What can go wrong? How likely is it that it will happen? If it goes wrong, what are the consequences? Before answering these questions, the system boundaries, the processes to be included, and the detail of the analysis need to be carefully selected. One of the greatest challenges in flood risk analyses is the identification of the set of failure or damage scenarios. Often, extreme events beyond the experience of the analyst are missing from the considered scenarios, which may bias the risk estimate. Another challenge is the estimation of probabilities. There are at most a few observed events for which data on the flood situation, such as inundation extent, depth, and loss, are available. That means that even in the most optimistic situation there are only a few data points to validate the risk estimates. The situation is even more delicate when the risk has to be quantified for important infrastructure, such as the breaching of a large dam or the flooding of a nuclear power plant. Such events are practically unrepeatable. Hence, the estimation of probabilities needs to be based on all available evidence, using observations whenever possible, but also including theoretical knowledge, modeling, specific investigations, experience, or expert judgment.
As a result, flood risk assessments are often associated with large uncertainties. Examples abound where authorities, people at risk, and disaster managers have been taken by surprise by unexpected failure scenarios. This is not only a consequence of the complexity of flood risk systems, but may also be attributed to cognitive biases, such as being overconfident in the risk assessment. Hence, it is essential to ask: How wrong can the risk analysis be and still guarantee that the outcome is acceptable?
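The scarcity of observed events noted above can be made concrete: an annual event probability estimated from a handful of occurrences in a short record carries a very wide confidence interval. The sketch below approximates a Clopper-Pearson-style exact binomial interval by grid search; the record length and event count are hypothetical.

```python
from math import comb

def binomial_ci(successes, trials, conf=0.95):
    """Approximate exact confidence interval for an event probability
    estimated from few observations, found by grid search over p."""
    grid = [i / 1000 for i in range(1001)]
    alpha = 1 - conf
    lo, hi = 0.0, 1.0
    # Lower bound: smallest p with P(X >= successes | p) >= alpha/2.
    for p in grid:
        upper_tail = sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
                         for k in range(successes, trials + 1))
        if upper_tail >= alpha / 2:
            lo = p
            break
    # Upper bound: largest p with P(X <= successes | p) >= alpha/2.
    for p in reversed(grid):
        lower_tail = sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
                         for k in range(0, successes + 1))
        if lower_tail >= alpha / 2:
            hi = p
            break
    return lo, hi

# E.g., 2 damaging floods observed in a 50-year record:
lo, hi = binomial_ci(2, 50)
print(f"point estimate 0.04, 95% CI roughly [{lo:.3f}, {hi:.3f}]")
```

The interval spans more than an order of magnitude around the point estimate, which is why the text insists that probability estimates must draw on modeling, theory, and expert judgment rather than observations alone.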
Future Lake Development in Deglaciating Mountain Ranges
Wilfried Haeberli and Fabian Drenkhan
Continued retreat and disappearance of glaciers cause fundamental changes in cold mountain ranges, where new landscapes develop, and the consequences can reach far beyond the still ice-covered areas. A key element is the formation of numerous new lakes where overdeepened parts of glacier beds become exposed. With the first model results of glacier thickness distributed over entire mountain regions, obtained for the Swiss Alps around 2010, the derivation of glacier beds as potential future surface topographies became possible. Since then, climate-, water-, and hazard-related quantitative research on future lakes in deglaciating mountains all over the world has evolved rapidly. Currently growing and potential future open water bodies are part of new environments in marked imbalance. The surrounding steep icy slopes and peaks are affected by glacial debuttressing and permafrost degradation, with an associated long-term reduction in stability. This makes the new lakes potential sources of far-reaching floods or debris flows, and they represent serious multipliers of hazards and risks for down-valley people and their infrastructure. Such hazard and risk aspects are also of primary importance where the lakes potentially connect with hydropower production, freshwater supply, tourism, cultural values, and landscape protection. Planning for sustainable adaptation strategies optimally starts from the anticipation, in space and time, of possible lake formation in glacier-covered areas by numerical modeling combined with analysis of ice-morphological indications. In a second step, hazards and risks related to worst-case scenarios of possible impact and flood waves must be assessed. These results then define the range of possibilities for the use and management of future lakes. Careful weighing of both potential synergies and conflicts is necessary.
In some cases, multipurpose projects may open viable avenues for combining solutions related to technical challenges, safety requirements, funding problems, and societal acceptance. Successful implementation of adaptive projects requires early integration of technical-scientific and local knowledge, including the needs and interests of local users and decision makers, into comprehensive, participatory, and long-term planning. A key question is the handling of risks from extreme events with disastrous damage potential and low but increasing probability of occurrence. As future landscapes and lakes develop rapidly and are of considerable socioeconomic and political interest, they present often difficult and complex situations for which solutions must be found soon. Related transdisciplinary work will need to adequately address the sociocultural, economic, and political aspects.
Hazards, Social Resilience, and Safer Futures
The concepts of hazards and risks began in engineering when scientists were measuring the points at which materials would become sufficiently stressed by the pressures upon them that they would break. These concepts migrated into the environmental sciences to assess risk in the natural terrain, including the risks that human activities posed to the survival of animals (including fish in streams) and plants in the biosphere. From there, they moved to the social sciences, primarily in formal disaster discourses. With the realization that modern societies constantly faced risks cushioned in uncertainties within everyday life, the media popularized the concept of risk and its accoutrements, including mitigation, adaptation, and preventative measures, among the general populace. A crucial manifestation of this is the media’s accounts of the risks of contracting Covid-19 faced by different groups of people and places; the disease burst upon a somnambulant world in December 2019 in Wuhan, China. The World Health Organization (WHO) declared Covid-19 a pandemic on March 11, 2020. Politicians of diverse hues sought to reassure nervous inhabitants that they had followed robust, scientific advice on risks to facilitate “flattening the curve” by spreading the rate of infection in different communities over a longer period to reduce demand for public health services. Definitions of hazard, risk, vulnerability, and resilience evolved as they moved from the physical sciences into everyday life to reassure edgy populations that their social systems, especially the medical ones, could cope with the demands of disasters. While most countries have managed the risk Covid-19 posed to health services, this has been at a price that people found difficult to accept.
Instead, as they reflected upon their experiences of being confronted with the deaths of many loved ones, especially among elders in care homes; adversities foisted upon the disease’s outcomes by existing social inequalities; and loss of associative freedoms, many questioned whether official mitigation strategies were commensurate with apparent risks. The public demanded an end to such inequities and questioned the bases on which politicians made their decisions. They also began to search for certainties in the social responses to risk in the hopes of building better futures as other institutions, schools, and businesses went into lockdown, and social relationships and people’s usual interactions with others ceased. For some, it seemed as if society were crumbling around them, and they wanted a better version of their world to replace the one devastated by Covid-19 (or other disasters). Key to this better version was a safer, fairer, more equitable and reliable future. Responses to the risks within Covid-19 scenarios are similar to responses to other disasters, including earthquakes, volcanic eruptions, wildfires, tsunamis, storms, extreme weather events, and climate change. The claims of “building back better” are examined through a resilience lens to determine whether such demands are realizable, and if not, what hinders their realization. Understanding such issues will facilitate identification of an agenda for future research into mitigation, adaptation, and preventative measures necessary to protect people and the planet Earth from the harm of subsequent disasters.
Human Extinction from Natural Hazard Events
Like any other species, Homo sapiens can potentially go extinct. This risk is an existential risk: a threat to the entire future of the species (and its possible descendants). While anthropogenic risks may contribute the most to total extinction risk, natural hazard events can plausibly cause extinction. Historically, end-of-the-world scenarios have been popular topics in most cultures. In the early modern period, scientific discoveries concerning changes in the sky, meteors, past catastrophes, evolution, and thermodynamics led to the understanding that Homo sapiens was a species among others and vulnerable to extinction. In the 20th century, anthropogenic risks from nuclear war and environmental degradation made extinction risks more salient and an issue of possible policy. Near the end of the century, an interdisciplinary field of existential risk studies emerged. Human extinction requires a global hazard that either destroys the ecological niche of the species or harms enough individuals to reduce the population below a minimum viable size. Long-run fertility trends are highly uncertain and could potentially lead to overpopulation or demographic collapse, both contributors to extinction risk. Astronomical extinction risks include damage to the biosphere from supernova or gamma-ray-burst radiation, major asteroid or comet impacts, and hypothesized physical phenomena such as stable strange matter or vacuum decay. The most likely extinction pathway would be a disturbance that reduces agricultural productivity through ozone loss, low temperatures, or lack of sunlight over a long period. The return time of extinction-level impacts is reasonably well characterized and on the order of millions of years. Geophysical risks include supervolcanism and climate change that affects global food security. Multiyear periods of low or high temperature can impair agriculture enough to stress or threaten the species.
Sufficiently radical environmental changes that lead to direct extinction are unlikely. Pandemics can cause species extinction, although historical human pandemics have killed only a fraction of the species. Extinction risks are amplified by systemic effects, in which multiple risk factors and events conspire to increase vulnerability and eventual damage. Human activity plays an important role in both aggravating and mitigating these effects. Estimates from natural extinction rates in other species suggest an overall risk to the species from natural events smaller than 0.15% per century, and likely orders of magnitude smaller. However, because the current population is unusually numerous and widely dispersed, the actual probability is hard to estimate. The natural extinction risk is also likely dwarfed by the extinction risk from human activities. Many extinction hazards are at present impossible to prevent or even predict, requiring resilience strategies. Many risks share common pathways that are promising targets for mitigation. Endurance mechanisms against extinction may require creating refuges that can survive a disaster and rebuild. Because of the global public-good and transgenerational nature of extinction risks, together with cognitive biases, there is a large undersupply of mitigation effort despite strong arguments that such effort is morally imperative.
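The per-century bound quoted above compounds over longer horizons. Assuming, purely for illustration, a constant and independent per-century probability at the 0.15% upper bound:

```python
p_century = 0.0015  # upper-bound natural extinction risk per century (from the text)

def cumulative_risk(p_per_century, centuries):
    """Probability of at least one extinction-level event over a horizon,
    under the simplifying assumption of a constant, independent rate."""
    return 1 - (1 - p_per_century) ** centuries

for horizon in (10, 100, 1000):
    print(f"{horizon:>5} centuries: {cumulative_risk(p_century, horizon):.3f}")
```

Even a small per-century rate accumulates to a substantial probability over the potentially very long future of the species, which is one reason existential risk arguments put so much weight on long time horizons.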
Measuring Flood Discharge
Marian Muste and Ton Hoitink
With a continuous global increase in flood frequency and intensity, there is an immediate need for new science-based solutions for flood mitigation, resilience, and adaptation that can be quickly deployed in any flood-prone area. An integral part of these solutions is the availability of river discharge measurements delivered in real time with high spatiotemporal density and over large-scale areas. Stream stages and the associated discharges are the most perceivable variables of the water cycle and the ones that eventually determine the levels of hazard during floods. Consequently, the availability of discharge records (a.k.a. streamflows) is paramount for flood-risk management because they provide actionable information for organizing the activities before, during, and after floods, and they supply the data for planning and designing floodplain infrastructure. Moreover, the discharge records represent the ground-truth data for developing and continuously improving the accuracy of the hydrologic models used for forecasting streamflows. Acquiring discharge data for streams is critically important not only for flood forecasting and monitoring but also for many other practical uses, such as monitoring water abstractions for supporting decisions in various socioeconomic activities (from agriculture to industry, transportation, and recreation) and for ensuring healthy ecological flows. All these activities require knowledge of past, current, and future flows in rivers and streams. Given its importance, an ability to measure the flow in channels has preoccupied water users for millennia. Starting with the simplest volumetric methods to estimate flows, the measurement of discharge has evolved through continued innovation to sophisticated methods so that today we can continuously acquire and communicate the data in real time. There is no essential difference between the instruments and methods used to acquire streamflow data during normal conditions versus during floods. 
The measurements during floods are, however, complex, hazardous, and of limited accuracy compared with those acquired during normal flows. The essential differences in the configuration and operation of the instruments and methods for discharge estimation stem from the type of measurements they acquire: discrete, autonomous measurements that can be taken at any time and place, and continuous estimates based on indirect methods developed for fixed locations. Regardless of the measurement situation and approach, the main concern of the data providers for flooding (as well as for other areas of water resource management) is the timely delivery of accurate discharge data at flood-prone locations across river basins.
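The continuous, fixed-location indirect methods mentioned above commonly rest on a stage-discharge rating curve of the form Q = a(h - h0)^b, fitted to a set of discrete gaugings and then used to convert continuously recorded stage into discharge. A minimal sketch, with hypothetical gauging data and an assumed cease-to-flow stage h0:

```python
import math

# Hypothetical stage-discharge gaugings: (stage h in m, discharge Q in m^3/s)
gaugings = [(0.5, 2.1), (1.0, 9.8), (1.5, 24.0), (2.0, 45.5), (3.0, 110.0)]
h0 = 0.2  # assumed stage of zero flow

# Fit Q = a * (h - h0)^b by least squares in log space:
# log Q = log a + b * log(h - h0)
xs = [math.log(h - h0) for h, _ in gaugings]
ys = [math.log(q) for _, q in gaugings]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
     / sum((x - xbar) ** 2 for x in xs))
a = math.exp(ybar - b * xbar)

def discharge(h):
    """Indirect discharge estimate from a measured stage."""
    return a * (h - h0) ** b

print(f"Q = {a:.2f} * (h - {h0})^{b:.2f}; Q(2.5 m) = {discharge(2.5):.0f} m^3/s")
```

The limited flood-time accuracy noted in the text shows up here directly: gaugings are rarely available at the highest stages, so the fitted curve must be extrapolated well beyond the data during major floods.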
Megacity Disaster Risk Governance
James K. Mitchell
Megacity disaster risk governance is a burgeoning interdisciplinary field that seeks to encourage improved public decision-making about the safety and sustainability of the world’s largest urban centers in the face of environmental threats ranging from floods, storms, earthquakes, wildfires, and pandemics to the multihazard challenges posed by human-forced climate change. It is a youthful, lively, contested, ambitious and innovative endeavor that draws on research in three separate but overlapping areas of inquiry: disaster risks, megacities, and governance. Toward the end of the 20th century, each of these fields underwent major shifts in thinking that opened new possibilities for action. First, the human role in disaster risks came to the fore, giving increased attention to humans as agents of risk creation and providing increased scope for inputs from social sciences and humanities. Second, the scale, complexity, and political–economic salience of very large cities attained high visibility, leading to recognition that they are also sites of unprecedented risks, albeit with significant differences between rapidly growing poorer cities and slower growing affluent ones. Third, the concept of public decision-making expanded beyond its traditional association with actions of governments to include contributions from a wide range of nongovernmental groups that had not previously played prominent roles in public affairs. At least three new conceptions of megacity disaster risk governance emerged out of these developments. They include adaptive risk governance, smart city governance, and aesthetic governance. Adaptive risk governance focuses on capacities of at-risk communities to continuously adjust to dynamic uncertainties about future states of biophysical environments and human populations. It is learning-centered, collaborative, and nimble. 
Smart city governance seeks to harness the capabilities of new information and communication technologies, and their associated human institutions, to the increasingly automated tasks of risk anticipation and response. Aesthetic governance privileges the preferences of social, scientific, design, or political elites and power brokers in the formulation and execution of policies that bear on risks. No megacity has yet comprehensively or uniformly adopted any of these risk governance models, but many are experimenting with various permutations and hybrid variations that combine limited applications with more traditional administrative practices. Arrangements that are tailor-made to fit local circumstances are the norm. However, some version of adaptive risk governance seems to be the leading candidate for wider adoption, in large part because it recognizes the need to continuously accommodate new challenges as environments and societies change and interact in ways that are difficult to predict. Although inquiries are buoyant, there remain many unanswered questions and unaddressed topics. These include the differential vulnerability of societal functions that are served by megacities and appropriate responses thereto; the nature and biases of risk information transfers among different types of megacities; and appropriate ways of tackling ambiguities that attend decision-making in megacities. Institutions of megacity disaster risk governance will take time to evolve. Whether that process can be speeded up and applied in time to stave off the worst effects of the risks that lie ahead remains an open question.
Modeling Power Outage Risk From Natural Hazards
Seth Guikema and Roshanak Nateghi
Natural disasters can have significant widespread impacts on society, and they often lead to loss of electric power for a large number of customers in the most heavily impacted areas. In the United States, severe weather and climate events have been the leading cause of major outages (i.e., more than 50,000 customers affected), leading to significant socioeconomic losses. Natural disaster impacts can be modeled and probabilistically predicted prior to the occurrence of the extreme event, although the accuracy of the predictive models will vary across different types of disasters. These predictions can help utilities plan for and respond to extreme weather and climate events, helping them better balance the costs of disaster responses with the need to restore power quickly. This, in turn, helps society recover from natural disasters such as storms, hurricanes, and earthquakes more efficiently. Modern Bayesian methods may provide an avenue to further improve the prediction of extreme event impacts by allowing first-principles structural reliability models to be integrated with field-observed failure data. Climate change and climate nonstationarity pose challenges for natural hazards risk assessment, especially for hydrometeorological hazards such as tropical cyclones and floods, although the link between these types of hazards and climate change remains highly uncertain and the topic of many research efforts. A sensitivity-based approach can be taken to understand the potential impacts of climate change-induced alterations in natural hazards such as hurricanes. This approach gives an estimate of the impacts of different potential changes in hazard characteristics, such as hurricane frequency, intensity, and landfall location, on the power system, should they occur. 
Further research is needed to better understand and probabilistically characterize the relationship between climate change and hurricane intensity, frequency, and landfall location, and to extend the framework to other types of hydroclimatological events. Underlying the reliability of power systems in the United States is a diverse set of regulations, policies, and rules governing electric power system reliability. An overview of these regulations and of the challenges associated with the current U.S. regulatory structure is provided. Specifically, high-impact, low-frequency events such as hurricanes are handled differently in the regulatory structure; there is a lack of consistency between the bulk power system and the distribution system in terms of how their reliability is regulated. Moreover, the definition of reliability used by the North American Electric Reliability Corporation (NERC) is at odds with generally accepted definitions in the broader reliability engineering community. Improvements in the regulatory structure may substantially benefit power system customers, though changes are difficult to realize. Overall, broader implications are raised for modeling other types of natural hazards. Some of the key takeaway messages are the following: (1) the impacts of natural hazards on infrastructure can be modeled with reasonable accuracy given sufficient data and modern risk analysis methods; (2) substantial data exist on the impacts of some types of natural hazards on infrastructure; and (3) appropriate regulatory frameworks are needed to help translate modeling advances and insights into decreased impacts of natural hazards on infrastructure systems.
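The sensitivity-based approach described above can be sketched as a scan over multipliers on hazard characteristics. The baseline values and the cubic wind-damage relation below are illustrative assumptions, not fitted models from the literature.

```python
# Sensitivity-based sketch of climate-change impacts on hurricane-caused outages.
base_frequency = 0.6    # assumed hurricanes per year affecting the service area
base_intensity = 40.0   # assumed mean gust speed at landfall, m/s

def expected_outages(frequency, intensity):
    """Expected customers without power per year; outages are assumed to scale
    linearly with storm frequency and roughly with the cube of wind speed."""
    outages_per_storm = 5_000 * (intensity / base_intensity) ** 3
    return frequency * outages_per_storm

# Scan plausible changes in hazard characteristics, should they occur.
for df in (1.0, 1.2, 1.5):      # frequency multipliers
    for di in (1.0, 1.1):       # intensity multipliers
        out = expected_outages(base_frequency * df, base_intensity * di)
        print(f"freq x{df:.1f}, intensity x{di:.1f}: {out:,.0f} customers/yr")
```

The point of the scan is that no single climate projection is needed: the utility sees the range of outage impacts across plausible hazard changes, which is exactly what the sensitivity framing in the text provides.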
Palaeotsunamis
How big, how often, and where from? This is almost a mantra for researchers trying to understand tsunami hazard and risk. What we do know is that events such as the 2004 Indian Ocean Tsunami (2004 IOT) caught scientists by surprise, largely because there was no “research memory” of past events for that region, and as such, there was no hazard awareness, no planning, no risk assessment, and no disaster risk reduction. Forewarned is forearmed, but to be in that position, we have to be able to understand the evidence left behind by past events—palaeotsunamis—and to have at least some inkling of what generated them. While the 2004 IOT was a devastating wake-up call for science, we need to bear in mind that palaeotsunami research was still in its infancy at the time. What we now see is still a comparatively new discipline that is practiced worldwide, but as the “new kid on the block,” there are still many unknowns. What we do know is that in many cases, there is clear evidence of multiple palaeotsunamis generated by a variety of source mechanisms. There is a suite of proxy data—a toolbox, if you will—that can be used to identify a palaeotsunami deposit in the sedimentary record. Things are never quite as simple as they sound, though, and there are strong divisions within the research community as to whether one can really differentiate between a palaeotsunami and a palaeostorm deposit, and whether proxies as such are the way to go. As the discipline matures, though, many of these issues are being resolved, and indeed we have now arrived at a point where we have the potential to detect “invisible deposits” laid down by palaeotsunamis once they have run out of sediment to lay down as they move inland. As such, we are on the brink of being able to better understand the full extent of inundation by past events, a valuable tool in gauging the magnitude of palaeotsunamis.
Palaeotsunami research is multidisciplinary, and as such, it is a melting pot of different scientific perspectives, which leads to rapid innovations. Basically, whatever is associated with modern events may be reflected in prehistory. Also, palaeotsunamis are often part of a landscape response pushed beyond an environmental threshold from which it will never fully recover, but that leaves indelible markers for us to read. In some cases, we do not even need to find a palaeotsunami deposit to know that one happened.
Permafrost-Related Geohazards in Cold Russian Regions
Permafrost, or perennially frozen ground, and the processes linked to the water phase change in ground-pore media are sources of specific dangers to infrastructure and economic activity in cold mountainous regions. Additionally, conventional natural hazards (such as earthquakes, floods, and landslides) assume special characteristics in permafrost territories. Permafrost hazards are created under two conditions. The first is a location with ice-bounded or water-saturated ground, in which the large amount of ice leads to potentially intensive processes of surface settlement or frost heaving. The second is linked with external, natural, and human-made disturbances that change the heat-exchange conditions. The places where ice-bounded ground meets areas that are subject to effective disturbances are the focus of hazard mapping and risk evaluation. The fundamentals of geohazard evaluation and geohazard mapping in permafrost regions were originally developed by Gunnar Beskow, Vladimir Kudryavtsev, Troy Péwé, Oscar Ferrians, Jerry Brown, and other American, European, and Soviet authors from the 1940s to the 1980s. Modern knowledge of permafrost hazards was significantly enriched by the publication of the Russian book Permafrost Hazards, part of the six-volume series Natural Hazards in Russia (2000). The book describes, analyses, and evaluates permafrost-related hazards and includes methods for their modeling and mapping. Simultaneous work on permafrost hazard evaluation continued in different countries with the active support of the International Permafrost Association. Prominent contributions during the new period of investigation were published by Drozdov, Clarke, Kääb, Pavlov, Koff, and several other thematic groups of researchers. The importance of common international works became evident. The international project RiskNat: A Cross-Border European Project Taking into Account Permafrost-Related Hazards exemplified this new phase of international scientific collaboration. 
The intensive economic development in China presented new challenges for linear transportation routes and hydrologic infrastructures. A study of active fault lines and geological hazards along the Golmud–Lhasa Railway across the Tibetan plateau is a good example of the achievements of Chinese scientists. The method for evaluating permafrost hazards was based on survey data, monitoring data, and modeling results. The survey data reflect the current environmental conditions and are usually shown on a permafrost map. The monitoring data are helpful in understanding the current tendencies of permafrost evolution in different landscapes and regions. The modeling data provide a permafrost forecast that takes climate change and its impact on humans into account. The International Conference on Permafrost in 2016, in Potsdam, Germany, demonstrated the new horizons of conventional and special permafrost mapping in offshore and continental areas. Permafrost hazards concern large and diverse aspects of human life. It is necessary to expand the approach to this problem from geology to also include geography, biology, social sciences, engineering, and other spheres of competence in order to synthesize local and regional information. The relevance of this branch of science grows as climate change and the increasing number of natural disasters are taken into account.
Physical Vulnerability in Earthquake Risk Assessment
Abdelghani Meslem and Dominik H. Lang
In the fields of earthquake engineering and seismic risk reduction, the term “physical vulnerability” defines the component that translates the relationship between seismic shaking intensity, dynamic structural response (physical damage), and cost of repair for a particular class of buildings or infrastructure facilities. The concept of physical vulnerability started with the development of the earthquake damage and loss assessment discipline in the early 1980s, which aimed at predicting the consequences of earthquake shaking for an individual building or a portfolio of buildings. In general, physical vulnerability has become one of the main key components used as model input data by agencies when developing prevention and mitigation actions, code provisions, and guidelines. The same may apply to the insurance and reinsurance industry in developing catastrophe models (also known as CAT models). Since the late 1990s, a blossoming of methodologies and procedures can be observed, ranging from empirical to basic and more advanced analytical approaches, implemented for modeling and measuring physical vulnerability. These methods use approaches that differ in terms of level of complexity, calculation effort (in evaluating the seismic demand-to-structural response and damage analysis), and modeling assumptions adopted in the development process. At this stage, one of the challenges often encountered is that some of these assumptions may strongly affect the reliability and accuracy of the resulting physical vulnerability models in a negative way, hence introducing important uncertainties in estimating and predicting the inherent risk (i.e., estimated damage and losses). 
Other challenges commonly encountered when developing physical vulnerability models are the paucity of exposure information and the lack of knowledge due to either technical or nontechnical problems, such as the lack of inventory data that would allow for accurate building stock modeling, or of economic data that would allow for a better conversion from damage to monetary losses. Hence, these physical vulnerability models will carry different types of intrinsic uncertainties of both aleatory and epistemic character. To come up with appropriate predictions of expected damage and losses for an individual asset (e.g., a building) or a class of assets (e.g., a building typology class, a group of buildings), reliable physical vulnerability models have to be generated considering all these peculiarities and the associated intrinsic uncertainties at each stage of the development process.
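As an illustration of the analytical side of this modeling, a fragility function in the widely used lognormal form gives the probability of a structure reaching or exceeding a damage state at a given shaking intensity. The sketch below is a generic example, not any specific published model; the median capacity and dispersion values are hypothetical.

```python
import math

def fragility(im, theta, beta):
    """Probability of reaching or exceeding a damage state, given an
    intensity measure `im` (e.g., peak ground acceleration in g), using
    the common lognormal form: P = Phi(ln(im / theta) / beta), where
    `theta` is the median capacity and `beta` the lognormal dispersion."""
    z = math.log(im / theta) / beta
    # Standard normal CDF expressed via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical parameters for an illustrative building class:
# median capacity 0.4 g, dispersion 0.6
for pga in (0.1, 0.4, 1.0):
    print(f"PGA = {pga:.1f} g -> P(damage) = {fragility(pga, 0.4, 0.6):.2f}")
```

By construction, the exceedance probability is exactly 0.5 at the median capacity and rises monotonically with shaking intensity; the dispersion `beta` bundles the aleatory and epistemic uncertainty discussed above.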
Remote Sensing and Physical Modeling of Fires, Floods, and Landslides
Mahesh Prakash, James Hilton, Claire Miller, Vincent Lemiale, Raymond Cohen, and Yunze Wang
Remotely sensed data for the observation and analysis of natural hazards are becoming increasingly commonplace and accessible. Furthermore, the accuracy and coverage of such data are rapidly improving. In parallel with this growth are ongoing developments in computational methods to store, process, and analyze these data for a variety of geospatial needs. One such use of geospatial data is as input and calibration for the modeling of natural hazards, such as the spread of wildfires, flooding, tidal inundation, and landslides. Computational models for natural hazards show increasing real-world applicability, and it is only recently that the full potential of using remotely sensed data in these models has begun to be understood and investigated. Some examples of geospatial data required for natural hazard modeling include:
• elevation models derived from RADAR and Light Detection and Ranging (LIDAR) techniques for flooding, landslide, and wildfire spread models;
• accurate vertical datum calculations from geodetic measurements for flooding and tidal inundation models; and
• multispectral imaging techniques to provide land cover information for fuel types in wildfire models or roughness maps for flood inundation studies.
Accurate modeling of such natural hazards allows a qualitative and quantitative estimate of the risks associated with such events. With increasing spatial and temporal resolution, there is also an opportunity to investigate further value-added usage of remotely sensed data in the disaster modeling context. Improving spatial data resolution allows greater fidelity in models, allowing, for example, the impact of fires or flooding on individual households to be determined. Improving temporal data allows short- and long-term trends to be incorporated into models, such as the changing conditions through a fire season or the changing depth and meander of a water channel.
Scaling Theory of Floods for Developing a Physical Basis of Statistical Flood Frequency Relations
Prediction of floods at locations where no streamflow data exist is a global issue because most of the countries involved do not have adequate streamflow records. The United States Geological Survey developed regional flood frequency (RFF) analysis to predict annual peak flow quantiles, for example, the 100-year flood, in ungauged basins. RFF equations are purely statistical characterizations that use historical streamflow records and the concept of “homogeneous regions.” Because limited record lengths constrain the accuracy of flood quantile estimates, a physical basis is required to supplement them. This need is further reinforced by the need to predict potential impacts of a changing hydro-climate system on flood frequencies. A nonlinear geophysical theory of floods, or a scaling theory for short, focused on river basins and abandoned the “homogeneous regions” concept in order to incorporate flood-producing physical processes. Self-similarity in channel networks plays a foundational role in understanding the observed scaling, or power law relations, between peak flows and drainage areas. The scaling theory of floods offers a unified framework to predict floods in rainfall-runoff (RF-RO) events and in annual peak flow quantiles in ungauged basins. 
Theoretical research in the course of time clarified several key ideas: (1) to understand scaling in annual peak flow quantiles in terms of physical processes, it was necessary to consider scaling in individual RF-RO events; (2) a unique partitioning of a drainage basin into hillslopes and channel links is necessary; (3) a continuity equation in terms of link storage and discharge was developed for a link-hillslope pair (to complete the mathematical specification, another equation for a channel link involving storage and discharge can be written that gives the continuity equation in terms of discharge); (4) the self-similarity in channel networks plays a pivotal role in solving the continuity equation, which produces scaling in peak flows as drainage area goes to infinity (scaling is an emergent property that was shown to hold for an idealized case study); (5) a theory of hydraulic geometry in channel networks is summarized; and (6) highlights of a theory of biological diversity in riparian vegetation along a network are given. The first observational study in the Goodwin Creek Experimental Watershed, Mississippi, discovered that the scaling slopes and intercepts vary from one RF-RO event to the next. Subsequently, diagnostic studies of this variability showed that it is a reflection of variability in the flood-producing mechanisms. This led to the development of a model that links the scaling in RF-RO events with the annual peak flow quantiles featured here. Rainfall-runoff models in engineering practice use a variety of techniques to calibrate their parameters using observed streamflow hydrographs. In ungauged basins, streamflow data are not available, and in a changing climate, the reliability of historic data becomes questionable, so calibration of parameters is not a viable option. Recent progress on developing a suitable theoretical framework to test RF-RO model parameterizations without calibration is briefly reviewed. 
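The power-law scaling between peak flows and drainage areas described above takes the form Q_p = alpha * A^theta, and the event-to-event scaling slopes and intercepts can be estimated by ordinary least squares in log-log space. The sketch below uses synthetic data with an illustrative exponent, not observations from any particular watershed.

```python
import math

def fit_power_law(areas, peaks):
    """Least-squares fit of log(Q) = log(alpha) + theta * log(A),
    i.e., the peak-flow scaling relation Q_p = alpha * A^theta.
    Returns the intercept `alpha` and scaling exponent `theta`."""
    xs = [math.log(a) for a in areas]
    ys = [math.log(q) for q in peaks]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Ordinary least-squares slope and intercept in log-log space
    theta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    alpha = math.exp(my - theta * mx)
    return alpha, theta

# Synthetic RF-RO event: peak flows generated from Q = 2.0 * A^0.55
areas = [1, 5, 10, 50, 100, 500]          # drainage areas (km^2)
peaks = [2.0 * a ** 0.55 for a in areas]  # peak discharges (m^3/s)
alpha, theta = fit_power_law(areas, peaks)
print(alpha, theta)  # recovers alpha ≈ 2.0, theta ≈ 0.55
```

Repeating such a fit for successive rainfall-runoff events yields the varying slopes and intercepts noted in the Goodwin Creek study, which the theory then links to flood-producing mechanisms.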
Contributions to generalizing the scaling theory of floods to medium and large river basins spanning different climates are reviewed. Two studies that have focused on understanding floods at the scale of the entire planet Earth are cited. Finally, two case studies on the innovative applications of the scaling framework to practical hydrologic engineering problems are highlighted. They include real-time flood forecasting and the effect of spatially distributed small dams in a river network on real-time flood forecasting.
Understanding the Risk Communication Puzzle for Natural Hazards and Disasters
Emma E. H. Doyle and Julia S. Becker
The study of risk communication has been explored in several diverse contexts, from a range of disciplinary perspectives, including psychology, health, media studies, visualization studies, the public understanding of science, and social science. Such diversity creates a puzzle of recommendations to address the many challenges of communicating risk before, during, and after natural hazard events and disasters. The history and evolution of risk communication across these diverse contexts is reviewed, followed by a discussion of risk communication particular to natural hazards and disasters. Example models of risk communication in the disaster and natural hazard context are outlined, followed by examples of studies into disaster risk communication from Aotearoa New Zealand, and key best practice principles for communicating risk in these contexts. Considerations are also provided on how science and risk communication can work together more effectively in the future in the natural hazard and disaster space. Such considerations include the importance of scientists, risk managers, and officials communicating to meet a diversity of decision-makers’ needs and understanding the evolution of those needs in a crisis across time demands and forecast horizons. To acquire a better understanding of such needs, participatory approaches to risk assessment and communication present the greatest potential in developing risk communication that is useful, useable, and used. Through partnerships forged at the problem formulation stage, risk assessors and communicators can gain an understanding of the science that needs to be developed to meet decision needs, while communities and decision-makers can develop a greater understanding of the limitations of the science and risk assessment, leading to stronger and more trusting relationships. 
It is critically important to evaluate these partnership programs due to the challenges that can arise (such as resourcing and trust), particularly given that risk communication often occurs in an environment subject to power imbalances arising from social structures and the sociopolitical landscape. There is also often not enough attention paid to the evaluation of the risk communication products themselves, which is problematic because what we think is being communicated may unintentionally mislead due to formatting and display choices. By working in partnership with affected communities to develop decision-relevant communication products using evidence-based product design, work can be done toward communicating risk in the most effective, and ethical, way.
Understanding Volcanoes and Volcanic Hazards
Between 50 and 70 volcanoes erupt each year—just a fraction of the 1,000 identified volcanoes that may erupt in the near future. When compared with the catastrophic loss of lives and property resulting from typhoons, earthquakes, and floods, losses from the more infrequent but equally devastating volcanic eruptions are often overlooked. Volcanic events are usually dramatic, but their various effects may occur almost imperceptibly or with horrendous speed and destruction. The intermittent nature of this activity makes it difficult to maintain public awareness of the risks. Assessing volcanic hazards and their risks remains a major challenge for volcanologists. Several generations ago, only a small, international fraternity of volcanologists was involved in the complex and sometimes dangerous business of studying volcanoes. To understand eruptions required extensive fieldwork and analysis of the eruption products—a painstaking process. Consequently, most of the world’s volcanoes had not been studied, and many were not yet even recognized. Volcano research was meagerly supported by some universities and a handful of government-sponsored geological surveys. Despite the threats posed by volcanoes, few volcanological observatories had been established to monitor their activity. Volcanology is now a global venture. Gone are the days when volcanologists were educated or employed chiefly by the industrial nations. Today, volcanologists and geological surveys are located in many nations with active volcanoes. Volcanological meetings, once limited to geologists, geophysicists, and a smattering of meteorologists and disaster planners, have greatly expanded. Initially, it was a hard sell to convince volcanologists that professionals from the “soft sciences” could contribute to the broad discipline of volcanology. 
However, it has become clear that involving decision makers such as urban planners, politicians, and public health professionals with volcanologists is a must when exploring and developing practical, effective volcanic-risk mitigation. Beginning in 1995, the “Cities on Volcanoes” meetings were organized to introduce an integrated approach that would eventually help mitigate the risks of volcanic eruptions. The first conference, held in Rome and Naples, Italy, encompassed a broad spectrum of topics from the fields of volcanology, geographic information systems, public health, remote sensing, risk analysis, civil engineering, sociology and psychology, civil defense, city management, city planning, education, the media, the insurance industry, and infrastructure management. The stated mission of that meeting was to “better evaluate volcanic crisis preparedness and emergency management in cities and densely populated areas.” Since that meeting nearly twenty years ago, Cities on Volcanoes meetings have taken place in New Zealand, Hawaii, Ecuador, Japan, Spain, and Mexico; the 2014 venue was Yogyakarta, Indonesia. The significant and rewarding result of these efforts is a growing connection between basic science and the practical applications needed to better understand the myriad risks as well as the possible hazard mitigation strategies associated with volcanic eruptions. While we pursue this integrated approach, we see advances in the technologies needed to evaluate and monitor volcanoes. It is impossible to visit all the world’s restless volcanoes, let alone establish effective monitoring stations for most of them. However, we can now scrutinize their thermal signatures and local ground deformation with instruments on earth-observing satellites. When precursory activity is detected by remote sensors in an area where a population is at risk, teams can be deployed for ground-based monitoring of that activity. 
In addition, by evaluating a volcano’s past eruption history, scientists can forecast both future activity and the possible risks to inhabitants. Physics-based modeling has led to a better understanding of the types and severity of potential eruption phenomena such as pyroclastic flows, ash eruptions, gaseous discharge, and lava flows. Field observations of changes indicating an imminent eruption are now monitored with geophysical and geochemical instrumentation that is smaller, tougher, and more affordable. Volcanology has evolved into a broader, integrated scientific discipline, but there is much still to be accomplished. The new generation of volcanologists, who have the advantage of knowing the theoretical underpinnings of volcanic activity, can now turn to the allied endeavor of reducing risk—their aspiration for the 21st century.