Like any other species, Homo sapiens could go extinct. This risk is an existential risk: a threat to the entire future of the species (and its possible descendants). While anthropogenic risks may contribute the most to total extinction risk, natural hazard events can also plausibly cause extinction.
Historically, end-of-the-world scenarios have been popular topics in most cultures. In the early modern period, scientific discoveries of changes in the sky, meteors, past catastrophes, evolution, and thermodynamics led to the understanding that Homo sapiens was a species among others and vulnerable to extinction. In the 20th century, anthropogenic risks from nuclear war and environmental degradation made extinction risks more salient and a possible policy issue. Near the end of the century, an interdisciplinary field of existential risk studies emerged.
Human extinction requires a global hazard that either destroys the ecological niche of the species or harms enough individuals to reduce the population below a minimum viable size. Long-run fertility trends are highly uncertain and could lead to either overpopulation or demographic collapse, both of which contribute to extinction risk.
Astronomical extinction risks include damage to the biosphere from supernova or gamma-ray burst radiation, major asteroid or comet impacts, or hypothesized physical phenomena such as stable strange matter or vacuum decay. The most likely extinction pathway would be a disturbance that reduces agricultural productivity through ozone loss, low temperatures, or lack of sunlight over a long period. The return time of extinction-level impacts is reasonably well characterized and on the order of millions of years. Geophysical risks include supervolcanism and climate change that affects global food security. Multiyear periods of low or high temperature can impair agriculture enough to stress or threaten the species. Environmental changes radical enough to cause direct extinction are unlikely. Pandemics can cause species extinction, although historical human pandemics have killed only a fraction of the species.
Extinction risks are amplified by systemic effects, in which multiple risk factors and events combine to increase vulnerability and eventual damage. Human activity plays an important role in both aggravating and mitigating these effects.
Estimates based on natural extinction rates in other species suggest an overall risk to the species from natural events of less than 0.15% per century, and likely orders of magnitude smaller. However, because the current human population is unusually numerous and widely dispersed, the actual probability is hard to estimate. The natural extinction risk is also likely dwarfed by the extinction risk from human activities.
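The logic behind such per-century figures can be sketched as a simple conversion from an assumed typical species lifetime to a per-century extinction probability under a constant-hazard (exponential survival) model. The 1-million-year lifetime below is an illustrative assumption, not a figure from the text.

```python
import math

def per_century_risk(expected_lifetime_years: float) -> float:
    """Probability of extinction in any given century, assuming a
    constant hazard rate of 1 / expected_lifetime_years per year."""
    rate_per_century = 100.0 / expected_lifetime_years
    return 1.0 - math.exp(-rate_per_century)

# Assumed typical mammalian species lifetime of ~1 million years
# implies roughly a 0.01% chance of extinction per century,
# comfortably below the 0.15% upper bound quoted above.
risk = per_century_risk(1_000_000)
```

A constant hazard rate is of course a simplification: the text notes that the current human population is unusually numerous and dispersed, so base rates from other species transfer only loosely.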
Many extinction hazards are at present impossible to prevent or even predict, requiring resilience strategies. Many risks share common pathways that are promising targets for mitigation. Enduring an extinction-level disaster may require refuges that can survive it and rebuild afterward. Because extinction risks are global public goods with a transgenerational character, and because of cognitive biases, mitigation effort is greatly undersupplied despite strong arguments that it is morally imperative.
Marian Muste and Ton Hoitink
With a continuous global increase in flood frequency and intensity, there is an immediate need for new science-based solutions for flood mitigation, resilience, and adaptation that can be quickly deployed in any flood-prone area. An integral part of these solutions is the availability of river discharge measurements delivered in real time with high spatiotemporal density and over large areas. Stream stages and the associated discharges are the most perceivable variables of the water cycle and the ones that ultimately determine the levels of hazard during floods. Consequently, the availability of discharge records (a.k.a. streamflows) is paramount for flood-risk management because they provide actionable information for organizing activities before, during, and after floods, and they supply the data for planning and designing floodplain infrastructure. Moreover, discharge records represent the ground-truth data for developing and continuously improving the accuracy of the hydrologic models used for forecasting streamflows. Acquiring discharge data for streams is critically important not only for flood forecasting and monitoring but also for many other practical uses, such as monitoring water abstractions to support decisions in various socioeconomic activities (from agriculture to industry, transportation, and recreation) and to ensure healthy ecological flows. All these activities require knowledge of past, current, and future flows in rivers and streams.
Given its importance, the ability to measure the flow in channels has preoccupied water users for millennia. Starting with the simplest volumetric methods to estimate flows, the measurement of discharge has evolved through continued innovation into sophisticated methods, so that today we can continuously acquire and communicate the data in real time. There is no essential difference between the instruments and methods used to acquire streamflow data during normal conditions versus during floods. Measurements during floods are, however, complex, hazardous, and of limited accuracy compared with those acquired during normal flows. The essential differences in the configuration and operation of the instruments and methods for discharge estimation stem from the type of measurements they acquire: discrete, autonomous measurements (i.e., measurements that can be taken at any time and in any place) versus continuous ones (i.e., estimates based on indirect methods developed for fixed locations). Regardless of the measurement situation and approach, the main concern of data providers for flooding (as well as for other areas of water resource management) is the timely delivery of accurate discharge data at flood-prone locations across river basins.
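A common indirect, fixed-location method of the kind mentioned above is a stage-discharge "rating curve" of the form Q = a(h - h0)^b, fitted to a handful of direct gaugings and then applied to a continuously recorded stage. The station data and the zero-flow stage h0 below are invented for illustration only.

```python
import numpy as np

# Hypothetical direct gaugings at a fixed station.
stage = np.array([0.8, 1.2, 1.9, 2.6, 3.4])         # stage h (m)
discharge = np.array([3.1, 8.5, 24.0, 52.0, 98.0])  # gauged Q (m^3/s)
h0 = 0.5  # assumed stage of zero flow (m)

# Fit log Q = log a + b * log(h - h0) by least squares.
b, log_a = np.polyfit(np.log(stage - h0), np.log(discharge), 1)
a = np.exp(log_a)

def rated_discharge(h: float) -> float:
    """Estimate discharge from a continuously recorded stage via the
    fitted power-law rating curve Q = a * (h - h0)**b."""
    return a * (h - h0) ** b
```

In practice such curves are periodically re-calibrated, and extrapolation to flood stages well above the gauged range is one source of the limited accuracy during floods that the text mentions.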
Abdelghani Meslem and Dominik H. Lang
In the fields of earthquake engineering and seismic risk reduction, the term “physical vulnerability” defines the component that translates the relationship between seismic shaking intensity, dynamic structural response (physical damage), and cost of repair for a particular class of buildings or infrastructure facilities. The concept of physical vulnerability started with the development of the earthquake damage and loss assessment discipline in the early 1980s, which aimed at predicting the consequences of earthquake shaking for an individual building or a portfolio of buildings. In general, physical vulnerability has become one of the key components used as model input data by agencies when developing prevention and mitigation actions, code provisions, and guidelines. The same may apply to the insurance and reinsurance industry in developing catastrophe models (also known as CAT models).
Since the late 1990s, a blossoming of methodologies and procedures for modelling and measuring physical vulnerability can be observed, ranging from empirical to basic and more advanced analytical methods. These methods use approaches that differ in their level of complexity, calculation effort (in evaluating the seismic demand-to-structural response and damage analysis), and the modelling assumptions adopted in the development process. One challenge often encountered at this stage is that some of these assumptions may strongly degrade the reliability and accuracy of the resulting physical vulnerability models, hence introducing important uncertainties into estimates and predictions of the inherent risk (i.e., estimated damage and losses).
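A building block common to many of the analytical methods above is a fragility function: the probability that a class of buildings reaches a given damage state as a function of a shaking intensity measure (e.g., peak ground acceleration), typically modelled as a lognormal CDF. The median and dispersion values below are hypothetical, not taken from any published model.

```python
import math

def fragility(im: float, median: float, beta: float) -> float:
    """P(damage state reached | intensity measure im), modelled as a
    lognormal CDF with median `median` and log-space dispersion `beta`."""
    z = math.log(im / median) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical "moderate damage" state: median 0.3 g, dispersion 0.5.
# By construction the probability equals 0.5 at the median intensity.
p_moderate = fragility(0.3, median=0.3, beta=0.5)
```

The dispersion beta is where the aleatory and epistemic uncertainties discussed in this section are usually lumped together, which is why poorly justified modelling assumptions propagate directly into the damage and loss estimates.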
Other challenges commonly encountered when developing physical vulnerability models are the paucity of exposure information and gaps in knowledge caused by technical or nontechnical problems, such as missing inventory data that would allow for accurate building-stock modeling, or missing economic data that would allow for a better conversion from damage to monetary losses. Hence, these physical vulnerability models will carry different types of intrinsic uncertainties of both aleatory and epistemic character. To produce appropriate predictions of expected damage and losses for an individual asset (e.g., a building) or a class of assets (e.g., a building typology class, a group of buildings), reliable physical vulnerability models have to be generated with all these peculiarities and the associated intrinsic uncertainties considered at each stage of the development process.