Tsunamis are natural hazards that have caused massive destruction and loss of life in coastal areas worldwide for centuries. Major programs promoting tsunami safety, however, date from the mid-20th century and have received far greater emphasis following two major events in the opening decade of the 21st century: the Indian Ocean Tsunami of December 26, 2004, and the Great East Japan Earthquake and Tsunami of March 11, 2011. In the aftermath of these catastrophic disasters, warning systems and the technologies associated with them have expanded from a concentration in the Pacific Ocean to other regions with significant tsunami vulnerability. Preparedness and hazard mitigation programs, once the province of wealthier nations, are now being shared with developing countries. While warning systems and tsunami mapping and modeling are basic tools in promoting tsunami safety, several other strategies are essential to protecting lives and property in major tsunami events. Preparedness strategies consist of tsunami awareness and education and actions that promote response readiness. These strategies should provide an understanding of how tsunamis occur, where they occur, how to respond to warnings or natural signs that a tsunami may occur, and what locations are safe for evacuation. Hazard mitigation strategies are designed to reduce the likelihood that coastal populations will be impacted by a tsunami, typically through engineered structures or removing communities from known tsunami inundation zones. They include natural or constructed high ground for evacuation, structures for vertical evacuation (either single-purpose structures built specifically for tsunami evacuation or existing buildings that are resistant to tsunami forces), seawalls, breakwaters, forest barriers, and tsunami river gates. Coastal jurisdictions may also use land-use planning ordinances or coastal zoning to restrict development in areas of significant risk of tsunami inundation.
The relative efficacy of these strategies and locations where they have been implemented will be addressed, as will the issues and challenges regarding their implementation.
James Goltz and Katsuya Yamori
Abdelghani Meslem and Dominik H. Lang
In the fields of earthquake engineering and seismic risk reduction, the term “physical vulnerability” defines the component that translates the relationship between seismic shaking intensity, dynamic structural response (physical damage), and cost of repair for a particular class of buildings or infrastructure facilities. The concept of physical vulnerability started with the development of the earthquake damage and loss assessment discipline in the early 1980s, which aimed at predicting the consequences of earthquake shaking for an individual building or a portfolio of buildings. In general, physical vulnerability has become one of the main key components used as model input data by agencies when developing prevention and mitigation actions, code provisions, and guidelines. The same may apply to the insurance and reinsurance industry in developing catastrophe models (also known as CAT models). Since the late 1990s, a blossoming of methodologies and procedures can be observed, ranging from empirical to basic and more advanced analytical methods, implemented for modelling and measuring physical vulnerability. These methods use approaches that differ in terms of level of complexity, calculation effort (in evaluating the seismic demand-to-structural-response and damage analysis), and the modelling assumptions adopted in the development process. At this stage, one challenge that is often encountered is that some of these assumptions may strongly affect the reliability and accuracy of the resulting physical vulnerability models in a negative way, hence introducing important uncertainties in estimating and predicting the inherent risk (i.e., estimated damage and losses).
Other challenges that are commonly encountered when developing physical vulnerability models are the paucity of exposure information and the lack of knowledge due to either technical or nontechnical problems, such as inventory data that would allow for accurate building stock modeling, or economic data that would allow for a better conversion from damage to monetary losses. Hence, these physical vulnerability models will carry different types of intrinsic uncertainties of both aleatory and epistemic character. To come up with appropriate predictions on expected damage and losses of an individual asset (e.g., a building) or a class of assets (e.g., a building typology class, a group of buildings), reliable physical vulnerability models have to be generated considering all these peculiarities and the associated intrinsic uncertainties at each stage of the development process.
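The vulnerability models discussed above are commonly expressed as fragility functions. As an illustrative sketch only (not the authors' own formulation), the widely used lognormal fragility form gives the probability that a building class reaches or exceeds a damage state at a given shaking intensity; the median intensity `theta` and dispersion `beta` below are hypothetical example values:

```python
import math

def lognormal_fragility(im: float, theta: float, beta: float) -> float:
    """Probability of reaching or exceeding a damage state at intensity `im`.

    Standard lognormal fragility form: theta is the median intensity at which
    exceedance probability is 50%, beta is the logarithmic standard deviation
    (larger beta = greater uncertainty, flatter curve).
    """
    z = math.log(im / theta) / beta
    # Standard normal CDF written via erf, so no SciPy dependency is needed.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical class of buildings: median capacity 0.4 g, dispersion 0.6.
p_at_median = lognormal_fragility(im=0.4, theta=0.4, beta=0.6)   # 0.5 by construction
p_stronger = lognormal_fragility(im=0.8, theta=0.4, beta=0.6)    # higher probability
```

The dispersion parameter is where the aleatory and epistemic uncertainties mentioned above are typically absorbed, which is why poorly constrained inventory or economic data propagates directly into the reliability of the resulting loss estimates.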
P. Patrick Leahy
Society expects to have a safe environment in which to live, prosper, and sustain future generations. Generally, when we think of threats to our well-being, we think of human-induced causes such as overexploitation of water resources, contamination, and soil loss, to name just a few. However, natural hazards, which are not easily avoided or controllable (or, in many cases, predictable in the short term), have profound influences on our safety, economic security, social development, and political stability, as well as every individual’s overall well-being. Natural hazards are all related to the processes that drive our planet. Indeed, the Earth would not be a functioning ecosystem without the dynamic processes that shape our planet’s landscapes over geologic time. Natural hazards (or geohazards, as they are sometimes called) include such events as earthquakes, volcanic eruptions, landslides and ground collapse, tsunamis, floods and droughts, geomagnetic storms, and coastal storms. A key aspect of these natural hazards involves understanding and mitigating their impacts, which require that the geoscientist take a four-pronged approach. It must include a fundamental understanding of the processes that cause the hazard, an assessment of the hazard, monitoring to observe any changes in conditions that can be used to determine the status of a potential hazardous event, and perhaps most important, delivery of information to a broader community to evaluate the need for action. A fundamental understanding of processes often requires a research effort that typically is the focus of academic and government researchers. Fundamental questions may include: (a) What triggers an earthquake, and why do some events escalate to a great magnitude while most are small-magnitude events?; (b) What processes are responsible for triggering a landslide?; (c) Can we predict the severity of an impending volcanic eruption? 
(d) Can we predict an impending drought or flood?; (e) Can we determine the height of a storm surge or storm track associated with a coastal storm well in advance of landfall so that the impact can be mitigated? Any effective hazard management system must strive to increase resilience. The only way to gain resiliency is to learn from past events and to decrease risk. Successfully increasing resiliency requires having strong hazard identification programs with adequate monitoring and research components and very robust delivery mechanisms that deliver timely, accurate, and appropriate hazard information to a broad audience that will use the information in a wide variety of ways to meet their specific goals.
Fatalism about natural disasters hinders action to prepare for those disasters, and overcoming this fatalism is one key element in preparing people for them. Research by Bostrom and colleagues shows that failure to act often reflects gaps and misconceptions in citizens' mental models of disasters. Research by McClure and colleagues shows that fatalistic attitudes reflect people's attributing damage to uncontrollable natural causes rather than controllable human actions, such as preparation. Research shows which precise features of risk communications lead people to see damage as preventable and to attribute damage to controllable human actions. Messages that enhance the accuracy of mental models of disasters by including human factors recognized by experts lead to increased preparedness. Effective messages also communicate that major damage in disasters is often distinctive and reflects controllable causes. These messages underpin causal judgments that reduce fatalism and enhance preparation. Many of these messages are not only beneficial but also newsworthy. Messages that are logically equivalent but differently framed have varying effects on risk judgments and preparedness. The causes of harm in disasters are often contested, because they often imply human responsibility for the outcomes and entail significant cost.
Amr Elnashai and Hussam Mahmoud
With the current rapid growth of cities and the move toward the development of both sustainable and resilient infrastructure systems, it is vital for the structural engineering community to continue to improve its knowledge of earthquake engineering to limit infrastructure damage and the associated social and economic impacts. Historically, the development of such knowledge has been accomplished through the deployment of analytical simulations and experimental testing. Experimental testing is considered the most accurate tool by which local behavior of components or global response of systems can be assessed, assuming the test setup is realistically configured and the experiment is effectively executed. However, issues of scale, equipment capacity, and availability of research funding continue to hinder full-scale testing of complete structures. On the other hand, analytical simulation software is limited to solving specific types of problems and in many cases fails to capture complex behaviors, failure modes, and collapse of structural systems. Hybrid simulation has emerged as a potentially accurate and efficient tool for the evaluation of the response of large and complex structures under earthquake loading. In hybrid (experiment-analysis) simulation, part of a structural system is experimentally represented while the rest of the structure is numerically modeled. Typically, the most critical component is physically represented. By combining a physical specimen and a numerical model, the system-level behavior can be better quantified than by modeling the entire system purely analytically or testing only a component. This article discusses the use of hybrid simulation as an effective tool for the seismic evaluation of structures. First, a chronological account of the development of hybrid simulation is presented, with an overview of some previously conducted studies. Second, an overview of a hybrid simulation environment is provided.
Finally, a hybrid simulation application example on the response of steel frames with semi-rigid connections under earthquake excitations is presented. The simulations included a full-scale physical specimen for the experimental module of a connection, and a 2D finite element model for the analytical module. It is demonstrated that hybrid simulation is a powerful tool for advanced assessment when used with appropriate analytical and experimental realizations of the components and that semi-rigid frames are a viable option in earthquake engineering applications.
The immediate aftermath of a great urban earthquake is a dramatic and terrible event, comparable to a massive terrorist attack. Yet the shocking impact soon fades from the public mind and receives surprisingly little attention from historians, unlike wars and human atrocities. In 1923, the Great Kanto earthquake and its subsequent fires demolished most of Tokyo and Yokohama and killed around 140,000 Japanese: a level of devastation and fatalities comparable with the atomic bombing of Hiroshima and Nagasaki in 1945. But the second event has infinitely more resonance in public consciousness and historical studies than the first. Indeed, most people would be challenged to name a single earthquake with an indisputable historical impact, including even the most famous of all earthquakes: the San Francisco earthquake and fire of 1906. In truth, however, great earthquakes, from ancient times—as recorded by Greek and biblical writers—to the present day, have had major cultural, economic, and political consequences—often a combination of all three—some of which were beneficial. Thus, the current prime minister of India owes his election in 2014 to an earthquake that devastated part of his home state of Gujarat in 2001, which led to its striking economic growth. The martial law imposed on Tokyo and Yokohama after the 1923 earthquake gave new authority to the Japanese army, which eventually took over the Japanese government and led Japan to war with China and the world. The destruction of San Francisco in 1906 produced a boom in rebuilding and financial and technological development of the surrounding area on the San Andreas Fault, including what became Silicon Valley. A great earthquake in Venezuela in 1812 was the principal cause of the temporary defeat of its leader Simon Bolivar by the Spanish colonial regime, but his subsequent exile led to his permanent freeing of Bolivia, Colombia, Ecuador, Peru, and Venezuela from Spanish rule. 
The catastrophic Lisbon earthquake of 1755—as well known in the early 19th century as the 1945 atomic bombings are today—was a pivotal factor in the freeing of Enlightenment science from Catholic religious orthodoxy, as epitomized by Voltaire’s satirical novel Candide, written in response to the earthquake. Even the minor earthquakes in Britain in 1750, the so-called Year of Earthquakes, produced the earliest scientific understanding of earthquakes, published by the Royal Society: the beginning of seismology. The long-term impact of a great earthquake depends on its epicenter, magnitude, and timing—and also on human factors: the political, social, intellectual, religious, and cultural resources specific to a region’s history. Each earthquake-struck society offers its own particular lesson, and yet, taken together, such earth-shattering events have important shared consequences for the history of the world.
Earthquakes involve sudden shear sliding motion between large rock masses across internal contact surfaces called faults. The slip on the fault releases strain energy previously stored in the surrounding rock that accumulated due to frictional resistance to sliding. Most earthquakes are directly caused by plate tectonics and locate in the cool, brittle rock near Earth's surface. Events with seismic magnitude measured 8.0 or greater are called great earthquakes and involve slip of several to tens of meters across faults with lengths from 100 to more than 1,000 kilometers. These huge ruptures tend to occur on or near plate boundaries; the largest are on shallow-dipping plate boundary faults (megathrusts) found in compressional regions called subduction zones, where one tectonic plate is thrusting under another. Some great earthquakes occur within bending or detaching plates as they deform seaward of or below a subduction zone. Yet others occur on plate boundary strike-slip faults where two plates are shearing horizontally past one another, or within deforming plate interiors. Elastic wave energy released during the fault sliding is recorded and studied by seismologists to determine the fault location, orientation and sense of sliding motion, amount of radiated elastic wave energy, and distribution of slip on the fault during the event (co-seismic slip). Geodetic methods measure elastic strain accumulation prior to an earthquake, co-seismic slip, and afterslip on the fault that occurs without earthquakes, along with viscous deformation of the mantle as it responds to the fault offset. Great earthquakes commonly locate under the ocean, and the sudden motion of the seafloor generates tsunami—gravitational water waves that can be recorded with ocean floor pressure sensors (these waves are also used to determine co-seismic slip). As seismic, geodetic,
and tsunami modeling methods have progressed over the past 50 years, our understanding of great earthquake rupture processes and earthquake interactions has advanced steadily in the context of plate tectonics and improved understanding of rock friction. All faults have heterogeneous frictional properties inferred from non-uniform sliding during each event, with areas of large slip instabilities called asperities having slip-velocity weakening friction and other areas having slip-velocity strengthening friction that results in stable sliding. The seismic wave shaking and tsunami waves can cause great devastation for humanity, so efforts are made to anticipate future earthquake hazards. As plate tectonics steadily move Earth’s plates, elastic strain around plate boundary faults accumulates and releases in a repeated stick-slip sliding process that causes a limited degree of regularity of faulting. Given the history of prior earthquakes on a given fault, we can identify seismic gaps where future slip events are likely to occur. With geodesy we can also now measure locations of accumulating slip deficit relative to plate motions, as well as variation in seismic coupling, which characterizes the fraction of plate motion accounted for by earthquake failure.
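The notion of seismic coupling described above, the fraction of plate motion accounted for by earthquake failure, can be sketched as a simple ratio. The function and the rates below are illustrative assumptions, not values from the article:

```python
def seismic_coupling(seismic_slip_rate_mm_yr: float,
                     plate_rate_mm_yr: float) -> float:
    """Fraction of the long-term plate motion released by earthquakes.

    A coupling of 1.0 means the fault is fully locked between events and all
    plate motion is accommodated seismically; 0.0 means the fault creeps
    stably and accumulates no slip deficit. Illustrative sketch only.
    """
    return seismic_slip_rate_mm_yr / plate_rate_mm_yr


# Hypothetical subduction segment: plates converge at 80 mm/yr, but the
# earthquake history accounts for only 32 mm/yr of slip on the megathrust.
chi = seismic_coupling(seismic_slip_rate_mm_yr=32.0, plate_rate_mm_yr=80.0)
slip_deficit_rate = 80.0 * (1.0 - chi)  # mm/yr accumulating toward future events
```

On this reading, the slip-deficit rate measured geodetically is the complement of the coupling fraction, which is why geodesy can flag where strain is accumulating toward a future great earthquake.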
H. P. Gülkan
The current outlook in disaster risk management in Turkey is examined in its historical context in this article. Policies, legislation, and specific responsive actions culminated in 2009 in the formation of a nationwide Disaster and Emergency Management Authority (“Afet ve Acil Durum Yönetimi Başkanlığı,” or AFAD, in Turkish) that reports directly to the prime minister. Earthquakes are the principal drivers of disaster management in Turkey. An assessment of the system in effect in Turkey from a management science viewpoint is summarized. The chronological description of the Turkish system is linked to major disaster occurrences and consequent legislative changes.