Tsunamis are natural hazards that have caused massive destruction and loss of life in coastal areas worldwide for centuries. Major programs promoting tsunami safety, however, date from the mid-20th century and have received far greater emphasis following two major events in the opening decade of the 21st century: the Indian Ocean Tsunami of December 26, 2004, and the Great East Japan Earthquake and Tsunami of March 11, 2011. In the aftermath of these catastrophic disasters, warning systems and the technologies associated with them have expanded from a concentration in the Pacific Ocean to other regions with significant tsunami vulnerability. Preparedness and hazard mitigation programs, once the province of wealthier nations, are now being shared with developing countries. While warning systems and tsunami mapping and modeling are basic tools in promoting tsunami safety, a number of other strategies are essential in protecting lives and property in major tsunami events. Preparedness strategies consist of tsunami awareness and education, along with actions that promote response readiness. These strategies should provide an understanding of how tsunamis occur, where they occur, how to respond to warnings or natural signs that a tsunami may occur, and what locations are safe for evacuation. Hazard mitigation strategies are designed to reduce the likelihood that coastal populations will be impacted by a tsunami, typically through engineered structures or the removal of communities from known tsunami inundation zones. They include natural or constructed high ground for evacuation, structures for vertical evacuation (either single-purpose structures built specifically for tsunami evacuation or existing buildings that are resistant to tsunami forces), seawalls, breakwaters, forest barriers, and tsunami river gates. Coastal jurisdictions may also use land-use planning ordinances or coastal zoning to restrict development in areas at significant risk of tsunami inundation.
The relative efficacy of these strategies and locations where they have been implemented will be addressed, as will the issues and challenges regarding their implementation.
James Goltz and Katsuya Yamori
Amr Elnashai and Hussam Mahmoud
With the current rapid growth of cities and the move toward the development of both sustainable and resilient infrastructure systems, it is vital for the structural engineering community to continue to improve its knowledge of earthquake engineering to limit infrastructure damage and the associated social and economic impacts. Historically, the development of such knowledge has been accomplished through the deployment of analytical simulations and experimental testing. Experimental testing is considered the most accurate tool by which the local behavior of components or the global response of systems can be assessed, assuming the test setup is realistically configured and the experiment is effectively executed. However, issues of scale, equipment capacity, and availability of research funding continue to hinder full-scale testing of complete structures. On the other hand, analytical simulation software is limited to solving specific types of problems and in many cases fails to capture complex behaviors, failure modes, and collapse of structural systems. Hybrid simulation has emerged as a potentially accurate and efficient tool for evaluating the response of large and complex structures under earthquake loading. In hybrid (experiment-analysis) simulation, part of a structural system is represented experimentally while the rest of the structure is modeled numerically; typically, the most critical component is represented physically. By combining a physical specimen and a numerical model, system-level behavior can be quantified better than by modeling the entire system purely analytically or by testing only a component. This article discusses the use of hybrid simulation as an effective tool for the seismic evaluation of structures. First, a chronicle of the development of hybrid simulation is presented, with an overview of some previously conducted studies. Second, an overview of a hybrid simulation environment is provided.
Finally, a hybrid simulation application example on the response of steel frames with semi-rigid connections under earthquake excitations is presented. The simulation combined a full-scale physical specimen of a connection as the experimental module with a 2D finite element model as the analytical module. It is demonstrated that hybrid simulation is a powerful tool for advanced assessment when used with appropriate analytical and experimental realizations of the components, and that semi-rigid frames are a viable option in earthquake engineering applications.
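The core idea of the hybrid loop described above can be illustrated with a minimal sketch: at each time step, the integrator imposes a trial displacement, the restoring force is assembled from an "experimental" substructure (here a stand-in function; in a real test, actuators and load cells) plus a numerical substructure, and the equation of motion is advanced. All names and numbers below are illustrative assumptions, not the authors' test setup.

```python
import numpy as np

def experimental_substructure(disp):
    """Stand-in for the physical specimen (e.g., a connection).
    In a real hybrid test, this displacement would be imposed by actuators
    and the restoring force read from load cells."""
    k_exp, fy = 2.0e6, 5.0e4                 # assumed stiffness (N/m), yield force (N)
    return np.clip(k_exp * disp, -fy, fy)    # simple elastic-perfectly-plastic response

def numerical_substructure(disp):
    """Analytical module: linear-elastic remainder of the frame (assumed)."""
    k_num = 8.0e6                            # assumed stiffness (N/m)
    return k_num * disp

# Explicit central-difference stepping, common in pseudo-dynamic/hybrid testing.
m, dt = 1.0e4, 0.005                         # mass (kg), time step (s)
ag = np.sin(2 * np.pi * 1.0 * np.arange(0, 2, dt))   # toy 1 Hz ground acceleration (m/s^2)
u = np.zeros(len(ag) + 1)                    # displacement history
for i in range(1, len(ag)):
    # total restoring force = measured (experimental) + computed (numerical)
    r = experimental_substructure(u[i]) + numerical_substructure(u[i])
    # equation of motion m*u'' = -m*ag - r (damping omitted for brevity)
    u[i + 1] = 2 * u[i] - u[i - 1] + dt**2 * (-ag[i] - r / m)

print(f"peak displacement: {np.abs(u).max():.3e} m")
```

The key design point is that the integrator never needs a constitutive model for the tested component: it only exchanges displacements and forces with it, which is what lets the physical specimen supply the hard-to-model behavior.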
Abdelghani Meslem and Dominik H. Lang
In the fields of earthquake engineering and seismic risk reduction the term “physical vulnerability” defines the component that translates the relationship between seismic shaking intensity, dynamic structural response (physical damage), and cost of repair for a particular class of buildings or infrastructure facilities. The concept of physical vulnerability started with the development of the earthquake damage and loss assessment discipline in the early 1980s, which aimed at predicting the consequences of earthquake shaking for an individual building or a portfolio of buildings. In general, physical vulnerability has become one of the main key components used as model input data by agencies when developing prevention and mitigation actions, code provisions, and guidelines. The same may apply to the insurance and reinsurance industry in developing catastrophe models (also known as CAT models). Since the late 1990s, a blossoming of methodologies and procedures can be observed, ranging from empirical to basic and more advanced analytical approaches, implemented for modelling and measuring physical vulnerability. These methods use approaches that differ in terms of level of complexity, calculation effort (in evaluating the seismic demand-to-structural response and damage analysis), and the modelling assumptions adopted in the development process. One challenge often encountered at this stage is that some of these assumptions may strongly degrade the reliability and accuracy of the resulting physical vulnerability models, hence introducing important uncertainties in estimating and predicting the inherent risk (i.e., estimated damage and losses).
Other challenges commonly encountered when developing physical vulnerability models are the paucity of exposure information and the lack of knowledge due to either technical or nontechnical problems, such as missing inventory data that would allow for accurate building stock modeling, or missing economic data that would allow for a better conversion from damage to monetary losses. Hence, these physical vulnerability models will carry different types of intrinsic uncertainties of both aleatory and epistemic character. To arrive at sound predictions of the expected damage and losses of an individual asset (e.g., a building) or a class of assets (e.g., a building typology class, a group of buildings), reliable physical vulnerability models have to be generated considering all these peculiarities and the associated intrinsic uncertainties at each stage of the development process.
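The chain from shaking intensity to structural damage to repair cost described above is commonly encoded as fragility curves combined with damage-to-loss ratios. The sketch below uses lognormal fragility functions, a standard form in this field; the medians, dispersions, and loss ratios are illustrative assumptions, not calibrated values for any building class.

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Lognormal fragility parameters per damage state: (median PGA in g, dispersion).
# Values are hypothetical, for illustration only.
fragility = {"slight": (0.15, 0.6), "moderate": (0.30, 0.6),
             "extensive": (0.60, 0.6), "complete": (1.00, 0.6)}
# Assumed damage-to-loss ratios (fraction of replacement cost per damage state).
loss_ratio = {"slight": 0.05, "moderate": 0.25, "extensive": 0.60, "complete": 1.0}

def p_exceed(im, theta, beta):
    """P(damage state reached or exceeded | intensity measure im)."""
    return phi(math.log(im / theta) / beta)

def expected_loss_ratio(im):
    """Mean damage ratio at intensity im: probability of being in each
    damage state (successive differences of exceedance curves) times its loss ratio."""
    states = list(fragility)
    pe = [p_exceed(im, *fragility[s]) for s in states]
    p_in = [pe[i] - (pe[i + 1] if i + 1 < len(pe) else 0.0) for i in range(len(pe))]
    return sum(p * loss_ratio[s] for p, s in zip(p_in, states))

for pga in (0.1, 0.3, 0.6):
    print(f"PGA {pga:.1f} g -> mean damage ratio {expected_loss_ratio(pga):.2f}")
```

The uncertainties discussed in the abstract enter exactly here: the dispersion term carries aleatory record-to-record variability, while the choice of medians and loss ratios carries the epistemic (modeling and data) uncertainty.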
The immediate aftermath of a great urban earthquake is a dramatic and terrible event, comparable to a massive terrorist attack. Yet the shocking impact soon fades from the public mind and receives surprisingly little attention from historians, unlike wars and human atrocities. In 1923, the Great Kanto earthquake and its subsequent fires demolished most of Tokyo and Yokohama and killed around 140,000 Japanese: a level of devastation and fatalities comparable with the atomic bombing of Hiroshima and Nagasaki in 1945. But the second event has infinitely more resonance in public consciousness and historical studies than the first. Indeed, most people would be challenged to name a single earthquake with an indisputable historical impact, including even the most famous of all earthquakes: the San Francisco earthquake and fire of 1906. In truth, however, great earthquakes, from ancient times—as recorded by Greek and biblical writers—to the present day, have had major cultural, economic, and political consequences—often a combination of all three—some of which were beneficial. Thus, the current prime minister of India owes his election in 2014 to an earthquake that devastated part of his home state of Gujarat in 2001, which led to its striking economic growth. The martial law imposed on Tokyo and Yokohama after the 1923 earthquake gave new authority to the Japanese army, which eventually took over the Japanese government and led Japan to war with China and the world. The destruction of San Francisco in 1906 produced a boom in rebuilding and financial and technological development of the surrounding area on the San Andreas Fault, including what became Silicon Valley. A great earthquake in Venezuela in 1812 was the principal cause of the temporary defeat of its leader Simon Bolivar by the Spanish colonial regime, but his subsequent exile led to his permanent freeing of Bolivia, Colombia, Ecuador, Peru, and Venezuela from Spanish rule. 
The catastrophic Lisbon earthquake of 1755—as well known in the early 19th century as the 1945 atomic bombings are today—was a pivotal factor in the freeing of Enlightenment science from Catholic religious orthodoxy, as epitomized by Voltaire’s satirical novel Candide, written in response to the earthquake. Even the minor earthquakes in Britain in 1750, the so-called Year of Earthquakes, produced the earliest scientific understanding of earthquakes, published by the Royal Society: the beginning of seismology. The long-term impact of a great earthquake depends on its epicenter, magnitude, and timing—and also on human factors: the political, social, intellectual, religious, and cultural resources specific to a region’s history. Each earthquake-struck society offers its own particular lesson, and yet, taken together, such earth-shattering events have important shared consequences for the history of the world.
Earthquakes involve sudden shear sliding motion between large rock masses across internal contact surfaces called faults. The slip on the fault releases strain energy previously stored in the surrounding rock, which accumulated due to frictional resistance to sliding. Most earthquakes are directly caused by plate tectonics and locate in the cool, brittle rock near Earth’s surface. Events with seismic magnitude 8.0 or greater are called great earthquakes and involve slip of several to tens of meters across faults with lengths from 100 to more than 1,000 kilometers. These huge ruptures tend to occur on or near plate boundaries; the largest are on shallow-dipping plate boundary faults (megathrusts) found in compressional regions called subduction zones, where one tectonic plate is thrusting under another. Some great earthquakes occur within bending or detaching plates as they deform seaward of or below a subduction zone. Yet others occur on plate boundary strike-slip faults, where two plates are shearing horizontally past one another, or within deforming plate interiors. Elastic wave energy released during the fault sliding is recorded and studied by seismologists to determine the fault location, orientation, and sense of sliding motion; the amount of radiated elastic wave energy; and the distribution of slip on the fault during the event (co-seismic slip). Geodetic methods measure elastic strain accumulation prior to an earthquake, co-seismic slip, and afterslip on the fault that occurs without earthquakes, along with viscous deformation of the mantle as it responds to the fault offset. Great earthquakes commonly locate under the ocean, and the sudden motion of the seafloor generates tsunami—gravitational water waves that can be recorded with ocean floor pressure sensors (these waves are also used to determine co-seismic slip). As seismic, geodetic,
and tsunami modeling methods have progressed over the past 50 years, our understanding of great earthquake rupture processes and earthquake interactions has advanced steadily in the context of plate tectonics and improved understanding of rock friction. All faults have heterogeneous frictional properties inferred from non-uniform sliding during each event, with areas of large slip instabilities called asperities having slip-velocity weakening friction and other areas having slip-velocity strengthening friction that results in stable sliding. The seismic wave shaking and tsunami waves can cause great devastation for humanity, so efforts are made to anticipate future earthquake hazards. As plate tectonics steadily move Earth’s plates, elastic strain around plate boundary faults accumulates and releases in a repeated stick-slip sliding process that causes a limited degree of regularity of faulting. Given the history of prior earthquakes on a given fault, we can identify seismic gaps where future slip events are likely to occur. With geodesy we can also now measure locations of accumulating slip deficit relative to plate motions, as well as variation in seismic coupling, which characterizes the fraction of plate motion accounted for by earthquake failure.
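The quantities discussed above (magnitude, slip, fault dimensions, and seismic coupling) are linked by simple arithmetic: the seismic moment M0 = mu * A * D, the standard moment-magnitude relation Mw = (2/3)(log10 M0 - 9.1) for M0 in N·m, and coupling as the fraction of plate motion released seismically. The rupture dimensions, slip, plate rate, and recurrence interval below are illustrative numbers, not values for a specific event.

```python
import math

# Seismic moment of a hypothetical megathrust rupture: M0 = mu * A * D.
mu = 3.0e10            # rock rigidity, Pa (~30 GPa, typical value)
length = 500e3         # rupture length along strike, m (assumed)
width = 150e3          # rupture width down dip, m (assumed)
slip = 10.0            # average co-seismic slip, m (assumed)

M0 = mu * (length * width) * slip          # seismic moment, N*m
Mw = (2.0 / 3.0) * (math.log10(M0) - 9.1)  # standard moment-magnitude relation
print(f"M0 = {M0:.2e} N*m, Mw = {Mw:.1f}")  # -> Mw ~ 8.8, i.e., a great earthquake

# Seismic coupling: average co-seismic slip per event divided by the plate
# motion accumulated over one assumed recurrence interval.
plate_rate = 0.08      # plate convergence, m/yr (8 cm/yr, assumed)
recurrence = 300.0     # years between great events on this segment (assumed)
coupling = slip / (plate_rate * recurrence)
print(f"seismic coupling ~ {coupling:.2f}")  # fraction of plate motion released seismically
```

Because Mw depends on the logarithm of M0, doubling the slip or the rupture area raises the magnitude by only about 0.2 units, which is why great earthquakes require both very long faults and meters of slip.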