Among the factors that affect the climate, few are as diverse and challenging to understand as aerosols. Minute particles suspended in the atmosphere, aerosols are emitted through a wide range of natural and industrial processes, and are transported around the globe by winds and weather. Once airborne, they affect the climate both directly, through scattering and absorption of solar radiation, and indirectly, through their impact on cloud properties. Combining all their effects, anthropogenic changes to aerosol concentrations are estimated to have had a climate impact over the industrial era that is second only to CO2. Their atmospheric lifetime of only a few days, however, makes their climate effects substantially different from those of well-mixed greenhouse gases.
Major aerosol types include sea salt, dust, sulfate compounds, and black carbon—or soot—from incomplete combustion. Of these, most scatter incoming sunlight back to space, and thus mainly cool the climate. Black carbon, however, absorbs sunlight, and therefore acts as a heating agent much like a greenhouse gas. Furthermore, aerosols can act as cloud condensation nuclei, causing clouds to become whiter—and thus more reflective—further cooling the surface. Black carbon is again a special case, acting to change the stability of the atmosphere through local heating of the upper air, and also changing the albedo of the surface when it is deposited on snow and ice, for example.
The wide range of climate interactions that aerosols have, and the fact that their distribution depends on the weather at the time and location of emission, lead to large uncertainties in the scientific assessment of their impact. This in turn leads to uncertainties in our present understanding of the climate sensitivity, because while aerosols have predominantly acted to oppose 20th-century global warming by greenhouse gases, the magnitude of aerosol effects on climate is highly uncertain.
Finally, aerosols are important for large-scale climate events such as major volcanoes, or the threat of nuclear winter. The relative ease with which they can be produced and distributed has led to suggestions for using targeted aerosol emissions to counteract global warming—so-called climate engineering.
Article
María Laura Bettolli
Global climate models (GCMs) are fundamental tools for weather forecasting and climate predictions at different time scales, from intraseasonal prediction to climate change projections. Their design allows GCMs to simulate the global climate adequately, but they are not able to skillfully simulate local/regional climates. Consequently, downscaling and bias correction methods are increasingly needed and applied for generating useful local and regional climate information from the coarse GCM resolution.
Empirical-statistical downscaling (ESD) methods generate climate information at the local scale, or with a greater resolution than that achieved by GCMs, by means of empirical or statistical relationships between large-scale atmospheric variables and the local observed climate. As a counterpart approach, dynamical downscaling is based on regional climate models that simulate regional climate processes with a greater spatial resolution, using GCM fields as initial or boundary conditions.
Various ESD methods can be classified according to different criteria, depending on their approach, implementation, and application. In general terms, ESD methods can be categorized into subgroups that include transfer functions or regression models (either linear or nonlinear), weather generators, and weather typing methods and analogs. Although these methods can be grouped into different categories, they can also be combined to generate more sophisticated downscaling methods. In the last group, weather typing and analogs, the methods relate the occurrence of particular weather classes to local and regional weather conditions. In particular, the analog method is based on finding atmospheric states in the historical record that are similar to the atmospheric state on a given target day. Then, the corresponding historical local weather conditions are used to estimate local weather conditions on the target day.
The analog method is a relatively simple technique that has been extensively used as a benchmark method in statistical downscaling applications. Of easy construction and applicability to any predictand variable, it has shown to perform as well as other more sophisticated methods. These attributes have inspired its application in diverse studies around the world that explore its ability to simulate different characteristics of regional climates.
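A minimal sketch of the analog step described above is given below; the array names, the Euclidean distance metric, and the averaging over the closest days are illustrative choices rather than the configuration of any particular study.

```python
import numpy as np

def analog_downscale(target_field, archive_fields, archive_local_obs, n_analogs=5):
    """Estimate local weather on a target day with the analog method.

    target_field      : 1-D array of large-scale predictors for the target day
                        (e.g., a flattened geopotential-height field).
    archive_fields    : 2-D array (n_days, n_gridpoints) of historical predictors.
    archive_local_obs : 1-D array (n_days,) of local observations (e.g., station
                        precipitation) paired with each historical day.
    n_analogs         : number of closest historical days to use.
    """
    # Similarity between the target day and every day in the historical archive
    # (Euclidean distance here; other metrics are common in practice).
    distances = np.linalg.norm(archive_fields - target_field, axis=1)

    # Indices of the most similar historical atmospheric states (the analogs).
    best = np.argsort(distances)[:n_analogs]

    # The local weather observed on those analog days provides the estimate.
    return archive_local_obs[best].mean()

# Illustrative usage with synthetic data.
rng = np.random.default_rng(0)
archive = rng.standard_normal((3650, 200))      # ~10 years of daily predictor fields
local_precip = rng.gamma(2.0, 1.5, size=3650)   # paired local station records
today = rng.standard_normal(200)
print(analog_downscale(today, archive, local_precip))
```

Averaging over several analogs is one possible design choice; using only the single closest day, or resampling among the closest days, retains more day-to-day variability.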
Article
Stefano Tibaldi and Franco Molteni
The atmospheric circulation in the mid-latitudes of both hemispheres is usually dominated by westerly winds and by planetary-scale and shorter-scale synoptic waves, moving mostly from west to east. A remarkable and frequent exception to this “usual” behavior is atmospheric blocking. Blocking occurs when the usual zonal flow is hindered by the establishment of a large-amplitude, quasi-stationary, high-pressure meridional circulation structure which “blocks” the flow of the westerlies and the progression of the atmospheric waves and disturbances embedded in them. Such blocking structures can have lifetimes varying from a few days to several weeks in the most extreme cases. Their presence can strongly affect the weather of large portions of the mid-latitudes, leading to the establishment of anomalous meteorological conditions. These can take the form of strong precipitation episodes or persistent anticyclonic regimes, leading in turn to floods, extreme cold spells, heat waves, or short-lived droughts. Even air quality can be strongly influenced by the establishment of atmospheric blocking, with episodes of high concentrations of low-level ozone in summer and of particulate matter and other air pollutants in winter, particularly in highly populated urban areas.
Atmospheric blocking has the tendency to occur more often in winter and in certain longitudinal quadrants, notably the Euro-Atlantic and the Pacific sectors of the Northern Hemisphere. In the Southern Hemisphere, blocking episodes are generally less frequent, and the longitudinal localization is less pronounced than in the Northern Hemisphere.
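A widely used way to make this definition objective is to test for a reversal of the meridional gradient of mid-tropospheric geopotential height at mid-latitudes. The sketch below is an illustrative one-dimensional index of this type applied to a single daily 500-hPa height field; the reference latitudes and the gradient threshold are typical illustrative values rather than the criteria of any specific study.

```python
import numpy as np

def blocked_longitudes(z500, lats, lat_n=78.75, lat_0=60.0, lat_s=41.25,
                       ghgn_thresh=-10.0):
    """Flag longitudes where the meridional Z500 gradient is reversed.

    z500 : 2-D array (n_lat, n_lon), 500-hPa geopotential height [m] for one day.
    lats : 1-D array of grid latitudes [degrees north].
    The reference latitudes and the northern-gradient threshold
    (in m per degree of latitude) are illustrative.
    """
    def height_at(lat):
        # Row of the height field at the grid latitude closest to `lat`.
        return z500[np.argmin(np.abs(lats - lat)), :]

    z_n, z_0, z_s = height_at(lat_n), height_at(lat_0), height_at(lat_s)

    # Southern gradient: positive when the usual poleward height decrease is
    # reversed, i.e., easterlies replace the westerlies south of the block.
    ghgs = (z_0 - z_s) / (lat_0 - lat_s)
    # Northern gradient: strongly negative when the westerlies are displaced poleward.
    ghgn = (z_n - z_0) / (lat_n - lat_0)

    return (ghgs > 0.0) & (ghgn < ghgn_thresh)

# Illustrative usage with a zonally uniform synthetic field (no blocking present).
lats = np.linspace(20.0, 90.0, 29)
lons = np.arange(0.0, 360.0, 2.5)
z500 = 5500.0 + 100.0 * np.cos(np.deg2rad(lats))[:, None] + np.zeros((lats.size, lons.size))
print(blocked_longitudes(z500, lats).sum(), "longitudes flagged as blocked")
```

Applied day by day over many years, the fraction of days on which each longitude is flagged yields the kind of blocking-frequency climatology discussed here.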
Blocking has aroused the interest of atmospheric scientists since the middle of the last century, with the pioneering observational works of Berggren, Bolin, Rossby, and Rex, and has become the subject of innumerable observational and theoretical studies. The purpose of such studies was originally to find a commonly accepted structural and phenomenological definition of atmospheric blocking. The investigations went on to study blocking climatology in terms of the geographical distribution of its frequency of occurrence and the associated seasonal and inter-annual variability. Well into the second half of the 20th century, a large number of theoretical dynamic works on blocking formation and maintenance started appearing in the literature. Such theoretical studies explored a wide range of possible dynamic mechanisms, including large-amplitude planetary-scale wave dynamics (such as Rossby wave breaking), multiple equilibria circulation regimes, large-scale forcing of anticyclones by synoptic-scale eddies, finite-amplitude non-linear instability theory, and the influence of sea surface temperature anomalies, to name but a few. However, to date no unique theoretical model of atmospheric blocking has been formulated that can account for all of its observational characteristics.
When numerical, global short- and medium-range weather predictions started being produced operationally, and with the establishment, in the late 1970s and early 1980s, of the European Centre for Medium-Range Weather Forecasts, it quickly became of relevance to assess the capability of numerical models to predict blocking with the correct space-time characteristics (e.g., location, time of onset, life span, and decay). Early studies showed that models had difficulty representing blocking correctly, a deficiency connected with their large systematic (mean) errors.
Despite enormous improvements in the ability of numerical models to represent atmospheric dynamics, blocking remains a challenge for global weather prediction and climate simulation models. Such modeling deficiencies have negative consequences not only for our ability to represent the observed climate but also for the possibility of producing high-quality seasonal-to-decadal predictions. For such predictions, representing the correct space-time statistics of blocking occurrence is, especially for certain geographical areas, extremely important.
Article
Gabriele Gramelsberger
Climate and simulation have become interwoven concepts during the past decades because, on the one hand, climate scientists should not experiment with the real climate and, on the other hand, societies want to know how climate will change in the next decades. Both in-silico experiments for a better understanding of climatic processes and forecasts of possible futures can be achieved only by using climate models. The article investigates possibilities and problems of model-mediated knowledge for science and societies. It explores historically how climate became a subject of science and of simulation, what kind of infrastructure is required to apply models and simulations properly, and how model-mediated knowledge can be evaluated. In addition to an overview of the diversity and variety of models in climate science, the article focuses on quasiheuristic climate models, with an emphasis on atmospheric models.
Article
Pierre Friedlingstein
Climate and the carbon cycle are tightly coupled on many time scales, from the interannual to the multimillennial. Observations consistently show a positive feedback between climate and the carbon cycle: elevated atmospheric CO2 leads to warming, and warming is expected to lead to further release of carbon to the atmosphere, enhancing the atmospheric CO2 increase. Earth system models do represent these climate–carbon cycle feedbacks, always simulating a positive feedback over the 21st century; that is, climate change will lead to loss of carbon from the land and ocean reservoirs. These processes partially offset the increases in land and ocean carbon sinks caused by rising atmospheric CO2. As a result, more of the emitted anthropogenic CO2 will remain in the atmosphere. There is, however, a large uncertainty in the magnitude of this feedback. Recent studies now help to reduce this uncertainty. On short, interannual time scales, El Niño years record a larger-than-average atmospheric CO2 growth rate, with tropical land ecosystems being the main drivers. These climate–carbon cycle anomalies can be used as an emergent constraint on the tropical land carbon response to future climate change. On a longer, centennial time scale, the variability of atmospheric CO2 found in records of the last millennium can be used to constrain the overall global carbon cycle response to climate. These independent methods confirm that the climate–carbon cycle feedback is positive, but probably more consistent with the lower end of the range of comprehensive models, excluding very large climate–carbon cycle feedbacks.
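A common way to formalize this coupling, shown here only as an illustrative framework in conventional notation (the symbols are not taken from the article itself), splits the change in land and ocean carbon storage into a CO2-driven part and a climate-driven part:

```latex
\[
\Delta C_{\mathrm{land}} = \beta_{L}\,\Delta \mathrm{CO_2} + \gamma_{L}\,\Delta T,
\qquad
\Delta C_{\mathrm{ocean}} = \beta_{O}\,\Delta \mathrm{CO_2} + \gamma_{O}\,\Delta T.
\]
```

The β terms describe uptake driven by rising atmospheric CO2, while the γ terms describe the carbon response to warming; a negative γ, as simulated by the models discussed above, is what makes the climate–carbon cycle feedback positive, and the emergent-constraint approaches amount to narrowing the plausible range of γ using observed variability.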
Article
Kerry H. Cook
Accurate projections of climate change under increasing atmospheric greenhouse gas levels are needed to evaluate the environmental cost of anthropogenic emissions, and to guide mitigation efforts. These projections are nowhere more important than in Africa, with its high dependence on rain-fed agriculture and, in many regions, limited resources for adaptation. Climate models provide our best method for climate prediction, but there are uncertainties in projections, especially at regional spatial scales. In Africa, limitations of observational networks add to this uncertainty, since a crucial step in improving model projections is comparison with observations. Uncertainties in projections of future emissions of CO2 and other greenhouse gases exceed those associated with climate model simulation. Humanity’s choices in emissions pathways will have profound effects on climate, especially after mid-century.
The African Sahel is a transition zone characterized by strong meridional precipitation and temperature gradients. Over West Africa, the Sahel marks the northernmost extent of the West African monsoon system. The region’s climate is known to be sensitive to sea surface temperatures, both regional and global, as well as to land surface conditions. Increasing atmospheric greenhouse gases are already causing amplified warming over the Sahara Desert and, consequently, increased rainfall in parts of the Sahel. Climate model projections indicate that much of this increased rainfall will be delivered in the form of more intense storm systems.
The complicated and highly regional precipitation regimes of East Africa present a challenge for climate modeling. Within roughly 5° of latitude of the equator, rainfall is delivered in two seasons—the long rains in the spring, and the short rains in the fall. Regional climate model projections suggest that the long rains will weaken under greenhouse gas forcing, and the short rains season will extend farther into the winter months. Observations indicate that the long rains are already weakening.
Changes in seasonal rainfall over parts of subtropical southern Africa are observed, with repercussions and challenges for agriculture and water availability. Some elements of these observed changes are captured in model simulations of greenhouse gas-induced climate change, especially an early demise of the rainy season. The projected changes are quite regional, however, and more high-resolution study is needed. In addition, there has been very limited study of climate change in the Congo Basin and across northern Africa. Continued efforts to understand and predict climate using higher-resolution simulation must be sustained to better understand observed and projected changes in the physical processes that support African precipitation systems as well as the teleconnections that communicate remote forcings into the continent.
Article
Antonio Navarra
Syukuro Manabe was awarded the Nobel Prize in Physics in 2021 for his work on climate modeling. The Prize recognizes an exceptional career that pioneered a new area of the scientific enterprise revealing the power of numerical simulations and methods for advancing scientific discovery and producing new knowledge. Manabe contributed decisively to the creation of the modern scientific discipline of climate science through numerical modeling, stressing clarity of ideas and simplicity of approach. He described in no uncertain terms the role of greenhouse gases in the atmosphere and the impact of changes in the radiation balance of the atmosphere caused by the anthropogenic increase of such gases, and he revealed the role of water vapor in the greenhouse effect. He also understood the importance of including all the components of the climate system (the oceans, sea ice, and land surface) to reach a comprehensive treatment of the mechanisms of climate in a general circulation model, paving the way to the modern earth system models and the establishment of climate modeling as a leading scientific discipline.
Article
Frauke Feser
Storms are characterized by high wind speeds; often large precipitation amounts in the form of rain, freezing rain, or snow; and thunder and lightning (for thunderstorms). Many different types exist, ranging from tropical cyclones and large storms of the midlatitudes to small polar lows, Medicanes, thunderstorms, or tornadoes. They can lead to extreme weather events like storm surges, flooding, high snow quantities, or bush fires. Storms often pose a threat to human lives and property, agriculture, forestry, wildlife, ships, and offshore and onshore industries. Thus, it is vital to gain knowledge about changes in storm frequency and intensity. Future storm predictions are important, and they depend to a great extent on the evaluation of changes in wind statistics of the past.
To obtain reliable statistics, long and homogeneous time series over at least some decades are needed. However, wind measurements are frequently influenced by changes in the synoptic station, its location or surroundings, instruments, and measurement practices. These factors degrade the homogeneity of wind records. Storm indexes derived from measurements of sea-level pressure are less prone to such changes, as pressure does not vary as strongly in space as wind speed does. Long-term historical pressure measurements exist that enable us to deduce changes in storminess for more than the last 140 years. But storm records are not just compiled from measurement data; they also may be inferred from climate model data.
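The physical basis of such pressure-based storm proxies is the geostrophic relation, written here in standard notation as a sketch rather than as any specific index definition:

```latex
\[
\vec{v}_{g} = \frac{1}{\rho f}\,\hat{k}\times\nabla_{h} p,
\qquad
\lvert\vec{v}_{g}\rvert = \frac{1}{\rho f}\,\lvert\nabla_{h} p\rvert,
\]
```

where ρ is air density, f the Coriolis parameter, and ∇_h p the horizontal sea-level pressure gradient. Because the gradient can be estimated from a triangle of neighboring stations, high percentiles of the resulting geostrophic wind speed provide one common storminess measure that is far less sensitive to local station changes than the measured wind itself.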
The first numerical weather forecasts were performed in the 1950s. These served as a basis for the development of atmospheric circulation models, which were the first generation of climate models or general-circulation models. Soon afterward, model data was analyzed for storm events and cyclone-tracking algorithms were programmed. Climate models nowadays have reached high resolution and reliability and can be run not just for the past, but also for future emission scenarios, which yield estimates of possible future storm activity.
Article
William Joseph Gutowski and Filippo Giorgi
Regional climate downscaling has been motivated by the objective to understand how climate processes not resolved by global models can influence the evolution of a region’s climate and by the need to provide climate change information to other sectors, such as water resources, agriculture, and human health, on scales poorly resolved by global models but where impacts are felt. There are four primary approaches to regional downscaling: regional climate models (RCMs), empirical statistical downscaling (ESD), variable resolution global models (VARGCM), and “time-slice” simulations with high-resolution global atmospheric models (HIRGCM). Downscaling using RCMs is often referred to as dynamical downscaling to contrast it with statistical downscaling. Although there have been efforts to coordinate each of these approaches, the predominant effort to coordinate regional downscaling activities has involved RCMs.
Initially, downscaling activities were directed toward specific, individual projects. Typically, there was little similarity between these projects in terms of focus region, resolution, time period, boundary conditions, and phenomena of interest. The lack of coordination hindered evaluation of downscaling methods, because sources of success or problems in downscaling could be specific to model formulation, phenomena studied, or the method itself. This prompted the organization of the first dynamical-downscaling intercomparison projects in the 1990s and early 2000s. These programs and several others following provided coordination focused on an individual region and an opportunity to understand sources of differences between downscaling models while overall illustrating the capabilities of dynamical downscaling for representing climatologically important regional phenomena. However, coordination between programs was limited.
Recognition of the need for further coordination led to the formation of the Coordinated Regional Downscaling Experiment (CORDEX) under the auspices of the World Climate Research Programme (WCRP). Initial CORDEX efforts focused on establishing a common framework for carrying out dynamically downscaled simulations over multiple regions around the world and on performing simulations within it. This framework has now become an organizing structure for downscaling activities around the world. Further efforts under the CORDEX program have strengthened the program’s scientific motivations, such as assessing added value in downscaling, regional human influences on climate, coupled ocean–land–atmosphere modeling, precipitation systems, extreme events, and local wind systems. In addition, CORDEX is promoting expanded efforts to compare capabilities of all downscaling methods for producing regional information. The efforts are motivated in part by the scientific goal of thoroughly understanding regional climate and its change and by the growing need for climate information to assist climate services for a multitude of climate-impacted sectors.
Article
Stephen Corfidi
Forecasting severe convective weather remains one of the most challenging tasks facing operational meteorology today, especially in the mid-latitudes, where severe convective storms occur most frequently and with the greatest impact. The forecast difficulties reflect, in part, the many different atmospheric processes of which severe thunderstorms are a by-product. These processes occur over a wide range of spatial and temporal scales, some of which are poorly understood and/or are inadequately sampled by observational networks. Therefore, anticipating the development and evolution of severe thunderstorms will likely remain an integral part of national and local forecasting efforts well into the future.
Modern severe weather forecasting began in the 1940s, primarily employing the pattern recognition approach throughout the 1950s and 1960s. Substantial changes in forecast approaches did not come until much later, however, beginning in the 1980s. By the start of the new millennium, significant advances in the understanding of the physical mechanisms responsible for severe weather enabled forecasts of greater spatial and temporal detail. At the same time, technological advances made available model thermodynamic and wind profiles that supported probabilistic forecasts of severe weather threats.
This article provides an updated overview of operational severe local storm forecasting, with emphasis on present-day understanding of the mesoscale processes responsible for severe convective storms, and the application of recent technological developments that have revolutionized some aspects of severe weather forecasting. The presentation, nevertheless, notes that increased understanding and enhanced computer sophistication are not a substitute for careful diagnosis of the current meteorological environment and an ingredients-based approach to anticipating changes in that environment; these techniques remain foundational to successful forecasts of tornadoes, large hail, damaging wind, and flash flooding.
Article
R. J. Trapp
Cumulus clouds are pervasive on earth, and play important roles in the transfer of energy through the atmosphere. Under certain conditions, shallow, nonprecipitating cumuli may grow vertically to occupy a significant depth of the troposphere, and subsequently may evolve into convective storms.
The qualifier “convective” implies that the storms have vertical accelerations that are driven primarily, though not exclusively, by buoyancy over a deep layer. Such buoyancy in the atmosphere arises from local density variations relative to some base state density; the base state is typically idealized as a horizontal average over a large area, which is also considered the environment. Quantifications of atmospheric buoyancy are typically expressed in terms of temperature and humidity, and allow for an assessment of the likelihood that convective clouds will form or initiate. Convection initiation is intimately linked to the existence of a mechanism by which air is vertically lifted to realize this buoyancy and thus these accelerations. Weather fronts and orography are the canonical lifting mechanisms.
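In standard textbook notation (not specific to this article), parcel buoyancy relative to the environmental base state, and the convective available potential energy obtained by integrating it over the buoyant layer, can be written as:

```latex
\[
B = g\,\frac{\theta_{v,\mathrm{parcel}} - \overline{\theta_{v}}}{\overline{\theta_{v}}},
\qquad
\mathrm{CAPE} = \int_{z_{\mathrm{LFC}}}^{z_{\mathrm{EL}}} B\,\mathrm{d}z,
\]
```

where θ_v is the virtual potential temperature (which folds temperature and humidity into a single density-like variable), the overbar denotes the horizontally averaged environment, and z_LFC and z_EL are the level of free convection and the equilibrium level. Lifting mechanisms such as fronts and orography are what carry parcels to the level of free convection so that this energy can be realized.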
As modulated by an ambient or environmental distribution of temperature, humidity, and wind, weather fronts also facilitate the transition of convective clouds into storms with locally heavy rain, lightning, and other possible hazards. For example, in an environment characterized by winds that are weak and change little with distance above the ground, the storms tend to be short-lived and benign. The structure of the vertical drafts and other internal storm processes under weak wind shear—i.e., a small change in the horizontal wind over some vertical distance—is distinct from that found when the environmental wind shear is strong. In particular, strong wind shear in combination with large buoyancy favors the development of squall lines and supercells, both of which are highly coherent storm types. Besides having durations that may exceed a few hours, both of these storm types tend to be particularly hazardous: squall lines are most apt to generate swaths of damaging “straight-line” winds, and supercells spawn the most intense tornadoes and are responsible for the largest hail. Methods used to predict convective-storm hazards capitalize on this knowledge of storm formation and development.
Article
Elisabeth Lipiatou and Anastasios Kentarchos
Although the first European Union Framework Programme (FP) for research and technological development was created in 1984, it was the second FP (FP2) in 1987 that devoted resources to climatological research for the first time. The start of FP2 coincided with the establishment of the Intergovernmental Panel on Climate Change in 1988, aimed at providing a comprehensive assessment on the state of knowledge of the science of climate change.
FP-funded research was not an end in itself but a means for the European Union (EU) to achieve common objectives based on the principle of cross-border research cooperation and coordination to reduce fragmentation and effectively tackle common challenges.
Since 1987, climate science has been present in all nine FPs (as of 2023), following an evolutionary process as goals, priority areas, and financial and implementation instruments have constantly changed to adapt to new needs. A research- and technology-oriented Europe was gradually created, including in the area of climate science.
There has historically been a strong, intrinsic link between research and environmental and climate policies. Climate science under the FPs, focusing initially on oceans, the carbon cycle, and atmospheric processes, has increased tremendously both in scope and scale, encompassing a broad range of areas over time, such as climate modeling, polar research, ocean acidification, regional seas and oceans, impacts and adaptation, decarbonization pathways, socioeconomic analyses, sustainability, observations, and climate services.
The creation and evolution of the EU’s FPs has played a critical role in establishing Europe’s leading position on climate science by means of promoting excellence, increasing the relevance of climate research for policymaking, and building long-lasting communities and platforms across Europe and beyond, as international cooperation has been a key feature of the FPs. No other group of countries collaborates on climate science at such scale. Due to their inherent long-term planning and cross-national nature, the FPs have provided a stable framework for advancing climate science by incentivizing scientists and institutions with diverse expertise to work together, creating the necessary critical mass to tackle the increasingly complex and interdisciplinary nature of climate science, rationalizing resource allocation, and setting norms and standards for scientific collaboration. It is hard to imagine in retrospect how a similar level of impact could have been achieved solely at a national level.
Looking ahead and capitalizing on the rich experience and lessons learned since the 1980s, important challenges and opportunities need to be addressed. These include critical gaps in knowledge, even higher integration of disciplines, use of new technologies and artificial intelligence for state-of-the-art data analysis and modeling, capturing interlinkages with sustainable development goals, better coordination between national and EU agendas, higher mobility of researchers and ideas from across Europe and beyond, and stronger interactions between scientists and nonscientific entities (public authorities, the private sector, financial institutions, and civil society) in order to better communicate climate science and proactively translate new knowledge into actionable plans.
Article
Leigh Orf
Since the dawn of the digital computing age in the mid-20th century, computers have been used as virtual laboratories for the study of atmospheric phenomena. The first simulations of thunderstorms captured only their gross features, yet required the most advanced computing hardware of the time. The following decades saw exponential growth in computational power that was, and continues to be, exploited by scientists seeking to answer fundamental questions about the internal workings of thunderstorms, the most devastating of which cause substantial loss of life and property throughout the world every year.
By the mid-1970s, the most powerful computers available to scientists contained, for the first time, enough memory and computing power to represent the atmosphere containing a thunderstorm in three dimensions. Prior to this time, thunderstorms were represented primarily in two dimensions, which implicitly assumed an infinitely long cloud in the missing dimension. These earliest state-of-the-art, fully three-dimensional simulations revealed fundamental properties of thunderstorms, such as the structure of updrafts and downdrafts and the evolution of precipitation, while still only roughly approximating the flow of an actual storm due to computing limitations.
In the decades that followed these pioneering three-dimensional thunderstorm simulations, new modeling approaches were developed that included more accurate ways of representing winds, temperature, pressure, friction, and the complex microphysical processes involving solid, liquid, and gaseous forms of water within the storm. Further, these models also were able to be run at a resolution higher than that of previous studies due to the steady growth of available computational resources described by Moore’s law, which observed that computing power doubled roughly every two years. The resolution of thunderstorm models was able to be increased to the point where features on the order of a couple hundred meters could be resolved, allowing small but intense features such as downbursts and tornadoes to be simulated within the parent thunderstorm. As model resolution increased further, so did the amount of data produced by the models, which presented a significant challenge to scientists trying to compare their simulated thunderstorms to observed thunderstorms. Visualization and analysis software was developed and refined in tandem with improved modeling and computing hardware, allowing the simulated data to be brought to life and allowing direct comparison to observed storms. As of 2019, the highest-resolution simulations of violent thunderstorms are able to capture processes such as tornado formation and evolution, which are found to include the aggregation of many small, weak vortices with diameters of dozens of meters, features that simply cannot be simulated at lower resolution.
Article
Matti Leppäranta
The physics of the ice season in the Baltic Sea is presented in terms of its research history and the present state of understanding. Knowledge has been accumulated since the 1800s, first in connection with operational ice charting; deeper physics came into the picture in the 1960s along with sea ice structure and pressure ridges. Then the drift of ice and ice forecasting formed the leading line of research for 20 years, and into the present century, ice climate modeling and satellite remote sensing have been the primary research topics. The physics of the Baltic Sea ice season is quite well understood, and realistic scenarios for future ice conditions can be constructed from hypothetical regional climate scenarios.
The key factor in climate scenarios is the air temperature in the Baltic Sea region. The local freezing and breakup dates shift by 5–8 days per 1 °C of climate warming, while the corresponding sensitivity of sea ice thickness is 5–10 cm. However, sea ice thickness and breakup date are also sensitive to snow accumulation: more snow gives later breakup, but the thickness of ice may decrease due to better insulation or increase due to more snow-ice formation. The annual probability of freezing decreases with climate warming, and the sensitivity of maximum annual ice extent is 35,000–40,000 km² (8.3%–9.5% of the Baltic Sea area) per 1 °C of climate warming. Due to the large sensitivity to air temperature, the severity of the Baltic Sea ice season is closely related to the North Atlantic Oscillation.
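A classical first-order relation between air temperature and ice thickness, given here only as an illustrative baseline (Stefan's law of ice growth; the coefficients are standard physical constants rather than values quoted in the article), links growth to accumulated freezing degree-days:

```latex
\[
h(t) \approx \sqrt{\frac{2 k_{i}}{\rho_{i} L_{f}}\, S(t)},
\qquad
S(t) = \int_{0}^{t} \max\bigl(T_{f} - T_{a}(t'),\,0\bigr)\,\mathrm{d}t',
\]
```

where h is ice thickness, k_i and ρ_i are the thermal conductivity and density of ice, L_f is the latent heat of freezing, T_f is the freezing point, and T_a is the air temperature. A snow layer on top slows this growth by insulating the ice surface, which is one side of the competing snow effects described above.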
Article
Aitor Anduaga
A typhoon is a highly organized storm system that develops from initial cyclone eddies and matures by drawing up large quantities of water vapor from the warm tropical oceans, which condenses at higher altitudes. The latent heat of condensation released in this way is the prime source of energy that strengthens the typhoon as it progresses across the Pacific Ocean. A typhoon differs from other tropical cyclones only on the basis of location. While hurricanes form in the Atlantic Ocean and eastern North Pacific Ocean, typhoons develop in the western North Pacific around the Philippines, Japan, and China.
Because of their violent histories with strong winds and torrential rains and their impact on society, the countries that ring the North Pacific basin—China, Japan, Korea, the Philippines, and Taiwan—have long felt the need to produce typhoon forecasts and establish storm warning services. Typhoon accounts in the pre-instrumental era were normally limited to descriptions of damage and incidents, and subsequent studies were hampered by the impossibility of solving the equations governing the weather, as they are distinctly nonlinear. The world’s first typhoon forecast was made in 1879 by Fr. Federico Faura, a Jesuit scientist from the Manila Observatory. His confrere at the Zikawei Jesuit Observatory, Fr. Marc Dechevrens, first reconstructed the trajectory of a typhoon in 1879, a study that marked the beginning of an era. The Jesuits and other Europeans like William Doberck studied typhoons as a research topic, and their achievements are regarded as products of colonial meteorology.
Between the First and Second World Wars, there were important contributions to typhoon science by meteorologists in the Philippines (Ch. Deppermann, M. Selga, and J. Coronas), China (E. Gherzi), and Japan (T. Okada and Y. Horiguti). The polar front theory developed by the Bergen School in Norway played an important role in creating the large-scale setting for tropical cyclones. Deppermann became the greatest exponent of the polar front theory and air-mass analysis in the Far East and Southeast Asia.
From the end of WWII, it became evident that more effective typhoon forecasts were needed to meet military demands. In Hawaii, a joint Navy and Air Force center for typhoon analysis and forecasting was established in 1959—the Joint Typhoon Warning Center (JTWC). Its goals were to publish annual typhoon summaries and conduct research into tropical cyclone forecasting and detection. Other centers had previously specialized in issuing typhoon warnings and analysis. Thus, research and operational forecasting went hand in hand not only in the American JTWC but also in China (the Hong Kong Observatory, the Macao Meteorological and Geophysical Bureau), Japan (the Regional Specialized Meteorological Center), and the Philippines (Atmospheric, Geophysical and Astronomical Service Administration [PAGASA]). These efforts produced more precise scientific knowledge about the formation, structure, and movement of typhoons. In the 1970s and the 1980s, three new tools for research—three-dimensional numerical cloud models, Doppler radar, and geosynchronous satellite imagery—provided a new observational and dynamical perspective on tropical cyclones. The development of modern computing systems has offered the possibility of making numerical weather forecast models and simulations of tropical cyclones. However, typhoons are not mechanical artifacts, and forecasting their track and intensity remains an uncertain science.
Article
The Sahel of Africa has been identified as having the strongest land–atmosphere (L/A) interactions on Earth. Studies of Sahelian L/A interactions started in the late 1970s. However, due to controversies surrounding the early studies, in which only a single land parameter was considered in L/A interactions, the credibility of land-surface effects on the Sahel’s climate has long been challenged. Using general circulation models and regional climate models coupled with biogeophysical and dynamic vegetation models, as well as analyses of satellite-derived data, field measurements, and assimilation data, the effects of land-surface processes on West African monsoon variability, which dominates the Sahel climate system from the mesoscale to intraseasonal, seasonal, interannual, and decadal scales, have been extensively investigated to realistically explore the Sahel L/A interaction, its effects, and the mechanisms involved.
The Sahel suffered the longest and most severe drought on the planet in the 20th century. The devastating environmental and socioeconomic consequences resulting from drought-induced famines in the Sahel have provided strong motivation for the scientific community and society to understand the causes of the drought and its impact. It was controversial and under debate whether the drought was a natural process, mainly induced by sea-surface temperature variability, or was affected by anthropogenic activities. Diagnostic and modeling studies of the sea-surface temperature have consistently demonstrated that it exerts a great influence on the Sahel climate system, but sea-surface temperature is unable to explain the full scope of the Sahel climate variability and the late-20th-century drought. The effect of land-surface processes, especially land-cover and land-use change, on the drought has also been extensively investigated. The results with more realistic land-surface models suggest land processes are a first-order contributor to the Sahel climate and to its drought from the late 1960s to the 1980s, comparable to sea surface temperature effects. The issues that caused controversies in the early studies have been properly addressed in the studies with state-of-the-art models and available data.
The mechanisms through which land processes affect the atmosphere are also elucidated in a number of studies. Land-surface processes not only affect vertical transfer of radiative fluxes and heat fluxes but also affect horizontal advections through their effect on the atmospheric heating rate and moisture flux convergence/divergence as well as horizontal temperature gradients.
Article
Hans von Storch and Patrick Heimbach
Klaus Hasselmann and Syukuro Manabe shared one half of the 2021 Nobel Prize in Physics for their achievements in “physical modelling of Earth’s climate, quantifying variability and reliably predicting global warming.” The Swedish Academy asserted: “Klaus Hasselmann created a model that links together weather and climate, thus answering the question of why climate models can be reliable despite weather being changeable and chaotic. He also developed methods for identifying specific signals, fingerprints, that both natural phenomena and human activities imprint in the climate. His methods have been used to prove that the increased temperature in the atmosphere is due to human emissions of carbon dioxide.”
Klaus Hasselmann is best known for founding the Max Planck Institute for Meteorology in Hamburg, where he implemented his ideas on quantifying internal variability in the climate system and its components (“stochastic climate model”), and on devising a methodology to separate “noise,” that is, variability not provoked by external drivers, from a “signal” reflecting the impact of such external drivers. In this way, he introduced a paradigm shift from a deterministic view of the climate system to a genuinely stochastic one. This proved instrumental in detecting anthropogenic climate change beyond natural variability (“detection”) and in demonstrating that the ongoing change could not be explained without a dominant role of elevated atmospheric levels of greenhouse gases (“attribution”). Hasselmann and Manabe initiated the construction of two of the leading quasi-realistic climate models featuring not only an atmosphere and ocean but also the carbon cycle. These achievements were recognized by the Nobel Prize.
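In its simplest form, the stochastic climate model idea can be summarized (in standard notation, as a sketch rather than Hasselmann's full formulation) as a slowly responding climate variable x, damped at rate λ and driven by fast weather fluctuations treated as white noise ξ(t):

```latex
\[
\frac{\mathrm{d}x}{\mathrm{d}t} = -\lambda x + \xi(t),
\qquad
S_{x}(\omega) = \frac{S_{\xi}}{\omega^{2} + \lambda^{2}},
\]
```

so that forcing with no preferred timescale produces a red response spectrum with most of the variance at low frequencies. The slow component integrates the weather noise, which is the sense in which internal climate variability arises even without any external driver, and it is against this background "noise" that the externally forced "signal" must be detected.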
The spectrum of themes where Klaus Hasselmann left significant footprints extends far beyond climate dynamics, covering a wide range of geophysical topics. By the time he entered the field of climate research, Hasselmann had already produced groundbreaking work on the modeling and predicting of ocean surface waves. He and his wife led the development of a third-generation wave model, versions of which are in operational use today at major numerical weather prediction centers around the world, including the European Centre for Medium-Range Weather Forecasts (ECMWF) in Europe and the National Oceanic and Atmospheric Administration (NOAA) in the United States. After his retirement, Hasselmann considered his contribution to geophysical issues of climate and climate change sufficient and chose to focus on two different topics. One concerned the coupling of societal decision making with the geophysical system. A second concerned Hasselmann’s interest in elementary particle physics, which dates back to his work in the 1960s when he described nonlinear resonant wave-wave interactions by means of Feynman diagrams. Following early ideas by Kaluza and Klein, Hasselmann pursued a deterministic, unified field theory of particles and fields, which he termed “metron theory.” It remains incomplete and, given Hasselmann’s age, may never be completed by him; it may have to await a smart young physicist to take on the challenge.
Article
Hannah Christensen and Laure Zanna
Numerical computer models play a key role in Earth science. They are used to make predictions on timescales ranging from short-range weather forecasts to multi-century climate projections. Computer models are also used as tools to understand the past, present, and future climate system, enabling numerical experiments to be carried out to explore physical processes of interest. To understand the behavior of these models, their formulation must be appreciated, including the simplifications and approximations employed in developing the model code.
Foremost among these approximations are the parametrization schemes used to represent subgrid scale physical processes. A useful mathematical formulation of parametrization often involves Reynolds averaging, whereby a flow described by the Navier–Stokes equations is separated into a slow, resolved component and a fast, unresolved component. On performing this decomposition, the component representing the unresolved, fast processes is shown to impact the resolved scale flow: It is this component that a parametrization seeks to represent.
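Schematically, and in standard notation, each field is split into a resolved mean and an unresolved fluctuation, and averaging the nonlinear equations leaves a term built solely from the fluctuations; for one velocity component this reads (a textbook sketch that omits pressure, Coriolis, and viscous terms):

```latex
\[
u = \overline{u} + u', \qquad \overline{u'} = 0,
\qquad
\frac{\partial \overline{u}}{\partial t}
+ \overline{\vec{v}}\cdot\nabla\,\overline{u}
= \cdots \; - \; \nabla\cdot\overline{\vec{v}'\,u'}.
\]
```

The divergence of the Reynolds stress term (the averaged product of fluctuating velocities) is the footprint of the fast, unresolved motions on the resolved flow, and expressing it in terms of resolved quantities is precisely what a parametrization, or turbulence closure, must do.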
Parametrization schemes encode the understanding of the salient physics needed to describe processes in the atmosphere and ocean and other components of the Earth system, such as land and ice. For example, finding the relationship between the Reynolds stresses and the mean fields of the system is the turbulence closure problem, which is common to both atmospheric and oceanic numerical models. Atmospheric parametrization schemes include those representing radiation, clouds and cloud microphysics, moist convection, gravity waves, and the boundary layer (which encompasses a representation of turbulent mixing). In the ocean, eddy processes must also be parametrized, including stirring and mixing due to both sub-mesoscale and mesoscale eddies. The similarities between the parametrization problem in atmospheric and oceanic models facilitate transfer of knowledge between these two communities, such that promising avenues of research in one community can in principle readily be adapted and adopted by the other.
Article
Annick Terpstra and Shun-ichi Watanabe
Polar lows are intense maritime mesoscale cyclones developing in both hemispheres poleward of the main polar front. These rapidly developing severe storms are accompanied by strong winds, heavy precipitation (hail and snow), and rough sea states. Polar lows can have significant socio-economic impact by disrupting human activities in the maritime polar regions, such as tourism, fisheries, transportation, research activities, and exploration of natural resources. Upon landfall, they quickly decay, but their blustery winds and substantial snowfall affect local communities in coastal regions, resulting in airport closures, transportation breakdowns, and increased avalanche risk.
Polar lows are primarily a winter phenomenon and tend to develop during excursions of polar air masses, originating from ice-covered areas, over the adjacent open ocean. These so-called cold-air outbreaks are driven by the synoptic scale atmospheric configuration, and polar lows usually develop along air-mass boundaries associated with these cold-air outbreaks. Local orographic features and the sea-ice configuration also play prominent roles in pre-conditioning the environment for polar low development. Proposed dynamical pathways for polar low development include moist baroclinic instability, symmetric convective instability, and frontal instability, but verification of these mechanisms is limited due to sparse observations and insufficient resolution of reanalysis data.
Maritime areas with a frequent polar low presence are climatologically important regions for the global ocean circulation; hence, local changes in energy exchange between the atmosphere and ocean in these regions potentially impact the global climate system. Recent research indicates that the enhanced heat and momentum exchange by mesoscale cyclones likely has a pronounced impact on ocean heat transport by triggering deep water formation in the ocean and by modifying horizontal mixing in the atmosphere. Since the beginning of the satellite era, a steady decline of sea-ice cover in the Northern Hemisphere has expanded the ice-free polar regions, and thus the areas for polar low development, yet the number of polar lows is projected to decline under future climate scenarios.
Article
Florian Sévellec and Bablu Sinha
The Atlantic meridional overturning circulation (AMOC) is a large, basin-scale circulation located in the Atlantic Ocean that transports climatically important quantities of heat northward. It can be described schematically as a northward flow in the warm upper ocean and a southward return flow at depth in much colder water. The heat capacity of a layer of 2 m of seawater is equivalent to that of the entire atmosphere; therefore, ocean heat content dominates Earth’s energy storage. For this reason and because of the AMOC’s typically slow decadal variations, the AMOC regulates North Atlantic climate and contributes to the relatively mild climate of Europe. Hence, predicting AMOC variations is crucial for predicting climate variations in regions bordering the North Atlantic. Similar to weather predictions, climate predictions are based on numerical simulations of the climate system. However, providing accurate predictions on such long timescales is far from straightforward. Even in a perfect model approach, where biases between numerical models and reality are ignored, the chaotic nature of AMOC variability (i.e., high sensitivity to initial conditions) is a significant source of uncertainty, limiting its accurate prediction.
Predictability studies focus on factors determining our ability to predict the AMOC rather than on actual predictions. To this end, processes affecting AMOC predictability can be separated into two categories: processes acting as a source of predictability (periodic harmonic oscillations, for instance) and processes acting as a source of uncertainty (small errors that grow and significantly modify the outcome of numerical simulations). To understand the former category, harmonic modes of variability or precursors of AMOC variations are identified. On the other hand, in a perfect model approach, the sources of uncertainty are characterized by the spread of numerical simulations differentiated by the application of small differences to their initial conditions. Two alternative and complementary frameworks have arisen to investigate this spread. The pragmatic framework corresponds to performing an ensemble of simulations, imposing a randomly chosen small error on the initial conditions of each individual simulation. This allows a probabilistic approach and a statistical characterization of the importance of the initial conditions through the spread of the ensemble. The theoretical framework uses stability analysis to identify small perturbations to the initial conditions that are conducive to significant disruption of the AMOC.
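The pragmatic framework can be illustrated with a toy chaotic system in place of a full AMOC model. The sketch below uses the Lorenz-63 equations with illustrative parameter values and perturbation sizes (none of which come from the article): an ensemble of trajectories differing only by tiny random initial perturbations is integrated forward, and the growth of the ensemble spread is the basic measure of sensitivity to initial conditions.

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Tendencies of the Lorenz-63 system, used here as a toy chaotic 'climate'."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(state, dt=0.01):
    """One fourth-order Runge-Kutta time step."""
    k1 = lorenz63(state)
    k2 = lorenz63(state + 0.5 * dt * k1)
    k3 = lorenz63(state + 0.5 * dt * k2)
    k4 = lorenz63(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def ensemble_spread(n_members=50, n_steps=2000, eps=1e-6, seed=0):
    """Standard deviation across members of an initial-condition ensemble."""
    rng = np.random.default_rng(seed)
    base = np.array([1.0, 1.0, 1.0])
    # Every member starts from the same state plus a tiny random perturbation.
    members = base + eps * rng.standard_normal((n_members, 3))
    spread = np.empty(n_steps)
    for t in range(n_steps):
        members = np.array([rk4_step(m) for m in members])
        spread[t] = members[:, 0].std()  # spread in the first variable
    return spread

spread = ensemble_spread()
print(f"spread after  5 time units: {spread[499]:.3e}")
print(f"spread after 20 time units: {spread[-1]:.3e}")
```

In a full prediction system the same logic applies, with each member being a climate model simulation and the spread evaluated for the AMOC itself; the time at which the spread saturates is one way to estimate a predictability horizon.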
Beyond these difficulties in assessing the predictability, decadal prediction systems have been developed and tested through a range of hindcasts. The inherent difficulties of operational forecasts span from developing efficient initialization methods to setting accurate radiative forcing to correcting for model drift and bias, all these improvements being estimated and validated through a range of specifically designed skill metrics.