Southern Africa extends from the equator to about 34°S and is essentially a narrow, peninsular land mass bordered to its south, west, and east by oceans. Its termination in the mid-ocean subtropics has important consequences for regional climate, since it allows the strongest western boundary current in the world ocean (the warm Agulhas Current) to lie in close proximity to an intense eastern boundary upwelling current (the cold Benguela Current). Unlike other western boundary currents, the Agulhas retroflects south of the land mass and flows back into the South Indian Ocean, thereby leading to a large area of anomalously warm water south of South Africa which may influence storm development over the southern part of the land mass. Two other unique regional ocean features imprint on the climate of southern Africa—the Angola-Benguela Frontal Zone (ABFZ) and the Seychelles-Chagos thermocline ridge (SCTR). The former is important for the development of Benguela Niños and flood events over southwestern Africa, while the SCTR influences Madden-Julian Oscillation and tropical cyclone activity in the western Indian Ocean. In addition to South Atlantic and South Indian Ocean influences, there are climatic implications of the neighboring Southern Ocean.
Along with Benguela Niños, the southern African climate is strongly impacted by ENSO and, to a lesser extent, by the Southern Annular Mode (SAM) and sea-surface temperature (SST) dipole events in the Indian and South Atlantic Oceans. The regional land–sea distribution leads to a highly variable climate on a range of scales that is still not well understood due to its complexity and its sensitivity to a number of different drivers. Strong and variable gradients in surface characteristics exist not only in the neighboring oceans but also in several aspects of the land mass, and these all influence the regional climate and its interactions with climate modes of variability.
Much of the interior of southern Africa consists of a plateau 1 to 1.5 km high and a narrow coastal belt that is particularly mountainous in South Africa, leading to sharp topographic gradients. The topography is able to influence the track and development of many weather systems, leading to marked gradients in rainfall and vegetation across southern Africa.
The presence of the large island of Madagascar, itself a region of strong topographic and rainfall gradients, has consequences for the climate of the mainland by reducing the impact of the moist trade winds on the Mozambique coast and the likelihood of tropical cyclone landfall there. It is also likely that at least some of the relative aridity of the Limpopo region in northern South Africa/southern Zimbabwe results from the location of Madagascar in the southwestern Indian Ocean.
While leading to challenges in understanding its climate variability and change, the complex geography of southern Africa offers a very useful test bed for improving the global models used in many institutions for climate prediction. Thus, research into the relative shortcomings of the models in the southern African region may lead not only to better understanding of southern African climate but also to enhanced capability to predict climate globally.
Storms are characterized by high wind speeds; often large precipitation amounts in the form of rain, freezing rain, or snow; and thunder and lightning (for thunderstorms). Many different types exist, ranging from tropical cyclones and large storms of the midlatitudes to small polar lows, Medicanes, thunderstorms, or tornadoes. They can lead to extreme weather events like storm surges, flooding, heavy snow accumulations, or bush fires. Storms often pose a threat to human lives and property, agriculture, forestry, wildlife, ships, and offshore and onshore industries. Thus, it is vital to gain knowledge about changes in storm frequency and intensity. Predicting future storms is important, and such predictions depend to a great extent on the evaluation of past changes in wind statistics.
To obtain reliable statistics, long and homogeneous time series spanning at least several decades are needed. However, wind measurements are frequently influenced by changes in the synoptic station, its location or surroundings, instruments, and measurement practices, all of which degrade the homogeneity of wind records. Storm indexes derived from measurements of sea-level pressure are less prone to such changes, as pressure does not exhibit the strong spatial variability that wind speed does. Long-term historical pressure measurements exist that enable us to deduce changes in storminess over more than the last 140 years. Storm records are not compiled from measurement data alone; they may also be inferred from climate model data.
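The idea of a pressure-based storm index can be illustrated with a minimal sketch. This is not a published index definition: the choice of the absolute 24-hour pressure tendency as the proxy, the 95th-percentile threshold, and the synthetic station record are all illustrative assumptions.

```python
import numpy as np

def storminess_index(pressure_hpa, pct=95.0):
    """Storminess proxy for one record: a high percentile of the absolute
    day-to-day sea-level pressure tendency (hPa per day).
    The proxy choice and percentile are illustrative assumptions."""
    tendency = np.abs(np.diff(pressure_hpa))
    return float(np.percentile(tendency, pct))

# Synthetic daily pressure series standing in for a historical station record
rng = np.random.default_rng(0)
slp = 1013.0 + np.cumsum(rng.normal(0.0, 2.0, 365))
index = storminess_index(slp)  # larger values suggest a stormier year
```

Computing such an index per year over a long pressure record, rather than using raw wind speeds, is what makes multi-decadal storminess comparisons feasible despite changing anemometer practices.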
The first numerical weather forecasts were performed in the 1950s. These served as a basis for the development of atmospheric circulation models, which were the first generation of climate models, or general-circulation models. Soon afterward, model data were analyzed for storm events, and cyclone-tracking algorithms were developed. Climate models nowadays have reached high resolution and reliability and can be run not just for the past but also for future emission scenarios, yielding estimates of possible future storm activity.
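The core of a cyclone-tracking algorithm is detecting pressure minima in each model time step and linking them across steps. The sketch below is deliberately minimal and hypothetical: operational trackers use smoothed fields, great-circle distances, and intensity and lifetime criteria, whereas here the grids, the greedy nearest-neighbor matching, and the distance threshold are invented for illustration.

```python
import numpy as np

def local_minima(field):
    """Interior grid points strictly lower than all four neighbors."""
    c = field[1:-1, 1:-1]
    mask = ((c < field[:-2, 1:-1]) & (c < field[2:, 1:-1]) &
            (c < field[1:-1, :-2]) & (c < field[1:-1, 2:]))
    ii, jj = np.nonzero(mask)
    return [(i + 1, j + 1) for i, j in zip(ii, jj)]

def link_minima(prev, curr, max_dist=3.0):
    """Greedily match each low to the nearest low at the next time step,
    discarding matches farther apart than max_dist grid units."""
    curr = list(curr)
    pairs = []
    for p in prev:
        if not curr:
            break
        dists = [np.hypot(p[0] - q[0], p[1] - q[1]) for q in curr]
        k = int(np.argmin(dists))
        if dists[k] <= max_dist:
            pairs.append((p, curr.pop(k)))
    return pairs

# Two synthetic pressure fields (hPa) with one low moving eastward
x, y = np.meshgrid(np.arange(20), np.arange(20))
f0 = 1010.0 - 15.0 * np.exp(-((x - 5)**2 + (y - 10)**2) / 8.0)
f1 = 1010.0 - 15.0 * np.exp(-((x - 7)**2 + (y - 10)**2) / 8.0)
segments = link_minima(local_minima(f0), local_minima(f1))
```

Repeating the linking step over all time steps chains the matched pairs into full cyclone tracks, from which statistics such as track density and intensity distributions can be derived.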
What are the local consequences of a global climate change? This question is important for proper handling of risks associated with weather and climate. It also tacitly assumes that there is a systematic link between conditions taking place on a global scale and local effects. It is the utilization of the dependency of local climate on the global picture that is the backbone of downscaling; however, it is perhaps easiest to explain the concept of downscaling in climate research if we start asking why it is necessary.
Global climate models are our best tools for computing future temperature, wind, and precipitation (or other climatological variables), but their limitations do not let them calculate local details for these quantities, and simply interpolating from model results is not adequate. The models are, however, able to predict large-scale features, such as circulation patterns, the El Niño Southern Oscillation (ENSO), and the global mean temperature. Local temperature and precipitation are nevertheless related to conditions over a larger surrounding region as well as to local geographical features, and the same holds, in general, for other weather elements.
Downscaling makes use of systematic dependencies between local conditions and large-scale ambient phenomena in addition to including information about the effect of the local geography on the local climate. The application of downscaling can involve several different approaches. This article will discuss various downscaling strategies and methods and will elaborate on their rationale, assumptions, strengths, and weaknesses.
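One of the simplest empirical-statistical downscaling strategies is a regression linking a local variable to a large-scale predictor. The sketch below is a hypothetical illustration with synthetic data: the choice of a regional-mean temperature as predictor, the linear form of the link, and all numbers are assumptions, not a prescription from any particular downscaling system.

```python
import numpy as np

# Synthetic "training" data: a large-scale predictor (e.g. a regional-mean
# temperature from a global model) and a co-located local station series
rng = np.random.default_rng(1)
large_scale = rng.normal(15.0, 2.0, 200)
local = 0.8 * large_scale + 3.0 + rng.normal(0.0, 0.5, 200)

# Fit the statistical link by ordinary least squares
A = np.column_stack([large_scale, np.ones_like(large_scale)])
slope, intercept = np.linalg.lstsq(A, local, rcond=None)[0]

# Apply the fitted link to a new large-scale value to "downscale" it
downscaled = slope * 16.0 + intercept
```

The assumption doing the work here is that the fitted local/large-scale dependency, estimated from past data, remains valid under a changed climate; assessing that assumption is a central part of validating any downscaling method.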
One important issue is the presence of spontaneous natural year-to-year variations that are not necessarily directly related to the global state, but are internally generated and superimposed on the long-term climate change. These variations typically involve phenomena such as ENSO, the North Atlantic Oscillation (NAO), and the Southeast Asian monsoon, which are nonlinear and non-deterministic.
We cannot predict the exact evolution of non-deterministic natural variations beyond a short time horizon. It is nevertheless possible to estimate probabilities for their future state based, for instance, on projections with models run many times with slightly different set-ups, and thereby to obtain some information about the likelihood of future outcomes.
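Estimating such probabilities from an ensemble reduces, in its simplest form, to counting members. The sketch below is a hypothetical illustration: the ensemble is stand-in random numbers, and the variable, threshold, and ensemble size are all invented.

```python
import numpy as np

# Hypothetical ensemble: the same model run 100 times from slightly
# different initial states; these anomalies are purely synthetic
rng = np.random.default_rng(2)
members = rng.normal(0.5, 1.0, 100)   # e.g. simulated seasonal-mean anomalies

# Probability of exceeding a threshold, estimated as the member fraction
threshold = 1.0
p_exceed = float(np.mean(members > threshold))
```

The estimate's reliability depends on the ensemble genuinely sampling the internal variability, which is why large ensembles and careful validation matter.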
When it comes to downscaling and predicting regional and local climate, it is important to use many global climate model predictions. Another important point is to apply proper validation to make sure the models give skillful predictions.
For some downscaling approaches, such as regional climate models, there usually is a need for bias adjustment due to model imperfections. This means the downscaling does not get the right answer for the right reason. One explanation for the presence of such biases is the use of different parameterization schemes in the driving global model and the nested regional model.
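A widely used bias-adjustment technique is empirical quantile mapping, which replaces each model value with the observed value at the same quantile of a reference period. This is a minimal sketch under the (strong) assumption that the bias is stationary; the Gaussian distributions and the 2-degree cold bias are synthetic illustrations.

```python
import numpy as np

def quantile_map(model_ref, obs_ref, model_out):
    """Empirical quantile mapping: find each output value's quantile in the
    model reference distribution, then read off the observed value at that
    same quantile. Assumes the model's bias is stationary in time."""
    q = np.searchsorted(np.sort(model_ref), model_out) / len(model_ref)
    return np.quantile(obs_ref, np.clip(q, 0.0, 1.0))

rng = np.random.default_rng(3)
obs_ref = rng.normal(10.0, 1.0, 1000)    # observed reference climate
model_ref = rng.normal(8.0, 1.0, 1000)   # model runs 2 degrees too cold
model_fut = rng.normal(9.0, 1.0, 1000)   # model projection (+1 degree)
adjusted = quantile_map(model_ref, obs_ref, model_fut)
```

Because the mapping is trained on the reference period, it corrects the cold bias while preserving the model's projected change, so the adjusted projection ends up near 11 degrees on average in this synthetic example.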
A final underlying question is: What can we learn from downscaling? The context for the analysis is important, as downscaling is often used to find answers to some (implicit) question and can be a means of extracting most of the relevant information concerning the local climate. It is also important to include discussions about uncertainty, model skill or shortcomings, model validation, and skill scores.
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Climate Science.
Dynamical downscaling (DD) consists of the use of physical models to downscale the large-scale climate information produced by coupled Atmosphere-Ocean Global Climate Models (AOGCMs). This can be achieved with global high-resolution atmospheric GCMs (HIRGCMs), variable resolution GCMs (VARGCMs) and limited area Regional Climate Models (RCMs). Borrowing from numerical weather prediction, DD techniques originated in the late 1980s from the need to produce high-resolution regional climate information for application to impact studies. The philosophy behind DD is that the AOGCM can simulate the response of the global circulation to large-scale forcings (e.g., due to greenhouse gases) and the DD tools can regionally enhance this response to account for the contribution of fine-scale processes and forcings, for example, due to aerosols and complex topography, coastlines, and vegetation cover.
Since the 1990s the use of DD for climate studies, and principally RCMs, has grown tremendously, to the point that DD techniques, along with Empirical-Statistical Downscaling (ESD), are considered key elements in the production of climate information for regions. In fact, the use of DD is justified to the extent that it adds useful and robust high-resolution information to that produced by AOGCMs, and considerable research has gone into investigating this central issue, often referred to as “added value,” which is still often debated. Today a number of flexible and portable RCM systems are available, which can be routinely run for up to centennial-scale experiments over domains distributed worldwide for a wide range of applications, from process studies to paleo and future climate simulations. The model resolution has steadily increased up to grid spacings of ~10–25 km, and a new generation of non-hydrostatic RCMs is being developed and tested for use in very-high-resolution (~ few km) convection-permitting simulations. In addition, the development of coupled regional earth system models is a new frontier area of research aimed at exploring the importance of air-sea-land interactions at regional scales.
A fundamental step toward a better understanding of DD techniques has been the inception of multimodel intercomparison studies. These were originally regional in nature, which prevented the application of common protocols and thus hindered the transfer of know-how across projects. However, this problem was addressed through the creation in the late 2000s of the Coordinated Regional Climate Downscaling Experiment (CORDEX), which provided a common simulation protocol across regions worldwide, representing a fundamental growth step for the DD community.
Different DD and ESD techniques have often been seen as competing with each other, and with AOGCMs. However, the realization is growing that they all represent complementary pieces of the puzzle of generating robust and credible climate services to address the needs and concerns of different regions, countries, and societal sectors. DD will continue to be increasingly used in the generation of actionable climate information, but a solid understanding of the advantages and limitations of DD is paramount to its use in this process.
Christopher K. Wikle
The climate system consists of interactions between physical, biological, chemical, and human processes across a wide range of spatial and temporal scales. Characterizing the behavior of components of this system is crucial for scientists and decision makers. There is substantial uncertainty associated with observations of this system as well as our understanding of various system components and their interaction. Thus, inference and prediction in climate science should accommodate uncertainty in order to facilitate the decision-making process. Statistical science is designed to provide the tools to perform inference and prediction in the presence of uncertainty. In particular, the field of spatial statistics considers inference and prediction for uncertain processes that exhibit dependence in space and/or time. Traditionally, this is done descriptively through the characterization of the first two moments of the process, one expressing the mean structure and one accounting for dependence through covariability.
Historically, there are three primary areas of methodological development in spatial statistics: geostatistics, which considers processes that vary continuously over space; areal or lattice methods, which consider processes defined on a countable discrete domain (e.g., political units); and spatial point patterns (or point processes), in which the locations of events in space are themselves treated as a random process. All of these methods have been used in the climate sciences, but the most prominent has been the geostatistical methodology. This methodology was developed independently in geology and in meteorology and provides a way to do optimal prediction (interpolation) in space as well as to facilitate parameter inference for spatial data. These methods rely strongly on Gaussian process theory, which is increasingly of interest in machine learning. These methods are common in the spatial statistics literature, but much development is still being done in the area to accommodate more complex processes and “big data” applications. Newer approaches are based on restricting models to neighbor-based representations or reformulating the random spatial process in terms of a basis expansion. There are many computational and flexibility advantages to these approaches, depending on the specific implementation. Complexity is also increasingly being accommodated through the use of the hierarchical modeling paradigm, which provides a probabilistically consistent way to decompose the data, process, and parameters corresponding to the spatial or spatio-temporal process.
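The core geostatistical operation of optimal spatial prediction can be sketched as a Gaussian-process (simple kriging) interpolator in one dimension. Everything here is an illustrative assumption: the squared-exponential covariance, the unit length scale, the tiny nugget for numerical stability, and the synthetic observations.

```python
import numpy as np

def sqexp_cov(a, b, length=1.0):
    """Squared-exponential (Gaussian) covariance between 1-D location sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)

# Synthetic observations of a zero-mean spatial process at scattered sites
x_obs = np.array([0.0, 1.0, 2.5, 4.0])
y_obs = np.sin(x_obs)

# Optimal (kriging) prediction at new locations: weight the data by the
# inverse data covariance, then project with the cross-covariance
x_new = np.linspace(0.0, 4.0, 9)
K = sqexp_cov(x_obs, x_obs) + 1e-8 * np.eye(len(x_obs))  # nugget for stability
weights = np.linalg.solve(K, y_obs)
y_pred = sqexp_cov(x_new, x_obs) @ weights
```

The same linear algebra, with a covariance estimated from data and a mean structure added, underlies practical kriging of climate fields; the neighbor-based and basis-expansion approaches mentioned above exist largely to avoid forming and solving this dense system when the number of observations is very large.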
Perhaps the biggest challenge in modern applications of spatial and spatio-temporal statistics is to develop methods that are flexible yet can account for the complex dependencies between and across processes, account for uncertainty in all aspects of the problem, and still be computationally tractable. These are daunting challenges, yet it is a very active area of research, and new solutions are constantly being developed. New methods are also being rapidly developed in the machine learning community, and these methods are increasingly more applicable to dependent processes. The interaction and cross-fertilization between the machine learning and spatial statistics community is growing, which will likely lead to a new generation of spatial statistical methods that are applicable to climate science.
Benjamin Mark Sanderson
Long-term planning for many sectors of society—including infrastructure, human health, agriculture, food security, water supply, insurance, conflict, and migration—requires an assessment of the range of possible futures which the planet might experience. Unlike short-term forecasts, for which validation data exist for comparing forecast to observation, long-term forecasts have almost no validation data. As a result, researchers must rely on supporting evidence to make their projections. A review of methods for quantifying the uncertainty of climate predictions is given. The primary tools for quantifying these uncertainties are climate models, which attempt to represent all the processes that are important in climate change. However, neither the construction nor the calibration of climate models is perfect, and therefore the uncertainties due to model errors must also be taken into account in the uncertainty quantification.
Typically, prediction uncertainty is quantified by generating ensembles of solutions from climate models to span possible futures. For instance, initial condition uncertainty is quantified by generating an ensemble of initial states that are consistent with available observations and then integrating the climate model starting from each initial condition. A climate model is itself subject to uncertain choices in modeling certain physical processes. Some of these choices can be sampled using so-called perturbed physics ensembles, whereby uncertain parameters or structural switches are perturbed within a single climate model framework. For a variety of reasons, there is a strong reliance on so-called ensembles of opportunity, which are multi-model ensembles (MMEs) formed by collecting predictions from different climate modeling centers, each using a potentially different framework to represent relevant processes for climate change. The most extensive collection of these MMEs is associated with the Coupled Model Intercomparison Project (CMIP). However, the component models have biases, simplifications, and interdependencies that must be taken into account when making formal risk assessments. Techniques and concepts for integrating model projections in MMEs are reviewed, including differing paradigms of ensembles and how they relate to observations and reality. Aspects of these conceptual issues then inform the more practical matters of how to combine and weight model projections to best represent the uncertainties associated with projected climate change.
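One practical matter raised above, combining and weighting multi-model projections, can be sketched in a few lines. The weighting rule shown (weights inversely proportional to squared historical bias) is only one simple and much-debated choice, and every number below is invented for illustration.

```python
import numpy as np

# Hypothetical projections from four models and their historical bias
projections = np.array([2.1, 2.8, 3.4, 2.5])  # e.g. warming by 2100 (degC)
hist_bias = np.array([0.2, 0.5, 1.0, 0.1])    # |model - obs| over a past period

# Illustrative weighting choice: down-weight models with larger past bias;
# the small constant keeps a near-perfect model from dominating entirely
w = 1.0 / (hist_bias**2 + 0.1)
w /= w.sum()
weighted_mean = float(np.sum(w * projections))
```

Real weighting schemes must also confront the interdependency problem noted above: two near-duplicate models would each receive full weight here, which is one reason model genealogy is considered in formal MME combination methods.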