Global climate models (GCMs) are fundamental tools for weather forecasting and for climate prediction at different time scales, from intraseasonal prediction to climate change projections. Their design allows GCMs to simulate the global climate adequately, but not to skillfully simulate local or regional climates. Consequently, downscaling and bias correction methods are increasingly needed and applied to generate useful local and regional climate information from coarse GCM output.
Empirical-statistical downscaling (ESD) methods generate climate information at the local scale, or at a finer resolution than that achieved by GCMs, by means of empirical or statistical relationships between large-scale atmospheric variables and the local observed climate. As a counterpart approach, dynamical downscaling is based on regional climate models that simulate regional climate processes at higher spatial resolution, using GCM fields as initial and boundary conditions.
Various ESD methods can be classified according to different criteria, depending on their approach, implementation, and application. In general terms, ESD methods can be categorized into subgroups that include transfer functions or regression models (either linear or nonlinear), weather generators, and weather typing methods and analogs. Although these methods fall into different categories, they can also be combined to generate more sophisticated downscaling methods. In the last group, weather typing and analogs, methods relate the occurrence of particular weather classes to local and regional weather conditions. In particular, the analog method is based on finding atmospheric states in the historical record that are similar to the atmospheric state on a given target day; the corresponding historical local weather conditions are then used to estimate local weather conditions on the target day.
The analog method is a relatively simple technique that has been used extensively as a benchmark in statistical downscaling applications. Easy to construct and applicable to any predictand variable, it has been shown to perform as well as other, more sophisticated methods. These attributes have inspired its application in diverse studies around the world exploring its ability to simulate different characteristics of regional climates.
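The analog procedure described above reduces to a nearest-neighbor search. The following minimal sketch illustrates it with entirely synthetic arrays; the Euclidean distance metric, the flattened predictor fields, and the small toy library are illustrative assumptions, not any particular published implementation:

```python
import numpy as np

def analog_downscale(target_field, library_fields, library_local_obs, k=1):
    """Estimate a local value for a target day by the analog method.

    target_field:      1-D array, large-scale predictor pattern of the target day
                       (e.g., flattened sea-level-pressure anomalies)
    library_fields:    2-D array (n_days, n_gridpoints) of historical patterns
    library_local_obs: 1-D array (n_days,) of local observations (e.g., precipitation)
    k:                 number of analog days; the estimate is their mean
    """
    # Distance between the target pattern and every historical pattern
    dists = np.linalg.norm(library_fields - target_field, axis=1)
    nearest = np.argsort(dists)[:k]           # indices of the k closest analogs
    return library_local_obs[nearest].mean()  # local estimate from those days

# Toy library: 5 historical days, 3 grid points, plus station precipitation (mm)
library = np.array([[0., 0., 0.],
                    [1., 1., 1.],
                    [2., 2., 2.],
                    [3., 3., 3.],
                    [4., 4., 4.]])
local_precip = np.array([0.0, 1.0, 5.0, 9.0, 12.0])
print(analog_downscale(np.array([2.1, 1.9, 2.0]), library, local_precip))  # → 5.0
```

In practice the library fields are usually anomalies of one or more predictor variables over a regional domain, and the distance metric and number of analogs are tuning choices.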
Article
S.C. Pryor and A.N. Hahmann
Winds within the atmospheric boundary layer (i.e., near Earth’s surface) vary across a range of scales, from a few meters and sub-second timescales (i.e., the scales of turbulent motions) to extremely large and long-period phenomena (i.e., the primary circulation patterns of the global atmosphere). Winds redistribute momentum and heat, and short- and long-term predictions of wind characteristics have applications in a number of socioeconomic sectors (e.g., engineering infrastructure). Despite its importance, atmospheric flow (i.e., wind) has been subject to less research within the climate downscaling community than variables such as air temperature and precipitation. However, there is a growing recognition that wind storms are the single biggest source of “weather-related” insurance losses in Europe and North America in the contemporary climate, and that possible changes in wind regimes and intense wind events as a result of global climate non-stationarity are of importance to a variety of potential climate change feedbacks (e.g., emission of sea spray into the atmosphere), ecological impacts (such as wind throw of trees), and a number of other socioeconomic sectors (e.g., transportation infrastructure and operation, electricity generation and distribution, and structural design codes for buildings). There are a number of specific challenges inherent in downscaling wind, including, but not limited to, the fact that it has both magnitude (wind speed) and orientation (wind direction). Further, for most applications it is necessary to accurately downscale the full probability distribution of values at short timescales (e.g., hourly), including extremes, whereas the mean wind speed averaged over a month or year is of little utility. Dynamical, statistical, and hybrid approaches have been developed to downscale different aspects of the wind climate, but they carry large uncertainties in high-impact aspects of the wind (e.g., extreme wind speeds and gusts).
The wind energy industry is a key application area for downscaled wind parameters and has been a major driver of new techniques to increase their fidelity. Many opportunities remain to refine existing downscaling methods, to develop new approaches that improve the skill with which the spatiotemporal scales of wind variability are represented, and to devise new ways of evaluating skill in the context of wind climates.
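The point above that the full probability distribution, not the mean, must be evaluated can be illustrated with a short sketch. Both hourly series below are synthetic stand-ins (Weibull draws, a common idealization for wind speeds); the scale and shape values are arbitrary assumptions chosen only to show how a small mean bias can coexist with large tail differences:

```python
import numpy as np

rng = np.random.default_rng(42)

# One synthetic year (8,760 h) of "observed" and "downscaled" hourly wind speeds (m/s).
obs = 8.0 * rng.weibull(2.0, size=8760)    # pseudo-observations
model = 8.5 * rng.weibull(1.8, size=8760)  # hypothetical downscaled series

# Comparing only the annual means can hide large errors in the tails...
print("mean bias (m/s):", model.mean() - obs.mean())

# ...so compare the full distribution, with emphasis on the upper tail.
for q in (50, 90, 98, 99.9):
    print(f"P{q}: obs={np.percentile(obs, q):.1f}  model={np.percentile(model, q):.1f}")
```

With real data the same quantile-by-quantile comparison is typically applied per direction sector and per season, since wind direction and seasonality matter for most applications.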
Article
Aristita Busuioc
Empirical-statistical downscaling (ESD) models use statistical relationships to infer local climate information from large-scale climate information produced by global climate models (GCMs), as an alternative to the dynamical downscaling provided by regional climate models (RCMs). Among the various statistical downscaling approaches, nonlinear methods are mainly used to construct downscaling models for local variables that strongly deviate from linearity and normality, such as daily precipitation. These approaches are also appropriate for downscaling extreme rainfall.
There are nonlinear downscaling techniques of various complexities. The simplest is the analog method, which originated in the late 1960s from the need to obtain local details in short-term weather forecasts of various variables (air temperature, precipitation, wind, etc.). Its first application as a statistical downscaling approach in climate science came in the late 1990s. More sophisticated statistical downscaling models have been developed based on a wide range of nonlinear functions. Among them, the artificial neural network (ANN) was the first nonlinear regression–type method used as a statistical downscaling technique in climate science, also in the late 1990s. The ANN was inspired by the human brain and was used early on in artificial intelligence and robotics. The impressive development of machine learning algorithms that can automatically extract information from vast amounts of data, usually through nonlinear multivariate models, has contributed to improvements in ANN downscaling models and to the development of new machine learning–based downscaling models that overcome some ANN drawbacks, such as support vector machine and random forest techniques. Mixed models combining various machine learning downscaling approaches maximize the downscaling skill in local climate change applications, especially for extreme rainfall indices.
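An ANN used as a nonlinear transfer function amounts to regressing a local predictand on large-scale predictors. A minimal sketch, with entirely synthetic data (the exponential predictor–predictand relation, the network size, and all parameter values are assumptions for illustration; real applications use reanalysis or GCM fields as predictors and station series as predictands):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic training data: one large-scale predictor (e.g., a circulation index)
# related nonlinearly to a local predictand (e.g., a daily precipitation amount).
X = rng.uniform(-2, 2, size=(600, 1))
y = np.exp(0.8 * X[:, 0]) + 0.1 * rng.standard_normal(600)  # nonlinear signal + noise

# A small feed-forward ANN acting as the nonlinear transfer function.
ann = MLPRegressor(hidden_layer_sizes=(32, 32), activation="tanh",
                   max_iter=5000, random_state=0)
ann.fit(X, y)
print("training R^2:", ann.score(X, y))
```

In a real downscaling setting the fit would be validated on independent data, and predictors would be screened for physical relevance and for being well simulated by the driving GCM.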
Other nonlinear statistical downscaling approaches involve conditional weather generators, which combine a standard weather generator (WG) with a separate statistical downscaling model by conditioning the WG parameters on large-scale predictors via a nonlinear approach. The most popular ways to condition the WG parameters are the weather-type approach and generalized linear models.
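The weather-type conditioning just mentioned can be sketched very simply: each day's large-scale state is classified into a discrete type, and the WG parameter (here a wet-day probability) depends on that type. The type labels and probabilities below are invented for illustration:

```python
import numpy as np

# Assumed wet-day probability for each large-scale weather type (illustrative values).
P_WET = {"zonal": 0.7, "blocked": 0.2, "mixed": 0.4}

def conditional_occurrence(weather_types, rng):
    """Generate a wet/dry sequence whose occurrence probability is
    conditioned on the large-scale weather type of each day."""
    return np.array([rng.random() < P_WET[wt] for wt in weather_types])

rng = np.random.default_rng(7)
# A synthetic classified sequence of daily weather types
types = rng.choice(["zonal", "blocked", "mixed"], size=10000)
wet = conditional_occurrence(types, rng)
# The simulated wet-day frequency approaches the mean of the conditional probabilities
print("wet-day frequency:", wet.mean())
```

In a full conditional WG, the amount distribution and other parameters would also vary by type, and under climate change the projected shift in weather-type frequencies drives the change in the local series.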
This article discusses various aspects of nonlinear statistical downscaling approaches, their strengths and weaknesses, and comparisons with linear statistical downscaling models. Proper validation of nonlinear statistical downscaling models is an important issue, allowing selection of an appropriate model to obtain credible information on local climate change. The selection of large-scale predictors, the models' ability to reproduce historical trends and extreme events, and the uncertainty in future downscaled changes are important issues to be addressed.
A better estimate of the uncertainty in downscaled climate change projections can be achieved by using ensembles of multiple driving GCMs and accounting for their ability to simulate the predictors used as input to the downscaling models. Comparing statistically downscaled climate change signals with those derived from dynamical downscaling driven by the same global models, including a thorough validation of the RCMs, gives a measure of the reliability of downscaled regional climate changes.
Article
Lars-Otto Reiersen and Robert W. Corell
This overview of climate observation, monitoring, and research for the Arctic region outlines the key elements essential to an enhanced understanding of the unprecedented climate change in the region and its global influences. The first recorded observations of sea ice extent around Svalbard date back to the whaling activities around 1600. Over the following 300 years there were only sporadic and inadequate observations of climate and sea ice from explorers seeking a northern sea route to Asia or attempting to reach the North Pole. Around 1900 there were a few fixed meteorological stations in the circumpolar North. During the Second World War and the subsequent Cold War, the observation network expanded significantly due to military interests. Since the 1970s the use of satellites has improved climate and meteorological observations of Arctic areas, and advancements in marine observations (beneath the sea surface and within oceanic sediments) have contributed to a much improved network of climate and meteorological variables.
Climate change in the Arctic and its possible effects within the Arctic and on the global climate, such as extreme weather and sea level rise, were first reported in the ACIA 2005 report. Since then there have been numerous climate-related assessments based on data from the Arctic and on ongoing processes within the Arctic that are linked to global systems.
Article
H.E. Markus Meier and Sofia Saraiva
In this article, the concepts and background of regional climate modeling of the future Baltic Sea are summarized and state-of-the-art projections, climate change impact studies, and challenges are discussed. The focus is on projected oceanographic changes in future climate. However, as these changes may have a significant impact on biogeochemical cycling, nutrient load scenario simulations in future climates are briefly discussed as well. The Baltic Sea is special compared to other coastal seas as it is a tideless, semi-enclosed sea with large freshwater and nutrient supply from a partly heavily populated catchment area and a long response time of about 30 years, and as it is, in the early 21st century, warming faster than any other coastal sea in the world. Hence, policymakers request the development of nutrient load abatement strategies in future climate. For this purpose, large ensembles of coupled climate–environmental scenario simulations based upon high-resolution circulation models were developed to estimate changes in water temperature, salinity, sea-ice cover, sea level, oxygen, nutrient, and phytoplankton concentrations, and water transparency, together with uncertainty ranges. Uncertainties in scenario simulations of the Baltic Sea are considerable. Sources of uncertainties are global and regional climate model biases, natural variability, and unknown greenhouse gas emission and nutrient load scenarios. Unknown early 21st-century and future bioavailable nutrient loads from land and atmosphere and the experimental setup of the dynamical downscaling technique are perhaps the largest sources of uncertainties for marine biogeochemistry projections. The high uncertainties might potentially be reducible through investments in new multi-model ensemble simulations that are built on better experimental setups, improved models, and more plausible nutrient loads. 
The development of community models for the Baltic Sea region with improved performance and common coordinated experiments of scenario simulations is recommended.
Article
Filippo Giorgi
Dynamical downscaling has been used for about 30 years to produce high-resolution climate information for studies of regional climate processes and for the production of climate information usable for vulnerability, impact assessment, and adaptation studies. Three dynamical downscaling tools are available in the literature: high-resolution global atmospheric models (HIRGCMs), variable-resolution global atmospheric models (VARGCMs), and regional climate models (RCMs). These techniques share their basic principles but have different underlying assumptions, advantages, and limitations. They have undergone tremendous growth in recent decades, especially RCMs, to the point that they are considered fundamental tools in climate change research. Major intercomparison programs have been implemented over the years, culminating in the Coordinated Regional climate Downscaling EXperiment (CORDEX), an international program aimed at producing fine-scale regional climate information based on multi-model and multi-technique approaches. These intercomparison projects have led to an increasing understanding of fundamental issues in climate downscaling and of the potential of downscaling techniques to provide actionable climate change information. Yet some open issues remain, most notably that of the added value of downscaling, which is the focus of substantial current research. One of the primary future directions in dynamical downscaling is the development of fully coupled regional earth system models including multiple components, such as the atmosphere, the oceans, the biosphere, and the chemosphere. Within this context, dynamical downscaling models offer optimal testbeds to incorporate the human component in a fully interactive way. Another main future research direction is the transition to models running at convection-permitting scales, on the order of 1–3 km, for climate applications.
This is a major modeling step that will require substantial development in research and infrastructure, and it will allow the description of local-scale processes and phenomena within the climate change context. Especially in view of these future directions, climate downscaling will increasingly constitute a fundamental interface between the climate modeling and end-user communities in support of climate service activities.
Article
Shuiqing Yin and Deliang Chen
Weather generators (WGs) are stochastic models that can generate synthetic climate time series of unlimited length with statistical properties similar to those of observed time series for a location or an area. WGs can infill missing data, extend the length of climate time series, and generate meteorological conditions for unobserved locations. Since the 1990s, WGs have become an important spatial-temporal statistical downscaling methodology and have played an increasingly important role in climate change impact assessment. Although the majority of existing WGs have focused on simulating precipitation at a single site, more and more WGs have been developed, for daily and sub-daily scales, that account for correlations among multiple sites and multiple variables, including precipitation and non-precipitation variables such as temperature, solar radiation, wind, humidity, and cloud cover. Various parametric, semi-parametric, and nonparametric WGs have shown the ability to represent the mean, variance, and autocorrelation characteristics of climate variables at different scales. Two main methodologies have been developed for applications under a changing climate: the change factor approach and WGs conditioned on large-scale dynamical and thermodynamic weather states. However, the rationality and validity of the assumptions underlying both methodologies need to be carefully checked before they can be used to project future climate change at the local scale. Further, the simulation of extreme values by existing WGs needs to be improved. WGs that assimilate multisource observations from ground stations, reanalysis, satellite remote sensing, and weather radar for the continuous simulation of two-dimensional climate fields, based on mixed physics-based and stochastic approaches, deserve further effort. An intercomparison project on a large ensemble of WG methods may be helpful for the improvement of WGs.
Due to the applied nature of WGs, their future development also requires inputs from decision-makers and other relevant stakeholders.
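The single-site precipitation WGs that dominate the field typically pair a two-state Markov chain for wet/dry occurrence with a skewed distribution for wet-day amounts (the classic Richardson-type structure). A minimal sketch, with all parameter values invented for illustration:

```python
import numpy as np

def generate_precip(n_days, p_wd, p_ww, shape, scale, rng):
    """First-order two-state Markov chain for wet/dry occurrence,
    with gamma-distributed amounts on wet days.

    p_wd: P(wet today | dry yesterday);  p_ww: P(wet today | wet yesterday)
    shape, scale: gamma parameters of wet-day amounts (mean = shape * scale)
    """
    precip = np.zeros(n_days)
    wet = False
    for t in range(n_days):
        p_wet = p_ww if wet else p_wd          # transition probability for today
        wet = rng.random() < p_wet
        if wet:
            precip[t] = rng.gamma(shape, scale)  # draw a wet-day amount (mm)
    return precip

rng = np.random.default_rng(1)
series = generate_precip(365 * 100, p_wd=0.25, p_ww=0.65,
                         shape=0.8, scale=8.0, rng=rng)
wet_days = series > 0
print("wet-day frequency:", wet_days.mean())        # ≈ p_wd / (p_wd + 1 - p_ww)
print("mean wet-day amount (mm):", series[wet_days].mean())  # ≈ shape * scale
```

In practice the four parameters are estimated per month or season from station records, and the change factor or conditional approaches discussed above perturb them for future-climate simulations.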