Downscaling Climate Information

Rasmus Benestad, Norwegian Meteorological Institute

Summary

What are the local consequences of a global climate change? This question is important for proper handling of risks associated with weather and climate. It also tacitly assumes that there is a systematic link between conditions taking place on a global scale and local effects. This dependency of the local climate on the global picture is the backbone of downscaling; however, it is perhaps easiest to explain the concept of downscaling in climate research if we start by asking why it is necessary.

Global climate models are our best tools for computing future temperature, wind, and precipitation (or other climatological variables), but their limitations prevent them from calculating local details for these quantities. It is simply not adequate to interpolate from model results. The models are, however, able to predict large-scale features, such as circulation patterns, the El Niño Southern Oscillation (ENSO), and the global mean temperature. The local temperature and precipitation are nevertheless related to conditions taking place over a larger surrounding region as well as to local geographical features, and the same holds, in general, for other weather elements.

Downscaling makes use of systematic dependencies between local conditions and large-scale ambient phenomena in addition to including information about the effect of the local geography on the local climate. The application of downscaling can involve several different approaches. This article will discuss various downscaling strategies and methods and will elaborate on their rationale, assumptions, strengths, and weaknesses.

One important issue is the presence of spontaneous natural year-to-year variations that are not necessarily directly related to the global state, but are internally generated and superimposed on the long-term climate change. These variations typically involve phenomena such as ENSO, the North Atlantic Oscillation (NAO), and the Southeast Asian monsoon, which are nonlinear and non-deterministic.

We cannot predict the exact evolution of non-deterministic natural variations beyond a short time horizon. It is nevertheless possible to estimate probabilities for their future state based, for instance, on projections with models run many times with slightly different set-ups, and thereby to get some information about the likelihood of future outcomes.

When it comes to downscaling and predicting regional and local climate, it is important to use predictions from many global climate models. Another important point is to apply proper validation to make sure the models give skillful predictions.

For some downscaling approaches such as regional climate models, there usually is a need for bias adjustment due to model imperfections. This means the downscaling doesn’t get the right answer for the right reason. Some of the explanations for the presence of biases in the results may be different parameterization schemes in the driving global and the nested regional models.

A final underlying question is: What can we learn from downscaling? The context for the analysis is important, as downscaling is often used to find answers to some (implicit) question and can be a means of extracting most of the relevant information concerning the local climate. It is also important to include discussions about uncertainty, model skill or shortcomings, model validation, and skill scores.

Subjects

  • Climate Systems and Climate Dynamics
  • Statistics
  • Future Climate Change Scenarios
  • Climate Impact: Extreme Events

What Is Downscaling?

The rain that falls over any location is connected to weather phenomena such as frontal systems, clouds, and wind patterns that stretch over a larger region. It is influenced by the amount of moisture in the air, which is linked to how water evaporates from the ocean, lakes, and land surfaces. The winds affect the areas where it rains. Hence, there is a link between the large scales—the conditions that stretch over a larger space—and the local rain. Likewise, local temperature is linked to other large-scale conditions.

Downscaling tries to make use of the information we have on the dependency between large and local scales. Experience provides clear examples of a dependency between large-scale conditions and local weather elements such as rainfall and temperature. For instance, the rainfall in many places is connected with the El Niño Southern Oscillation phenomenon (ENSO).

The large-scale wind pattern associated with the North Atlantic Oscillation (NAO) influences both temperature and rainfall (Figure 1 illustrates the connection between the NAO and rainfall) over Europe, and the South-East Asian Monsoon is linked with local rainfall over India, Bangladesh, and Myanmar.

Figure 1. The correlation between the December–February mean precipitation amount and the North Atlantic Oscillation index (NAOI) illustrates how the rainfall is influenced by the large-scale wind patterns. A high NAOI is associated with enhanced westerly winds and more rain in northern Europe, while a low NAOI is linked with wet winters in southern Europe.

Source: The figure was generated with the R-package ‘esd’ and the EOBS data (http://www.ecad.eu/download/ensembles/ensembles.php).

Furthermore, a global climate change has consequences for local climates around the globe. The principle of downscaling was illustrated by Victor Starr in 1942, in terms of weather forecasting, by the following words:

The general problem of forecasting weather conditions may be subdivided conveniently into two parts. In the first place, it is necessary to predict the state of motion of the atmosphere in the future; and, secondly, it is necessary to interpret this expected state of motion in terms of the actual weather which it will produce at various localities. The first of these problems is essentially of a dynamic nature, inasmuch as it concerns itself with the mechanics of the motion of a fluid. The second problem involves a large number of details because, under exactly similar conditions of motion, different weather types may occur, depending upon the temperature of the air involved, the moisture content of the air, and a host of “local influences”.

(Starr, 1942, p. 1)

Downscaling can follow different approaches, which in empirical-statistical downscaling can be described as (a) a physical connection between synoptic meteorological states (the prevailing Großwetterlage) and some local/regional observable condition; or (b) a statistical problem of climate inversion (i.e., deriving probabilities of regional or local phenomena conditional upon a large-scale state). While the former assumes an instantaneous connection between the scales (Starr, 1942), the latter implies a link between statistical characteristics (Kim, Chang, Baker, Wilks, & Gates, 1984). The latter approach in downscaling may also involve using stochastic models conditioned by some large-scale information, generating, for instance, daily precipitation time series (Busuioc & von Storch, 2003).

Why Do We Need Downscaling?

Why do we not get the local temperature and rainfall from the global climate models? Global climate models (GCMs) are based on the atmospheric models used in numerical weather prediction, extended with schemes to account for the effect of gases that affect the radiative energy transfer. Over time, the GCMs have also included oceanic and other components. One big challenge has been the need to run the models over many years, to generate the statistical samples of weather states needed in climate research.

The GCMs are designed to simulate the large-scale flow of energy and mass, but not to calculate local details. One reason is that the earth is described in terms of a grid, and the size of each grid box is typically approximately 100 kilometers. It is currently not possible to use a finer grid, because it would require more computer resources than are presently available.

The output of the GCMs is further limited by an interruption of the energy cascade when it reaches a spatial scale equivalent to twice the minimum grid-point separation, which introduces significant errors at the smallest scales. The effect is seen in the energy spectra, which show irregular behavior at the smallest scales (Pielke, 1991).

Another reason is that the climate models calculate the characteristics of phenomena such as clouds very crudely, partly because we have limited knowledge about clouds. The modeling is also complicated by the fact that the physical processes to be simulated span phenomena from particles and cloud drops, sized in micrometers, to circulation patterns that stretch over thousands of kilometers. Furthermore, digital computers cannot describe continuous numbers perfectly, but need to make some shortcuts and use discrete numbers instead. The models describing the atmosphere and oceans also make a number of approximations in order to be efficient. It has been acknowledged that the GCMs have a minimum skillful spatial scale, which requires the use of downscaling (von Storch, Zorita, & Cubasch, 1993).

In summary, the global climate models are able to predict the characteristics of large-scale conditions reasonably well. They tend to reproduce global warming and features that look like observed phenomena such as El Niño, the westerlies, the North Atlantic Oscillation, the Hadley Cell, the jet stream, major ocean currents, and so on. But they do not provide good estimates for the temperature or rainfall in a particular valley.

The History of Downscaling

The origin of downscaling is connected with the early development of numerical weather forecasting, which was first carried out on digital computers in the 1950s, although its use was described by Starr (1942) before the advent of digital computers. The Joint Meteorology Committee in the United States decided, for the first time in 1955, to go operational with a weather model developed at Princeton, according to Shuman (1989), but there are also publications on the use of an empirical-statistical downscaling (ESD) technique, known as the “analog method,” in weather prediction since 1951, and in seasonal forecasting since 1988 (Matulla et al., 2008). The term downscaling was probably first used in a study by von Storch, Zorita, and Cubasch (1991).

The early numerical models of the atmosphere were only able to compute the atmospheric motion on a scale of hundreds of kilometers and for a limited number of vertical levels in the atmosphere. From 1965, however, a downscaling approach known as Perfect Prog (Klein, Lewis, & Enger, 1959) was used in numerical weather forecasting at the National Weather Service to objectively forecast maximum and minimum temperature (Baker, 1982). It was replaced by another scheme called model output statistics (MOS) in 1973.

The downscaling concept later widened to include estimating conditional probabilities by making use of the idea that regional/local states are mapped in a low-dimensional space, which is a function of a few “predictors.” In a sense, the approach can be compared to Bayesian statistics, where the probabilities are conditioned by some “priors” (predictors). Different ways of choosing predictors have been proposed, such as the analog or kriging methods (Biau, Zorita, von Storch, & Wackernagel, 1999; von Storch, 1999).

In 1971, the U.S. National Meteorological Center (NMC) started using a Limited Area Fine Mesh (LFM) model to refine the results from a global model with a 53×57 point grid that computed atmospheric conditions at three different heights. The LFM model was similar to the global model, but with half the mesh size and time step and a smaller model domain. The development of limited area models (LAMs) continued, and in 1978, a moveable fine-mesh model was introduced to forecast hurricanes. A nested grid model called Regional Analysis and Forecast Systems (RAFS) was set up in 1985, by the NMC, which also made use of a multivariate optimum interpolation.

In Europe, a research program called HIRLAM (HIgh Resolution Limited Area Model) was established in 1985, to develop and maintain a numerical short-range weather forecasting system for operational use by the participating meteorological institutes. Nested high-resolution models were also used to forecast urban air quality in the latter part of the 1990s (Baklanov, Rasmussen, Fay, Berge, & Finardi, 2002).

Downscaling in Climate Research

The use of empirical-statistical downscaling in climate research became widespread in the early 1990s, in connection with growing concerns about global warming and its consequences for water resources (Wilby & Wigley, 1997). For instance, a relationship between the mean sea-level pressure pattern and local rainfall formed the basis for a statistical downscaling analysis, by von Storch, Zorita, and Cubasch (1993), of rainfall over the Iberian peninsula in Europe.

Nested high-resolution models, too, were being adapted for climate research, and regional climate models (RCMs) were developed, such as the Regional Climate Model system (RegCM). RegCM was constructed at the National Center for Atmospheric Research (NCAR), first appeared in 1989, and has undergone major updates since then.

The concept of downscaling, that is, deriving probabilities of regional or local phenomena conditional upon a large-scale state, is not necessarily a given in terms of boundary-value RCMs, and a genuine “large → smaller scale” link is only ensured if a large-scale constraint is implemented (Yoshimura & Kanamitsu, 2008). Indeed, RCM modeling was not originally framed as a downscaling problem.

In Europe during the 1990s, the Dutch Meteorological Institute (KNMI) and the Danish Meteorological Institute (DMI) built the Regional Atmospheric Climate Model (RACMO) based on the High Resolution Limited Area Model (HIRLAM). Another related regional climate model was HIRHAM, established in 1992 (Christensen, Christensen, Lopez, van Meijgaard, & Botzet, 1996), which was based on a subset of the regional HIRLAM and global ECHAM models (Roeckner et al., 2003), combining the dynamics of the former with the parameterization schemes of the latter. Other regional models include the Weather Research and Forecasting (WRF) model, HadRCM3 at the U.K. Hadley Centre, and RCA from the Swedish Rossby Centre. A more complete list of RCMs is provided by CORDEX, and U.S. federally funded science organizations provide a more detailed overview of some RCMs used in the United States.

One of the first international projects on downscaling was the European project “Prediction of Regional scenarios and Uncertainties for Defining European Climate change risks and Effects” (PRUDENCE) (Christensen, Carter, Rummukainen, & Amanatidis, 2007) over the period November 1, 2001–October 31, 2004. Its aim was to use RCMs to derive high-resolution climate change scenarios for Europe, and one of its legacies was establishing a common design of the model simulations, using more than one downscaling model (ensemble), analyses of climate model performance, and evaluation and intercomparison of simulated climate changes. The PRUDENCE effort represents the first comprehensive, continental-scale intercomparison and evaluation of high-resolution climate models and their applications. It brought together climate modeling, impact research, and social sciences expertise on climate change, as it also addressed impacts on water resources, agriculture, ecosystems, energy, and transport.

While the downscaling activity within the PRUDENCE project was limited to the RCM community, it was connected with two complementary European projects, “Statistical and Regional dynamical Downscaling of Extremes for European regions” (STARDEX) and “Modeling the Impact of Climate Extremes” (MICE). The downscaling activity in both PRUDENCE and STARDEX included an investigation of extreme weather events, and both were hence connected to the MICE project.

STARDEX intended to carry out a systematic inter-comparison of statistical and dynamical downscaling methods and to evaluate their abilities to represent extremes. It was part of a European effort to explore future changes in extreme events in response to global warming. One of the legacies of STARDEX was a set of standard extremes indices; however, STARDEX did not succeed in some of its ambitions, namely to identify more robust techniques for the production of future scenarios of extremes for European case-study regions for the end of the 21st century.

One limitation of the European projects PRUDENCE, STARDEX, and MICE was that they did not embrace the wider downscaling communities and did not include the wider range of expertise such as the statistics community. Their efforts were succeeded by the ENSEMBLES project (van der Linden & Mitchell, 2009), an EU FP6 integrated research project, running from 2004 to 2009, that to a large extent included the same partners and, like previous EU projects, included only part of the European downscaling community.

ENSEMBLES produced both observational data sets, such as the ECA&D and its gridded counterpart EOBS, and large archives of downscaled results. The downscaling in ENSEMBLES was mainly concentrated on RCMs, and ESD was not fully utilized or integrated. Its strength, however, was derived from the use of several different global and regional climate models, and the different combinations of running different RCMs with different GCMs were described in terms of a “GCM-RCM matrix.” The ambitious aim of ENSEMBLES was to develop an ensemble climate prediction system on seasonal to centennial time scales to quantify and reduce the uncertainty in modeling climate, and to link the outputs of the ensemble prediction system to a range of applications. It is probably fair to say that this goal has not yet been achieved.

In North America, the North American Regional Climate Change Assessment Program (NARCCAP) was established in parallel to the European efforts to produce high-resolution climate change simulations for the North American region, and in 2004, it launched experiments 0.0 and 0.1 to compare (among other things) temperature and precipitation from the models with observations. NARCCAP was driven by the RCM community, and the value of its output was limited as it excluded empirical-statistical downscaling. One of NARCCAP’s main objectives was to investigate uncertainties in regional scale projections of future climate and to generate climate change scenarios for use in impacts research. Its first user workshop was held in 2008, at NCAR in Boulder, Colorado.

The Regional Climate Model Intercomparison Project (RMIP) (Fu et al., 2005) was established in Asia to examine and compare different climatological drivers to those of its American and European counterparts. The drivers in question included the Asian monsoon and the effect of the Tibetan Plateau on the large-scale flows crossing the Eurasian continent.

There has also been parallel downscaling work carried out in Africa, Iran, China, Japan, Latin America, Australia, and South Korea. Europe and North America, however, have involved the most concerted efforts. The strongest contribution to the progress in downscaling has been in countries with a good network of observations (Europe, North America, Australia, South Africa, Japan). However, the first common experiment set up within the CORDEX-ESD framework involved the CLARIS data set, mainly from Argentina. There have also been a number of downscaling projects connected to the Himalayan region, which feeds a large part of the world population with water.

Although these projects made some steps towards their goals, the questions about uncertainties and regional climate models are still unresolved, and downscaling is still a work in progress in the framework of the World Climate Research Programme (WCRP) “Coordinated Regional Climate Downscaling Experiment” (CORDEX).

Downscaling is used to describe regional co-variations associated with natural variability; it is also used to complement re-analyses of recent decades with regional components (e.g., Feser, Rockel, von Storch, Winterfeldt, & Zahn, 2011) and for assessing risks (Weisse et al., 2009).

Downscaling Skeptics

There have been some questions concerning the merit of downscaling (Pielke Sr & Wilby, 2012; Pielke Sr et al., in press; also see RealClimate.org), and the Intergovernmental Panel on Climate Change (IPCC) fifth assessment report (AR5) chose to take a critical view on downscaling:

Downscaling of global climate reconstructions and models has advanced to bring the climate data to a closer match for the temporal and spatial resolution requirements for assessing many regional impacts, and the application of downscaled climate data has expanded substantially since AR4. {21.3.3, 21.5.3} This information remains weakly coordinated, and current results indicate that high-resolution downscaled reconstructions of the current climate can have significant errors. The increase in downscaled data sets has not narrowed the uncertainty range. Integrating these data with historical change and process-based understanding remains an important challenge.

(Field et al., 2014, pp. 1137–1138)

An even harsher view, which questioned whether “regional projections are based on sound science,” was reported by Rosen (2010). That report also portrayed “dynamical downscaling methods, which use both statistics and the physics of climate to produce regional climate-change projections,” as a better alternative to purely statistical downscaling. However, this sentiment may be due to a selective view based on a few unfortunate examples of downscaling.

A number of skeptical claims are addressed later on, in the section “Fixing the Shortcomings of Downscaling,” and include issues such as bias adjustment and blind application (“black box” usage), where the chosen model is inappropriate in terms of the nature of the data. Pielke Sr. et al. (2012) were skeptical towards some uses of downscaling, which they referred to as “Type 4,” meaning downscaling for climate change (multidecadal global climate model prediction based on prescribed radiative forcing). They argued that Type 4 dynamical downscaling fails to improve accuracy beyond what could be achieved by interpolating global model predictions onto a finer-scale terrain or landscape map.

Reliable information is nevertheless needed on local scales by the impact, adaptation, and vulnerability (IAV) community (Disaster and Emergency Planning for Preparedness, Response, and Recovery), despite the merit of downscaling being viewed as limited by the authors of Chapter 21 of the fifth assessment in the IPCC’s second working group.

Why Has Downscaling Been So Difficult?

One reason for the lack of success in modeling regional climates may be due to the narrow focus on regional climate models rather than embracing the range of different methods that are available. Regional climate models and empirical-statistical downscaling are complementary in terms of their skills, and they have different strengths and weaknesses that are independent of each other (Benestad, 2011; Takayabu et al., 2016).

The downscaling community has made progress at different speeds. For instance, the EU project CLIPC still includes only regional climate model results, but aims to “bias adjust” them (see “Biases and Bias Adjustment”) so that they are more realistic. The post-processing in CLIPC is connected to the “Bias Correction Intercomparison Project” (BCIP), but it doesn’t address the core problem of downscaling and the reasons for the model shortcomings. Other recent European projects, such as SPECS, EUPORIAS, EU-Circle, and COST-VALUE, include both regional climate models and empirical-statistical downscaling.

The Downscaling Community Is Still Divided

Downscaling is still a young discipline, but there may also be some persisting prejudice against statistics-based approaches. As a consequence, the limited range of file formats of the Earth System Grid Federation (ESGF) is tailored towards global and regional climate model results, and does not accommodate the type of information that can be obtained from empirical-statistical downscaling. There are other aspects, such as probabilities, events, and storm tracks, for which conventions are lacking.

Ideally, downscaling is a process of adding information about the effect of the local geography on the local climate, an effect that is often well documented. From an analytics perspective, it will become possible to derive better information about extreme events, especially if ensembles of global climate models increase in size, their resolution is improved, the range of natural variability is better represented (Deser, Knutti, Solomon, & Phillips, 2012), and improved tools apply the latest statistical methods to attribute probabilities.

Downscaling and Big Data Analysis

The climate research community has generated large volumes of data, but has not used state-of-the-art statistical know-how when it comes to distilling relevant information from the archives. A Stats+Climate workshop was organized in Oslo, Norway, in October 2013, with the intention of getting statisticians more involved in climate research and making better use of their expertise in making sense of the data (Thorarinsdottir, Sillmann, & Benestad, 2014).

There is more information available than embedded in the global climate models, and the challenge is to separate the signal from noise and model biases. A reason for optimism is that the global climate models are extensions of weather models that are able to predict extreme weather events. One question, then, is whether downscaling can make use of the added information without distorting it and introducing spurious biases; a strategy for avoiding misspecification of local effects is to make use of various independent approaches, such as regional climate models and empirical-statistical downscaling, with different strengths and weaknesses.

Coordination of Downscaling and Progress

The attempt to move the research into new directions includes the expansion of the CORDEX program into CORDEX-ESD and CORDEX analysis, in addition to placing more emphasis on a “bottom-up” approach (Pielke Sr. et al., in press). There may also be benefits with a better dialogue between the GCM and the downscaling communities. The CORDEX program joined World Climate Research Programme’s (WCRP) coupled model intercomparison project (CMIP) initiatives in 2015, and downscaling has become a more visible activity within the world climate research community.

Downscaling and Climate Services

Downscaling offers a way to understand connections in nature and a way to make the most of available information. It is one of the activities that may add information to climate services that aim to assist decision makers concerned with climatic influences. Most decision makers have not accounted for climate change on a local scale, and often different sources provide conflicting numbers. One of the big questions is how to use downscaled results, a topic brought up at a CORDEX-ESD workshop in Cape Town, South Africa (2015). Often the user of climate information needs to know the actual future outcome, the sum of both a forced global climate change and natural internal variations associated with regional phenomena. Only ensembles of climate models are able to provide sufficient information about the range of possible outcomes (Deser et al., 2012). Different methods exist with different strengths and weaknesses. It is important to use more than one method to identify robust signals, but also to exclude poor methods that provide misleading results. Hence, model validation should also be an integral part of the downscaling exercise.

The Future of Downscaling

The downscaling community is presently wrestling with questions about where downscaling is heading in the future, as GCMs are moving to higher spatial resolutions (Palmer, 2014). For some questions, however, a spatial resolution of 4 km may not be sufficient, for example, for the simulation of summer-time convective rainfall and climate extremes (Prein et al., 2015).

However, both RCMs and GCMs should be regarded as tools for carrying out research, and their usefulness depends on the scientific questions that one attempts to address. Some of the outstanding questions are described in the WCRP “grand challenges.” Another future use for downscaling may be to address questions concerning climate change in future megacities.

The Idea Behind Downscaling, Loosely Speaking

Scales Are Central to Downscaling

Weather systems such as winds, fronts, and clouds often have a regional extent, as can be seen from photos taken from satellites (Figure 2).

Figure 2. It is possible to see weather systems from a photo taken from space, such as from a satellite. We can see that both clouds and snow-covered regions tend to extend over distances, influencing the local temperature, precipitation, wind, and other climate variables. Credit: NASA.

Many of these phenomena are connected to the dynamical nature of the atmosphere (e.g., prevailing circulation patterns) and wave phenomena, such as Rossby waves, whose spatial and temporal scales are properties that follow from the laws of physics. Scale is a concept that is at the heart of downscaling. It relates to the physical size of such phenomena or processes (e.g., the monsoon, ENSO, the NAO), not to be confused with the statistical term of the same name (the scale parameter). Downscaling in meteorology and climate usually makes use of the connection between mesoscales and synoptic (~1000 km) or larger scales.

The spatial scale associated with a phenomenon also has a scale in time, and the scales in the two dimensions (time and space) are linked in the description of weather and climate. The spatial and temporal scales are connected because large-scale conditions last longer and local processes involve more rapid fluctuations (the time scale being distance over flow velocity, as if a volume of air were to be replaced: T ~ L/V; for example, a synoptic system with L ~ 1000 km and V ~ 10 m/s has a time scale of roughly one day). If this connection between spatial and temporal scales can be assumed, then the problem of downscaling can be split into the downscaling of weather and the downscaling of climate. The former is often used in the climate change context, where an RCM and most ESD methods are used to downscale large-scale fields on an hour-by-hour or day-by-day basis. Downscaling climate involves estimating some parameter describing the weather statistics over some time interval (statistical sample).

The Difference Between Weather and Climate

There is an important difference between the two concepts weather and climate. Weather can be defined as an instantaneous description of the atmosphere at a given location and time (the situation that we see in the photo taken from a satellite in Figure 2). Climate, on the other hand, can be defined as weather statistics—or the expected weather characteristics. It can, for all intents and purposes, be characterized by a set of parameters (e.g., the mean, standard deviation, shape, and scale parameters) describing a mathematical curve (known as a probability density function, or pdf) that quantifies the probabilities associated with temperature or precipitation (or any other variable).
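As a concrete illustration, the following minimal sketch estimates such pdf parameters from synthetic daily data: mean and standard deviation for temperature, and gamma shape and scale for wet-day precipitation (a common, though not universal, distributional choice). The numbers and variable names are hypothetical placeholders, not data from the article.

```python
import numpy as np
from scipy import stats

# Synthetic "daily observations"; real applications would use station data.
rng = np.random.default_rng(6)
temperature = rng.normal(loc=8.0, scale=6.0, size=3650)     # 10 years of daily temps
wetday_precip = rng.gamma(shape=0.9, scale=7.0, size=1200)  # wet-day amounts (mm)

# "Climate" described by pdf parameters rather than individual weather states:
t_mean, t_sd = temperature.mean(), temperature.std(ddof=1)
shape, loc, scale = stats.gamma.fit(wetday_precip, floc=0.0)

print(f"temperature: mean={t_mean:.1f} degC, sd={t_sd:.1f} degC")
print(f"wet-day precipitation: gamma shape={shape:.2f}, scale={scale:.2f} mm")
```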

The most common representation of climate is the average temperature or precipitation, as shown in Figure 3.

Figure 3. Climate may be defined as the mean (upper panels) temperature (left) and precipitation rate (right) from NCEP/NCAR; the standard deviations (lower panels) are also important parts of their statistical characteristics. The reanalyses provide large-scale characteristics of such aggregated statistical parameters, which tend to vary smoothly in space and are usually influenced by geographic conditions such as latitude, altitude, distance from the coast, and prevailing winds.

Source: NOAA Earth System Research Laboratory, NCEP/NCAR Reanalysis Monthly Means and Other Derived Variables.

Other important aspects include variability, minimum values, maximum values, expected frequencies (probabilities), or persistence (e.g., autocorrelation, duration of rainy season, multi-annual variations). There are other weather elements too, such as wind, pressure, humidity, cloudiness, and so on.

The local climate tends to differ from the maps presented in Figure 3 because of its dependency on local geographical features, such as elevation, valleys, mountain ranges, and distance to lakes and the coast. The maps in Figure 3 were generated from a set of re-analyses, produced by a global atmospheric model, that do not account for such local details. The maps therefore only exhibit the large-scale geographical structures seen in temperature and precipitation (or any other weather/climate-related variable). GCMs tend to produce similar results.

Downscaling Viewed as Making Optimal Use of Available Information

The purpose of downscaling climate information is to find the probability density functions (pdfs) representing temperature, precipitation, or some other element for a location or a region affected by the local geography. Such links are not directly process based, but exploit indirect co-variations. For example, the NAO has no observable wind pattern, although it is associated with a typical time-mean wind field, within which synoptic disturbances travel, and which cause rain and storm surges (Branstator, 1995).

When it comes to downscaling weather, on the other hand, the objective is to account for local influences (e.g., geography, such as nearby local surface conditions, lakes, mountains, and valleys) not included in the numerical atmospheric model and to correct for biases through MOS or a nested limited area model (LAM). In both cases, the idea is to make use of all relevant available information to get the best description of the local conditions; however, the nature of this information and the value of the downscaling exercise depends on the question at hand and the answers one seeks.

Downscaling should be exercised with caution because misapplication of methods and techniques or misinterpretations can result in misguided decisions and even maladaptation. These issues will be discussed later on.

Downscaling in Tropical Regions as Opposed to the Mid-Latitudes

One interesting question is whether downscaling is equally efficient in the tropics as in the mid-latitudes. The answer to this question may depend on purpose, approach (RCM vs. ESD), method, location, and which element is being downscaled. On the one hand, there are tropical phenomena with extensive geographical reach, such as ENSO and the monsoon, some of which are reproduced by the global models. Furthermore, there are locations where the regional geography influences the local climate in a predictable way. On the other hand, the observational network is not well developed in most of the tropics, which precludes an assessment of the merits of downscaling compared to the mid-latitudes.

How Is Downscaling Done?

Using Mathematical Expression

Downscaling can be expressed in mathematical terms, where a local variable (y) at a given location (r) is a function of the surrounding large-scale conditions or a teleconnection (X), effects from the local geography (g), and local noise (n):

y(r) = f(X, g(r)) + n(r) (1)

The concept is that the output y(r), which may be the temperature in some valley, includes more information than that provided in X, but also includes information embedded in the link between large and small scales f(X,.) and the effects of local geography g(r).
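As an illustration of eq. 1, the minimal sketch below treats f as a linear regression from large-scale predictors X to the local variable y(r), with the intercept playing the role of g(r) and the residual treated as local noise n(r). The data are synthetic placeholders, and a linear form of f is only one of many possibilities, not the method prescribed by the article.

```python
import numpy as np

# Hypothetical example: local temperature y(r) driven by three large-scale
# predictors X (e.g., principal components of a regional temperature field).
rng = np.random.default_rng(0)
n_time, n_predictors = 1000, 3

X = rng.standard_normal((n_time, n_predictors))   # large-scale predictors
true_coefs = np.array([2.0, -1.0, 0.5])
noise = 0.8 * rng.standard_normal(n_time)         # local noise n(r)
y = 10.0 + X @ true_coefs + noise                 # synthetic local series

# Fit f(X, .) by ordinary least squares; the intercept absorbs g(r) here.
A = np.column_stack([np.ones(n_time), X])
coefs, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coefs                                 # downscaled estimate

# The residual variance is the part attributed to local noise n(r).
print("explained variance:", 1 - np.var(y - y_hat) / np.var(y))
```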

Different Approaches in Downscaling

There are different ways to downscale rainfall and temperature, and the most common involve regional climate models (RCMs) and empirical-statistical downscaling (ESD; also referred to as “statistical downscaling”). The former, also known as “dynamical downscaling,” calculates both the effects of large scales f(X,g(r)) and local conditions n(r) simultaneously, whereas these two aspects must be treated separately for the latter. These limited area models need a description of the atmosphere on their edges (boundary conditions), which is typically taken from a global atmospheric model. Furthermore, the surface conditions are also a given for these models, unless they involve coupling between the regional ocean and atmosphere. Hence, RCMs are “nested” within a larger scale (global) model, which specifies the large-scale picture. They compute regional weather states with finer details based on the information from the embedding model results and the laws of physics (the Navier-Stokes equations, the laws of thermodynamics, the ideal gas law, etc.).

Traditional ESD does not account for the influence from local processes n(r), but it is possible to include the local effects from such sources by adding a “noise” component. The sum of downscaled results and noise can provide a description that is similar to the original observations. In addition to RCMs and ESD, there are hybrid dynamical-statistical approaches, spatial disaggregation techniques, and stretched-grid and high-resolution global time-slice approaches.

The two approaches to downscaling, namely dynamical and statistical, are based on different philosophies, making use of different sources of information. The RCM will calculate the rainfall and temperature from differential equations describing how pressure affects winds (geostrophic dynamics) and the movement of energy and mass through the atmosphere. The ESD approach, on the other hand, can capture dependencies between processes that are not explicitly coded into models, and makes use of the information embedded in the observational data.

The different nature of RCMs and ESD in terms of utilizing different sources of information is the main reason why they have different strengths and weaknesses. RCMs and ESD should be regarded as different tools that complement each other. However, there are situations where one may not be suitable. One limitation of ESD, for instance, is that it can only provide results for which there are corresponding observations. The results from RCMs, on the other hand, are strictly not directly comparable to observations made from thermometers and rain gauges, since the former represents a 3-dimensional grid-box volume average, whereas the latter involve zero-dimensional point measurements.

Regional Climate Models Are Inspired by Physics First Principles

While RCMs solve a set of equations representing physical laws, they cannot be considered as being based purely on physics. Some parts are based on first principles; otherwise they operate with many semi-empirical closures, known as parameterizations. Whereas RCMs tend to involve similar conceptual models in terms of solving the atmospheric dynamics and thermodynamics, ESD may involve a wide range of different statistical methods and strategies (Wilby et al., 1998). The input in ESD representing the large-scale conditions is referred to as the “predictor” (i.e., X in eq. 1), whereas the output describing the local conditions is the “predictand” (y(r) in eq. 1). Furthermore, the large-scale predictors are often formulated in terms of empirical orthogonal functions (EOFs).
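To make the EOF-based predictors concrete, the sketch below computes EOFs of a gridded anomaly field with a singular value decomposition. The field is synthetic; real applications would use reanalysis data and typically apply area (latitude) weighting, which is omitted here for brevity.

```python
import numpy as np

# Synthetic gridded field: n_time maps of shape (n_lat, n_lon).
rng = np.random.default_rng(1)
n_time, n_lat, n_lon = 500, 20, 30
field = rng.standard_normal((n_time, n_lat, n_lon))

# Form anomalies and flatten each map into a row vector.
anomalies = field - field.mean(axis=0)
data = anomalies.reshape(n_time, n_lat * n_lon)

# SVD: rows of Vt are the EOF spatial patterns; U * s gives the
# principal components, which serve as large-scale predictors in ESD.
U, s, Vt = np.linalg.svd(data, full_matrices=False)
eofs = Vt.reshape(-1, n_lat, n_lon)     # spatial patterns
pcs = U * s                             # time series (predictors)
explained = s**2 / np.sum(s**2)         # variance fraction per EOF
print("variance fraction of leading EOF:", explained[0])
```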

Statistical Downscaling Techniques

The statistical methods may involve linear regression-like models (Huth, 2002), analog models (Timbal & Jones, 2008), or nonlinear methods such as self-organizing maps (Hewitson & Crane, 2002). The choice of method and approach depends on the context and the nature of the problem at hand. In addition to different methods, ESD may also involve different strategies, such as “Perfect Prog,” model output statistics (MOS), a hybrid that utilizes a common data space (e.g., using common EOFs), or weather generators.
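A minimal sketch of the analog method mentioned above, under the assumption of a simple nearest-neighbour matching: for each target day, the historical day with the closest large-scale state in predictor space supplies the local value. All arrays are synthetic placeholders. Note that the output can never fall outside the historical sample, which is the limitation for extremes discussed later in “Downscaling Extremes.”

```python
import numpy as np

def analog_downscale(X_hist, y_hist, X_new):
    """For each new large-scale state, return the local observation
    belonging to the most similar historical state (nearest neighbour)."""
    # Pairwise squared Euclidean distances: shape (n_new, n_hist).
    d2 = ((X_new[:, None, :] - X_hist[None, :, :]) ** 2).sum(axis=-1)
    best = d2.argmin(axis=1)          # index of the closest analog day
    return y_hist[best]

rng = np.random.default_rng(2)
X_hist = rng.standard_normal((1000, 4))  # historical large-scale states
y_hist = rng.gamma(2.0, 3.0, 1000)       # matching local rainfall record
X_new = rng.standard_normal((10, 4))     # e.g., GCM-simulated states
print(analog_downscale(X_hist, y_hist, X_new))
```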

Downscaling Climate Information, Natural Variability, and Minimum Skillful Scale

The Importance of Regional Climate Phenomena and Natural Variability

The local temperature and precipitation (which may also apply to a number of other variables) are affected by winds and incoming weather systems; however, the exact location of clouds, fronts, and storms may depend on a range of different factors (e.g., the sea surface temperature, mean sea level pressure, the position of the polar jet). The resulting motion of weather systems is complex and gives rise to variations with a nature of arbitrariness, often referred to as natural or internal variability.

There also tends to be some persistence over seasons (e.g., the monsoon, ENSO) or over years (the NAO), which has implications for the minimum length of any data record needed to sample the weather statistics in order to get a good description of the climate and its probability density function.

The natural variability is strongest on local scales because the effects of different weather systems tend to cancel each other when an average is taken over space. Computations with global climate models (GCMs) also suggest that a given evolution in the global mean temperature can be accompanied by a wide range of possible outcomes for future temperature and precipitation on a regional scale, depending on the starting point of the model simulations (Deser et al., 2012). This is also true for other elements.

The Use of Ensembles in Downscaling Climate Projections

The exact starting point, the initial conditions, for the climate model simulation is not well defined in terms of the true state of the climate system on the earth, which means that the computations must be carried out multiple times in order to get a representative description of the temperature or precipitation statistics (known as ensembles). We do not have a complete picture of the heat distribution in the oceans and the subsurface water conditions.

The take-home message is that GCMs themselves indicate that there is an inherent bottleneck in the large-scale climate dynamics in terms of deriving regional information about future temperatures and precipitation. The detailed trajectories (referring to sequences of states describing the evolution in the atmospheric conditions) of model outcomes are sensitive to infinitesimal differences in the initial conditions, even though the global state is constrained by the greenhouse gas concentrations, known as the boundary conditions. The statistics (means, extremes) derived from extended segments of these trajectories are nevertheless predictable, despite this sensitivity of the exact trajectories in phase space. There is a minimum skillful scale connected to natural variability.

Aggregating Output From LAMs and RCMs to Describe the Climate

LAMs and RCMs are the strategies best suited for providing weather type information, and can only indirectly provide a description of the local climate (the weather statistics or the probability distribution function) if the simulation is sufficiently long for generating a representative statistical sample. Similarly, the statistical description is estimated indirectly in most cases, from empirical-statistical downscaling applied on a daily or monthly basis, but it may also be possible to estimate the parameters aggregated over longer time scales and assume that natural variability is a stochastic process whose information is embedded in the data sample.

Local and Large-Scale Processes

Pure downscaling only captures the part of the local variation that is dependent on the processes acting over a larger spatial region, as in f(X,g(r)) in eq. 1. A more complete picture is the sum of the large scale f(X,g(r)) and local factors n(r) not affected by the larger scales. ESD is a purer form of downscaling, whereas RCMs capture the dependency on large scales and account for local effects to some extent through simulation of internal variations caused by local dynamics. This difference makes it difficult to compare ESD and RCM results directly on a day-to-day basis. However, the ESD results may be used in conjunction with a weather generator, and then weather statistics can be disaggregated to produce daily or sub-daily data.

Weather Generators

The idea of the weather generator is to mimic the effects from local processes (Field et al., 2014; Richardson, 1981; Wilby et al., 1998), assuming that these have no connection to the global and the large scales. If there is such a dependency, it should be part of the downscaling problem. The basic weather generators assume a stationary stochastic process, meaning that the probability density function (pdf) is constant, but with random fluctuations. A weather generator may also specify the autocorrelation function, describing the persistence of the process, or it may use a Markov chain to describe the transition from wet to dry conditions. It is also possible to combine a weather generator with downscaling, where the latter is used to predict the parameters describing the pdf used to generate stochastic weather states.
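A minimal sketch of a basic precipitation weather generator of the kind described above (in the spirit of Richardson, 1981): a two-state Markov chain governs wet/dry occurrence, and a gamma distribution generates wet-day amounts. The transition probabilities and gamma parameters are illustrative assumptions; in practice they would be estimated from station data and, in a downscaling setting, could be conditioned on large-scale predictors.

```python
import numpy as np

def generate_precip(n_days, p_wd=0.3, p_ww=0.6, shape=0.8, scale=5.0, seed=0):
    """Two-state Markov chain for occurrence, gamma for wet-day amounts.
    p_wd: P(wet | previous day dry); p_ww: P(wet | previous day wet)."""
    rng = np.random.default_rng(seed)
    precip = np.zeros(n_days)
    wet = False
    for t in range(n_days):
        p_wet = p_ww if wet else p_wd
        wet = rng.random() < p_wet
        if wet:
            precip[t] = rng.gamma(shape, scale)  # wet-day amount (mm)
    return precip

series = generate_precip(365)
print("wet-day fraction:", (series > 0).mean())
print("mean wet-day amount:", series[series > 0].mean())
```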

Predictability

Downscaling is often used as part of predicting future climate. Here the term prediction refers to estimating an unknown state, given a different condition and a dependency between the two, whereas projection refers to the outcome of model simulations given a plausible storyline for future greenhouse gas emissions. Projections therefore are conditional predictions that assume an unknown and uncertain future evolution in terms of the world economy (a scenario). The term prediction is used here with a wider range of meanings, and is used in the context of downscaling as it may be applied both to the past and to future projections.

A central question is: What is predictable, and what is random? While the exact state of the atmosphere (and trajectory in phase space) is known to become unpredictable after some time due to extreme sensitivity to initial conditions (Lorenz, 1963), the statistics are nevertheless predictable. Trivial examples of this type of predictability include the seasonal cycle and the dependency of temperature and precipitation (which may also apply to other variables, such as wind, pressure, etc.) on the geography (e.g., latitude, distance from the coast, altitude; see Figure 3). Systematic differences in some of the factors on which the weather depends often affect the shape of its pdf in a predictable fashion.

The relationships between weather, random climate variables, statistics, and scales (sample size) are illustrated by the maps shown in Figure 4. The daily data can be considered as random variables/samples (beyond the weather forecasting time horizon), whereas the aggregation provides predictable parameters describing the rainfall characteristics. The maps showing daily precipitation for two random days (upper) show that it varies from day to day. The lower panels show aggregated values for a year and 15 years, exhibiting a more stable picture with features extending over larger distances and more fixed to the local geography.

Climate Change Is a Shift in Weather Statistics

Climate variability and climate change involve variations in the parameters describing the probability density function, and the causes for their variability can be described in terms of physical processes. Such processes may include slow persistent changes in the ocean state, variations in the earth’s orbit around the sun, variations in the solar output, changes in the land surface, or changes in the atmospheric composition. However, the effect of such changes may differ on global and local scale and may vary from location to location.

It is important to reiterate that different runs with GCMs tend to provide different accounts of future regional climate (Deser et al., 2012), implying that downscaling needs to be applied to a range of different model simulations to predict the range of natural variability. It should also be mentioned that climate models have a tendency to form different trajectories in non-constrained RCM simulations (Rinke & Dethloff, 2000; Weisse, Heyen, & von Storch, 2000). Moreover, projections based on a single model simulation provide a poor account of natural and internal variations.

Model Skill, Evaluation, and Validation

The purpose of downscaling is often to provide a useful description of the local climate for the past, present, or future. It is therefore important to have an idea of how reliable the downscaled results are. This depends on the given scenarios, the GCMs, and the downscaling itself (Busuioc, von Storch, & Schnur, 1999). The skill may be estimated in different ways, through model evaluation or validation, and its exact nature may depend on the context. Typically, if we are interested in future change in the climate, we want to know if the models are able to account for past changes that can be quantified from historical observations. The topic of model evaluation and validation is vast and has been the central theme for the European project COST-VALUE.

The Models’ Minimum Skillful Scale

Another issue concerning predictability is the model’s minimum skillful (spatial) scale. Whereas the global climate models reproduce the global mean temperature, they do not account for local trends for the next decades on the grid box scale (Deser et al., 2012). The large-scale predictors or domains in downscaling are expected to be similar to or larger than the models’ minimum skillful scale. The predictors also need to involve variables that the GCMs are able to predict skillfully for reliable results. The minimum skillful scale and variables may vary from model to model, region to region, and phenomenon to phenomenon. Examples of the latter include the El Niño Southern Oscillation, the Southeast Asian monsoon, the North Atlantic Oscillation, or the spatial extent of cyclones. One unresolved research question is whether the spatial and temporal scales associated with such physical phenomena may change in a warmer world.

Predictors in ESD and Climate Signal

Another requirement for ESD is that the large-scale predictor must contain the relevant climate signal, such as changes in thermodynamic properties reflecting an increased greenhouse effect. This criterion may exclude fields with a clear local effect such as the mean sea-level pressure.

Fields containing a climate signal tend to include temperature or moisture information. Hence, the decision about the predictors needs to be based on the physical understanding of the problem.

The predictors must also be skillfully predicted by GCMs in terms of the statistical characteristics of the large scales (e.g., mean values, spatial structure, magnitudes, and temporal power spectra). Ideally, they should also exhibit a strong link with the local variable; however, some aspects may not always have a strong dependency on the large scales. The local climate variable can then be expressed as the sum of two terms, where ESD can describe the term representing the large-scale dependency, and weather generators the remaining variability.

Figure 4. Illustration of scale differences in relation to weather and climate. Weather may be regarded as a randomly selected atmospheric state, whereas climate can be described by parameters aggregated over a sample of such states. Whereas the random states “weather” may vary from day to day, the aggregated results—“climate”—tend to show a more stable picture. They are more predictable and tend to exhibit more consistent spatial structure. The maps show EOBS daily precipitation for two random days (upper) and aggregated over different time scales to the rainfall statistics (lower).

Downscaling Extremes

Most of the past work on extremes and their dependency on climate change has involved RCMs or ESD, using some index representing extremes (e.g., STARDEX). However, ESD-based approaches can involve a number of different methods and may be set up to estimate parameters of the probability distribution function describing the local climate. Regression can capture, for instance, links with large-scale features if there is a systematic link between the statistics of extremes and the large-scale conditions. However, some methods may not be well suited for downscaling extremes, such as analog models, which are unable to prescribe values outside the historical sample on which they are trained.

Limitations of Downscaling

Validation

All downscaling is prone to errors and needs to be subject to proper validation. It is important to assess whether the method can skillfully predict the same aspects as the modeling and analysis are expected to predict (e.g., change in temperature, or how warming influences extreme precipitation). Downscaling always relies on a number of assumptions, and it is important to test and ensure that the methods work as intended as far as possible (Estrada, Guerrero, Gay-García, & Martínez-López, 2013). For this reason, testing and validation are an important part of the process. When it comes to model validation tools, many methods use cross-validation to assess their skill (e.g., COST-VALUE experiment 1; Maraun et al., 2015).

Added Value

One question is whether the RCMs provide added value compared to GCM output, and Kerr (2013) noted that the RCM community tends to look at how well the RCMs reproduce the climatology rather than at past climate changes. It is important to validate those aspects that are relevant for how the results are used, such as their ability to predict historical trends. The critical stance presented in Kerr (2013) was based on an assessment by Racherla et al. (2012), who had assessed changes predicted by one RCM, driven by one GCM. Their evaluation was weak, however, since it involved only short time slices and only one GCM. Nevertheless, their idea of how RCMs should be validated is important. Feser et al. (2011) reviewed a number of articles that had assessed the benefit of dynamical downscaling and showed that an RCM tends to perform better for the medium spatial scales, but not always for the larger spatial scales. The conclusion of the review was that RCMs can add value for certain variables and locations, especially for situations influenced by factors like coasts or mesoscale dynamics.

Different Approaches in Validation

The validation approach has traditionally been different in the RCM and ESD communities, where the former tends to focus on biases and annual cycles, whereas the latter has relied more on cross-validation. One motivation for assessing biases and annual cycles in RCMs is that their calculations involve solving equations that represent the essential physics; hence, shortcomings are expected to manifest themselves as differences in the mean state or in the magnitude of the annual variations. But there is also some degree of tuning involved in terms of parameterization schemes, and it is important to assess whether the RCMs really are skillful when it comes to predicting local changes associated with changes on the large scales.

ESD is often validated through cross-validation analysis, where part of the data (independent) is withheld, and the rest (dependent) is used to train the model. A prediction is made, based on a model trained with dependent data, but it is validated using large-scale predictors associated with independent data (not used in model training). This process is repeated leaving a different segment out each time, and a set of predictions is assembled based on the predictions of the independent data.
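A minimal sketch of the cross-validation procedure just described, applied to a regression-based ESD model: contiguous segments are withheld in turn, the model is trained on the remaining data, and the out-of-sample predictions are assembled and scored. The data are synthetic placeholders.

```python
import numpy as np

def cross_validate(X, y, k=5):
    """Withhold one contiguous segment at a time, train on the rest,
    and assemble out-of-sample predictions for the full record."""
    n = len(y)
    y_pred = np.empty(n)
    for test_idx in np.array_split(np.arange(n), k):
        train_idx = np.setdiff1d(np.arange(n), test_idx)
        A_train = np.column_stack([np.ones(len(train_idx)), X[train_idx]])
        coefs, *_ = np.linalg.lstsq(A_train, y[train_idx], rcond=None)
        A_test = np.column_stack([np.ones(len(test_idx)), X[test_idx]])
        y_pred[test_idx] = A_test @ coefs   # prediction on withheld data
    return y_pred

rng = np.random.default_rng(3)
X = rng.standard_normal((600, 3))                   # large-scale predictors
y = X @ np.array([1.5, -0.7, 0.3]) + 0.5 * rng.standard_normal(600)
y_hat = cross_validate(X, y)
print("cross-validated correlation:", np.corrcoef(y, y_hat)[0, 1])
```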

Physical Inconsistencies

One concern is whether artificial drifts are present, as mass, momentum, and energy fluxes across the boundaries of an RCM may not be consistent with corresponding fluxes of the driving GCM. For instance, if increased resolution gives a different precipitation climate in the RCM than in the GCM (e.g., topographic rain), then different surface evaporation (moisture) and rain initiation (condensation aloft) implies a changed vertical energy flow compared to the driving GCM. Different accounts of clouds also give a different picture of the radiative energy flow in the RCM and GCM. Also, higher resolution may change the structure of the wind field and hence the surface evaporation, which is often estimated using a bulk formula that is wind speed dependent. The question is whether these inconsistencies matter for the use of the downscaled results.

Stationarity

The question of stationarity is a concern for all modeling approaches: GCMs, RCMs, and ESD. The statistical schemes describing the connection between large and small scales, or the parameterization schemes, may not stay the same in a warmer climate. A further limitation for ESD is the difficulty of finding clear dependencies across spatial scales; often, the link between the small and large scales accounts for only part of the local variations.

Data Scarcity

The most severe limitation for ESD is the scarcity of observations, as statistical models cannot be constructed without data for model training. Data scarcity is a real concern for sub-daily time scales, but a lack of data is also a problem for RCMs when it comes to validation.

Imperfect Data

The predictors used in ESD are often taken from re-analyses produced by various meteorological centers, based on the available observations, an atmospheric model, and a scheme for combining the information from observations and model (a technique known as data assimilation). However, different re-analyses may give different accounts of the large-scale situation in data-sparse regions (Benestad, 2011; Brands et al., 2012). Hence, ESD for data-sparse regions should make use of more than one set of re-analyses as predictors.

Fixing the Shortcomings of Downscaling

Spectral Nudging

The constraints imposed by the driving GCM on an RCM spanning a large domain (spatial extent) may sometimes be insufficient to determine a unique large-scale flow and wind pattern in the interior of the domain. In such cases, the RCM may produce a picture that is inconsistent with the original GCM unless further constraints are provided. To ensure consistency between the RCM and GCM, a method called spectral nudging may be introduced, which makes the large-scale features follow those of the GCM (von Storch, Langenberg, & Feser, 2000). In this case, the problem has nothing to do with artificial drifts or inconsistent boundary fluxes, but with the dynamical nature of the “internal state.”
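
A schematic one-dimensional illustration of the idea, written in Python, is sketched below: the low-wavenumber Fourier components of an RCM field are relaxed toward those of the driving GCM, while the small scales are left free. The fields, the number of nudged wavenumbers, and the relaxation weight are illustrative choices, not those of any operational scheme.

    # 1-D sketch of spectral nudging: relax the large-scale (low-wavenumber)
    # part of an RCM field toward the driving GCM field.
    import numpy as np

    def spectral_nudge(rcm_field, gcm_field, n_large=4, alpha=0.5):
        """Relax wavenumbers 0..n_large of rcm_field toward gcm_field."""
        f_rcm = np.fft.rfft(rcm_field)
        f_gcm = np.fft.rfft(gcm_field)
        f_rcm[:n_large + 1] = ((1 - alpha) * f_rcm[:n_large + 1]
                               + alpha * f_gcm[:n_large + 1])
        return np.fft.irfft(f_rcm, n=rcm_field.size)

    x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
    gcm = np.sin(x)                               # smooth large-scale state
    rcm = np.sin(x + 0.3) + 0.2 * np.sin(12 * x)  # drifted large scale plus detail
    nudged = spectral_nudge(rcm, gcm)             # large scales now follow the GCM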

Biases and Bias Adjustment

Biases in the climate models pose another problem and may be present in both the GCM and the RCM. One unresolved question is whether the results from the GCM should be bias adjusted before being used as boundary conditions for the RCM. If the reasons for model biases are understood, then post-processing techniques such as bias adjustment can be used to make the results look more similar to the corresponding observations (Gudmundsson, Bremnes, Haugen, & Engen-Skaugen, 2012). However, a range of different adjustment methods exists, and these may give different answers for the future.
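
As an illustration, a minimal Python sketch of empirical quantile mapping, one of the families of statistical transformations compared by Gudmundsson et al. (2012), is given below; the “observed” and “modeled” wet-day amounts are synthetic.

    # Sketch of empirical quantile mapping for bias adjustment.
    import numpy as np

    rng = np.random.default_rng(1)
    obs = rng.gamma(shape=2.0, scale=3.0, size=3000)  # observed wet-day amounts
    mod = rng.gamma(shape=2.0, scale=4.0, size=3000)  # biased model counterpart

    def quantile_map(x, mod_ref, obs_ref):
        """Map model values onto the observed distribution via empirical CDFs."""
        q = np.searchsorted(np.sort(mod_ref), x) / mod_ref.size  # model quantiles
        return np.quantile(obs_ref, np.clip(q, 0.0, 1.0))        # matching obs values

    adjusted = quantile_map(mod, mod, obs)
    print(obs.mean(), mod.mean(), adjusted.mean())  # adjusted mean ~ observed mean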

Biases may not be a problem for ESD if the climatology is specified from the observations and any reduction of variance is accounted for by including a weather generator.

Reduced Variance and Inflation

Some statistical models used in ESD capture only part of the variance in the local variable, because ESD often accounts only for the large-scale dependencies and not for the sum of both small and large scales. There have been attempts to remedy the reduced variance using inflation, which multiplies the results by a factor that ensures a similar variance, but such a fix is both flawed (von Storch, 1999) and unnecessary. A better approach is to view the local variable as a function with both a large-scale and a small-scale dependency and to treat the latter as a noise term that can be simulated using a weather generator.
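
The contrast between the two strategies can be sketched in Python with synthetic data and a simple regression standing in for the ESD model; note how adding a noise term with the residual variance restores the total variance without rescaling the predicted signal, as inflation does.

    # Sketch: inflation versus adding an unexplained-variance noise term.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 500
    large_scale = rng.normal(size=n)
    local = 0.7 * large_scale + rng.normal(scale=0.7, size=n)  # signal + noise

    fit = np.polyfit(large_scale, local, 1)
    downscaled = np.polyval(fit, large_scale)   # captures only the large scales

    # Inflation rescales the prediction and thereby distorts the signal:
    inflated = downscaled * local.std() / downscaled.std()

    # Preferred: keep the signal and simulate the residual as noise.
    resid_sd = (local - downscaled).std()
    simulated = downscaled + rng.normal(scale=resid_sd, size=n)
    print(local.std(), inflated.std(), simulated.std())  # all similar variances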

“Black Box” Usage

Estrada et al. (2013) cautioned against “black box” statistical downscaling tools and argued that the use of such tools is prone to incorrectly specified statistical models, with a risk of deriving spurious results. They raised a number of important issues:

The underlying probabilistic model assumed by an ESD method may not necessarily reflect the distribution of the underlying data.

ESD focuses on the short-term variability.

Trend or autocorrelation in the data may produce spurious results.

Statistical downscaling is an empirical method, and its adequacy should be evaluated using the proper tools.

Downscaling toolboxes seldom give any information regarding the significance of the estimated coefficients.

These issues are easy to overcome, and the statistical downscaling analysis can be designed so that these concerns are taken into account. One way statistical downscaling can bypass some of the issues is to apply it to samples of data, aiming at predicting the parameters that describe the sample distribution. The sample for a seasonal temperature involves approximately 90 data points, and an annual sample of precipitation typically has approximately 100 data points, depending on the wet-day frequency. In other words, what is downscaled is climate, defined as weather statistics rather than the weather itself, and the statistical models are trained on aggregated parameters rather than on day-to-day situations. The primary parameters are the seasonal mean temperature, the wet-day mean precipitation, and the wet-day frequency. According to the central limit theorem, the distribution of mean estimates is expected to converge toward a normal distribution as the sample size increases, and the normal distribution may therefore be assumed when downscaling climate. It is also trivial to test whether the parameters follow a normal distribution.
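
A minimal Python sketch of this aggregation-based strategy, with synthetic daily data, is shown below: the skewed daily values fail a normality test, while their seasonal means do not, in line with the central limit theorem.

    # Sketch: downscaling "climate as weather statistics" rests on aggregated
    # parameters (here seasonal means) being approximately normally distributed.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    daily = rng.gamma(shape=2.0, scale=1.5, size=(40, 90))  # 40 seasons x ~90 days
    seasonal_means = daily.mean(axis=1)                     # aggregated predictand

    _, p_daily = stats.shapiro(daily.ravel())   # skewed: normality clearly rejected
    _, p_means = stats.shapiro(seasonal_means)  # means: normality not rejected
    print(p_daily, p_means)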

Estrada et al. (2013) also made a more questionable statement: that the method requires the variables to be stationary. This is a misguided view, as the usual requirement for ESD is that the dependency between the large and small scales is stationary (not changing over time), not the variables themselves. A climate change indeed implies non-stationary climatic variables, such as long-term trends in temperature and precipitation.

A strategy to alleviate concerns about trends influencing the results in ESD is to detrend the data (both predictor and predictand) before model calibration. The model’s ability to capture long-term change can then be tested by using the original predictors as input to a model calibrated on the detrended data. Furthermore, if the models are trained on successive winter seasons or annual means, the autocorrelation is typically low; this is the very reason why forecasts of elements such as the local temperature and precipitation aggregated for a year ahead are notoriously difficult (true also for the range of weather-related variables, such as pressure and wind), and it implies that autocorrelation has little effect on the outcome of the downscaled results in the default setting.
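
A Python sketch of this detrend-and-predict test, using synthetic series and a simple linear model, could look as follows:

    # Sketch: calibrate on detrended data, then feed the model the original
    # (trended) predictor and compare predicted and observed trends.
    import numpy as np
    from scipy import signal, stats

    rng = np.random.default_rng(4)
    t = np.arange(60.0)
    predictor = 0.02 * t + rng.normal(scale=0.5, size=60)         # warming large scale
    predictand = 1.5 * predictor + rng.normal(scale=0.3, size=60)

    # Calibrate on detrended series so that the trend cannot enter the fit.
    b = np.polyfit(signal.detrend(predictor), signal.detrend(predictand), 1)

    # Apply the model to the original predictor and compare the trends.
    predicted = np.polyval(b, predictor)
    print(stats.linregress(t, predicted).slope, stats.linregress(t, predictand).slope)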

A cross-validation skill score is estimated from the combined set of predictions and the corresponding independent data. While cross-validation often provides information about the skill in terms of year-by-year variability, it does not necessarily reveal how well the models predict the long-term trends. Different strategies are required to address the latter: detrending the data before model training and then using the original data as input to predict the past trends. Furthermore, input from GCMs should also be used to test whether the complete analysis is able to account for past trends. For this reason, it is essential to estimate long-term trends from downscaled results for the past and to compare them with corresponding trends in the observations.

Another question is whether the downscaled projections reproduce the magnitude of the year-to-year variations in the local temperature and precipitation. This can be tested through the use of ensembles and their 90% confidence interval: the number of observed values falling outside this interval is expected to follow a binomial distribution with p = 0.1, given that the observations and the ensemble have the same statistical characteristics.
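
This check can be sketched in Python with a synthetic ensemble and synthetic observations drawn from the same distribution; the count of observations outside the ensemble 90% interval is then compared with the binomial expectation.

    # Sketch: test whether observations fall outside the ensemble 90% interval
    # about as often as a Binomial(n, p=0.1) distribution predicts.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    years, members = 50, 30
    ensemble = rng.normal(size=(years, members))  # synthetic downscaled ensemble
    obs = rng.normal(size=years)                  # observations, same statistics

    lo = np.quantile(ensemble, 0.05, axis=1)
    hi = np.quantile(ensemble, 0.95, axis=1)
    outside = int(np.sum((obs < lo) | (obs > hi)))
    print(outside, stats.binomtest(outside, n=years, p=0.1).pvalue)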

The main bottleneck of downscaling, however, is that both RCMs and ESD require skillful GCMs to make plausible climate projections. An assessment of the GCMs can be included in the ESD strategy by making use of common EOFs (Benestad, 2001) and by comparing the statistics of the principal components derived from the GCM and from the re-analyses.
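
The essence of the common EOF approach can be sketched in Python as below: the re-analysis and GCM anomaly fields are concatenated along the time axis before the EOF analysis (here via a singular value decomposition), so that both data sets are expressed in terms of the same spatial patterns and the statistics of their respective principal components can be compared directly. The fields here are synthetic placeholders.

    # Sketch of common EOFs: one set of spatial patterns, two sets of PCs.
    import numpy as np

    rng = np.random.default_rng(6)
    nt, ngrid = 100, 500
    reanalysis = rng.normal(size=(nt, ngrid))  # placeholder anomaly fields
    gcm = 1.3 * rng.normal(size=(nt, ngrid))   # GCM with exaggerated variance

    combined = np.concatenate([reanalysis, gcm], axis=0)
    combined -= combined.mean(axis=0)          # anomalies about the common mean
    U, s, Vt = np.linalg.svd(combined, full_matrices=False)

    pcs = U * s                                # principal components
    pc1_re, pc1_gcm = pcs[:nt, 0], pcs[nt:, 0]
    print(pc1_re.std(), pc1_gcm.std())         # compare the PC statistics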

Results from many model runs are known as ensembles, and to derive numbers describing the range of possible outcomes, it is important to make use of ensembles and several methods. It is easy to carry out bad downscaling, for instance by basing the analysis on only one model run.

Initial Value and Boundary Condition Problems

One issue concerning simulations of future evolution is the memory of the system and the degree to which the initial values set the outcome. The nonlinear chaotic dynamics eventually erase the information from the initial conditions. This does not mean that the resulting trajectories converge, only that they go on to vary within a certain range of outcomes (Weisse et al., 2000). The other set of conditions is the boundary values, whereby physical changes in the climate system, such as the atmospheric composition resulting from future greenhouse gas emissions, influence the output statistics, that is, the range of outcomes. The use of ensembles emphasizes the effect of the boundary conditions, using simulations with different initial conditions or model set-ups to provide a statistical sample for a given set of boundary conditions.
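
The distinction can be illustrated in Python with the classic Lorenz (1963) system: a small ensemble started from slightly perturbed initial conditions soon “forgets” where it started yet yields similar long-run statistics, whereas changing a model parameter, here playing the role of a boundary condition, shifts those statistics. The parameter values and run lengths below are arbitrary illustrative choices.

    # Sketch: initial conditions are forgotten; "boundary conditions"
    # (here the parameter r) set the statistics of the outcome.
    import numpy as np

    def lorenz_run(x0, r=28.0, n=20000, dt=0.005, sigma=10.0, b=8.0 / 3.0):
        """Integrate the Lorenz (1963) equations with a simple Euler scheme."""
        x = np.array(x0, dtype=float)
        out = np.empty((n, 3))
        for i in range(n):
            dx = np.array([sigma * (x[1] - x[0]),
                           x[0] * (r - x[2]) - x[1],
                           x[0] * x[1] - b * x[2]])
            x = x + dt * dx
            out[i] = x
        return out

    ensemble = [lorenz_run([1.0 + 1e-6 * k, 1.0, 1.0]) for k in range(5)]
    print([run[:, 2].mean() for run in ensemble])            # similar statistics
    print(lorenz_run([1.0, 1.0, 1.0], r=35.0)[:, 2].mean())  # changed "forcing"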

Future Direction of Downscaling

In September 2015, the IPCC organized a workshop in Brazil on regional climate projections and their use in impacts and risk analysis studies, to which downscalers were also invited. This is a recognition of the need for regional climate modeling and downscaling for impacts, adaptation, and vulnerability (IAV) analysis as well as in climate services. However, a number of problems remain unsolved, and downscaling science is still a work in progress.

Downscaling can provide added value compared to simply taking a subset of the global climate model results; however, it is important to understand how the information about the local climate is synthesized from different sources. One take-home message from Estrada et al. (2013) is that downscaling should not be done blindly: it requires knowledge of both climatology and meteorology as well as a good understanding of statistics. Furthermore, to get added value from downscaling, it is important to have established well-defined research questions and a solid framework for validating the methods.

Downscaling, however, should be seen as only one step in a more comprehensive analysis, and it should be applied to ensembles of simulations with global climate models. The robust information provided by the analysis needs to be extracted in a final synthesis. Various modern statistical techniques, such as regression, can extract the relevant information from the data and provide a synthesis from a number of different approaches, identifying similarities and differences rather than just applying a bias correction. The current catchphrase for such efforts is “distillation of climate information.” The physics involved in the scale and geography dependencies is also an important part to assess, in addition to the statistical analysis.

It is expected that the resolution of future GCMs will increase and eventually catch up with that of current RCMs. One question, therefore, is whether regional climate models will still have a role, or whether they too will move to even finer resolutions.

Much of the downscaling done so far has involved learning by doing; with the accumulation of experience, the next generations can benefit from courses dedicated to downscaling. A search on Amazon for textbooks on regional climate modeling or downscaling does not yet provide many hits. It is also important to establish practical matters such as common conventions, formats, methods, and terminology. This is still a work in progress within the CORDEX framework.

Further Reading

Online Resources

R-package “esd” (empirical-statistical downscaling). Created by Benestad, R. E., Mezghani, A., & Parding, K. M. A tool for climate and weather data analysis, empirical-statistical downscaling, and visualization.

References

  • Baker, D. G. (1982). Synoptic-scale and mesoscale contributions to objective operational maximum-minimum temperature forecast errors. Monthly Weather Review, 110(3), 163–169.
  • Baklanov, A., Rasmussen, A., Fay, B., Berge, E., & Finardi, S. (2002). Potential and shortcomings of numerical weather prediction models in providing meteorological data for urban air pollution forecasting. Water, Air, and Soil Pollution: Focus, 2(5–6), 43–60.
  • Benestad, R. E. (2001). A comparison between two empirical downscaling strategies. International Journal of Climatology, 21(13), 1645–1668.
  • Benestad, R. E. (2011). A new global set of downscaled temperature scenarios. Journal of Climate, 24(8), 2080–2098.
  • Biau, G., Zorita, E., von Storch, H., & Wackernagel, H. (1999). Estimation of precipitation by kriging in EOF space of the sea level pressure field. Journal of Climate, 12(4), 1070–1085.
  • Brands, S., Gutiérrez, J. M., Herrera, S., & Cofiño, A. S. (2012). On the use of reanalysis data for downscaling. Journal of Climate, 25(7), 2517–2526.
  • Branstator, G. (1995). Organization of storm track anomalies by recurring low-frequency circulation anomalies. Journal of the Atmospheric Sciences, 52, 207–226.
  • Busuioc, A., & von Storch, H. (2003). Conditional stochastic model for generating daily precipitation time series. Climate Research, 24, 181–195.
  • Busuioc, A., von Storch, H., & Schnur, R. (1999). Verification of GCM generated regional precipitation and of statistical downscaling estimates. Journal of Climate, 12(1), 258–272.
  • Christensen, J. H., Christensen, O. B., Lopez, P., van Meijgaard, E., & Botzet, M. (1996). The HIRHAM4 regional atmospheric climate model (Research Report 96–4). Copenhagen, Denmark: Danish Meteorological Institute.
  • Christensen, J. H., Carter, T. R., Rummukainen, M., & Amanatidis, G. (2007). Evaluating the performance and utility of regional climate models: The PRUDENCE project. Climatic Change, 81(1), 1–6.
  • Deser, C., Knutti, R., Solomon, S., & Phillips, A. S. (2012). Communication of the role of natural variability in future North American climate. Nature Climate Change, 2(11), 775–779.
  • Estrada, F., Guerrero, V. M., Gay-García, C., & Martínez-López, B. (2013). A cautionary note on automated statistical downscaling methods for climate change. Climatic Change, 120(1–2), 263–276.
  • Feser, F., Rockel, B., von Storch, H., Winterfeldt, J., & Zahn, M. (2011). Regional climate models add value to global model data: A review and selected examples. Bulletin of the American Meteorological Society, 92(9), 1181–1192.
  • Field, C. B., Barros, V. R., Dokken, D. J., Mastrandrea, M. D., Mach, K. J., Bilir, T. E., et al. (Eds.). (2014). Climate change 2014: Impacts, adaptation, and vulnerability. Part B: Regional aspects. Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge, U.K.: Cambridge University Press.
  • Fu, C., Wang, S., Xiong, Z., Gutowski, W. J., Lee, D.-K., McGregor, J. L., et al. (2005). Regional climate model intercomparison project for Asia. Bulletin of the American Meteorological Society, 86, 257–266.
  • Gudmundsson, L., Bremnes, J. B., Haugen, J. E., & Engen-Skaugen, T. (2012). Technical note: Downscaling RCM precipitation to the station scale using statistical transformations: A comparison of methods. Hydrology and Earth System Sciences, 16, 3383–3390.
  • Hewitson, B. C., & Crane, R. G. (2002). Self-organizing maps: Applications to synoptic climatology. Climate Research, 22, 13–26.
  • Huth, R. (2002). Statistical downscaling of daily temperature in Central Europe. Journal of Climate, 15, 1731–1742.
  • Kerr, R. A. (2013). Forecasting regional climate change flunks its first test. Science, 339(6120), 638.
  • Kim, J.-W., Chang, J.-T., Baker, N. L., Wilks, D. S., & Gates, W. L. (1984). The statistical problem of climate inversion: Determination of the relationship between local and large-scale climate. Monthly Weather Review, 112(10), 2069–2077.
  • Klein, W. H., Lewis, B. M., & Enger, I. (1959). Objective prediction of five-day mean temperatures during winter. Journal of Meteorology, 16, 672–681.
  • Lorenz, E. (1963). Deterministic nonperiodic flow. Journal of the Atmospheric Sciences, 20, 130–141.
  • Maraun, D., Widmann, M., Gutiérrez, J. M., Kotlarski, S., Chandler, R. E., Hertig, E., et al. (2015). VALUE: A framework to validate downscaling approaches for climate change studies. Earth’s Future, 3(1), 1–14.
  • Matulla, C., Zhang, X., Wang, X. L., Wang, J., Zorita, E., Wagner, S., et al. (2008). Influence of similarity measures on the performance of the analog method for downscaling daily precipitation. Climate Dynamics, 30(2–3), 133–144.
  • Palmer, T. (2014). Climate forecasting: Build high-resolution global climate models. Nature, 515(7527), 338–339.
  • Pielke, R. A. (1991). A recommended specific definition of “resolution.” Bulletin of the American Meteorological Society, 72, 1914.
  • Pielke Sr., R. A., & Wilby, R. L. (2012). Regional climate downscaling: What’s the point? Eos, 93(5), 52–53.
  • Pielke Sr., R. A., Wilby, R., Niyogi, D., Hossain, F., Dairuku, K., Adegoke, J., et al. (2012). Dealing with complexity and extreme events using a bottom-up, resource-based vulnerability perspective. In Extreme events and natural hazards: The complexity perspective (Geophysical Monograph Series 196). Washington, DC: American Geophysical Union. doi:10.1029/2011GM001086
  • Prein, A. F., Langhans, W., Fosser, G., Ferrone, A., Ban, N., Klaus, G., et al. (2015). A review on regional convection-permitting climate modeling: Demonstrations, prospects, and challenges. Reviews of Geophysics, 53(2), 323–361.
  • Racherla, P. N., Shindell, D. T., & Faluvegi, G. S. (2012). The added value to global model projections of climate change by dynamical downscaling: A case study over the continental U.S. using the GISS-ModelE2 and WRF models. Journal of Geophysical Research, 117(D20).
  • Richardson, C. W. (1981). Stochastic simulation of daily precipitation, temperature, and solar radiation. Water Resources Research, 17(1), 182–190.
  • Rinke, A., & Dethloff, K. (2000). On the sensitivity of a regional Arctic climate model to initial and boundary conditions. Climate Research, 14, 101–113.
  • Roeckner, E., Bäuml, G., Bonaventura, L., Brokopf, R., Esch, M., Giorgetta, M., et al. (2003). The atmospheric general circulation model ECHAM5. Part I: Model description. Max Planck Institute for Meteorology Report no. 349. Hamburg, Germany: Max Planck Institute.
  • Rosen, C. (2010). Mexican climate reports under fire. Nature: International Weekly Journal of Science, December 2.
  • Shuman, F. G. (1989). History of numerical weather prediction at the National Meteorological Center. Weather and Forecasting, 4(3), 286–296.
  • Starr, V. P. (1942). Basic principles of weather forecasting. New York: Harper & Brothers.
  • Takayabu, I., Kanamaru, H., Dairaku, K., Benestad, R., von Storch, H., & Christensen, J. H. (2016). Reconsidering the quality and utility of downscaling. Journal of the Meteorological Society of Japan, 94(A), 31–45.
  • Thorarinsdottir, T., Sillmann, J., & Benestad, R. (2014). Studying statistical methodology in climate research. Eos, Transactions American Geophysical Union, 95(15), 129.
  • Timbal, B., & Jones, D. A. (2008). Future projections of winter rainfall in Southeast Australia using a statistical downscaling technique. Climatic Change, 86, 165–187.
  • van der Linden, P., & Mitchell, J. F. B. (Eds.). (2009). ENSEMBLES: Climate change and its impacts: Summary of research and results from the ENSEMBLES project. Exeter, U.K.: Hadley Centre.
  • von Storch, H. (1999). On the use of “inflation” in statistical downscaling. Journal of Climate, 12, 3505–3506.
  • von Storch, H. (1999). Representation of conditional random distributions as a problem of “spatial” interpolation. In J. Gómez-Hernández, A. Soares, & R. Froidevaux (Eds.), GeoENV II: Geostatistics for environmental applications (pp. 13–23). Boston: Kluwer Academic.
  • von Storch, H., Langenberg H., & Feser, F. (2000). A spectral nudging technique for dynamical downscaling purposes. Monthly Weather Review, 128, 3664–3673.
  • von Storch, H., Zorita, E., & Cubasch, U. (1991). Downscaling of global climate change estimates to regional scales: An application to Iberian rainfall in wintertime. Max Planck Institute for Meteorology Report no. 64. Hamburg, Germany: Max Planck Institute.
  • von Storch, H., Zorita, E., & Cubasch, U. (1993). Downscaling of global climate change estimates to regional scales: An application to Iberian rainfall in wintertime. Journal of Climate, 6, 1161–1171.
  • Weisse, R., Heyen, H., & von Storch, H. (2000). Sensitivity of a regional atmospheric model to a sea state dependent roughness and the need of ensemble calculations. Monthly Weather Review, 128(10), 3631–3642.
  • Weisse, R., von Storch, H., Callies, U., Chrastansky, A., Feser, F., Grabemann, I., et al. (2009). Regional meteo-marine reanalyses and climate change projections: Results for Northern Europe and potentials for coastal and offshore applications. Bulletin of the American Meteorological Society, 90, 849–860.
  • Wigley, T. M. L., Jones, P. D., Briffa, K. R., & Smith, G. (1990). Obtaining sub-grid-scale information from coarse-resolution general circulation model output. Journal of Geophysical Research: Atmospheres, 95(D2), 1943–1953.
  • Wilby, R. L., & Wigley, T. M. L. (1997). Downscaling general circulation model output: A review of methods and limitations. Progress in Physical Geography, 21, 530–548.
  • Wilby, R. L., Wigley, T. M. L., Conway, D., Jones, P. D., Hewitson, B. C., Main, J., et al. (1998). Statistical downscaling of general circulation model output: A comparison of methods. Water Resources Research, 34, 2995–3008.
  • Yoshimura, K., & Kanamitsu, M. (2008). Dynamical global downscaling of global reanalysis. Monthly Weather Review, 136(8), 2983–2998.
  • Zorita, E., & von Storch, H. (1997). A survey of statistical downscaling results. GKSS Report, p. 42.