
Analog Models for Empirical-Statistical Downscaling

María Laura Bettolli, Department of Atmospheric and Ocean Sciences, University of Buenos Aires & the National Scientific and Technical Research Council, Argentina

Summary

Global climate models (GCMs) are fundamental tools for weather forecasting and climate prediction at different time scales, from intraseasonal prediction to climate change projections. Their design allows GCMs to simulate the global climate adequately, but they are not able to skillfully simulate local and regional climates. Consequently, downscaling and bias correction methods are increasingly needed and applied to generate useful local and regional climate information from coarse-resolution GCM output.

Empirical-statistical downscaling (ESD) methods generate climate information at the local scale, or with a greater resolution than that achieved by GCMs, by means of empirical or statistical relationships between large-scale atmospheric variables and the local observed climate. As a counterpart approach, dynamical downscaling is based on regional climate models that simulate regional climate processes at greater spatial resolution, using GCM fields as initial or boundary conditions.

Various ESD methods can be classified according to different criteria, depending on their approach, implementation, and application. In general terms, ESD methods can be categorized into subgroups that include transfer functions or regression models (either linear or nonlinear), weather generators, and weather typing methods and analogs. Although these methods can be grouped into different categories, they can also be combined to generate more sophisticated downscaling methods. In the last group, weather typing and analogs, the methods relate the occurrence of particular weather classes to local and regional weather conditions. In particular, the analog method is based on finding atmospheric states in the historical record that are similar to the atmospheric state on a given target day. Then, the corresponding historical local weather conditions are used to estimate local weather conditions on the target day.

The analog method is a relatively simple technique that has been extensively used as a benchmark in statistical downscaling applications. Easy to construct and applicable to any predictand variable, it has been shown to perform as well as more sophisticated methods. These attributes have inspired its application in diverse studies around the world that explore its ability to simulate different characteristics of regional climates.

The Analog Method: Definition

The analog method is a particular implementation of the well-known K-nearest neighbors (KNN) method in the statistical literature (Hastie, Tibshirani, & Friedman, 2009). Given a positive integer K and a target observation of an input variable $X_0$, the KNN method first identifies the K points in the training data set that are closest to $X_0$. It then uses the information from these K points to estimate different properties of an output variable Y. The standard analog algorithm is the KNN method with K equal to 1 (Figure 1) and corresponds to the analog method described by Zorita and von Storch (1999) for climate downscaling purposes.

Figure 1. Illustration of the KNN method for observations X in a two-dimensional space. Observations of the training set (blue) and the target observation (red). Dotted circle illustrates the KNN approach using K = 3. Full line circle illustrates the KNN approach using K = 1 and corresponds to the classical or standard analog implementation.

The term analog was first adopted by Lorenz (1969), who noted that “two states of the atmosphere which are observed to resemble one another are termed analogues” (p. 636). The method assumes that these analog atmospheric configurations lead to similar local meteorological outcomes. Statistically speaking, the atmospheric configurations are often called the predictors, or input variables, generally represented by large-scale atmospheric variables X, such as sea-level pressure, temperature, or humidity at certain levels of the atmosphere. The local meteorological outcomes Y are the predictands, or output variables, such as the concurrent precipitation, surface temperature, or wind observations. Figure 2 illustrates the standard analog algorithm. Given the atmospheric large-scale pattern on a target day ($X_0$), similar large-scale configurations are identified in the historical record (the K situations $X_j$) together with their corresponding local observations of the predictand variable ($Y_j$). Then, the estimate of the local predictand on the target day ($\hat{Y}_0$) corresponds to the local observations that occurred on the closest analog atmospheric configuration (the nearest neighbor in Figure 1). That is,

$$\hat{Y}_0 = Y_j \quad \text{with} \quad j : X_j = \mathrm{analog}(X_0),$$

following the notation proposed by Maraun and Widmann (2018).
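
The standard algorithm can be stated compactly in code. The following is a minimal sketch (in Python with NumPy), assuming the large-scale fields have already been flattened into feature vectors; the function and variable names are illustrative, not taken from the cited literature.

```python
import numpy as np

def standard_analog(X_train, Y_train, x0):
    """Estimate the local predictand on a target day as the observation
    made on the day with the closest large-scale state (K = 1).

    X_train : (n_days, n_features) historical large-scale states
    Y_train : (n_days,) concurrent local observations
    x0      : (n_features,) large-scale state on the target day
    """
    dists = np.linalg.norm(X_train - x0, axis=1)  # distance to every day
    j = np.argmin(dists)                          # index of the analog day
    return Y_train[j]                             # the analog's local value

# Toy usage with synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))    # e.g., 1,000 days of predictor anomalies
Y = rng.gamma(2.0, size=1000)      # e.g., daily precipitation at a station
y_hat = standard_analog(X, Y, rng.normal(size=20))
```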

It should be noted that several (K) analogs can be found for a target day, but in the classical or standard analog procedure only the information from a single analog (the closest one) is used to estimate the local predictand variable. This process is schematically represented in Figure 2.

Different variants of the method found in the literature use the information from the K analogs to estimate $\hat{Y}_0$. For example, a common variant considers a statistic of the sample composed of the predictand values $Y_j$ corresponding to the K nearest neighbors (e.g., the sample mean; Gutiérrez, San Martín, Brands, Manzanas, & Herrera, 2013; San Martin et al., 2017). Other configurations of the method consider a weighted average, using weights that are inversely proportional to the dissimilarity metric between the target day and each of the K analogs (Fernández & Saenz, 2003).

Another commonly used variant is the random selection of one analog out of the K nearest neighbors. This implementation is usually referred to as nearest-neighbor resampling, or simply analog resampling (Brandsma & Buishand, 1998; Lall & Sharma, 1996), and can be considered a stochastic variant of the analog methodology (San Martin et al., 2017). More sophisticated implementations consider constructing analogs from linear combinations of historical large-scale atmospheric patterns. These are referred to as constructed analogs (Hidalgo, Dettinger, & Cayan, 2008; van den Dool, 1994).
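
The deterministic K-analog variants (sample mean and inverse-distance weighted mean) and the stochastic resampling variant described above can be sketched as follows; this is an illustrative implementation under the assumption of a scalar predictand, not code from the cited studies.

```python
import numpy as np

def k_analog_estimates(X_train, Y_train, x0, K=5, rng=None):
    """Return three K-analog estimates of a scalar predictand: the
    sample mean, an inverse-distance weighted mean, and a randomly
    resampled analog (nearest-neighbor resampling)."""
    dists = np.linalg.norm(X_train - x0, axis=1)
    idx = np.argsort(dists)[:K]                 # the K nearest neighbors

    mean_est = Y_train[idx].mean()              # sample-mean variant

    w = 1.0 / np.maximum(dists[idx], 1e-12)     # weights ~ 1/dissimilarity
    weighted_est = np.sum(w * Y_train[idx]) / w.sum()

    rng = rng or np.random.default_rng()
    resampled_est = Y_train[rng.choice(idx)]    # stochastic variant

    return mean_est, weighted_est, resampled_est
```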

Other elaborate configurations of the method use, for instance, multilinear regressions between predictors and predictands using the information of the K analogs (Ribalaygua et al., 2013); approaches in which the analogs are searched using the canonical correlation phase space taking into account the relationship between predictors and predictands (Fernández & Saenz, 2003); or a multistep construction of the method to select the analogs and to perform the estimation of the local variables (Radanovics, Vidal, Sauquet, Ben Daoud, & Bontron, 2013).

Figure 2. Schematic representation of the standard analog algorithm using sea-level pressure as large-scale predictor X over southern South America and temperature at local stations over central and northern Argentina as local predictand Y.

Source: Sea-level pressure maps were extracted from the Interactive Map Tool of the National Climatic Data Center—National Oceanic and Atmospheric Administration.

Advantages and Disadvantages of the Analog Method

The experimental design of the classical analog method is a straightforward algorithm. In spite of its simplicity, the method has several advantages due to its structural setup (Matulla et al., 2008; Turco, Quintana-Seguí, Llasat, Herrera, & Gutiérrez, 2011; van den Dool, 1989; Zorita & von Storch, 1999):

It is a nonparametric method; that is, no assumptions are made about the form of the probability distribution of the variables involved, and therefore it is easily applied to non-normally distributed variables such as daily precipitation or wind intensities.

It is a nonlinear method, so it is able to reproduce a nonlinear relationship between predictors and predictands.

As it is based on observed values, it is capable of providing realistic simulations without introducing any simplifications in the physics of the atmosphere.

If the same analog is selected for a set of stations, the method also preserves the spatial covariance structure because, by construction, it uses the simultaneously observed values at all target locations.

When applied to different predictand variables, it preserves the physical consistency among them as a consequence of using the same analog day for all predictand variables.

Like most of the statistical downscaling methods, it is easy to implement with low computational cost.

The method also has four main shortcomings (Benestad, 2010; Castellano & De Gaetano, 2017; Gutierrez et al., 2013; Zorita & von Storch, 1999):

The success of the method relies on a sufficiently long record of observations; otherwise, a reasonable analog of the target large-scale circulation cannot always be found.

It is not able to simulate predictand values outside the range of the existing historical records.

Like all statistical downscaling methods and related to the previous limitation, the analog method works under the stationarity assumption; that is, the statistical relationship between predictors and predictands must hold in a changing climate.

It may not ensure temporal consistency between consecutive days.

Method Development: Considerations

The development of the method requires several decisions to be made during the entire process that may affect the final results. For instance, which large-scale variables should be considered to represent the large-scale atmospheric conditions? What is the best domain size for those variables? How is the most similar large-scale atmospheric situation found? What are the different ways to estimate local weather conditions using the information from K analogs? All these decisions are based on theoretical knowledge of the climate system in combination with the specialist's expert judgment. In most cases, however, sensitivity tests on the choice of predictors, similarity metrics, and the estimation of local variables are necessary for the method to be optimized.

Predictors: Atmospheric States

The analog method is based on the selection of similar atmospheric states. Defining a state of the atmosphere is a difficult task that involves a complex multivariate and multidimensional system—that is, multi-fields that are interrelated and vary in time and space. The challenge is to extract the necessary information from this system in order to search and find analogs. To do this, the system is usually simplified through discrete representations that describe the general state of the atmosphere. Large-scale configurations are usually represented at grid points from pseudo-observations given by reanalyses. However, some studies consider meteorological variables of analyses derived from radio-sounding data (Martin, Timbal, & Brun, 1997; Obled, Bontron, & Garcon, 2002). Also, some circulation indices have been considered as predictors (Livezey, Barnston, Gruza, & Ran’kova, 1994) because they are simple descriptions of the situation over a large area (Benestad, Hanssen-Bauer, & Chen, 2008).

Predictor Variables

An important aspect of the statistical downscaling process is the selection of suitable predictors. Predictors should be physically linked to the predictands; that is, they need to explain a large fraction of the local variability on a range of timescales, including long-term trends. Therefore, if the selection of predictors is based on theoretical considerations that take all relevant physical factors into account, the linkages should in theory not change in a climate change scenario and the stationarity assumption will be satisfied (Benestad et al., 2008; Ribalaygua et al., 2013). Additionally, predictors must be realistically reproduced by global climate models (GCMs). This requirement is particularly important if the analog method is applied under the perfect prognosis (PP) approach, because realistic and bias-free “perfect” predictors are among its assumptions (Maraun & Widmann, 2018). Ribalaygua et al. (2013) argued that predictor selection should also take into account the final use of the methodology. If the analog method is used for short-range forecasting, many predictors can be considered that cannot be used in climate simulations, for example vorticity or vertical velocity. This is because operational numerical models for short-term forecasting can simulate, a few days ahead, variables that are highly dependent on initial conditions, something that is difficult to achieve in GCM simulations spanning decades. Moreover, suitable predictor variables also depend on the region, the season considered, and the predictand variable (Gutiérrez et al., 2013; San Martin et al., 2017).

Large-scale atmospheric configurations may be defined by large-scale circulation controls such as mean sea-level pressure and geopotential heights, which are commonly used for downscaling precipitation and temperature with the analog method (Bettolli & Penalba, 2018; Brands, Taboada, Cofiño, Sauter, & Schneider, 2011; Timbal, Dufour, & McAvaney, 2003). Apart from the dynamic predictors, humidity and temperature predictors are also needed to account for radiation-induced changes and therefore to include the physical forcings of the predictands, especially in climate change studies. Timbal and McAvaney (2001) tested combinations of several predictors and found that two predictors gave better predictive skill than one predictor alone for downscaling temperature in Australia. An exception is the combination of mean sea-level pressure and geopotential height, because no additional synoptic signal was added when the two fields were combined. Similar results were found for the combination of air temperature at 850 hPa and 500–1,000 hPa thickness. Timbal and McAvaney (2001) concluded that the combination of a thermal and a dynamic predictor is the most useful for downscaling daily temperature in Australia. In France, Timbal et al. (2003) found that the most suitable combination was mean sea-level pressure with air temperature at 850 hPa for daily maximum temperature, whereas mean sea-level pressure, air temperature at 850 hPa, and precipitable water were most suitable for daily minimum temperature and rainfall. Accordingly, Brands et al. (2011) found that the optimal combination to downscale daily temperatures (mean, maximum, and minimum) in Spain was air temperature at 850 hPa and mean sea-level pressure. They also found that accuracy was enhanced if humidity predictors were added to temperature and circulation predictors; however, they did not include them among the predictors because they are not reliably reproduced by GCMs. Regarding the accuracy of GCMs, Ribalaygua et al. (2013) suggested using field variables as predictors rather than point values because the former are more reliably simulated by GCMs. Similarly, free-atmosphere fields are better simulated by GCMs than boundary layer variables.

This consideration is particularly important when the analog method is conducted under the PP approach. The success of these approaches also depends on the choice of reanalysis. Horton and Brönnimann (2018) assessed the impact of 10 reanalyses on the performance of 7 variants of analog methods for statistical precipitation downscaling in Switzerland. They found significant differences between reanalyses that affected the performance of the methods, an impact even larger than that of the choice of predictor variables. Manzanas (2017) found that the standard analog method was more skillful for downscaling precipitation and maximum temperature in Senegal when direct surface variables were considered instead of combinations of upper-air variables. Hidalgo et al. (2008) also found that the most suitable predictor for downscaling precipitation using the constructed analog method was the reanalysis precipitation field. This implementation of the analog method may be highly beneficial because the same physical variable is considered as predictor and predictand (in this case, precipitation) and therefore no predictor screening is required; GCMs, however, have to simulate large-scale precipitation accurately. This kind of predictor-predictand setting is called homogeneous (Maraun & Widmann, 2018). With this setting, Turco et al. (2011) evaluated the performance of the analog method under the model output statistics (MOS) approach for downscaling precipitation from regional climate models (RCMs) and found that the method improved the RCM results.

Domains

Taking these considerations into account, the choice of the predictor domain is not a minor issue in the search for analog atmospheric states, and it is tightly linked to the selection of predictor variables. The domain of the predictor variables is usually defined by considering the regional circulation features that concentrate the regional processes controlling temperature variations and humidity advection. Centers of action and key areas of influence can also be identified by means of correlation analysis.

Using geopotential heights over the Northern Hemisphere, Lorenz (1969) concluded that analogs of acceptable quality at hemispheric scales are highly unlikely to be found, given the relatively short historical records of observations and the high number of degrees of freedom of the atmospheric circulation. Van den Dool (1989) discussed the effect of domain size on the search for analog atmospheric states in weather forecasting, concluding that natural and informative analogs are hard to find. He argued that over a small area it is easy to find good analogs even if the data set available for the analog search is short; the extreme case is a single variable at a single point, where it is easy to find several perfect historical analogs. However, such an analog would not be informative because boundary and remote effects, which are physically meaningful for the local variable, can travel rapidly through a small area. Atmospheric patterns over large areas are more robust for characterizing the atmospheric state, but some patterns may have no close analogs in the database with which to make a skillful prediction (Gutierrez et al., 2004). Horton and Brönnimann (2018) found that reanalyses with longer archives allow the pool of potential analogs to be increased, resulting in better performance of different analog-based methods. Based on these arguments, many studies have performed sensitivity tests on different combinations of predictors and domains during the development and optimization of analog-based methods in different versions and applications (Bettolli & Penalba, 2018; Fernández-Ferrero, Sáenz, Ibarra-Berastegi, & Fernández, 2009; Gutierrez et al., 2004; Radanovics et al., 2013; Timbal & McAvaney, 2001; Timbal et al., 2003). Fernández-Ferrero et al. (2009) found that results were improved when using dynamical variables over a wide region around the area of study and thermal and moisture variables over smaller regions neighboring Bilbao (Spain). Bettolli and Penalba (2018) found similar results for the Pampas region (Argentina). Using the constructed analog approach, Hidalgo et al. (2008) found that daily temperature patterns in California are best captured from continent-wide predictor fields, whereas daily precipitation patterns are best captured using predictors over smaller domains that are closer to the typical size of storm systems, or roughly the state scale.

Predictor Processing

The definition of an atmospheric state implies the selection of suitable predictor variables and domains that encompass the most meaningful physical linkages with the predictand. Once a data set describing atmospheric states or large-scale configurations has been chosen for a given domain, these states are mathematically represented and handled through matrices that describe the complex multivariate and multidimensional system. The question here is how to extract the information from this data set to search for the analogs.

These matrices are representative of the atmospheric states and are characterized by large degrees of freedom and high collinearity, which can lead to redundant information. Moreover, if a large number of predictors are considered, a large pool of historical records must be searched to find a good analog match (Timbal et al., 2003). This requires high computational costs, particularly when the analogs are searched for weather prediction purposes (Gutierrez et al., 2004). Therefore, it is advisable to reduce the dimensionality of the data and to remove high-frequency spatial noise but retain as much information as possible. To this end, instead of comparing the raw predictor values when searching the analogs, this search is usually limited to the subspace of the leading empirical orthogonal functions (EOF) of the data set (Zorita & von Storch, 1999). Thus, each atmospheric state is projected onto the n-dimensional space, defined by the n leading EOFs, and corresponds to a point in this space. The KNN method is then performed in this new phase space.
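
A minimal sketch of this projection step, assuming a single predictor variable already arranged as a days-by-gridpoints matrix (function and variable names are illustrative):

```python
import numpy as np

def eof_analog_search(X_train, x0, n_eofs=10):
    """Project onto the n leading EOFs and find the nearest neighbor
    in the reduced phase space.

    X_train : (n_days, n_gridpoints) historical fields
    x0      : (n_gridpoints,) target-day field
    """
    mean = X_train.mean(axis=0)
    A = X_train - mean                       # anomalies
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    E = Vt[:n_eofs]                          # n leading EOFs (rows)
    pcs = A @ E.T                            # training days in EOF space
    pc0 = (x0 - mean) @ E.T                  # target day in EOF space
    return np.argmin(np.linalg.norm(pcs - pc0, axis=1))  # analog index
```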

The input data to the EOF analysis would be $X_{tv}$, a matrix with t time steps in its rows and v grid points in its columns when a single predictor variable at a single level is considered (see, e.g., Zorita & von Storch, 1999). This structure of the input data corresponds to the S-mode in principal component analysis (PCA; Lattin, Carroll, & Green, 2003). Different predictor variables (e.g., sea-level pressure and temperature at different vertical levels of the atmosphere) may be combined in the input matrix $X_{tv}$, so that v would be the number of atmospheric variables multiplied by the number of horizontal grid points multiplied by the number of pressure or surface levels (see, e.g., Brands et al., 2011; Matulla et al., 2008). In any case, the input data may be preprocessed according to the purposes of the downscaling. This preprocessing may include the removal of periodic components of the system (daily and annual cycles) before searching for analogs on the anomalies, as well as standardization, detrending, or geographical balancing of grid points by the cosine of their corresponding latitude (Benestad et al., 2008; van den Dool, 2007). Imbert and Benestad (2005) analyzed the sensitivity of the analog method's results to different choices in the implementation of the EOF analysis. They suggested that the use of principal components (PCs) weighted by their corresponding eigenvalues yields more realistic results than unweighted ones. Matulla et al. (2008) found that the best performance of the analog method for downscaling daily precipitation was achieved with an EOF truncation accounting for 85–90% of the explained variability; the inclusion of more EOFs did not improve results. However, such a truncation will depend on the predictor variables, their combinations and domains, and the predictand variable. For instance, Brands et al. (2011) used an EOF truncation accounting for 99% of the explained variance when performing the analog method for downscaling daily maximum and minimum temperatures. Fernández and Saenz (2003) applied canonical correlation analysis (CCA) to obtain projection patterns that reduce the dimensionality and noise, and then searched for the analogs in this CCA phase space to downscale monthly precipitation. This approach allowed a search for analogs in a space with a topology that maximized the predictor-predictand relationship and obtained good results. Zorita and von Storch (1999), Imbert and Benestad (2005), Benestad et al. (2008), and Brands et al. (2011) used eigenvector and eigenvalue analysis to perform the analog method.
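
Two of the preprocessing choices mentioned above, latitude weighting and truncation at a target fraction of explained variance, might look as follows; this is a sketch under the assumption of a days-by-gridpoints input matrix, with illustrative names and a 90% variance target.

```python
import numpy as np

def preprocess_and_truncate(X, lat, var_target=0.90):
    """X: (n_days, n_gridpoints); lat: (n_gridpoints,) in degrees.
    Returns the PCs retaining roughly var_target explained variance."""
    w = np.sqrt(np.cos(np.deg2rad(lat)))       # equal-area grid weighting
    A = (X - X.mean(axis=0)) * w               # weighted anomalies
    _, s, Vt = np.linalg.svd(A, full_matrices=False)
    var_frac = np.cumsum(s**2) / np.sum(s**2)  # cumulative explained variance
    n = int(np.searchsorted(var_frac, var_target)) + 1
    return A @ Vt[:n].T                        # leading PCs
```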

The analog method may also be applied to the raw predictor data, that is, without performing any data reduction using PCA. Accordingly, Ribalaygua et al. (2013) argued that their analog method does not make use of PCA in order to consider the full range of data variability. However, they assigned different weights to the grid points in the domain depending on the influence of the predictors on the study area. Turco et al. (2011) used raw daily precipitation fields in the search for analogs when they implemented an MOS version of the analog method. Timbal and McAvaney (2001) found that in small domains raw data led to better results than PCs, because PCs are constrained by their orthogonal nature and by the shape and size of the geographical domain (Jolliffe, 2002). However, PCs were more effective in large domains because they filtered the synoptic signal from remnant noise.

When the analog method is applied to GCM outputs, the search for analogs of the GCM simulations is an important issue to take into account, particularly if the method is applied under the perfect prognosis approach. Typically, GCM-simulated fields are projected onto the n leading EOFs derived from the observations and reanalysis, and the search for analogs is restricted to these projections (Zorita, Hughes, Lettenmaier, & von Storch, 1995). Another variant, proposed by Imbert and Benestad (2005), is the use of EOFs that are common to the observations and the GCM simulations in the search for analogs (Benestad, 2001). This approach allows for an adjustment of systematic biases in the GCM by forcing the mean value and standard deviation of the PCs that describe the GCM control period to be the same as in the observations and reanalysis. The same offset and scaling are then used for the GCM future simulation. A number of analog-based studies removed the GCM biases by centering (subtracting the mean) or normalizing (centering followed by division by the standard deviation) predictors with respect to the reanalysis values (see, e.g., Frost et al., 2011; Martin et al., 1997).
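
The offset-and-scaling adjustment described above can be sketched as follows, assuming the PCs for the reanalysis, GCM control run, and GCM future run are already available (illustrative names):

```python
import numpy as np

def adjust_gcm_pcs(pc_reanalysis, pc_control, pc_future):
    """Force the GCM control-period PCs to share the reanalysis mean and
    standard deviation, then apply the same offset and scaling to the
    future-period PCs. All arrays: (n_days, n_pcs)."""
    mu_r, sd_r = pc_reanalysis.mean(axis=0), pc_reanalysis.std(axis=0)
    mu_c, sd_c = pc_control.mean(axis=0), pc_control.std(axis=0)
    scale = sd_r / sd_c
    return (pc_future - mu_c) * scale + mu_r   # adjusted future PCs
```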

Similarity Measures

The analog method is based on finding similar atmospheric states for a given target day. Exact analogs should not be expected, because that would imply a perfectly periodic system; states that are initially close but diverge as time progresses are more feasible (van den Dool, 2007). Similarity, then, implies a metric to quantify closeness between points in the phase space defined either by the PCs or by the raw data. The Euclidean distance is the most common similarity measure used when searching for analogs. However, other measures can be considered to evaluate the similarity between atmospheric configurations. Zorita and von Storch (1999) suggested introducing different weights on the EOF coordinates when computing the Euclidean distance. With the aim of finding the best distance function for producing analog height forecasts, Toth (1991) examined nine different similarity metrics using anomalies of geopotential height at 700 hPa covering the Northern Hemisphere. The similarity measures comprised the root mean square difference (Euclidean distance), the mean absolute difference in height, the standard correlation, different variants of metrics that included the gradient of height, and differences in vorticity. Toth (1991) found that the difference in the gradient of height performed better than the other measures considered, and that the root mean square difference outperformed the correlation. However, the author advised that, depending on the purpose, different similarity measures might be more appropriate. Matulla et al. (2008) evaluated the influence of five similarity measures on the performance of the analog method for downscaling daily precipitation over two regions with complex topography (California's Central Valley and the European Alps). In these cases, the Euclidean distance, the sum of absolute differences, and the angle between two atmospheric states performed better than measures that introduced additional weightings to the PCs. Overall, the Euclidean distance performed satisfactorily in most cases, whereas the Mahalanobis distance seemed to be less efficient. Lengths of wet spells, however, were best simulated using the angular similarity measure. Wetterhall, Halldin, and Xu (2005) compared the analog method's performance in downscaling daily and monthly precipitation in central Sweden using two similarity measures: (a) the sum of the weighted square differences in the principal component phase space of sea-level pressure, as proposed by Zorita and von Storch (1999), and (b) the Teweles-Wobus score (TWS) applied to the raw sea-level pressure data. The TWS, defined by Teweles and Wobus (1954), compares sea-level pressure (SLP) fields by considering their zonal and meridional gradients, emphasizing analogy in shape and, therefore, circulation. Wetterhall et al. (2005) showed that both measures produced similar results, but TWS was superior in simulating precipitation duration and intensity. This metric was also used by Obled et al. (2002), Wetterhall et al. (2007), and Teutschbein, Wetterhall, and Seibert (2011) to implement the analog method.
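
As an illustration, the Euclidean distance and a Teweles-Wobus-type score can be implemented as follows; this gradient-based formulation is a sketch of the score's general form (lower values mean greater similarity), not code from the cited studies.

```python
import numpy as np

def euclidean(a, b):
    """Euclidean distance between two state vectors (e.g., PCs)."""
    return np.linalg.norm(a - b)

def teweles_wobus(slp_a, slp_b):
    """Teweles-Wobus-type score between two 2-D (lat, lon) SLP fields,
    comparing meridional and zonal gradients rather than raw values."""
    num, den = 0.0, 0.0
    for axis in (0, 1):                 # meridional, then zonal gradients
        ga = np.diff(slp_a, axis=axis)
        gb = np.diff(slp_b, axis=axis)
        num += np.abs(ga - gb).sum()    # gradient differences
        den += np.maximum(np.abs(ga), np.abs(gb)).sum()
    return 100.0 * num / den
```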

Restricted or Unrestricted Analogs

The definition of analogs indicates that two atmospheric states should be so close that they can be called each other’s analog. These two states should be far apart in time, well beyond the decorrelation time, to avoid artificial predictive skills due to the system’s memory (van den Dool, 2007). In the classic analog method (Zorita & von Storch, 1999), the search for the analog atmospheric states is usually restricted to the season of the target day. That is, the analog atmospheric patterns must belong to the target day’s season of the year. An example of this setting is the downscaling of daily precipitation and temperatures during summer and winter over France conducted by Timbal et al. (2003). Moreover, in some cases the training data is restricted to a specific time window, such that the analog has to be selected from the same time of year as the simulated one. This strategy was adopted by Martin et al. (1997), who used a 30-day window centered at the target date, and Matulla et al. (2008), who considered a 61-day window centered at the target date for all years except for the target year. With a similar setting, Wetterhall et al. (2005) showed that the downscaling was improved when seasonality was included. In fact, restricted analogs are usually adopted when the main aim is to reproduce the observed seasonality in the data.
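
A restricted candidate pool such as the 61-day window used by Matulla et al. (2008) might be built as in the following sketch, which uses a circular day-of-year distance and excludes the target year (names are illustrative):

```python
import numpy as np

def restricted_pool(dates, target_date, half_window=30):
    """Return indices of candidate analog days within +/- half_window
    calendar days of the target date, excluding the target year.

    dates       : array of numpy datetime64[D]
    target_date : numpy datetime64[D]
    """
    doy = (dates - dates.astype('datetime64[Y]')).astype(int)
    t_doy = int((target_date - target_date.astype('datetime64[Y]'))
                .astype(int))
    d = np.abs(doy - t_doy)
    d = np.minimum(d, 365 - d)              # circular day-of-year distance
    same_year = (dates.astype('datetime64[Y]')
                 == target_date.astype('datetime64[Y]'))
    return np.where((d <= half_window) & ~same_year)[0]
```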

One of the limitations of the method is that it is not able to simulate nonobserved values and therefore it cannot predict new events that might occur. One way to overcome this limitation is to expand the search for analogs to all seasons, as suggested by Imbert and Benestad (2005). This unrestricted search for analogs allows for the simulation of a large variety of possible situations and has been adopted to downscale daily precipitation and temperatures (Bettolli & Penalba, 2018; Brands et al., 2011; Ribalaygua et al., 2013).

However, this approach would not be appropriate for simulating extreme seasons if new records are expected (Imbert & Benestad, 2005). For instance, it would not be possible to simulate extreme high (low) temperatures during summer (winter). Also, although the best analog could be found in a totally different season, the actual radiative forcing and other seasonally varying factors may be different, thus introducing errors into the downscaling (Lorenz, 1969; Timbal & McAvaney, 2001).

Overall, the choices in this kind of setting depend on the application. For climate change applications, restricted analogs would not be advisable because the climatic characteristics of the calendar seasons may change in a warming climate, and therefore the relationships found for the 20th-century climate would not apply in the future (Ribalaygua et al., 2013). For seasonal prediction, instead, the use of restricted analogs may improve the method's performance (Shao & Li, 2013).

Reproducing the Time Structure

The temporal structure of a local variable is a feature of great interest for many applications. “Short-term temporal dependence” refers to the way the local variable changes over time (Benestad, 2010). Such features include persistence (usually quantified by the autocorrelation at a lag of one day) and transition probabilities, such as the wet-dry probability. Consecutive sequences that result in extreme events, such as cold or heat waves and the lengths of dry or wet spells, are further examples of these temporal aspects.
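
Two of these diagnostics, the lag-1 autocorrelation and the wet-to-dry transition probability, can be computed for a downscaled daily series as in the following sketch (the 1 mm wet-day threshold is an assumption for illustration):

```python
import numpy as np

def lag1_autocorr(y):
    """Lag-1 autocorrelation of a daily series."""
    return np.corrcoef(y[:-1], y[1:])[0, 1]

def wet_to_dry_prob(precip, wet_threshold=1.0):
    """P(dry tomorrow | wet today), with wet defined as >= wet_threshold."""
    wet = precip >= wet_threshold
    return np.mean(~wet[1:][wet[:-1]])  # dry-tomorrow fraction of wet days
```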

The simulation of the time structure of the local variable will depend on the information about the temporal dependence captured by, or contained in, the predictors. This concept is valid for any statistical downscaling method, but it particularly affects the analog method in that its performance is intrinsically linked to how well the large-scale structure is defined and, therefore, how well the weather trajectory is captured. To overcome this limitation, the inclusion of more distant past atmospheric states has been proposed in many configurations of the method. This can be done by considering extended EOF analysis in the data reduction for the analog search; the propagation of the atmospheric patterns is then accounted for in the phase space defined by the associated PCs (Benestad, 2010; Benestad et al., 2008). Another approach for taking the atmospheric evolution during the days prior to the target day into account is to consider it in the similarity measure when searching for the analogs (Matulla et al., 2008; Timbal & McAvaney, 2001). Also, the evolution before and after the target day can be considered in order to take into account the way the synoptic system further evolves (Timbal & McAvaney, 2001). Matulla et al. (2008) used analog sequences of up to seven days to evaluate the performance of the analog method in simulating precipitation. They found that the inclusion of past days was beneficial, depending on the considered aspect of the local predictand. Monthly precipitation totals and categorical estimates of precipitation intensities are generally more successfully simulated when a sequence of up to three days is considered, whereas the simulation of dry and wet spells is improved when considering longer sequences.
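
One simple way to include the preceding days in the similarity measure is to concatenate lagged states into the feature vector, as in this sketch (illustrative names; a two-day history is an arbitrary choice):

```python
import numpy as np

def lagged_analog(X, x_recent, n_lags=2):
    """Find the historical day whose state, together with its n_lags
    preceding days, is closest to the recent trajectory x_recent.

    X        : (n_days, n_features) time-ordered historical states
    x_recent : (n_lags + 1, n_features), oldest day first, target day last
    """
    target = x_recent.ravel()
    best_j, best_d = None, np.inf
    for j in range(n_lags, X.shape[0]):
        cand = X[j - n_lags : j + 1].ravel()   # day j plus its recent past
        d = np.linalg.norm(cand - target)
        if d < best_d:
            best_j, best_d = j, d
    return best_j                              # index of the analog day
```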

The Analog Method in Weather and Climate Research

Regional climate downscaling can be categorized into four types according to its applications: short-term numerical weather prediction, regional climate simulation, seasonal prediction, and climate prediction (Pielke & Wilby, 2012). This categorization is valid for both dynamic and statistical downscaling. In particular, analog-based methods have been applied for different time scales and purposes from the beginning of downscaling (Benestad, 2016).

The Analog Method in Weather Forecasting

For short-range weather forecasting, it can be assumed that if two atmospheric states are close initially, then their subsequent behavior should be similar for some time after. Consequently, these similar synoptic situations lead to similar local responses. These are the basic assumptions of analog-based methods and concepts that a human weather forecaster intrinsically puts into practice when making a subjective prognosis.

Lorenz (1963) showed that slightly different initial conditions would lead to an increasingly large change in the evolution of the atmosphere with time. Lorenz (1969) introduced the concept of analogs to study the atmospheric predictability for operational weather prediction. He pointed out that a pair of analogs may be regarded as equal to the other state plus a small superposed error. The growth rate of the error may be determined from the behavior of the atmosphere following each state, thus empirically assessing the atmospheric predictability.

In short-term forecast applications, two analog-based strategies can be applied. In the first one, analogs can be searched in the past record of the variable of interest to be predicted (for example, geopotential height at a certain level) and then the forecast on the target day (for the subsequent time steps) relies on the string of weather maps following the analog. That is, the past evolution of the analogs gives the prediction of the same variable of interest (van den Dool, 2007). The second strategy involves downscaling applications in weather prediction where the analog search is performed for the large-scale variable to predict a local variable on the target day by using the concurrent local variables of the analog date and time.
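
The first strategy can be sketched as follows: find the closest historical state and issue the maps that followed it as the forecast trajectory (a minimal illustration with hypothetical names):

```python
import numpy as np

def analog_forecast(X_hist, x0, lead_days=3):
    """Forecast the next lead_days states as the evolution that followed
    the closest historical analog of the current state x0.

    X_hist : (n_days, n_features), time-ordered historical states
    """
    n = X_hist.shape[0]
    # search only days whose successor maps exist in the record
    dists = np.linalg.norm(X_hist[: n - lead_days] - x0, axis=1)
    j = np.argmin(dists)
    return X_hist[j + 1 : j + 1 + lead_days]   # the analog's evolution
```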

The former scheme was applied by Lorenz (1969) to height forecasts at the 200, 500, and 850 hPa levels over the Northern Hemisphere. Gutzler and Shukla (1984), Ruosteenoja (1988), and van den Dool (1989) extended the study to a longer record of geopotential height fields at 500 hPa over the extratropical Northern Hemisphere and to different limited domains. The overall results of these studies showed that only a few fairly good analogs could be found (considering their occurrence and lifetime), mainly due to the sparse weather records and the large number of degrees of freedom. They also found that the best analogs are most common in winter (particularly at roughly the same time of the year) and when the analog search is further restricted to limited areas. Moreover, the persistence model outperformed the analog model, and therefore its use for operational short-term forecasting was inefficient. However, this approach has gained renewed interest since the 1990s due to the availability of huge data sets and advances in computational efficiency (Ayet & Tandeo, 2018). For instance, Hamill and Whitaker (2006) used the analog method in a numerical weather prediction ensemble to obtain probabilistic precipitation forecasts. Singh, Dimri, and Ganju (2009) applied this analog scheme for predictions of multiple surface weather parameters (maximum, minimum, and mean temperatures; average wind speed; surface pressure; relative humidity) three days in advance at a specific location in the western Himalayas. The potential of atmospheric analogs using ground radar data for precipitation nowcasting was explored in the central Alps and the eastern United States (Atencia & Zawadzki, 2015; Panziera, Germann, Gabella, & Mandapaka, 2011). Forecasting of global horizontal irradiance using spatial cloud patterns from hourly images of satellite-derived irradiance is another example of this scheme (Ayet & Tandeo, 2018).

Although advances in computing resources have led to improvements in numerical weather forecasting approaches based on ensembles of model runs, the second strategy of using analogs for short-term forecasting has also been evaluated. The idea behind the use of the analog method for short-range operational downscaling relies on the fact that the method is considered a benchmark that has to be surpassed (Matulla et al., 2008). In this sense, some national meteorological services have adopted analog-based methods for forecasting (Fernández-Ferrero et al., 2009; Matulla et al., 2008; Timbal & McAvaney, 2001). Examples of this scheme are the use of analogs for short-range operational forecasts of daily precipitation and maximum wind speed (lead time 1–5 days) over the Iberian Peninsula (Gutiérrez, Cofiño, Cano, & Rodriguez, 2004); precipitation averages for a range of up to six hours ahead in Bilbao, Spain (Fernández-Ferrero et al., 2009); and hourly quantitative precipitation forecasts for the Reno River Basin in Italy (Diomede, Nerozzi, Paccagnella, & Todini, 2006).

The Analog Method in Seasonal Prediction

Analog-based methods have been used for seasonal prediction applications, although less frequently than for weather forecasting and climate change studies. The use of the analog method for short-term climate prediction goes back to Barnett and Preisendorfer (1978). Their scheme was based on the persistence concept, which assumes that interseasonal changes in the climate system occur similarly from one instance to another, and therefore, for the current state, a sequence of events analogous to those that occurred in the past can be expected. In other words, if the states of the ocean-atmosphere system in two seasons are quite similar, the following season is expected to be similar as well. This approach was implemented in the United States for seasonal surface temperature predictions using different sets of predictors, including geopotential height at 700 hPa, 700–1,000 hPa thickness, and ENSO (El Niño-Southern Oscillation)-related indices (Barnston & Livezey, 1988; Bergen & Harnack, 1982; Livezey & Barnston, 1988; Livezey et al., 1994). Livezey and Barnston (1988) used the anti-analog strategy introduced by van den Dool (1987) to enhance the opportunity of finding relevant cases in a small historical record, along with the use of groups or composites of several best analogs rather than the single best one (Bergen & Harnack, 1982). Mullan and Thompson (2006) used this strategy to predict monthly and seasonal anomalies of temperature and precipitation in New Zealand using mean sea-level pressure and sea surface temperature as predictors.

The advent of fast computer processors has allowed for operational global seasonal predictions that are initialized a few times per week and provide forecasts for time horizons from weeks to several months ahead. Coordinated initiatives to provide multimodel subseasonal to decadal reforecasts have made it possible to advance the exploration of statistical downscaling for seasonal prediction purposes. These initiatives include ENSEMBLES, the Development of a European Multimodel Ensemble System for Seasonal to Interannual Prediction (DEMETER), and the Climate System Historical Forecast Project. Diez, Primo, García-Moya, Gutiérrez, and Orfila (2005) showed that the analog method helped to improve precipitation predictions over Spain when the GCM showed some initial skill, for instance, when predictability is enhanced by an El Niño event; however, it showed low skill during dry episodes. The enhanced skill induced by ENSO events was later confirmed by Sordo, Frías, Herrera, Cofiño, and Gutiérrez (2008) for precipitation prediction in Peru (a tropical region directly influenced by El Niño) and Spain (an extratropical region). Frías, Herrera, Cofiño, and Gutiérrez (2010) extended this work to temperatures in Spain. Wu et al. (2012) applied the analog method to seasonal predictions of monthly precipitation in the southeastern Mediterranean. Tian and Martinez (2012) compared the performance of natural analog and constructed analog methods in producing probabilistic and deterministic downscaled daily reference evapotranspiration forecasts in the southeastern United States. They found that constructed analogs showed slightly higher skill than natural analogs for deterministic forecasts, but the opposite held for probabilistic forecasts. Shao and Li (2013) incorporated a bias correction method to resolve the mismatch between the historical data and the Predictive Ocean Atmosphere Model for Australia in the search for analogs to downscale precipitation in the Murray-Darling Basin of Australia, while Charles, Timbal, Fernandez, and Hendon (2013) conducted a similar study for the region with the standard analog method. Manzanas, Lucero, et al. (2017) applied the analog method and other statistical downscaling techniques to downscale precipitation from a multimodel seasonal hindcast from ENSEMBLES in the Philippines. They found that bias correction methods maintain or worsen the skill of the raw model forecasts, whereas perfect prognosis methods can produce significant improvement (or worsening) in cases for which the large-scale predictor variables considered are better (or worse) predicted by the model than precipitation, highlighting that the choice of a suitable downscaling approach depends on the region and the season. For Senegal, Manzanas (2017) used the analog method with different predictor settings to downscale precipitation and maximum temperature and found that the method added noteworthy value in terms of interannual variability.

Multimodel (using several GCMs) and multimethod (using several statistical and dynamical approaches) coordinated experiments to explore the added value of statistical and dynamical downscaling in seasonal predictions are important for the improvement of climate services. Analog-based methods have been evaluated in this manner in eastern Africa (Nikulin et al., 2017) and on the continental scale in Europe (Manzanas, Gutiérrez et al., 2017). These studies showed that the analog method performed similarly to other statistical downscaling methods as well as to dynamical models. They also yielded similar conclusions about the ability of both downscaling approaches to reduce global model biases, but there was no clear added value of downscaling for seasonal forecasts of summer temperature over Europe or summer precipitation over eastern Africa. This highlights the challenge for the regional climate modeling community at this temporal scale.

The Analog Method in Climate Change

Since Zorita and von Storch (1999), analog-based methods have been increasingly used for regional climate research and climate change applications. Zorita and von Storch (1999) compared the standard analog method with more complicated statistical downscaling techniques in replicating different aspects of precipitation over the Iberian Peninsula. They showed that the analog method performed as well as canonical correlation analysis (as representative of linear methods), a method based on classification and regression trees (as representative of a weather generator based on classification methods), and a neural network (as an example of deterministic nonlinear methods). They also identified its main limitations. These results inspired many studies around the world that explored the analog method's potential for regional climate studies in a changing climate, along with strategies to overcome its identified shortcomings. In this way, owing to its simple design and considerable advantages, the method became a benchmark for any statistical downscaling strategy.

Temperature and precipitation are the climate system variables that most directly impact human health and well-being, ecosystem dynamics, and socioeconomic activities. They have been the focus of diverse climate variability and change studies and the subject of analog-based analyses of their regional behavior. Zorita et al. (1995) used the analog method to simulate daily precipitation over two regions of the United States in a climate change scenario from two GCMs. Martin et al. (1997) optimized the method to simulate daily precipitation and temperature over the French Alps with the aim of providing snow cover projections. Dehn (1999) applied the analog method to downscale local precipitation scenarios that fed into a slope hydrological/stability model to derive future landslide activity in the Dolomites in Italy. Timbal and McAvaney (2001) and Timbal and Jones (2008) provided temperature and precipitation projections, respectively, in some regions of Australia, and Timbal et al. (2003) provided the same for western France. Brands et al. (2011) used the analog method to generate ensemble projections of local daily mean, maximum, and minimum air temperatures in the northwest of the Iberian Peninsula until the middle of the 21st century. Teutschbein et al. (2011) compared three statistical approaches to downscale precipitation from two GCMs to study the variability of seasonal streamflow and flood-peak projections for a meso-scale catchment in southeastern Sweden. Horton, Obled, and Jaboyedoff (2017) applied the method to evaluate its potential for downscaling subdaily precipitation in Switzerland.

A number of studies have compared statistical and dynamic downscaling approaches, including the analog method, to identify their relative merits and shortcomings. For example, Schmidli et al. (2007) conducted the intercomparison of precipitation scenarios for the European Alps. Frost et al. (2011) compared both approaches for southeastern Australia, Casanueva, Herrera, Fernández, and Gutiérrez (2016) in the framework of the EURO-CORDEX initiative, and Gutierrez et al. (2018) in the VALUE initiative.

Performance of the method varied depending on the region, configurations of the method, and the local predictand. Some studies concluded that although the method is an algorithm that does not explicitly calibrate the mean or the temporal correlation, it is able to capture these features (Casanueva et al., 2016). Other studies have shown that the method tends to display mean biases, underestimate the variance (Gutierrez et al., 2018), and have difficulty in representing the time structure (Benestad, 2010). Some studies also showed that spatial correlation structure is preserved and the intervariable relationships are well represented (Bettolli & Penalba, 2018).

However, most analog-based studies agree on the difficulty of the method in reproducing new extreme records because it cannot simulate values outside the historical range. Another limitation concerning the use of the analog procedure (and common to any statistical downscaling technique) is the assumption that the relation between upper-air fields and local meteorological conditions is unchanged in a changing climate. Several attempts have been made to overcome these limitations. Imbert and Benestad (2005) proposed a combined approach that involves superimposing a linear trend from a regression-based model onto the results of the analog model, thus shifting the whole probability density function and making it possible to extrapolate higher values than those in the calibration data sample. Later, Benestad (2010) proposed a recalibration of the method that involved a quantile-quantile mapping to predict the upper tail of the precipitation distribution, showing that it was possible to predict higher probabilities for heavy precipitation events in the future, except for the most extreme percentiles, for which sampling fluctuations give rise to high uncertainties. Another strategy to allow for extrapolation is the constructed analog method proposed by Hidalgo et al. (2008). The method considers a linear combination of several analog atmospheric patterns associated with a target situation. The downscaled pattern is obtained by applying the same regression coefficients to the high-resolution patterns. Although the overall performance was good, the method exhibited limited skill in reproducing wet and dry extremes and some improved skill for extreme temperatures depending on the season (Maurer & Hidalgo, 2008). This approach was later improved with a bias correction method to downscale precipitation and temperature (Maurer et al., 2010) and extended to multivariable aspects in the United States (Abatzoglou & Brown, 2012) and South Korea (Eum, Cannon, & Murdock, 2016). Ribalaygua et al. (2013) proposed a two-step analog procedure for daily precipitation in which the n closest historical analogs are identified for a target day and then the historical precipitation observations associated with the analogs, together with their probabilities of occurrence, are used to estimate precipitation amounts on the corresponding target day. A similar procedure was used by Castellano and De Gaetano (2016, 2017) to downscale daily precipitation extremes in the United States, obtaining reasonable performance.
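
A minimal sketch of the constructed analog idea, under the assumption that coarse-scale and fine-scale fields are available as paired day-by-feature matrices (names and the choice of K are illustrative):

```python
import numpy as np

def constructed_analog(X_coarse, Y_fine, x0, K=30):
    """Regress the target coarse pattern on its K nearest historical
    patterns and apply the fitted coefficients to the corresponding
    fine-scale patterns, allowing values outside the observed range.

    X_coarse : (n_days, n_coarse); Y_fine : (n_days, n_fine)
    """
    dists = np.linalg.norm(X_coarse - x0, axis=1)
    idx = np.argsort(dists)[:K]
    A = X_coarse[idx].T                       # (n_coarse, K) analog library
    coeffs, *_ = np.linalg.lstsq(A, x0, rcond=None)
    return Y_fine[idx].T @ coeffs             # same combination, fine scale
```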

Regarding the robustness or stationarity assumption, Gutierrez et al. (2013) provided a validation framework to test the suitability of various perfect prognosis statistical downscaling techniques for climate change studies using predictor sets in anomalously warm historical periods. Overall, the authors showed that results were highly dependent on the predictor sets, and, although the analog-based methods were more appropriate for reproducing the observed maximum and minimum temperature distributions in Spain, they underestimated the temperature anomalies of the warm periods. This underestimation was found to be critical when considering the warming signal in the late 21st century. A companion work for the case of precipitation was conducted by San-Martín, Manzanas, Brands, Herrera, and Gutiérrez (2017), with special focus on the suitability for extrapolating anomalously dry conditions. They showed that the extrapolation capability for anomalously dry conditions depended on the predictor data set considered. However, the test for robustness was similar for the different families of statistical downscaling techniques, and therefore no indication of the variability in future precipitation projections for the different methodologies could be provided.

Conclusion

This article has addressed the analog method and its use in different weather and climate applications. The method has been extensively used for more than 50 years and has proven satisfactorily effective, so efforts to improve its performance are worthwhile.

The standard analog method is an algorithm that does not require calibration as such, but rather optimization. Some of its more elaborate variants do require calibration, however. Overall, the optimization or training process will depend on the different configurations and the construction procedure as well as the purpose of its application. The performance of the method also depends on the study region, the time of year, and the local variable of interest. Figure 3 shows the geographical locations and number of studies referenced in this article where analog-based methods were used for different applications. The figure makes evident that there are regions in which the potential of the method has not yet been evaluated. Increasing the available data and expanding access to it will allow more regions to explore the potential of the analog method.

Figure 3. Geographical locations and number of studies where analog-based methods were used for different applications.

Despite their apparent simplicity, analog-based methods perform relatively well in simulating different aspects of local climates. However, overcoming their two greatest limitations—the extrapolation of new records and the stationarity assumption—remains a challenge. Future studies should address these limitations and present new strategies for dealing with them.
