1-4 of 4 Results

  • Keywords: uncertainty

Article

Journalistic Depictions of Uncertainty about Climate Change  

James Painter

Media research has historically concentrated on the many uncertainties in climate science, treated either as a dominant discourse in media coverage, measured by various forms of quantitative and qualitative content analysis, or as the presence of skepticism, in its various manifestations, in political discourse and media coverage. More research is needed to assess the drivers of such skepticism in the media, the changing nature of skeptical discourse in some countries, and important country differences in the prevalence of skepticism in political debate and media coverage. For example, why are challenges to mainstream climate science common in some Anglophone countries, such as the United Kingdom, the United States, and Australia, but not in other Western nations? As the revolution in news consumption via new players and platforms produces an increasingly fragmented media landscape, there are significant gaps in understanding where, why, and how skepticism appears. In particular, we do not know enough about the ways new media players depict the uncertainties around climate science and how this may differ from previous coverage in traditional and mainstream news media. Nor do we know how their emphasis on visual content affects audience understanding of climate change.

Article

Uncertainty Quantification in Multi-Model Ensembles  

Benjamin Mark Sanderson

Long-term planning for many sectors of society—including infrastructure, human health, agriculture, food security, water supply, insurance, conflict, and migration—requires an assessment of the range of possible futures that the planet might experience. Unlike short-term forecasts, which can be validated against subsequent observations, long-term forecasts have almost no validation data, so researchers must rely on supporting lines of evidence to make their projections. A review of methods for quantifying the uncertainty of climate predictions is given. The primary tools for quantifying these uncertainties are climate models, which attempt to represent all the processes relevant to climate change. However, neither the construction nor the calibration of climate models is perfect, and therefore the uncertainties due to model error must also be taken into account in the uncertainty quantification. Typically, prediction uncertainty is quantified by generating ensembles of climate-model solutions that span the range of possible futures. For instance, initial-condition uncertainty is quantified by generating an ensemble of initial states consistent with available observations and then integrating the climate model from each of them. A climate model is itself subject to uncertain choices in the representation of certain physical processes. Some of these choices can be sampled using so-called perturbed physics ensembles, in which uncertain parameters or structural switches are varied within a single climate model framework. For a variety of reasons, there is a strong reliance on so-called ensembles of opportunity: multi-model ensembles (MMEs) formed by collecting predictions from different climate modeling centers, each using a potentially different framework to represent the processes relevant to climate change. The most extensive collection of such MMEs is associated with the Coupled Model Intercomparison Project (CMIP).
However, the component models have biases, simplifications, and interdependencies that must be taken into account when making formal risk assessments. Techniques and concepts for integrating model projections in MMEs are reviewed, including differing paradigms of ensembles and how they relate to observations and reality. Aspects of these conceptual issues then inform the more practical matters of how to combine and weight model projections to best represent the uncertainties associated with projected climate change.
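As an illustration of the kind of combining and weighting of MME projections the abstract refers to, the toy Python sketch below weights three hypothetical model projections by their inverse squared error against a single historical observation. The data values, the `skill_weights` helper, and the inverse-error scheme are all assumptions of this sketch, not a method prescribed by the article.

```python
# Toy sketch of skill-based weighting in a multi-model ensemble.
# All numbers are illustrative, not real model output.

def skill_weights(historical, observed):
    """Weight each model by its inverse squared error vs. the observed value."""
    errors = [(h - observed) ** 2 for h in historical]
    inv = [1.0 / (e + 1e-9) for e in errors]  # epsilon avoids divide-by-zero
    total = sum(inv)
    return [w / total for w in inv]           # normalize so weights sum to 1

def weighted_projection(projections, weights):
    """Weighted mean of the models' future projections."""
    return sum(p * w for p, w in zip(projections, weights))

# Illustrative values: three models' simulated historical warming (degC)
# and their projected future warming.
hist = [0.8, 1.1, 1.4]   # each model's historical trend
obs = 1.0                # observed historical trend
proj = [2.5, 3.0, 3.8]   # each model's future projection

w = skill_weights(hist, obs)
estimate = weighted_projection(proj, w)
```

The better-performing second model dominates the weighted estimate, which is the basic behavior any more formal weighting scheme shares; real schemes must also confront the model interdependencies the abstract notes.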

Article

Empirical-Statistical Downscaling: Nonlinear Statistical Downscaling  

Aristita Busuioc

Empirical-statistical downscaling (ESD) models use statistical relationships to infer local climate information from large-scale climate information produced by global climate models (GCMs), as an alternative to the dynamical downscaling provided by regional climate models (RCMs). Among the various statistical downscaling approaches, nonlinear methods are mainly used to construct downscaling models for local variables that strongly deviate from linearity and normality, such as daily precipitation. These approaches are also appropriate for downscaling extreme rainfall. Nonlinear downscaling techniques span a range of complexities. The simplest is the analog method, which originated in the late 1960s from the need to obtain local detail in short-term weather forecasts for various variables (air temperature, precipitation, wind, etc.). Its first application as a statistical downscaling approach in climate science was carried out in the late 1990s. More sophisticated statistical downscaling models have been developed based on a wide range of nonlinear functions. Among them, the artificial neural network (ANN) was the first nonlinear regression–type method used as a statistical downscaling technique in climate science, in the late 1990s. The ANN was inspired by the human brain and was used early on in artificial intelligence and robotics. The impressive development of machine learning algorithms that can automatically extract information from vast amounts of data, usually through nonlinear multivariate models, has contributed to improvements in ANN downscaling models and to the development of new machine learning–based downscaling models, such as support vector machine and random forest techniques, that overcome some drawbacks of ANNs. Mixed models combining various machine learning downscaling approaches maximize downscaling skill in local climate change applications, especially for extreme rainfall indices.
Other nonlinear statistical downscaling approaches involve conditional weather generators, which combine a standard weather generator (WG) with a separate statistical downscaling model by conditioning the WG parameters on large-scale predictors via a nonlinear approach. The most popular ways to condition the WG parameters are the weather-type approach and generalized linear models. This article discusses various aspects of nonlinear statistical downscaling approaches, their strengths and weaknesses, and their comparison with linear statistical downscaling models. Proper validation of nonlinear statistical downscaling models is an important issue, allowing selection of an appropriate model to obtain credible information on local climate change. The selection of large-scale predictors, a model’s ability to reproduce historical trends and extreme events, and the uncertainty of future downscaled changes are important issues to be addressed. A better estimate of the uncertainty in downscaled climate change projections can be achieved by using ensembles of several GCMs as drivers, taking into account their ability to simulate the inputs to the downscaling models. Comparing statistically downscaled future climate change signals with those derived from dynamical downscaling driven by the same global model, including a thorough validation of the RCMs, gives a measure of the reliability of downscaled regional climate changes.
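The analog method described above can be illustrated with a minimal sketch: given a large-scale predictor state, find the closest state in a historical archive and take its observed local value as the downscaled result. The predictor variables, the `analog_downscale` helper, and the Euclidean distance metric are illustrative assumptions of this sketch, not details from the article.

```python
# Minimal sketch of analog downscaling: the local variable observed on the
# closest historical large-scale state is taken as the downscaled value.
import math

def euclidean(a, b):
    """Distance between two large-scale predictor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def analog_downscale(target_state, archive):
    """archive: list of (large_scale_state, local_observation) pairs.
    Returns the local observation of the nearest large-scale analog."""
    _, best_local = min(archive, key=lambda pair: euclidean(pair[0], target_state))
    return best_local

# Illustrative archive: (pressure anomaly, humidity anomaly) -> local rain (mm/day)
archive = [
    ((-2.0, 1.5), 12.0),
    ((0.5, -0.3), 0.0),
    ((1.8, 0.9), 3.2),
]

result = analog_downscale((-1.8, 1.2), archive)  # nearest analog is the wet day
```

In practice the archive holds decades of reanalysis states and station observations, and more than one analog may be averaged or sampled; the single-nearest-neighbor form above is the method at its simplest.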

Article

Downscaling Wind  

S.C. Pryor and A.N. Hahmann

Winds within the atmospheric boundary layer (i.e., near Earth’s surface) vary across a range of scales, from a few meters and sub-second timescales (the scales of turbulent motions) to the extremely large, long-period phenomena of the global atmosphere’s primary circulation patterns. Winds redistribute momentum and heat, and short- and long-term predictions of wind characteristics have applications in a number of socioeconomic sectors (e.g., engineering infrastructure). Despite its importance, atmospheric flow (i.e., wind) has received less attention within the climate downscaling community than variables such as air temperature and precipitation. However, there is growing recognition that wind storms are the single biggest source of “weather-related” insurance losses in Europe and North America in the contemporary climate, and that possible changes in wind regimes and intense wind events resulting from global climate non-stationarity matter for a variety of potential climate change feedbacks (e.g., emission of sea spray into the atmosphere), ecological impacts (such as wind throw of trees), and a number of other socioeconomic sectors (e.g., transportation infrastructure and operation, electricity generation and distribution, and structural design codes for buildings). Downscaling wind poses a number of specific challenges, including, but not limited to, the fact that wind has both magnitude (wind speed) and orientation (wind direction). Further, for most applications it is necessary to accurately downscale the full probability distribution of values at short timescales (e.g., hourly), including extremes; the mean wind speed averaged over a month or year is of little utility. Dynamical, statistical, and hybrid approaches have been developed to downscale different aspects of the wind climate, but all carry large uncertainties for high-impact aspects of the wind (e.g., extreme wind speeds and gusts).
The wind energy industry is a key application area for appropriately downscaled wind parameters and has been a major driver of new techniques to increase fidelity. Many opportunities remain to refine existing downscaling methods, to develop new approaches that better represent the spatiotemporal scales of wind variability, and to devise new ways to evaluate skill in the context of wind climates.
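One common compact way to represent the full probability distribution of short-timescale wind speeds, which the abstract stresses is what applications actually need, is the two-parameter Weibull distribution. The sketch below estimates its shape and scale from a sample using a standard moment-based approximation; the sample values and the choice of estimator are assumptions of this sketch, not methods prescribed by the authors.

```python
# Sketch: fit a two-parameter Weibull distribution to hourly wind speeds
# using the common moment-based approximation for the shape parameter.
import math

def weibull_moments(speeds):
    """Estimate Weibull shape k and scale c (m/s) from a wind-speed sample."""
    n = len(speeds)
    mean = sum(speeds) / n
    var = sum((s - mean) ** 2 for s in speeds) / n
    std = math.sqrt(var)
    k = (std / mean) ** -1.086              # empirical shape estimate
    c = mean / math.gamma(1.0 + 1.0 / k)    # scale from the Weibull mean formula
    return k, c

# Illustrative hourly wind speeds (m/s), not observed data
sample = [4.2, 6.1, 5.5, 8.3, 3.9, 7.0, 5.2, 6.8, 4.7, 9.1]
k, c = weibull_moments(sample)
```

The fitted parameters summarize the whole speed distribution, so quantities such as exceedance probabilities for high speeds can be derived from them; capturing extremes well, however, typically requires going beyond this simple two-parameter fit.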