
Econometrics for Modelling Climate Change

  • Jennifer L. Castle, Magdalen College, University of Oxford
  • David F. Hendry, Nuffield College, University of Oxford

Summary

Shared features of economic and climate time series imply that tools for empirically modeling nonstationary economic outcomes are also appropriate for studying many aspects of observational climate-change data. Greenhouse gas emissions, such as carbon dioxide, nitrous oxide, and methane, are a major cause of climate change as they cumulate in the atmosphere and reradiate the sun's energy. As these emissions are currently mainly due to economic activity, economic and climate time series have commonalities, including considerable inertia, stochastic trends, and distributional shifts, and hence the same econometric modeling approaches can be applied to analyze both phenomena. Moreover, both disciplines lack complete knowledge of their respective data-generating processes (DGPs), so model search retaining viable theory but allowing for shifting distributions is important. Reliable modeling of both climate and economic-related time series requires finding an unknown DGP (or close approximation thereto) to represent multivariate evolving processes subject to abrupt shifts. Consequently, to ensure that the DGP is nested within a much larger set of candidate determinants, model formulations to search over should comprise all potentially relevant variables, their dynamics, indicators for perturbing outliers, shifts, trend breaks, and nonlinear functions, while retaining well-established theoretical insights. Econometric modeling of climate-change data requires a sufficiently general model selection approach to handle all these aspects. Machine learning with multipath block searches commencing from very general specifications, usually with more candidate explanatory variables than observations, to discover well-specified and undominated models of the nonstationary processes under analysis, offers a rigorous route to analyzing such complex data. To do so requires applying appropriate indicator saturation estimators (ISEs), a class that includes impulse indicators for outliers, step indicators for location shifts, multiplicative indicators for parameter changes, and trend indicators for trend breaks. All ISEs entail more candidate variables than observations, often by a large margin when implementing combinations, yet can detect the impacts of shifts and policy interventions to avoid nonconstant parameters in models, as well as improve forecasts. To characterize nonstationary observational data, one must handle all substantively relevant features jointly: A failure to do so leads to nonconstant and mis-specified models and hence incorrect theory evaluation and policy analyses.

Subjects

  • Econometrics, Experimental and Quantitative Methods
  • Economic Theory and Mathematical Models
  • Environmental, Agricultural, and Natural Resources Economics

Why Change Matters

The four words in this article's title, "econometrics," "modeling," "climate," and "change," are obviously key to the article, both singly and jointly, but are closely connected because of change: See Hepburn and Schwarz (2020) for clear answers to most common questions about climate change. Greenhouse gas (GHG) emissions, especially carbon dioxide (CO2), methane, and nitrous oxide, have been increasing at a rapid rate, cumulating in the atmosphere at more than 3 parts per million (ppm) per annum over the decade commencing in 2010, which amounts to about 24 gigatons of CO2. Figure 1(a) records monthly atmospheric CO2 levels in parts per million (ppm) from January 1958, denoted 1958(1), to 2021(2), taken from records measured at Mauna Loa. The single linear trend is included to emphasize that the increases are themselves increasing. If humanity is to avoid catastrophic climate change, the present increasingly upward trend in GHG emissions must be reversed to become a rapid downward trend. However, such changes will not be smooth, but rather erratic, as some sources of emissions can be reduced, and perhaps eliminated, more easily than others, as the United Kingdom has almost done with coal, one of the worst polluters and CO2 emitters, shown in Panel (b). Figure 1 also illustrates the shifts and changing trends in two other climate time series, namely ocean heat content in Panel (c), and global mean surface temperature in Panel (d).

Figure 1. (a) Monthly atmospheric CO2 measured at Mauna Loa in parts per million (ppm). From The Keeling Curve. (b) Annual U.K. coal use in millions of tonnes (Mt), since 1860. From CarbonBrief. (c) Annual global ocean heat content to a depth of 700 m from 1957 to 2019 in 10²² Joules. From National Oceanic and Atmospheric Administration (NOAA). (d) Annual global mean surface temperature deviations in degrees Kelvin since 1880. From NASA – Global climate change.

It is not all bad news, as the data on U.K. coal use (measured in millions of tonnes, Mt) show that its near complete elimination occurred over a period when the U.K. economy actually grew considerably in per capita terms. The other aspect of Figure 1 is to emphasize how great and how fast climate change is occurring, especially since 1980, with substantial rises in ocean heat and surface air temperatures.

Atmospheric CO2 concentrations lay within the range of roughly 175 ppm to 300 ppm over 800,000 years of ice ages before the Industrial Revolution. It took thousands of years to move from its lowest level of atmospheric CO2 concentrations to its highest level and back down again (see, e.g., Castle & Hendry, 2020, section 6). This is seen in Figure 2(a), which records changes in atmospheric CO2 concentrations, where the x-axis is given in 1,000-year intervals to the present. However, the last observation in Panel (a) records changes over the 250 years since 1760 and shows in dramatic detail the impact of humanity. Almost 100 ppm have been added in the 60 years to 2020.

The SARS-CoV-2 pandemic was first reported in December 2019 in the Hubei province of China and impacted globally in early 2020, albeit with slightly different timings across countries. The response was characterized by intermittent "lockdowns" in attempts to control the pandemic, with associated sudden drops in toxic emissions like nitrogen oxides and in CO2. This can be seen in Figure 2(b), which records daily global CO2 reductions relative to the previous year. Panel (b) highlights just how important it is to handle both location shifts and trend breaks when modeling climate change data. However, it would take an eagle's eye to see the small fall during 2020 of less than 20 Mt in Panel (b) relative to the trend in CO2 emissions in Figure 1(a), as 3 ppm correspond to 24,000 Mt.

Figure 2. (a) Thousand-year changes in parts per million (ppm) of CO2 in the atmosphere over 800,000 years of ice ages before the Industrial Revolution, ending with CO2 changes in the past 250 years. (b) Reductions in CO2 during 2020 from the SARS-CoV-2 pandemic, in millions of tonnes (Mt). From World Meteorological Organization.

The Earth’s climate may seem relatively stable over the past few hundred years, but it is always in flux, and always has been. GHGs in the atmosphere, especially water vapor and carbon dioxide, are crucial in maintaining life. When GHGs are too depleted, the planet cools, once being a “snowball” (see, e.g., Hoffman & Schrag, 2000) with glaciation in Death Valley, whereas excessive GHGs lead to very warm periods, as in the Permian, Cretaceous, and most “recently,” the Paleocene-Eocene Thermal Maximum (PETM), about 50 million years ago.

Past climate change was driven by natural forces, including plate tectonics, volcanism, and developments like photosynthesis; thus, before the Anthropocene, planet Earth had experienced a wide range of climates. Given the present proliferation of life, many forms of life had to have survived despite these great changes. Indeed, the fossil record suggests life thrived in global temperatures much higher and lower than those in the 20th and early 21st centuries. Nevertheless, that same record reveals that large numbers of species have disappeared, becoming extinct in the process of climate change, even if, much later, new species evolved from survivors. In particular, the "mass extinctions" visible in fossil records seem due to major climate changes arising from a variety of causes. Global cooling occurred at the end of the Ordovician and Devonian periods, the latter from increased photosynthesis reducing atmospheric CO2, whereas temperatures were far higher during the worst mass extinction at the Permian–Triassic (P/Tr) boundary from massive volcanism, creating the large igneous province (LIP) in Siberia. The mass extinction at the end of the Triassic was probably due to the massive LIP formation of the Central Atlantic Magmatic Province, whereas the well-known extinction of non-avian dinosaurs at the Cretaceous–Tertiary (K/T) boundary is attributed to the impact from a large meteor at Chicxulub near the Yucatan peninsula, leading to large changes in climate, perhaps exacerbated by another LIP forming the Deccan Traps in India over the same time period. Although all of these events occurred many millions of years ago, the central message that climate change was the key determinant in every case is still relevant: Large-scale warming or cooling both lead to major species extinctions.

Climate science has established a vast body of knowledge about the processes and causal links in the Earth’s climate system. The climate of planet Earth depends on the energy balance between incoming radiation from the Sun and re-radiation from the planet, mediated by the differential absorption and reflection properties of the land, atmosphere (especially clouds), ice, and oceans, respectively, discussed later in terms of earth, air, fire, and water (italicized to distinguish their conceptual role). Climate change analysis is primarily based on physical process models which embody laws of conservation and energy balance at a global level.1 Such well-established climate theories can be retained in econometric models as their core theory (see, e.g., Brock & Miller, 2020; Kaufmann et al., 2013; Pretis, 2020). For example, Kaufmann et al. (2013) link statistical models driven by stochastic trends to physical climate systems, and Pretis (2020) establishes an equivalence between a cointegrated vector autoregressive system (CVAR) and two-component (i.e., atmosphere and oceans) energy-balance models of the climate.

Nevertheless, climate science knowledge is incomplete. GHG emissions depend on changeable human behavior, essentially unpredictable volcanic eruptions that can have global climate impacts, and the rate of loss of sea ice, which both alters the Earth’s albedo and the oceans’ uptake and retention of CO2. Moreover, which fossil fuels are burnt matters, as their CO2 emissions per million British thermal units (Btu) of energy produced differ substantially, as Table 1 shows.

Table 1. Pounds of CO2 Emitted per Million British Thermal Units (Btu) of Energy Produced

Coal (anthracite)              228.6
Coal (bituminous)              205.7
Coal (lignite)                 215.4
Coal (subbituminous)           214.3
Diesel fuel and heating oil    161.3
Gasoline                       157.2
Propane                        139.0
Natural gas                    117.0

Note. Source: U.S. Department of Energy.

Switching from coal to natural gas almost halves the GHG emitted per Btu, even if it still produces a considerable amount of CO2, so this is hardly a “solution.” Economic, social, and behavioral changes all require empirically modeled relationships, and this brings in the role for econometrics. Indeed, most social, economic, and environmental observational time series are evolving processes with stochastic trends and sudden shifts, and hence are wide-sense nonstationary processes, not just unit-root processes where differencing could create a stationary time series. A wide-sense nonstationary process does not have a constant distribution over time, and viable econometric methods must be able to tackle such changes. Moreover, the data-generating processes (DGPs) of wide-sense nonstationary time series are almost certainly unknown and thus have to be discovered from the available evidence (see Castle et al., in press; Hendry & Doornik, 2014). Any approach to doing so will inevitably be heavily data based, even if guided by subject matter theory (which is plentiful in the climate context), although physics itself has recently been approached in a similar vein (see Qin, 2020).

The approach at Climate Econometrics (capitalized to differentiate it from the general research area) to modeling observational time series is complementary to physical process climate models.2 Indicator saturation estimation is used to locate outliers, shifts, and breaks and thus entails that there are usually more candidate explanatory variables N than observations T. The selection algorithm used is Autometrics, a variant of machine learning that explores multipath block searches when N>T to discover well-specified and undominated models of the processes under analysis (see Doornik, 2009; also available in R by Pretis et al., 2018, and as the Excel Add-in XLModeler).3 This article focuses on first moments, but Engle and Campos-Martins (2020) and Campos-Martins and Hendry (2020) provide studies of risk and volatility.

The remainder of the article elaborates by first setting the stage for the empirical approach to modeling climate change by taking stock in the ancient, but still relevant, concepts of earth, air, fire, and water. This discussion highlights the imperative for rapid action to address climate change and considers where economics and social science enter. Given this context, revealing the rapidly changing state of nature, the article then juxtaposes stationary econometric theory with nonstationary time series before describing the unfortunate implications of shifting distributions for statistical modeling theory and practice. The importance of handling location shifts and parameter changes by saturation estimation, and how that might be achieved, is discussed in the simplest context of a first-order scalar autoregressive DGP, then detecting trend breaks is considered. In both cases, the general approach is aimed to successfully model data even where shifts and breaks occur an unknown number of times at unknown dates by unknown magnitudes and directions relative to the model in use.

The article then describes the approach to jointly tackling all the main problems facing analyses of empirical evidence on wide-sense nonstationary processes in the framework of model discovery, starting from model formulation, through selection, to evaluation, before addressing how to forecast in a wide-sense nonstationary setting. The changing status from endogenous to exogenous regressors is observed in the production of forecasts for ice volume and temperature over 110,000 years into the future. The impact of the U.K.’s Climate Change Act of 2008 is evaluated by updating a U.K. CO2 model with two additional years of data, modeling the impact of the Act by a step dummy in 2010. The forecasts show that the policy was key to achieving a level shift down in U.K. CO2 emissions. The article concludes by emphasizing the importance of a discovery approach to modeling empirical climate phenomena. Data definitions and sources are recorded in the Appendix.

Taking Stock of Climate Change: Earth, Air, Fire, and Water

Over the past 800,000 years, ice ages have induced large switches in climate from cold to cool.4 Figure 3 graphs (a) the ice volume measure; (b) atmospheric CO2; (c) Antarctic temperature; and (d) a 3D plot of ice, Antarctic temperature, and CO2. The first three panels are recorded at 1,000-year intervals, where the x-axes in such graphs are labeled by the time before the present, starting 800,000 years ago. Visually, one can see that ice volumes cumulate more slowly than they melt, suggesting a nonlinear relation. The changing albedo of ice coverage and the increasing release (rather than absorption) of CO2 as oceans warm help explain the relative rapidity with which glacial periods switched. A second notable feature is the considerable increase in the variation of temperature and atmospheric CO2 after about 440,000 years ago, with much higher peaks and corresponding deeper troughs in ice volume. The 3D plot shows how the impact of a given temperature on ice volume changes with the level of atmospheric CO2.

Figure 3. Ice age time series. (a) Ice volume; (b) atmospheric CO2 in parts per million (ppm); (c) Antarctic temperature; (d) 3D plot of ice, Antarctic temperature, and atmospheric CO2 levels.

Variations in the Earth’s orbital trajectory around the Sun were first hypothesized to drive ice ages by Croll (1875) and confirmed by Milankovitch (1969) (after whom the glacial cycles are usually named). Milankovitch calculated the resulting solar radiation at different latitudes and corrected Croll’s assumption that minimum winter temperatures precipitated ice ages by showing that cooler summer maxima were key to glaciation.

Figure 4 records (a) the eccentricity of the Earth’s orbit (where zero denotes circularity); (b) the “tilt” of Earth relative to its axis (obliquity, measured in degrees); (c) the precession of the equinox (which determines whether the northern hemisphere is closest to the Sun during its summer or winter, also measured in degrees); and (d) the resulting summertime insolation calculated at 65º south (how much solar radiation reaches the planet at that latitude). The first three variables have periodicities of 100,000 years for eccentricity (varying from the gravitational influences of other planets in the solar system); 41,000 years for obliquity; and two cycles of 23,000 and 19,000 years for precession (partly due to the Earth not being an exact sphere). While these orbital series are strongly exogenous in any models of Earth’s climate, they seem to be nonstationary from shifting distributions, not unit roots. Interactions between these orbital variations also affect the lengths of glacial and interstadial periods as well as the timing of switches conditional on the existing extent of ice coverage.

As just stressed, “change” is the key word—and humanity is changing the climate by its vast emissions of greenhouse gases, especially from burning fossil fuels, dramatically highlighted by Figure 2(a). The following subsections consider the Earth’s limited available land, atmosphere, and water resources to show that humanity really can alter the climate, and is doing so in myriad ways. Earth, air, fire, and water (italicized to distinguish their conceptual role) have been ubiquitous concepts nearly globally from ancient times. Although they are not “elements,” as once believed, all four are “essential ingredients” of life. Their roles in climate change are discussed, along with the dangers of precipitating an anthropogenic mass extinction and actions humanity could take to avoid that.

Figure 4. Ice age orbital drivers. (a) Eccentricity (Ec); (b) obliquity (Ob); (c) precession (Pr); (d) summertime insolation at 65º south (St).

Earth

Continents and their topography are shaped by plate tectonics and the resulting volcanic eruptions, both of which also affect climate and have played a key role in past great extinctions. Earth is being used here as a place carrier for land, both as available living space and as providing soil for forests, other "wild" areas, and agriculture for food supply, which accounts for roughly 40% of the planet's land area, or about 50 million km². Vegetation and soil together lock in about three times as much CO2 as the atmosphere currently holds, as well as absorbing around one third of emissions. However, higher temperatures and greater rainfall will release some of that carbon (see Eglinton et al., 2021). Inner-city vertical and underground farms seem viable and may become essential providers of food as the climate warms (see Asseng et al., 2020).

Crops are frequently grown using artificial fertilizers (often made from methane and leaching nitrous oxide in runoff) and based on farmland created by deforestation, wetland draining, and mangrove removal, all adding to GHG emissions. Together with the methane given off by animal husbandry, especially by cattle, sheep, and goats, these all lead to substantial GHG emissions—humanity's climate change "food print." Reducing such emissions will not be easy, but steps can be taken.5 Land around volcanoes is fertile, so basalt dust could be added as a fertilizer that also absorbs atmospheric CO2 (see Beerling et al., 2020; Nunes et al., 2014), and Biochar produced by pyrolysis of biomass and added to soil could also increase crop yields while reducing GHG emissions (see Woolf et al., 2010).

Climate change is increasing extreme land flooding from “rivers in the sky,” which can hold 15 times more water than the Mississippi River: Witness the massive floods in Australia during mid-March 2021 spanning a large area, with almost a meter of rain falling at Nambucca Heads over 6 days. There have also been massive floods recently in Europe and East Africa, bringing locust swarms and diseases like cholera.

Climate change is also increasing extreme drought, leading to a loss of crops, with the resultant stress on some plants like sorghums producing toxic hydrogen cyanide (see Shehab et al., 2020). Flooding and drought both lead to loss of soil, either from erosion or dust storms. Sea level rises cause coastal flooding, reducing usable land area, as well as forcing migration (see Figure 5).

Air

Air is again a place carrier, here denoting the atmosphere, comprising mainly nitrogen (78%) and oxygen (21%), with greenhouse gases like water vapor (0.4%), carbon dioxide (CO2), nitrous oxide (N2O), and methane (CH4), as well as ozone and some noble gases. Earth’s atmospheric blanket is essential to life, but seen from the space station, the atmosphere is a thin blue line round the planet, not much thicker than a sheet of paper around a soccer ball, so even small additional volumes of GHGs can have a large impact. Earth’s gravity and magnetic field are essential to retain its atmosphere against the solar wind and also to protect the ozone layer from damaging radiation.

Atmospheric gases have changed greatly over deep time, especially from volcanism and the exchange of CO2 for oxygen through photosynthesis, so Earth's range has included ice ages and tropical conditions, but Mars and Venus warn that atmospheric protection needs to be "just right." Increased greenhouse gases generated by human activity, especially burning fossil fuels, are now the major cause of climate change. GHGs receive, then reradiate energy at different wavelengths between ultraviolet and longwave infrared: This reradiation is responsible for the atmospheric greenhouse effect. Eunice Foote (1856) showed that a flask of CO2 heated greatly when placed in the Sun, whereas flasks of water vapor and dry air did not, closely followed by the independent experimental evidence of John Tyndall (1859). Ortiz and Jackson (2020) analyzed Foote's contribution and Levendis et al. (2020) suggested an alternative experiment.

Nitrous oxide emissions from nitrogen and phosphate fertilizers have doubled since 1970 and accounted for about 7% of greenhouse gas emissions in 2020 (see Tian et al., 2020), but N2O is nearly 300 times more potent per molecule than CO2 as a greenhouse gas. Catalytic converters actually add to this growing problem while reducing toxic carbon monoxide (CO) vehicle exhaust emissions.

Atmospheric methane is now double the highest level over the past 800,000 years. CH4 is about 20 times as powerful as CO2 as a GHG, with a half-life in the upper atmosphere of around 15 years, gradually getting converted to CO2. Estimates of methane in hydrates in 2020 are over 6 trillion tonnes, roughly twice the GHG equivalent in all fossil fuels: The release of even a small proportion of that could be disastrous.

Chlorofluorocarbons (CFCs) were destroying the ozone layer before the highly successful Montreal Protocol in 1987 greatly reduced their use. That agreement is a potential role model for a far more ambitious global commitment beyond the Paris Accord at the 21st Conference of the Parties (CoP21) to the 1992 United Nations Framework Convention on Climate Change. However, replacement refrigerants like halons and halocarbons, including hydrochlorofluorocarbons (HCFCs) and hydrofluorocarbons (HFCs), though less damaging to the ozone layer, are powerful greenhouse gases, meriting research for safer alternatives.

Fire

Fire is the place carrier for energy, currently obtained from burning vast volumes of fossil fuels. Humanity cannot continue to consume fossil fuels on the scale it does in the 21st century yet stay within the “carbon budget” required to achieve “net zero” GHG emissions, which is essential to prevent dangerous climate change. The resulting changing global temperatures are leading to increased frequency and severity of wild fires, from Australia, the Amazon, and California to Siberia, which is a potential tipping point from tundra melting and releasing methane. Wild fires create fire-induced thunderclouds, known as pyrocumulonimbus clouds, which increase aerosol pollutants trapped in the stratosphere and upper atmosphere, can generate fire tornados, and lead to flash floods while also igniting further spot fires and temporarily cooling the planet, like a moderate volcanic eruption. There is increased deforestation, especially in the Amazon and other tropical rainforests, with a loss of biodiversity, as well as increased GHG emissions from the burning of some of the forests.

The main hope is to replace fossil fuel-based energy by renewable sources from earth (using thermal energy such as ground-heat and air-source heat pumps), air (utilizing wind energy from onshore and offshore wind turbines), fire (from sunlight via solar cells, and nuclear, potentially including small modular reactors), and water (using hydroelectric energy from dams, diversion facilities, or pumped storage facilities). Table 2 records estimates of electricity-generating costs in £/MWh by different technologies. The costs of all renewable sources of electricity have been falling rapidly and seem likely to continue to do so. The share of U.K. electricity generated by renewables reached a peak of 60.5% in April 2020, according to National Grid data.

Table 2. Power-Generating Technology Costs in £/MWh

Source                                               2015    2025    2040
Solar large-scale PV (photovoltaic)                    80      44      33
Wind onshore                                           62      46      44
Wind offshore                                         102      57      40
Biomass                                                87      87      98
Nuclear pressurized water reactor                      93      93      93
Natural gas combined cycle gas turbine (CCGT)          66      85     125
CCGT with carbon capture and sequestration (CCS)      110      85      82

Note. Nuclear power guaranteed price of £92.50/MWh for Hinkley Point C in 2023. Lowest cost in italic; and underlined if less than 2015. Figures assume increasing carbon taxes and falling CCS costs over time. Source: Electricity Generation Costs 2020, UK Department for Business, Energy and Industrial Strategy (BEIS).

Zero GHG electricity generation from renewables is technically feasible but requires a huge increase in output and storage capacity (for windless cloudy periods), dependent on a very large investment. Sufficient supply could sustain electric transport, removing emissions from oil, and replace much household use of gas for heating and cooking. However, an electricity grid needs second-by-second balancing of electricity flows in an otherwise increasingly nonresilient system, dependent on highly variable renewable supplies, so both backup and instantaneously accessible storage are essential, suggesting an intelligent system that could exploit electric vehicle-to-grid storage (see Noel et al., 2019).

Water

Water may seem limitless and it surprises many people that in fact there is very little water, especially freshwater, on our planet. It is common to think that Earth is the “Blue Planet” where everyone is surrounded by an abundance of water, but widespread shallow oceans fool people: The Atlantic is only about 2.25 miles deep on average. The Pacific is wider and is deeper, at about 2.65 miles on average: At its deepest in the Challenger Deep of the Mariana Trench, it is roughly 6.8 miles down. In total, the Pacific holds 170 million cubic miles of water, just over half the 330 million cubic miles of water on Earth. If all sources of water on the planet were collected in a sphere, it would be just 860 miles in diameter. Consequently, it is easy to heat the oceans by emitting excessive volumes of CO2 into the atmosphere (see Figure 1[c]), pollute them, fill them with plastic waste, and turn seawater into a weak carbonic acid. Ocean acidification impacts on many ocean species, especially calcifying organisms like oysters and corals, but also on fish and seaweeds, affecting the entire ocean food chain.

The worldwide ocean “conveyor belt” circulates heat and nutrients and carries oxygen to depths, maintaining the health of the oceans. A key driver is that warm water from the Gulf Stream moves north, evaporates, becomes saltier and denser, cools, and so sinks and flows south. Melting northern hemisphere ice could disrupt this circulation by diluting the denser salty water, as well as by increasing sea levels by about 18 cm by 2100 (see Hofer et al., 2020). Added to increasing volume from thermal expansion, rising sea levels have serious implications for coastal flooding, although the rises are not uniform, and recent coastal elevation measures have tripled estimates of global vulnerability to sea level rises (see Kulp & Strauss, 2019).

Conversely, southern ocean sea ice can dramatically lower ocean ventilation by reducing the atmospheric exposure time of surface waters and by decreasing the vertical mixing of deep ocean waters, leading to the atmosphere holding 40 ppm less CO2 at its maximum (see Stein et al., 2020). Currently, oceans absorb almost one third of anthropogenic CO2 emissions, but as they warm, they will hold less.

Freshwater is a hugely important commodity and needs to be treated carefully. A total of about 19% of California's electricity consumption goes toward water-related applications, such as treating, transporting, pumping, and heating. Additionally, about 15% of its in-state electricity generation comes from hydropower, yet the frequency of both high- and low-flow extreme streamflow events has increased significantly across the United States and Canada over the 20th and 21st centuries (see Dethier et al., 2020).

Kelp “forests” and seagrass “meadows” absorb and store CO2, help offset increasing carbonic acidification as well as removing some pollutants, provide nurseries for young sea life, and help protect coasts against rising sea levels. Improving seaweed farming and raising aquaculture production by marine protection areas seem sensible, noting that offshore wind farms also act as marine reserves.

While the subsections have discussed each of earth, air, fire, and water separately, the interactions between them are obvious. Given their roles in climate change, they form the basis of many of the time series that are used to analyze climate change. The next section explores the problems of assuming such data are stationary when in fact they are highly nonstationary, in part due to the many changes already discussed.

Stationary Econometric Theory and Nonstationary Time Series

In contrast to the shifting observational data it needs to analyze, much of econometric theory and its applications to time series data implicitly or explicitly assumes that the data-generating process (DGP) is stationary. Theory derivations all too often rely on proofs about the properties of estimators and tests that are invalidated by failures of stationarity that cannot be resolved by differencing (even after taking logs) to “remove” stochastic trends because the means and variances of the differenced series under analysis remain nonconstant. For example, the widely used theorem that the conditional expectation of a variable is the minimum variance unbiased predictor is false when distributions shift, and the famous law of iterated expectations also fails (see Hendry & Mizon, 2014). Before these fundamental issues are considered in detail, the outcomes under stationarity are first described as a contrast.

In a stationary setting, key results about asymptotic distributions of estimators rely on the fact that "later data" can improve estimates based on earlier observations as follows. Consider the simple DGP in Equation 1 over a period $t = 1, \ldots, T$:

$$y_t = \beta + \epsilon_t \quad \text{where} \quad \epsilon_t \sim \mathsf{IN}[0, \sigma_\epsilon^2], \qquad (1)$$

where $\beta$ is a constant and $\mathsf{IN}[\mu, \sigma_\epsilon^2]$ denotes an independent Normal distribution with mean $\mu$ and variance $\sigma_\epsilon^2$. Based on a subsample $t = 1, \ldots, T_k < T$, the least-squares estimator of $\beta$, denoted $\tilde{\beta}_{T_k}$, is

$$\tilde{\beta}_{T_k} = \frac{1}{T_k}\sum_{t=1}^{T_k} y_t \sim \mathsf{N}\!\left[\beta, \, T_k^{-1}\sigma_\epsilon^2\right]. \qquad (2)$$

The precision of estimation will be higher for larger $T$, so is improved by later data (i.e., future data relative to $T_k$). Importantly, under the assumptions outlined, all subsamples deliver unbiased estimates of $\beta$, which is the parameter relevant to all the time periods analyzed.
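
As a quick numerical check of Equation 2, the following minimal Python sketch (the parameter values, replication count, and seed are illustrative assumptions, not taken from the article) shows the sampling variance of the subsample estimator shrinking in proportion to $1/T_k$:

```python
import numpy as np

rng = np.random.default_rng(0)
beta, sigma_eps, T, M = 2.0, 0.5, 100, 10_000

# M replications of the stationary DGP y_t = beta + eps_t with eps_t ~ IN[0, sigma_eps^2]
y = beta + sigma_eps * rng.standard_normal((M, T))

for Tk in (25, 50, 100):
    est = y[:, :Tk].mean(axis=1)           # subsample estimator of Equation 2
    # the Monte Carlo variance of the estimator should be close to sigma_eps^2 / Tk
    print(Tk, round(float(est.mean()), 3), round(float(est.var()), 5), sigma_eps**2 / Tk)
```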

Another often implicit assumption is that the prespecified model is the DGP, so “optimal estimation” is the main issue rather than the more realistic need to discover a reasonable approximation to that DGP, in which case modeling is essential. If most time series are nonstationary from unanticipated shifts, then there is little likelihood of any prespecified model being complete, correct, and immutable, so finding those shifts is important.

Figure 6. Frequency of Central England Winter Temperatures below 2 ºC per decade. Data from Met Office Hadley Centre.

The impacts of climate change show up in Winter Central England Temperatures (CET) and in seasonal temperature trends. Building on data by Parker et al. (1992) and Manley (1974), Figure 6 shows the frequency of Central England Winter Temperatures below 2 ºC per decade from 1659 to 2019, recorded in a figure by Simon Lee.6 Comparing the same CET data with temperatures for Boston over the period from 1743 to 2015, Figure 7 shows that temperature trends across seasons are very marked.7 In Central England, winters are warming but summers are not, whereas in Boston, warming is similar across all months. These long-term temperature trends are due to climate change, not weather variation, of which there is plenty. Temperature distributions are shifting on both sides of the Atlantic.

Figure 7. Trends across seasons. Left: CET over 1659–2019; right: Boston over 1743–2015. From Hillebrand and Proietti (2017).

The Key Implications of Shifting Distributions

The manifest evidence of nonstationarity visible in the previous graphs modifies the implications of Equations 1 and 2. If, instead of a constant mean, the intercept was $\beta_1$ for $t = 1, \ldots, T_1$ and $\beta_2 \neq \beta_1$ for $t = T_1+1, \ldots, T$, then for $T_k = T_1$, the estimator $\tilde{\beta}_{T_1}$ in Equation 2 would unbiasedly estimate $\beta_1$, which is the relevant intercept at that time, whereas the full-sample $\hat{\beta}_T$, when $T_k = T$, would be

$$\hat{\beta}_T = \frac{1}{T}\sum_{t=1}^{T} y_t \sim \mathsf{N}\!\left[r_1\beta_1 + (1-r_1)\beta_2, \, T^{-1}\sigma_\epsilon^2\right], \qquad (3)$$

where $r_1 = T_1/T$. Thus, $\hat{\beta}_T$ is relevant neither historically nor at the sample end. Moreover, forecasts based on $\hat{\beta}_T$ will be systematically biased:

$$\mathsf{E}\!\left[y_{T+1} - \hat{y}_{T+1\mid T}\right] = \mathsf{E}\!\left[\beta_2 + \epsilon_{T+1} - \hat{\beta}_T\right] = r_1(\beta_2 - \beta_1). \qquad (4)$$

Thus, any policy using either $\tilde{\beta}_{T_k}$ or $\hat{\beta}_T$ would have incorrect implications at $T$ or later, so it is essential to detect such shifts and reformulate empirical models accordingly. Examples highlighting how misleading empirical models can be when large outliers or location shifts are not modeled include Hendry and Mizon (2011) showing a positive price elasticity for the demand for food in the United States and Castle and Hendry (2014) revealing the key role of a long-run mean shift in models of U.K. real wages.
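
A companion sketch (same caveats; the values $\beta_1 = 1$, $\beta_2 = 0$, $T_1 = 80$, and $T = 100$ are illustrative choices) confirms the systematic forecast bias $r_1(\beta_2 - \beta_1)$ of Equation 4:

```python
import numpy as np

rng = np.random.default_rng(1)
T, T1, beta1, beta2, sigma = 100, 80, 1.0, 0.0, 0.3
r1 = T1 / T
M = 20_000

errors = np.empty(M)
for m in range(M):
    mu = np.r_[np.full(T1, beta1), np.full(T - T1, beta2)]   # intercept shifts from beta1 to beta2 at T1
    y = mu + sigma * rng.standard_normal(T)
    beta_hat = y.mean()                                      # full-sample estimator of Equation 3
    y_next = beta2 + sigma * rng.standard_normal()           # outcome at T+1 from the post-shift mean
    errors[m] = y_next - beta_hat                            # forecast error as in Equation 4

print(round(float(errors.mean()), 3), r1 * (beta2 - beta1))  # both close to -0.8
```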

Not only will estimates of coefficients in models be nonconstant, and hence biased for both the pre-shift and post-shift parameters, as noted, but the basic statistical tools of conditional expectations and the law of iterated expectations also fail.

When the underlying distributions shift, expectations operators need to denote not only the random variables under analysis and their time, but also their distributions at that time and the information set being conditioned on. This requires three-way time dating, as in $\mathsf{E}_{\mathsf{D}_{y_t}}[y_{t+1} \mid \mathcal{I}_{t-1}]$, which denotes the conditional expectation, formed at time $t$ using the information set $\mathcal{I}_{t-1}$, of the vector random variable $y_{t+1}$ integrated over the probability density function $\mathsf{D}_{y_t}(\cdot)$ of $y_t$:

$$\mathsf{E}_{\mathsf{D}_{y_t}}\!\left[y_{t+1} \mid \mathcal{I}_{t-1}\right] = \int y_{t+1}\, \mathsf{D}_{y_t}\!\left(y_{t+1} \mid \mathcal{I}_{t-1}\right) \mathrm{d}y_{t+1}. \qquad (5)$$

As knowledge of $\mathsf{D}_{y_{t+1}}(\cdot)$ is unavailable at $t$, the fundamental problem is obvious from Equation 5: The expectation is not taken over $\mathsf{D}_{y_{t+1}}(\cdot)$, so if $\mathsf{D}_{y_t}(\cdot) \neq \mathsf{D}_{y_{t+1}}(\cdot)$ because the distribution has shifted, there is no reason why $\mathsf{E}_{\mathsf{D}_{y_t}}[y_{t+1} \mid \mathcal{I}_{t-1}]$ should be informative about $\mathsf{E}_{\mathsf{D}_{y_{t+1}}}[y_{t+1} \mid \mathcal{I}_{t-1}]$, the correct conditional mean, as Figure 8 illustrates.

Figure 8. The impact of a distributional shift on expectations.

Simply using E to denote the expectation can potentially mislead, as in letting

$$y_{t+1} = \mathsf{E}\!\left[y_{t+1} \mid \mathcal{I}_t\right] + v_{t+1}, \qquad (6)$$

so that by taking conditional expectations of both sides,

$$\mathsf{E}\!\left[v_{t+1} \mid \mathcal{I}_t\right] = 0. \qquad (7)$$

This merely establishes that at time $t$, it is expected that the next error will have a mean of zero, but does not prove that the model used for $\mathsf{E}[y_{t+1} \mid \mathcal{I}_t]$ will produce an unbiased prediction of $y_{t+1}$, as it is sometimes misinterpreted as doing.

Instead, when $y_t \sim \mathsf{IN}[\mu_t, \sigma_y^2]$ is unpredictable because future changes in $\mu_t$ are unknowable (like the unanticipated arrival of the SARS-CoV-2 virus driving the 2020 pandemic outbreak), the expectation $\mathsf{E}_{\mathsf{D}_{y_t}}[y_{t+1}]$ need not be unbiased for the mean outcome at $t+1$. From Equation 5:

$$\mathsf{E}_{\mathsf{D}_{y_t}}\!\left[y_{t+1}\right] = \int y_{t+1}\, \mathsf{D}_{y_t}\!\left(y_{t+1}\right) \mathrm{d}y_{t+1} = \mu_t \ \text{(say)}, \qquad (8)$$

whereas

$$\mathsf{E}_{\mathsf{D}_{y_{t+1}}}\!\left[y_{t+1}\right] = \int y_{t+1}\, \mathsf{D}_{y_{t+1}}\!\left(y_{t+1}\right) \mathrm{d}y_{t+1} = \mu_{t+1}, \qquad (9)$$

so that $\mathsf{E}_{\mathsf{D}_{y_t}}[y_{t+1}]$ does not correctly predict $\mu_{t+1} \neq \mu_t$. Thus, the expectation $\mathsf{E}_{\mathsf{D}_{y_t}}[y_{t+1}]$ formed at $t$ is not an unbiased predictor of the outcome $\mu_{t+1}$ at $t+1$, although the "crystal-ball" predictor $\mathsf{E}_{\mathsf{D}_{y_{t+1}}}[y_{t+1}]$, based on knowing $\mathsf{D}_{y_{t+1}}$, would be unbiased.

Returning to Equation 6 at time t and subscripting the expectations operator as in Equation 5:

$$y_{t+1} = \mathsf{E}_{\mathsf{D}_{y_t}}\!\left[y_{t+1} \mid \mathcal{I}_t\right] + v_{t+1}, \qquad (10)$$

so Equation 7 becomes

$$\mathsf{E}_{\mathsf{D}_{y_t}}\!\left[v_{t+1} \mid \mathcal{I}_t\right] = 0, \qquad (11)$$

which does not entail that

$$\mathsf{E}_{\mathsf{D}_{y_{t+1}}}\!\left[v_{t+1} \mid \mathcal{I}_t\right] = 0, \qquad (12)$$

whereas Equation 12 is required for an unbiased prediction. The conditional expectation is the minimum mean square error predictor only when the distribution remains constant and fails under distributional shifts.

Moreover, the law of iterated expectations only holds intertemporally when the distributions involved remain the same. When the variables correspond to drawings at different dates from the same distribution, so $\mathsf{D}_{y_t} = \mathsf{D}_{y_{t+1}}$:

$$\mathsf{E}_{\mathsf{D}_{y_t}}\!\left[\mathsf{E}_{\mathsf{D}_{y_{t+1}}}\!\left[y_{t+1} \mid y_t\right]\right] = \mathsf{E}_{\mathsf{D}_{y_{t+1}}}\!\left[y_{t+1}\right]. \qquad (13)$$

Thus, if the distributions remain constant, the law of iterated expectations holds, but it need not hold when distributions shift:

$$\mathsf{E}_{\mathsf{D}_{y_t}}\!\left[\mathsf{E}_{\mathsf{D}_{y_{t+1}}}\!\left[y_{t+1} \mid y_t\right]\right] \neq \mathsf{E}_{\mathsf{D}_{y_{t+1}}}\!\left[y_{t+1}\right], \qquad (14)$$

as $\mathsf{D}_{y_{t+1}}(y_{t+1} \mid y_t)\,\mathsf{D}_{y_t}(y_t) \neq \mathsf{D}_{y_{t+1}}(y_{t+1} \mid y_t)\,\mathsf{D}_{y_{t+1}}(y_t)$, unlike the situation in Equation 13 where there is no shift in distribution.

Changes that alter the means of the data distributions are location shifts, so DGPs with such shifts are obviously nonstationary. Unfortunately, there is a widespread use of “nonstationary” to refer just to DGPs with stochastic trends, leading to the non sequitur that “differencing induces stationarity,” as well as the need to call time series with shifts and possibly also stochastic trends “wide-sense nonstationary.” Further, stochastic trends are usually assumed to apply constantly to an entire sample, but unit roots in DGPs are not an intrinsic property and can also change. Not only are there strong trends in Figure 1, these vary considerably over time, as shown by their changing rates of growth. Thus, several of the differenced time series are not stationary, as shown by Figure 9. In particular, changes in atmospheric CO2 have continued to trend up (Panel a) despite the Paris Accord, matched by increasing changes in ocean heat (Panel c), and there have been large variance changes in the differenced series for U.K. coal use (Panel b). As documented in Castle and Hendry (2020), the distributions of the U.K.’s CO2 emissions have shifted considerably over time.

Both the "financial crisis" of 2008 and the SARS-CoV-2 pandemic in 2020 were essentially unanticipated but have had major impacts on economic activity (Panel d) as well as on health, lives, and livelihoods, so the importance of nonstationarity replacing stationarity as the standard assumption for time series analysis may at last be realized.8 However, once the comfort blanket of stationarity is discarded, a fundamental problem is that there may be multiple unknown shifts in many different facets of DGPs at different points in time, with differing magnitudes and signs. Many "proofs" and statistical derivations of time series modeling become otiose, and some, like the so-called Oracle principle for model selection algorithms, are irrelevant when parameters change over relatively short periods of time.

Figure 9. (a) Monthly changes in atmospheric CO2; (b) annual changes in U.K. coal use; (c) annual changes in global ocean heat; (d) annual changes in monthly log (GDP).

Since climate change is driven by economic activity—which is wide-sense nonstationary and riddled with abrupt, usually unanticipated changes—these difficulties confront empirical modeling of many observational climate time series. The next section addresses why undertaking econometric modeling of changing climate time series requires handling various forms of shift.

The Importance of Handling Shifts

There are many methods designed either to reveal parameter nonconstancy by estimation or testing, or to tackle the problem by formulating changing-parameter models. The first group uses recursive data samples or moving windows thereof. In the recursive setting, $T_k$ in Equation 2 defines an initial subsample and observations are added sequentially to provide $\tilde{\beta}_{T_k+j}$ for $j = 1, \ldots, T - T_k$. This procedure can be reinterpreted as initially including impulse indicators $1_{\{t\}}$, equal to unity at observation $t$ and zero at all other observations, for $t = T_k+1, \ldots, T$, then sequentially removing these as the estimation subsample size increases. The values of these impulse indicators can be very informative both about outliers and location shifts in the remaining data, but are usually ignored. Moreover, there may have been shifts in the initial sample $1, \ldots, T_k$, which will not be detected, so the initial estimate is already biased. Similar comments apply to using moving windows.

A different issue confronts testing for nonconstancy, since the model must be available already, either specified a priori or more likely selected from some set of trial runs. But these estimates ignored the changes that are then tested for, and if constancy is rejected, the model must be reformulated—vitiating the earlier specifications. A similar difficulty faces methods for estimating changing parameters after fitting a prespecified model: Not only are there all the other unknown aspects of nonstationarity, as noted, observational data model specification is also uncertain in terms of the relevant variables, their lags, and their nonlinearities.

To illustrate the issues raised by location shifts, Equation 1 is generalized to the dynamic model:

$$y_t = \beta_0 + \beta_1 y_{t-1} + \epsilon_t, \quad t = 1, \ldots, T, \qquad (15)$$

where $T = 100$ and $\beta_1 = 0.5$, with $\sigma_\epsilon^2 = 0.1$, but $\beta_0 = 1$ until $t = 80$, then $\beta_0 = 0$ for the remainder of the sample. A draw from Equation 15 is shown in Figure 10(a). The average simulation full-sample estimates from $M = 1000$ replications were

$$\hat{y}_t = \underset{(0.045)}{0.914}\, y_{t-1} + \underset{(0.082)}{0.119}, \qquad \hat{\sigma}_\epsilon^2 = 0.14. \qquad (16)$$

The estimates in Equation 16 are far from the DGP parameter values. The estimate of $\beta_1$ being close to unity is a standard outcome from a failure to model a step shift and warns that near estimated "unit roots" need not signal a stochastic trend rather than a location shift. Also, $\hat{\sigma}_\epsilon^2$ is 40% larger than $\sigma_\epsilon^2$ on average.
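
The flavor of Equation 16 can be reproduced by a small Monte Carlo of the DGP in Equation 15 (a sketch; the seed and the initial condition at the pre-shift equilibrium mean are arbitrary assumptions), showing the full-sample slope estimate pushed toward unity by the unmodeled location shift:

```python
import numpy as np

rng = np.random.default_rng(2)
T, T1, beta1, sigma = 100, 80, 0.5, np.sqrt(0.1)
M = 1000
coefs = np.empty((M, 2))

for m in range(M):
    y = np.empty(T + 1)
    y[0] = 2.0                                    # start at the pre-shift equilibrium mean beta0/(1 - beta1)
    for t in range(1, T + 1):
        beta0 = 1.0 if t <= T1 else 0.0           # intercept equals 1 up to t = 80 and 0 thereafter
        y[t] = beta0 + beta1 * y[t - 1] + sigma * rng.standard_normal()
    X = np.column_stack([y[:-1], np.ones(T)])     # regressors: y_{t-1} and an intercept
    coefs[m] = np.linalg.lstsq(X, y[1:], rcond=None)[0]

print(coefs.mean(axis=0).round(3))                # slope biased toward unity, intercept shrunk toward zero
```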

Figure 10. (a) A time series from Equation 15; (b), (c) recursive parameter estimates with estimated standard errors (ESEs) and Monte Carlo standard deviations (MCSDs); (d) forecast Chow tests.

Provided the DGP is known, recursive estimation reveals the nonconstant parameter estimates, as Figures 10(b) and 10(c) show, confirmed by high levels of rejection of the null of constancy on forecast Chow (1960) tests (see Hendry, 1984, for an analysis of Monte Carlo methods). By themselves, the recursive plots show that there was change but do not isolate the source. One "identification" route is to fit the model separately to the data before and after $t = 80$. For a single data draw, this delivers

$$\hat{y}_t = \underset{(0.084)}{0.661}\, y_{t-1} + \underset{(0.168)}{0.68}, \quad t = 2, \ldots, 80, \qquad \hat{\sigma}_\epsilon^2 = 0.10, \qquad (17)$$
$$\hat{y}_t = \underset{(0.105)}{0.521}\, y_{t-1} + \underset{(0.073)}{0.009}, \quad t = 81, \ldots, 100, \qquad \hat{\sigma}_\epsilon^2 = 0.10, \qquad (18)$$

revealing the large shift in the intercept with little change in the other estimates. Saturation estimation methods, here step-indicator saturation (SIS) (explained later), are designed to find location shifts and outliers while selecting relevant variables. Applying SIS to the full sample, where $S_j = 1$ for $t \leq j$, $j = 1, \ldots, T-1$, and zero otherwise, but always retaining the intercept and $y_{t-1}$ when selecting which step indicators to retain at 0.5%, yields

$$\hat{y}_t = \underset{(0.066)}{0.608}\, y_{t-1} - \underset{(0.072)}{0.011} + \underset{(0.138)}{0.795}\, S_{80}, \quad t = 2, \ldots, 100, \qquad \hat{\sigma}_\epsilon^2 = 0.10. \qquad (19)$$

Thus, the intercept is estimated as 0.784 up to t=80 and essentially zero thereafter. Despite having started with a candidate variable set of 98 step indicators, all the data (less two degrees of freedom) can be used in Equation 19 to estimate β1, thereby delivering a more precise as well as a constant outcome.
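
Autometrics implements the full multipath block search with diagnostic checking; purely to convey the idea, the following is a deliberately simplified split-half sketch of SIS for an AR(1) (the function names and the fixed critical value, roughly the two-sided 0.5% normal point, are illustrative assumptions, not the article's algorithm):

```python
import numpy as np

def ols_tstats(y, X):
    """OLS coefficient estimates and their t-statistics."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    s2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(s2 * np.diag(np.linalg.pinv(X.T @ X)))
    return b, b / se

def sis_split_half(y, crit=2.81):
    """Simplified step-indicator saturation: add half of the step indicators
    at a time, keep the significant ones, then re-select over the union."""
    yc, yl = y[1:], y[:-1]                      # current and lagged values
    n = len(yc)
    base = np.column_stack([np.ones(n), yl])    # intercept and y_{t-1} always retained
    steps = np.triu(np.ones((n, n)))[:, :-1]    # S_j = 1 for t <= j (drop the all-ones column)
    keep = []
    for block in np.array_split(np.arange(steps.shape[1]), 2):
        _, t = ols_tstats(yc, np.column_stack([base, steps[:, block]]))
        keep.extend(block[np.abs(t[2:]) > crit])
    _, t = ols_tstats(yc, np.column_stack([base, steps[:, keep]]))
    return [j for j, tv in zip(keep, t[2:]) if abs(tv) > crit]
```

For a draw like that behind Equation 19, one would hope to see a retained indicator at or near observation 80; the real algorithm searches many paths, backtests, and applies diagnostic tests.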

Simulating SIS for an Intercept Shift

The results in Equation 19 are for a single draw of the time series, but Autometrics (see Doornik, 2009) can simulate SIS. Using the same settings as for Figure 10 and selecting step indicators at 0.5% with no backtesting until the final stage yields the estimates reported in Figure 11, with $\tilde{\beta}_i$ denoting coefficient estimates using indicator saturation estimation (ISE), whereas $\hat{\beta}_i$ indicates OLS estimation with no ISE.

Figure 11. (a) One time series from Equation 15; (b), (c) simulated recursive parameter estimates with estimated standard errors (ESEs) and Monte Carlo standard deviations (MCSDs) when applying SIS; (d) resulting forecast Chow tests.

The outcome is dramatically different from that in Figure 10. The estimate of β1 is now relatively constant at between 0.4 and 0.5; the estimate of β0 switches quickly from around unity to zero after t=80, and the forecast tests reject at about their nominal significance levels. The step indicator at t=80 was selected with a probability of 0.84, rising to 0.97 within ±1 of t=80. Irrelevant indicators were selected with probability 0.008 (the empirical gauge), close to the nominal significance level of 0.005, so there is almost no overfitting. The Monte Carlo standard deviations (MCSDs) are somewhat wider than the estimated standard errors (ESEs), so the latter slightly underestimate the uncertainty.
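
The "empirical gauge" quoted here is the retention frequency of irrelevant indicators, with potency the corresponding retention frequency of relevant ones; a minimal sketch of how such Monte Carlo summaries could be computed (the helper function and its arguments are illustrative):

```python
import numpy as np

def gauge_and_potency(retained_sets, relevant, n_candidates):
    """Average retention rate of irrelevant indicators (gauge) and of
    relevant indicators (potency) across Monte Carlo replications."""
    relevant = set(relevant)
    gauges, potencies = [], []
    for kept in retained_sets:
        kept = set(kept)
        gauges.append(len(kept - relevant) / (n_candidates - len(relevant)))
        potencies.append(len(kept & relevant) / len(relevant))
    return float(np.mean(gauges)), float(np.mean(potencies))
```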

Simulating SIS for a Shift in Dynamics

A legitimate question is what if, instead of β0 shifting, the intercept remained constant and the dynamic parameter β1 changed? For example:

$$y_t = \beta_0 + \beta_1 y_{t-1} + \epsilon_t, \quad t = 1, \ldots, T_1; \qquad y_t = \beta_0 + \beta_1^{*} y_{t-1} + \epsilon_t, \quad t = T_1+1, \ldots, T. \qquad (20)$$

Setting the same parameter values for $\beta_0$, $\beta_1$, $\sigma_\epsilon$, and $T_1 = 80$ as before, with $\beta_1^{*} = 0.75$, the previous Monte Carlo is repeated, leading to Figure 12 for recursive estimation with no ISE.

Figure 12. (a) A time series from Equation 20; (b), (c) recursive parameter estimates with estimated standard errors (ESEs) and Monte Carlo standard deviations (MCSDs); (d) forecast Chow tests.

Apart from Panel (a), the other panels are similar to those in Figure 10. The average simulation full-sample estimates from M=1000 replications are now

$$\hat{y}_t = \underset{(0.046)}{0.911}\, y_{t-1} + \underset{(0.113)}{0.227}, \qquad (21)$$

so are also closely similar to Equation 16. Indeed, for the single data draw, the subsample fits are identical to Equation 17 for the first period, but for the second subperiod,

$$\hat{y}_t = \underset{(0.125)}{0.678}\, y_{t-1} + \underset{(0.466)}{1.28}, \quad t = 81, \ldots, 100, \qquad \hat{\sigma}_\epsilon^2 = 0.10, \qquad (22)$$

so the estimate of $\beta_1$ is barely altered, although it had changed, but $\hat{\beta}_0$ is greatly shifted, reflecting the major break obvious in the data plot in Figure 12(a). Consequently, it may not be surprising that SIS can again capture much of the shift in the intercept:

$$\hat{y}_t = \underset{(0.06)}{0.71}\, y_{t-1} + \underset{(0.22)}{1.14} - \underset{(0.119)}{0.568}\, S_{80}, \quad t = 2, \ldots, 100, \qquad \hat{\sigma}_\epsilon^2 = 0.10. \qquad (23)$$

The simulation outcomes when applying SIS reinforce these conclusions. Figure 13 records the results when SIS is applied at a significance level of α=0.005. Panel (b) shows that the estimates of β1 increase after t=80, although they remain close to 0.5 rather than 0.75. As before, the estimates of β0 change after t=80, gradually rising to 2 (Panel c), and the forecast Chow tests reject close to their nominal significance levels (Panel d). The step indicator at t=80 was selected with a probability of 0.46, rising to 0.73 within ±1 of t=80, 0.89 within ±2, and 1.0 within ±3 of t=80, so detection is more spread out. Irrelevant indicators were selected with probability 0.010, somewhat above the nominal significance level of 0.005, reflecting the issue being a shift in dynamics, of which only the induced location shift is detected. The MCSDs are wider than the ESEs, so the latter again underestimate the uncertainty.

Why Is SIS Effective for a Shift in Dynamics?

It may seem puzzling that SIS can help when facing a shift in the dynamics, but the crucial modeling problems are those that arise from shifts in the long-run, or equilibrium, mean rather than in other parameters, as emphasized in the forecasting context by Clements and Hendry (1998, 1999). Indeed, notice that the dominant visual feature of both Panel (a) data graphs in Figures 12 and 13, respectively, is the sudden departure from their previous average locations. Let $\mu = \beta_0/(1-\beta_1)$, which shifts to $\mu^{**}_{\beta_0} = \beta_0^{*}/(1-\beta_1)$ when the intercept changes from $\beta_0$ to $\beta_0^{*}$ in Equation 15, so it goes from 2 to 0, as seen in Figure 10(a). Then $\mu^{**}_{\beta_1} = \beta_0/(1-\beta_1^{*})$ in Equation 20 changes from 2 to 4, as in Figure 12(a). Defining $\nabla\mu^{**}_{\beta_0} = \mu^{**}_{\beta_0} - \mu$, $\nabla\mu^{**}_{\beta_1} = \mu^{**}_{\beta_1} - \mu$, and $\nabla\beta_1^{*} = \beta_1^{*} - \beta_1$, the two cases can be written as:

$$y_t - \mu = \beta_1(y_{t-1} - \mu) + (1-\beta_1)\,\nabla\mu^{**}_{\beta_0}\, 1_{\{t>T_1\}} + \epsilon_t, \qquad (24)$$
$$y_t - \mu = \beta_1(y_{t-1} - \mu) + (1-\beta_1^{*})\,\nabla\mu^{**}_{\beta_1}\, 1_{\{t>T_1\}} + \nabla\beta_1^{*}(y_{t-1} - \mu)\, 1_{\{t>T_1\}} + \epsilon_t. \qquad (25)$$

Figure 13. (a) A time series from Equation 20; (b), (c) simulated recursive parameter estimates with estimated standard errors (ESEs) and Monte Carlo standard deviations (MCSDs) when applying SIS; (d) resulting forecast Chow tests.

Expressed as mean zero deviations about their equilibrium means, when either DGP shifts, the problem is the sudden appearance of a nonzero intercept where previously it was zero. That is what SIS can correct. For the β0 shift, removing the new intercept recreates the previous DGP, whereas the β1 shift also induces a third term that has a mean of zero, so SIS cannot correct that. From Equations 24 and 25, predicted values for the step-indicator magnitudes can be calculated, leading to -1 and 0.5, respectively. For example, from Equation 19, μ=2, so S80 should have a coefficient of unity, not significantly different from the 0.8 found, whereas in Equation 23, it should be –0.5, close to the outcome.
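
To make the arithmetic explicit under the illustration's values ($\beta_1 = 0.5$, $\beta_1^{*} = 0.75$, $\mu = 2$, $\mu^{**}_{\beta_0} = 0$, and $\mu^{**}_{\beta_1} = 4$):

$$(1-\beta_1)\,\nabla\mu^{**}_{\beta_0} = 0.5\,(0-2) = -1, \qquad (1-\beta_1^{*})\,\nabla\mu^{**}_{\beta_1} = 0.25\,(4-2) = 0.5,$$

and since $S_{80} = 1$ for $t \leq 80$, so that $1_{\{t>80\}} = 1 - S_{80}$, the corresponding coefficients on $S_{80}$ are $+1$ and $-0.5$.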

While this illustration is for a simple setting, the principles generalize to more general models, including selecting across variables as well as indicators.

Tackling a Shift in Dynamics by Multiplicative Indicator Saturation

Although SIS offsets much of the effect of the changing dynamics, the more appropriate approach is to model the change in $\beta_1$, and the tool for doing so is multiplicative indicator saturation, denoted MIS (see, e.g., Castle et al., 2020; Kitov & Tabor, 2015). In MIS, $y_{t-1}$ is multiplied by almost every step indicator, creating the $T-2$ additional candidate variables $S_j \times y_{t-1}$ for $j = 2, \ldots, T-1$. Thus, SIS is in fact MIS for the constant term, but is a special case of importance given the pernicious effect unmodeled location shifts have on estimated parameters and forecasts.
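
A minimal Python sketch of constructing the MIS candidate set for an AR(1) (the indexing conventions are illustrative; these columns would then be selected over jointly with the intercept and $y_{t-1}$ itself):

```python
import numpy as np

def mis_candidates(y):
    """Step indicators interacted with the lagged dependent variable:
    columns S_j * y_{t-1}, using the convention S_j = 1 for t <= j."""
    yl = y[:-1]                                   # y_{t-1} for t = 2,...,T
    n = len(yl)
    steps = np.triu(np.ones((n, n)))[:, :-1]      # drop the final all-ones column
    return steps * yl[:, None]                    # T - 2 candidate regressors
```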

For one draw, applying selection at 0.1% over the MIS regressors, the resulting model is

$$\hat{y}_t = \underset{(0.05)}{0.74}\, y_{t-1} + \underset{(0.16)}{1.02} - \underset{(0.04)}{0.25}\, S_{80} \times y_{t-1}, \quad t = 2, \ldots, 100, \qquad \hat{\sigma}_\epsilon^2 = 0.08, \qquad (26)$$

which almost exactly replicates the DGP. The outcome shows a shift in $\hat{\beta}_1$ from 0.49 before $t = 80$ to 0.74 after, with a constant intercept. The simulation outcomes are reported in Figure 14. The interaction indicator at $t = 80$ was selected with a probability of 0.42, rising to 0.62 within ±1 of $t = 80$, 0.81 within ±2, and 0.89 within ±3 of $t = 80$. Irrelevant indicators were selected with probability 0.007, which falls to 0.003 excluding the indicators within ±3 of $t = 80$. The MCSDs are again wider than the ESEs, so the latter underestimate the uncertainty.

Figure 14. (a) A time series from Equation 20; (b), (c) simulated recursive parameter estimates with estimated standard errors (ESEs) and Monte Carlo standard deviations (MCSDs) when applying MIS; (d) resulting forecast Chow tests.

Tackling Changing Trends by Trend-Indicator Saturation

As noted in the section "Why Change Matters," if humanity is to avoid catastrophic climate change, the present upward trends in GHG emissions must be reversed to become rapid downward trends. Thus, graphs of fossil fuel use and GHG emissions shaped like the rise and fall of U.K. coal use in Figure 1(b) may become common, as may less extreme but still marked trend changes like those in Figure 1(a), (c), and (d). Economics, demography, and epidemiology have all experienced sudden large changes in trends, as Figure 15 illustrates.

Figure 15. United Kingdom time series of (a) output per worker per year (productivity); (b) births per thousand of the population; (c) employment; (d) cumulative confirmed COVID-19 cases.

Productivity has increased at varying rates since 1860 but has stagnated since 2008, as highlighted by the ellipse; it was badly overforecast by the U.K. Office for Budget Responsibility, which did not adjust to the "flatlining," and the forecasts are much improved by the robust predictor proposed by Martinez et al. (2021). The birth rate was steadily increasing until the introduction of oral contraception in the mid-1960s, then fell sharply till the late 1970s and has fluctuated since. Employment has expanded greatly since 1860, with major fluctuations during world wars and severe depressions, but at a much more rapid rate since 2000 (which helps explain the productivity slowdown). Finally, Panel (d) shows dramatic changes in the trend of confirmed COVID-19 cases.

Empirically modeling trends that change by unknown magnitudes, at unknown points in time, an unknown number of times requires a general tool like trend-indicator saturation, denoted TIS (see Castle et al., 2019, and Walker et al., 2019, for an application). TIS is equivalent to applying multiplicative indicator saturation to a deterministic trend, interacting the step indicators with the trend to create candidates $S_j \times t$. However, it merits a separate analysis from MIS because the trend is deterministic (like the constant) and because $\sum_{t=1}^{T} t^2 = \frac{1}{6}T(T+1)(2T+1)$ grows at $O(T^3)$, as against $O(T^2)$ for sums of squares of stationary variables, so TIS has different gauge and potency properties from MIS.
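
A corresponding sketch of the TIS candidate set, plus a check of the $O(T^3)$ growth of the trend's sum of squares (the value $T = 100$ is illustrative):

```python
import numpy as np

T = 100
trend = np.arange(1, T + 1)                       # deterministic trend t = 1,...,T
steps = np.triu(np.ones((T, T)))[:, :-1]          # step indicators S_j = 1 for t <= j
tis = steps * trend[:, None]                      # TIS candidates S_j * t: broken trends ending at j

# the trend's sum of squares grows at O(T^3); both expressions print 338350 for T = 100
print(int(trend @ trend), T * (T + 1) * (2 * T + 1) // 6)
```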

Much macroeconomic modeling is in differences of variables or their logs to eliminate trends and possibly unit roots. However, the potency of detecting breaks is much higher when working in levels than in changes. To illustrate, an artificial trend break DGP is created to which TIS is applied to the levels of the time series, and the ability of TIS to detect the trend breaks is compared to the ability of SIS to detect location shifts in the changes of the time series.

Figure 16 records a single representative draw of the resulting time series, denoted $y_t$, $z_t$, $\Delta y_t = y_t - y_{t-1}$, and $\Delta z_t$, where

$$y_t = \beta_{0,t} + \beta_{1,t} z_t + \epsilon_t \quad \text{where} \quad \epsilon_t \sim \mathsf{IN}[0, \sigma_\epsilon^2], \qquad (27)$$

Figure 16. Time series from Equations 27 and 28 of (a) yt; (b) zt; (c) Δyt; (d) Δzt.

and

$$z_t = \gamma_0 + \gamma_1 t + \nu_t \quad \text{where} \quad \nu_t \sim \mathsf{IN}[0, \sigma_\nu^2], \qquad (28)$$

so $z_t$ acts as the trend. Let $\beta_{0,t} = 10$ and $\beta_{1,t} = 1$ for $t = 1, \ldots, 60$, then $\beta_{0,t} = 70$ and $\beta_{1,t} = 2$ for $t = 61, \ldots, 100$, with $\gamma_0 = 0$, $\gamma_1 = 1$, $\sigma_\epsilon^2 = 1$, and $\sigma_\nu^2 = 0.001$ deliberately set to a tiny value to mimic a trend, as Figure 16(b) confirms.
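
Generating this artificial DGP is straightforward; a sketch using the stated parameter values (the seed is arbitrary, and the post-break intercept of 70 follows the text as written):

```python
import numpy as np

rng = np.random.default_rng(3)
T, T1 = 100, 60
t = np.arange(1, T + 1)

z = 0.0 + 1.0 * t + np.sqrt(0.001) * rng.standard_normal(T)  # Equation 28: a near-deterministic trend
beta0 = np.where(t <= T1, 10.0, 70.0)                         # intercept values as given in the text
beta1 = np.where(t <= T1, 1.0, 2.0)                           # slope on z_t doubles after t = 60
y = beta0 + beta1 * z + rng.standard_normal(T)                # Equation 27 with sigma_eps^2 = 1
dy, dz = np.diff(y), np.diff(z)                               # differenced series as in Figure 16(c), (d)
```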

"Ocular econometrics" on Figure 16 shows an obvious trend break in Panel (a), but not an obvious shift in the changes $\Delta y_t$ in Panel (c). Applying TIS to the levels with a target nominal significance value of 0.001 picks up the shift, but SIS applied to the changes needs to be at 0.005 to do so. Doubling of the growth rate is a very large change, but smaller changes, such as a 50% increase, barely register in $\Delta y_t$. Four cases are considered: (i) estimating the relation between $y_t$ and $z_t$ without allowing for the trend shift; (ii) estimating that relation with TIS at 0.001; (iii) regressing $\Delta y_t$ on an intercept and $\Delta z_t$, as in $\Delta y_t = \psi_0 + \psi_1 \Delta z_t$; and (iv) estimating that last equation with SIS. Figure 17 shows the outcomes for (i) and (ii), and Figure 18 for the differenced cases (iii) and (iv).

Figure 17. Actual and fitted values and residuals from (a), (b) regressing $y_t$ on $z_t$ without TIS, and (c), (d) regressing $y_t$ on $z_t$ with TIS.

Figure 18. Actual and fitted values and residuals from selecting (a), (b) $\Delta y_t$ from $\Delta z_t$ with SIS; (c), (d) $y_t$ with TIS when $z_t$ is forced; (e), (f) $\Delta y_t$ on SIS with $\Delta z_t$ forced.

Not handling the break in the trend, as in Figure 17(a) and (b), is disastrous. For example, the deviations from trend would be unrelated to “excess demand” if $y_t$ were GDP. However, in Figure 17(c) and (d), the shift is detected by TIS.9 When there are shifts, ISEs aim to reveal what the processes were like in earlier periods. For TIS here, that is the trend that would have been found by an investigator having data up to $T = 60$.

Figure 18 records two cases with SIS and one with TIS. In the top row, Panels (a) and (b) refer to the case where SIS is applied and $\Delta z_t$ is not forced, so it is selected jointly with the step indicators. Panels (e) and (f) in the bottom row refer to the case where SIS is applied but $\Delta z_t$ is forced, which means that it is always retained in every selection and only the step indicators are selected over. Panels (c) and (d) in the middle row are for the case with TIS when $z_t$ is forced, so only the trend indicators are selected over. SIS at 0.005 detects the shift, dropping $\Delta z_t$ (when only the constant is forced), whereas $\Delta z_t$ is retained with an unrestricted constant. An intercept is equally good here at representing the mean change, but would not work if $z_t$ were exactly the trend, so that $\Delta z_t$ was constant, whereas $\Delta z_t$ would matter more than the intercept if $\sigma_\nu$ were larger. There is a small loss of fit if SIS is not used, and as the residual autocorrelation test is significant in all three cases, there is little sign of the step shift. Differencing doubles the error variance, so detecting the step shift rather than the trend break is harder, reflected in the need to use a less stringent nominal significance level.

Although the fit is similar between Figure 18(c) and (d) as measured by $\hat{\sigma}_\epsilon$, there is nevertheless a substantial impact on forecasts of $y_t$ derived from $\Delta y_t$, as Figure 19(b) and (d) show for multistep forecasts. There is little difference between Figure 19(a) and (c) in the root mean square forecast errors (RMSFEs) for forecasts of the changes $\Delta y_t$, but cumulating these forecasts to derive the levels’ forecasts leads to the RMSFE for the model without SIS being more than four times larger than that with SIS, and those forecasts from case (iii) are systematically too low.

Figure 19. Actual and forecast values for changes and derived levels from models of $\Delta y_t$ on $\Delta z_t$ without SIS (a), (b) and with SIS (c), (d).

Figure 20 records multistep forecast values for the levels from models of $y_t$ on a forced constant and forced $z_t$, where Panel (a) is based on the model with TIS, Panel (b) is from the model without TIS, and Panel (c) shows forecasts from a model with a step intercept correction from $t = 84$ onward. All three graphs also show robustified forecasts (see Hendry, 2006). The forecasts with TIS have dramatically smaller RMSFEs than those without. While the robust device is a great improvement for Panel (b) when TIS was not used, being based on differencing the data, its RMSFEs are similar to those in Figure 19 for the cumulated differenced forecasts without SIS (Panel b).

Models in differenced data face the risk of appearing to have few or no step shifts when estimated, producing respectable forecasts of future changes but suffering systematic forecast failure for the entailed levels of the data.

Figure 20. Actual and forecast values for levels from models of (a) $y_t$ on a forced constant and forced $z_t$ with TIS, (b) without TIS, and (c) with a step intercept correction from $t = 84$.

Modeling and Forecasting Wide-Sense Nonstationary Processes

In this section, the approach to jointly tackling all the main problems facing analyses of empirical evidence on wide-sense nonstationary processes is described. The framework is one of model discovery, from model formulation, through selection, to evaluation. Formulation entails commencing from a very large initial specification intended to nest the data-generating process (DGP) as closely as possible while retaining available theory information. Selection requires searching over all nonretained potential determinants jointly with indicator saturation estimation to find which variables, lags, and functional forms are relevant and which observations need separate handling. Evaluation involves testing for a range of possible mis-specifications. Forecasting hinges on the wide-sense nonstationarity of the data, as forecasting methods derived for stationary settings are otiose in the real world.

Selecting Models

The framework begins by embedding the relevant subset of climate theory within a much more general model specification that allows for influences that are not explicitly included in the theory models. This could be due to the theory model itself being incorrect or incomplete, or it could be due to external effects that lie outside of the climate theory (e.g., volcanic eruptions or economic responses to a pandemic). The theory is the object of study, but the target for model selection is the data-generating process for the set of variables under analysis. To validly evaluate the object, that is, the postulated theory, the model must be as robust as feasible to outliers, shifts, potential omitted variables, nonlinearity, mis-specified dynamics, incorrect distributions, nonstationarity, and invalid conditioning.

The approach can orthogonalize all additional variables with respect to the theory variables, so the distributions of the estimators of the parameters of the object of interest are unaffected by selection (see Hendry & Johansen, 2015). The search algorithm can retain without selection all the variables in the theory model when selecting over other features. This enables tighter-than-conventional selection significance levels, reducing the retention of irrelevant candidates without jeopardizing the retention of theory-relevant variables. This is a win–win situation: When the theory model is complete and correct, the resulting model will deliver precisely the same estimates as directly fitting it to the data, even when selecting from a vastly larger specification, but if the theory is not complete and correct, the modeler will discover a better formulation.
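The orthogonalization step can be illustrated with a small sketch (an assumed illustration of the idea only, not the Hendry and Johansen, 2015, or Autometrics implementation): project the additional candidates onto the retained theory variables and select only over the residuals, so the estimated theory parameters are unchanged whatever is retained.

```python
# Sketch of orthogonalizing additional candidate variables with respect to
# retained theory variables (illustration of the idea; all data are synthetic).
import numpy as np

rng = np.random.default_rng(1)
T = 200
theory = rng.normal(size=(T, 2))              # variables retained without selection
extras = rng.normal(size=(T, 5)) + theory @ rng.normal(size=(2, 5))  # correlated candidates
y = theory @ np.array([1.0, -0.5]) + rng.normal(size=T)

# Replace the candidates by their residuals from a regression on the theory variables
proj, *_ = np.linalg.lstsq(theory, extras, rcond=None)
extras_orth = extras - theory @ proj

# Because the orthogonalized candidates are uncorrelated (in sample) with the
# theory variables, adding any subset of them leaves the theory estimates intact.
b_theory_only, *_ = np.linalg.lstsq(theory, y, rcond=None)
b_full, *_ = np.linalg.lstsq(np.column_stack([theory, extras_orth]), y, rcond=None)
print(b_theory_only.round(4))     # identical to the first two entries of b_full
print(b_full[:2].round(4))
```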

The implication of commencing with much larger models than the theory would suggest is that the starting point for model selection would include more candidate variables, N, than the number of observations, T. A search algorithm is needed that can handle such a situation. Autometrics implements a block search algorithm to handle more variables than observations (see Doornik, 2009; Hendry & Doornik, 2014). The regressors are divided into subblocks but the theory variables are retained at every stage, selecting only over the putative irrelevant variables at a stringent significance level. The subblocks include expanding and contracting searches to handle correlated regressors that may need to enter jointly to be significant. It is almost costless to check large numbers of rival variables, so there are huge benefits if the initial specification is incorrect but the enlarged general unrestricted model nests the DGP.

Search is unavoidable as climate and economic variables are all interrelated with high correlations. Many selection algorithms in the literature assume a lack of correlation and therefore pursue single-path searches; these include forward search and stepwise regression, 1-cut selection and backward elimination, and Lasso (see Tibshirani, 1996) and some of its variants. The benefits of Autometrics include formal tests for congruence to ensure all tests of reduction are valid; multipath search to avoid path dependence and to ensure that the initial ordering of regressors does not matter; increased efficiency relative to estimating all $2^N$ possible models, which is infeasible as $N$ becomes even moderately large; and a well-defined stopping point at terminal models, using encompassing tests against the general unrestricted model to ensure there is no substantial loss of information. On the principle of encompassing, see Bontemps and Mizon (2008) and Mizon and Richard (1986); Doornik (2008) analyzes its role in automatic model selection. Although a multipath search is not as fast and simple as single-path procedures and there is some dependence on how the blocking is implemented, increased computing power means that these costs are not large.

Three important automatic generalizations for specifying the initial general unrestricted model are (a) adding many lags of the regressors to allow for a sequential factorization, after which the selection algorithm applies a lag-length reduction stage, ensuring there are no unmodeled dynamics; (b) including nonlinear transformations of the regressors to allow for general unspecified forms of nonlinearity using polynomial and exponential expansions; and (c) a variety of indicator saturation estimators (ISEs) to model many different aspects of wide-sense nonstationarity. As illustrated earlier, each ISE is designed to match a specific problem: IIS to tackle outliers, SIS for location shifts, MIS for parameter changes, TIS for trend breaks, as well as designed indicator saturation (DIS) for modeling phenomena with a regular pattern, such as detecting the impacts on temperature of volcanic eruptions (see Pretis et al., 2016). Importantly, saturation estimators can be used in combination, as demonstrated later, where IIS and SIS combined is called super-saturation (see Ericsson & Reisman, 2012; Kurle, 2019). All saturation estimators can be applied while retaining without selection a theory model that is the objective of a study and selecting from other potentially substantive variables. Saturation estimators have seen applications across a range of disciplines, including dendrochronology, volcanology, geophysics, climatology, and health management, as well as economics, other social sciences, and forecasting. Although theory models are much better in many of these areas than in economics and other social sciences, modeling observational data faces most of the same problems, which is why an econometric tool kit can help.
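The saturation sets themselves are simple to construct. The sketch below (an assumed illustration following the definitions above, with a hypothetical helper `saturation_sets`) builds the candidate matrices for IIS, SIS, TIS, and MIS for a short sample, from which a block-search algorithm would then select.

```python
# Candidate indicator matrices for the saturation estimators discussed above
# (illustrative construction; column j is the j-th candidate indicator).
import numpy as np

def saturation_sets(T, x=None):
    trend = np.arange(1, T + 1, dtype=float)
    IIS = np.eye(T)                          # impulse: 1 at observation j only
    SIS = np.triu(np.ones((T, T)))           # step: 1 up to observation j, 0 after
    TIS = SIS * trend[:, None]               # step interacted with the trend, S_j x t
    MIS = SIS * x[:, None] if x is not None else None  # step interacted with a regressor
    return IIS, SIS, TIS, MIS

IIS, SIS, TIS, MIS = saturation_sets(5, x=np.array([2.0, 4.0, 1.0, 3.0, 5.0]))
print(SIS.astype(int))   # each column is one candidate step indicator
print(TIS.astype(int))
```

With $T$ observations, each set contributes $T$ candidate indicators, so combinations such as super-saturation immediately imply more candidates than observations, which is why the multipath block search is needed.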

The Retention Properties of Model Selection

Having established that commencing with large models ensures that many aspects of wide-sense nonstationarity in the data can be captured, it is important to assess the properties of selection from such a starting point. An effective model selection procedure should aim to select a congruent and encompassing model specification that retains the relevant variables, excludes irrelevant variables, and results in small mean square errors (MSE) for the estimated parameters of interest after model selection, regardless of whether additional correlated variables are included initially. This section considers how well such objectives can be achieved in practice.

Let $y_t$ be the variable of interest, with $z_t = (z_{1,t}, \ldots, z_{N,t})'$ including all possible explanatory variables, nonlinear functions, saturation estimators, and deterministic terms such as the intercept or seasonals, and (for ease of notation) all lags of the explanatory variables and nonlinear functions, such that

$$y_t = \sum_{i=1}^{N} \beta_i z_{i,t} + \epsilon_t, \qquad (29)$$

where $N > T$, and for convenience, the first $n$ regressors are defined as relevant in that they enter the DGP, and the subsequent $N - n$ regressors as irrelevant since they do not enter the DGP. It is assumed that $\epsilon_t \sim \mathsf{IN}[0, \sigma_\epsilon^2]$, which is untestable given $N > T$ but should be satisfied by the inclusion of such a broad range of regressors in $z_t$. Apply a selection algorithm at a significance level $\alpha$ to Equation 29 for $M$ replications, and denote the OLS estimated coefficients retained in the selected model by $\tilde{\beta}_i$ (where $\tilde{\beta}_i = 0$ when $z_{i,t}$ is not retained in the final model). Letting $1(\tilde{\beta}_{i,j} \neq 0)$ denote an indicator function equal to unity when $\tilde{\beta}_{i,j} \neq 0$ and zero otherwise, the retention rate is given by

$$\tilde{p}_i = \frac{1}{M}\sum_{j=1}^{M} 1(\tilde{\beta}_{i,j} \neq 0), \quad i = 1, \ldots, N. \qquad (30)$$

The gauge is defined as the average retention frequency under the null hypothesis (i.e., how many irrelevant variables are retained), given by

$$g = \frac{1}{N - n}\sum_{k=n+1}^{N} \tilde{p}_k, \qquad (31)$$

and the potency is defined as the average retention frequency under the alternative hypothesis, or how often relevant variables are retained:

$$p = \frac{1}{n}\sum_{k=1}^{n} \tilde{p}_k. \qquad (32)$$

The unconditional MSE is given by

$$\mathrm{UMSE}_i = \frac{1}{M}\sum_{j=1}^{M} (\tilde{\beta}_{i,j} - \beta_i)^2, \qquad (33)$$

and the conditional MSE, computed only over retained regressors, is given by

$$\mathrm{CMSE}_i = \frac{\sum_{j=1}^{M} \left[ (\tilde{\beta}_{i,j} - \beta_i)^2 \, 1(\tilde{\beta}_{i,j} \neq 0) \right]}{\sum_{j=1}^{M} 1(\tilde{\beta}_{i,j} \neq 0)}, \qquad \left( = \beta_i^2 \ \text{if} \ \sum_{j=1}^{M} 1(\tilde{\beta}_{i,j} \neq 0) = 0 \right). \qquad (34)$$
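A Monte Carlo sketch of these quantities is given below. It uses a crude one-cut t-test selection on the full model as a stand-in for Autometrics (an assumption made purely for illustration) and takes $N < T$ so that a single OLS fit is possible, but computes the definitions in Equations 30 to 33 exactly as stated.

```python
# Monte Carlo sketch of gauge, potency, and unconditional MSE (Equations 30-33),
# using a simple one-cut t-test selection in place of Autometrics.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
T, N, n, M = 100, 20, 3, 500          # sample size, candidates, relevant, replications
c_alpha = 2.6                          # critical value, roughly alpha = 0.01
beta = np.concatenate([np.full(n, 0.4), np.zeros(N - n)])

retained = np.zeros((M, N))
b_tilde = np.zeros((M, N))
for j in range(M):
    Z = rng.normal(size=(T, N))
    y = Z @ beta + rng.normal(size=T)
    fit = sm.OLS(y, Z).fit()
    keep = np.abs(fit.tvalues) > c_alpha      # one-cut "selection"
    retained[j] = keep
    b_tilde[j, keep] = fit.params[keep]       # beta_tilde = 0 when not retained

p_tilde = retained.mean(axis=0)               # Equation 30
gauge = p_tilde[n:].mean()                    # Equation 31: retention of irrelevant
potency = p_tilde[:n].mean()                  # Equation 32: retention of relevant
umse = ((b_tilde - beta) ** 2).mean(axis=0)   # Equation 33
print(f"gauge = {gauge:.3f}, potency = {potency:.3f}, "
      f"mean UMSE (relevant) = {umse[:n].mean():.4f}")
```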

Under the null hypothesis that there are no outliers, IIS retains $\alpha T$ indicators on average, so the theoretical gauge is $\alpha$. This extends to regressors: Under the null that none of the $z_t$ are relevant, so their noncentralities, defined as $\mathrm{E}[t_{\beta_i}] = \psi_i$, are zero, then $\alpha N$ regressors should be retained on average. The empirical gauge, $g$, closely matches the theoretical gauge for tight $\alpha$, as shown in Hendry and Doornik (2014). However, if $\alpha$ is too loose, the equation standard error can be biased downward by retaining too many irrelevant indicators, resulting in a higher $g$ than the theoretical gauge. This can be corrected using the bias correction of Johansen and Nielsen (2009), but does not arise if $\alpha$ is appropriately controlled at $\alpha \leq \min[0.001;\, 1/T]$. The gauge will also be higher if SIS is applied, as two steps are needed to capture an outlier, and if IIS and SIS are applied jointly (super-saturation), the gauge is roughly doubled. This can also be controlled by using a tighter $\alpha$. For non-indicator regressors, the choice of $\alpha$ depends on their number ($r$, say), often set as $\alpha \leq \min[0.01;\, 1/r]$, and on their type, where a more stringent value may be set for selecting nonlinear functions of other included variables.

Finally, there is an additional cost to the gauge when applying mis-specification and encompassing tests: Irrelevant regressors with estimated $t$ statistics less than $c_\alpha$ may be retained to ensure these tests do not reject, adding to $g$ (on average this can be an additional 1% of gauge). While the diagnostic and encompassing tests could be switched off, they are fundamental to ensuring a congruent and parsimonious specification, so the additional gauge costs are worthwhile.

As just argued, overfitting is not a concern despite commencing from very large $N$, as long as $\alpha$ is controlled. The difficulty is in retaining relevant variables that have low signal-to-noise ratios, denoted by their noncentralities, $\psi_i$. A tight $\alpha$ means that $c_\alpha$ is larger than conventional critical values, so variables at the margin of significance will be missed. However, several comments put this into context. First, by applying saturation, the residual distributions will be approximately Normal, even if they were fat-tailed initially. As the Normal distribution has thin tails, critical values increase slowly as $\alpha$ decreases, so even using $\alpha = 0.0001$ only leads to $c_\alpha = 4$. Second, theory variables can be retained without selection, so even if they have low $\psi_i$, they will always be included in the final selected model. Finally, the retention of marginally significant relevant variables is not just a difficulty for model selection, as the same problem would be faced even if the DGP were the starting point. The costs of selection over and above the costs of inference (as measured by conventional $t$-testing on the DGP) are small and can even be negative (see Castle et al., 2011). The empirical potency $p$ is close to theoretical powers for one-off $t$-tests.

Unconditional estimates of DGP parameters will be downward biased for a variable with $\psi_i$ near $c_\alpha$, comprising draws in which the variable is retained, so $|\hat{t}_{\beta_i}| > c_\alpha$, and draws in which the variable is not retained, so the estimated parameter is set to 0. However, plotting unconditional distributions, as in Hendry and Krolzig (2005), illustrates the quality of model selection in correctly classifying variables into DGP and non-DGP variables, where the latter have tiny UMSEs. The nonzero-mass distributions of the estimated parameters of DGP variables will be truncated Normals, decreasingly so as $|\hat{t}_{\beta_i}|$ increases.

Conditional on retaining the variables, estimates of DGP parameters will be upward biased, consisting only of draws where $|\hat{t}_{\beta_i}| > c_\alpha$. This bias can be corrected in orthogonal settings as the truncation value $c_\alpha$ is known, and bias correction also reduces CMSEs (see Hendry & Krolzig, 2003). Even without bias correction, the CMSEs are close to those for model selection commencing from the DGP, so again the issue is one of inference rather than selection. As long as $\alpha$ is controlled, the postselection distributions of parameter estimators are relatively unaffected by the search algorithm, being very similar to the corresponding postinference distributions from the DGP.

Forecasting Wide-Sense Nonstationary Time Series

“Conventional” economic forecasting uses a theory-based system that models the main variables of interest. Examples include a dynamic-stochastic general equilibrium (DSGE) model, a variant of a vector autoregression (VAR), or a simultaneous equations model. Some systems are closed in that all variables are modeled, but most are open with “offline” assumptions made about future values of unmodeled “exogenous” variables. Almost all economic systems are equilibrium-correction models (EqCMs) or differenced variants thereof. Clements and Hendry (1998, 1999) develop taxonomies of all forecast errors in closed systems to show that the key determinant of forecast failure is an unmodeled shift in the equilibrium mean. Hendry and Mizon (2012) extend the taxonomy to open systems and show that result still holds, but there are additional potential sources of forecast failure deriving from changes in the “exogenous” variables. Introductions to forecasting facing breaks are provided by Castle et al. (2016, 2019).

Nothing can solve the forecast failure resulting from an unanticipated shift in the equilibrium mean over the forecast horizon, of which the 21st century has already witnessed several, including the dot-com crash, the financial crisis, and the COVID-19 pandemic. Forecasting such shifts before they occur has not yet proved feasible, even if ex post claims that they were predicted abound. However, a number of approaches have been proposed to counter the problem of forecast failure due to in-sample shifts in equilibrium-correction models, including differencing and developing predictors that are “robust” after breaks (see, e.g., Castle et al., 2015; Hendry, 2006; Martinez et al., 2021). Pesaran et al. (2013) propose weighting observations when there are breaks near the forecast origin to minimize RMSFEs, allowing for both continuous and discrete parameter change (see also Pesaran & Timmermann, 2007). Nevertheless, avoiding systematic mis-forecasting has a cost in larger RMSFEs when there is no shift to offset. Given the dominance of breaks as the major source of forecast failure, model selection from a large set of explanatory variables has very little effect on forecast uncertainty, particularly given the similarity between postselection parameter distributions from the selected model and from the DGP. Indeed, model selection with saturation ensures unbiased deterministic terms at the forecast origin, thereby reducing forecast errors.

Exogeneity in Ice Ages Forecasts

As Pretis (2020) remarks:

Econometric studies beyond IAMs (integrated assessment models) are split into two strands: One side empirically models the impact of climate on the economy, taking climate variation as given . . . the other side models the impact of anthropogenic (e.g., economic) activity onto the climate by taking radiative forcing—the incoming energy from emitted radiatively active gases such as CO2—as given. . . . This split in the literature is a concern as each strand considers conditional models, while feedback between the economy and climate likely runs in both directions. (p. 257)

Pretis (2021) addresses the exogeneity issue in more detail. Examples of approaches conditioning on climate variables such as temperature include Burke et al. (2015), Pretis et al. (2018), Burke et al. (2018), and Davis (2019). Hsiang (2016) reviews such approaches to climate econometrics. Examples from many studies modeling climate time series include Estrada et al. (2013), Kaufmann et al. (2011, 2013), and Pretis and Hendry (2013).10

The dynamic simultaneous system in Castle and Hendry (2020) for modeling and forecasting Antarctic ice volume and temperature as well as atmospheric CO2 over the past 800,000 years of ice ages (see Figure 3) conditioned on the contemporaneous and lagged values of the Earth’s orbital variables of eccentricity, obliquity, and precession, shown in Figure 4. The open-model taxonomy in Hendry and Mizon (2012) added nine potential sources of forecast error to the 10 that occur in closed models. They show that even when the in-sample forecasting model is correctly specified and all unmodeled variables (denoted by the vector $z_t$) are strongly exogenous with known future values, changes in dynamics can lead to forecast failure when the $z_t$ have nonzero means. Their possible impacts on the forecast accuracy of the system 100,000 years into the future are addressed.

To illustrate, let the DGP of a vector $y_t$ conditional on known $z_t$ be

$$y_t = \Psi_1 y_{t-1} + \Psi_2 z_t + \epsilon_t \quad \text{where} \quad \epsilon_t \sim \mathsf{IN}[0, \Omega_\epsilon], \qquad (35)$$

which has a zero intercept to simplify the algebra. If $z_t$ has a nonzero mean, $\mathrm{E}[z_t] = \mu$, then when $\Psi_1$ has all its eigenvalues inside the unit circle, so that the process is stationary, the equilibrium mean is

$$\mathrm{E}[y_t] = \psi = (I - \Psi_1)^{-1} \Psi_2 \mu. \qquad (36)$$

In terms of deviations from equilibrium means:

$$y_t = \psi + \Psi_1 (y_{t-1} - \psi) + \Psi_2 (z_t - \mu) + \epsilon_t. \qquad (37)$$

If any of the parameters determining $\psi$ in Equation 36 change, then despite the future $z_t$ being super-strongly exogenous and known, shifts in that equilibrium mean will lead to forecast failure until the model is appropriately updated.

Forecasting after the dynamic parameter matrix shifts at T+1, so that the DGP becomes

$$y_{T+1} = \Psi_1^{*} y_T + \Psi_2 z_{T+1} + \epsilon_{T+1} \qquad (38)$$

from a forecast origin at $T$, where $y_T$ is known, using

$$\hat{y}_{T+1|T} = \hat{\Psi}_1 y_T + \hat{\Psi}_2 z_{T+1}, \qquad (39)$$

leads to the forecast error $e_{T+1|T} = y_{T+1} - \hat{y}_{T+1|T}$:

$$e_{T+1|T} = (\Psi_1^{*} - \Psi_1) y_T + (\Psi_1 - \hat{\Psi}_1) y_T + (\Psi_2 - \hat{\Psi}_2) z_{T+1} + \epsilon_{T+1}. \qquad (40)$$

When the in-sample parameter estimates are unbiased, so $\mathrm{E}[\hat{\Psi}_1] = \Psi_1$ and $\mathrm{E}[\hat{\Psi}_2] = \Psi_2$:

$$\mathrm{E}[e_{T+1|T}] = (\Psi_1^{*} - \Psi_1)\psi = (\Psi_1^{*} - \Psi_1)(I - \Psi_1)^{-1} \Psi_2 \mu. \qquad (41)$$

Thus, the equilibrium mean shifts when $\mu \neq 0$, and it would also do so if $\Psi_2$ shifted or if there were an intercept $\phi \neq 0$ in the DGP. If neither $\mu$ nor $\phi$ shifts, then the key problem appears to be shifts in $\Psi_1$ and $\Psi_2$, although if $\hat{y}_T$ has to be estimated, then $y_T - \hat{y}_T \neq 0$ could also cause a systematic forecast error, especially for multihorizon forecasts.
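The bias in Equation 41 is easy to verify numerically. The scalar sketch below (an assumed illustration, with all parameter values hypothetical) generates data from the pre-shift dynamics, shifts $\Psi_1$ at the forecast origin, and compares the average 1-step forecast error with the formula.

```python
# Numerical check of Equation 41 in a scalar version of Equations 35-40:
# a shift in Psi_1 at T+1 biases the forecast by (Psi1* - Psi1)(1 - Psi1)^{-1} Psi2 mu,
# even though z is strongly exogenous with known future values.
import numpy as np

rng = np.random.default_rng(3)
psi1, psi1_star, psi2, mu, sigma = 0.8, 0.4, 1.0, 5.0, 0.5
T, M = 500, 2000

errors = np.zeros(M)
for m in range(M):
    z = mu + rng.normal(0.0, 1.0, T + 2)
    y = np.zeros(T + 2)
    for t in range(1, T + 1):                     # in-sample: pre-shift dynamics
        y[t] = psi1 * y[t - 1] + psi2 * z[t] + rng.normal(0.0, sigma)
    y[T + 1] = psi1_star * y[T] + psi2 * z[T + 1] + rng.normal(0.0, sigma)  # shift
    y_hat = psi1 * y[T] + psi2 * z[T + 1]         # Equation 39, in-sample parameters
    errors[m] = y[T + 1] - y_hat                  # Equation 40

bias = (psi1_star - psi1) / (1.0 - psi1) * psi2 * mu      # Equation 41
print(f"simulated mean forecast error: {errors.mean():.2f}, Equation 41: {bias:.2f}")
```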

The Earth’s orbital drivers of eccentricity, obliquity, and precession are super-strongly exogenous and known into the distant future, albeit that a sufficiently large rogue object intruding into the solar system might perturb Earth’s orbit in unanticipated ways. Excluding that last possibility for the next 100,000 years, so that neither $\mu$ nor $\Psi_2$ changes, the forecasting system being open does not by itself create additional problems. Instead, the change to the system from CO2 being endogenously determined to being created anthropogenically is a fundamental shift in the DGP, seen in Figure 2. Castle and Hendry (2020) tackled this by computing two scenarios where the model for ice volume and temperature remains constant, but CO2 is determined exogenously from AD 1000. The impact on the forecasts is dramatic, as Figure 21 shows, where Panel (a) is for ice under the three scenarios and Panel (b) for temperature.

Figure 21. Three scenarios of different atmospheric CO2 levels: Endogenous (long-dashed with error bars), and fixed at 400 ppm (dotted with error bands) and 560 ppm (thick dashed with error fans) for (a) ice volume; (b) temperature.

Forecasting 110,000 years ahead under ceteris paribus, so that there is no human intervention, shown as the long-dashed line with $\pm 2\bar{\sigma}_f$ bars, mimics the ice age data. Forecasting conditional on atmospheric CO2 remaining at 400 ppm (roughly the level in 2020), shown as the dotted line with solid $\pm 2\tilde{\sigma}_f$ error bands, leads to less ice than the ice age minimum over the past 800,000 years and higher temperatures than the peak of the ice ages. Finally, at 560 ppm (roughly representative concentration pathway RCP8.5), shown as the thick dashed line with $\pm 2\hat{\sigma}_f$ fans, temperatures are far higher than during the ice ages and Antarctica is almost ice free.

Updating the U.K. CO2 Model

Since the modeling of U.K. CO2 emissions reported by Castle and Hendry (2020), two more annual observations have become available, allowing an important update of their equation. This enables both a check on the role of the step indicator $S_{2010}$ for the U.K.’s Climate Change Act of 2008 (CCA2008) and a test of the constancy of the relationship. At the time of their model, only two data points were available for estimating the coefficient of $S_{2010}$ when using the final four observations for forecasting; again keeping the last four data points for forecasts, that number of observations doubles to four. In turn, the updated estimate is sufficiently precise that $S_{2010}$ can be included in the cointegrating vector.

Figure 22 records the extended U.K. data series together with U.S. CO2 emissions per capita, in tons p.a., 1850–2019. United Kingdom per capita CO2 emissions have continued to fall, as has fossil fuel usage, whereas wind + solar has risen rapidly (measured in millions of tonnes of oil equivalent, Mtoe), albeit still less than both oil and gas. However, the U.S. per capita CO2 emissions remain higher than the highest recorded U.K. values.

Figure 22. (a) U.K. CO2 emissions per capita in tons per annum (p.a.) to 2018; (b) U.K. coal (millions of tonnes, Mt), oil (Mt), natural gas (millions of tonnes of oil equivalent, Mtoe), and wind + solar (Mtoe), all to 2018; (c) ratio of CO2 emissions to the capital stock on a log scale to 2017; (d) U.S. CO2 emissions per capita, in tons p.a., 1850–2019.

The distributional shifts of total U.K. CO2 emissions in Mt p.a., shown in Figure 23, continue to emphasize the need to handle them in modeling. Hendry and Mizon (2011) highlight how failing to handle shifting and evolving relationships can lead to rejecting a sound theory. Consequently, denoting impulse (IIS) and step (SIS) indicators by $1_{abcd}$ and $S_{abcd}$, respectively, where observation $abcd$ is an outlier or the end of a step shift (so the step indicators are given by $1_{\{t \leq abcd\}} = 1$ for observations up to $abcd$ and zero otherwise), the general unrestricted model (GUM) is formulated as

$$E_t = \beta_0 + \beta_1 E_{t-1} + \beta_2 C_t + \beta_3 C_{t-1} + \beta_4 O_t + \beta_5 O_{t-1} + \beta_6 k_t + \beta_7 k_{t-1} + \beta_8 g_t + \beta_9 g_{t-1} + \text{indicators} + \epsilon_t, \qquad (42)$$

where $E_t$ is CO2 emissions, $C_t$ is coal volumes, and $O_t$ is net oil usage, all in millions of tonnes, along with $K_t$ as total capital stock and $G_t$ as real GDP, where lowercase letters denote logs. See “Appendix” for definitions and sources.

Figure 23. Shifting subperiod distributions of U.K. CO2 emissions.

The model was reselected from data over 1861–2013, testing constancy for 2014–2017, first selecting by super-saturation combining IIS and SIS at $\alpha_1 = 0.001$, with all other explanatory variables retained. Equation 43 records the outcome at this stage.

$$
\begin{aligned}
\hat{E}_t = {}& \underset{(0.06)}{0.49}\,E_{t-1} - \underset{(13)}{47}\,1_{1921} - \underset{(20)}{161}\,1_{1926} - \underset{(10)}{44}\,1_{1946} + \underset{(11)}{54}\,1_{1947} + \underset{(9.8)}{28}\,1_{1996} \\
& - \underset{(14)}{39}\,S_{1925} + \underset{(13)}{71}\,S_{1927} - \underset{(7.5)}{32}\,S_{1969} + \underset{(7.0)}{35}\,S_{2010} - \underset{(91)}{189} \\
& + \underset{(0.12)}{1.86}\,C_t - \underset{(0.18)}{0.84}\,C_{t-1} + \underset{(0.26)}{1.70}\,O_t - \underset{(0.28)}{1.01}\,O_{t-1} + \underset{(0.33)}{0.94}\,g_t - \underset{(0.33)}{1.12}\,g_{t-1} + \underset{(1.8)}{7.97}\,k_t - \underset{(1.8)}{7.32}\,k_{t-1}
\end{aligned}
\qquad (43)
$$

$\hat{\sigma} = 9.62 \quad R^2 = 0.995 \quad F_{ar}(2,131) = 1.59 \quad \chi^2_{nd}(2) = 4.98 \quad F_{arch}(1,150) = 2.73 \quad F_{Het}(21,124) = 0.79 \quad F_{Reset}(2,131) = 2.29 \quad F_{Chow}(4,133) = 1.48 \quad F_{nl}(27,109) = 1.08$

Coefficient standard errors are shown in parentheses, $\hat{\sigma}$ is the residual standard deviation, $R^2$ is the coefficient of multiple correlation, $F_{ar}$ tests residual autocorrelation (see Godfrey, 1978), $F_{arch}$ tests autoregressive conditional heteroscedasticity (see Engle, 1982), $F_{Het}$ tests residual heteroskedasticity (see White, 1980), $\chi^2_{nd}(2)$ tests non-Normality (see Doornik & Hansen, 2008), $F_{Chow}$ is a parameter constancy forecast test over 2014–2017 (see Chow, 1960), and $F_{Reset}$ tests nonlinearity (see Ramsey, 1969), as does $F_{nl}$ (see Castle & Hendry, 2010). Also, the PcGive unit-root $t$-test value of $t_{ur} = -9.33^{**}$ strongly rejects the null of no cointegration. See Doornik and Hendry (2018) and Ericsson and MacKinnon (2002) for critical values.

No mis-specification tests were significant, and five impulse and four step indicators were selected from the 307 candidate variables despite $\alpha_1 = 0.001$. Of these, $1_{1926}$ and $S_{1927}$ can be combined to $\Delta 1_{1926}$, and $1_{1947} - 1_{1946} = \Delta 1_{1947}$, leaving three step indicators; $\hat{\sigma}$ was unaffected by these linear combinations. As SIS indicators in Autometrics terminate at the dates shown, they reflect what happened earlier. Thus, a positive coefficient for $S_{1925}$ entails a higher level prior to 1926, the date of the Act of Parliament creating the United Kingdom’s first nationwide electricity grid, enhancing its efficiency, but also the General Strike. However, as coal is a regressor, indicators for miners’ strikes should only be needed to capture large changes in inventories, perhaps as in 1926. The Clean Air Act of 1956 did not need a step indicator, as the drop in CO2 should again be captured by the fall in coal use. Next, 1969 was the start of converting burner equipment from coal gas (about 50% hydrogen) to natural gas (mainly methane), with a considerable expansion in the use of gas over the following decades. The shift in 2010 seems to be a response to the CCA2008 and the European Union Renewables Directive of 2009 rather than to the Great Recession, as the larger GDP fall in 1921–1922 did not need a step, although there was an impulse indicator for the large outlier in 1921. The methodology does not impose that any policies had an effect: The data show that they did. The coefficients of all these location shifts have the appropriate signs of reducing and increasing emissions, respectively. Selecting the fuel and economic regressors at $\alpha_2 = 0.01$, while including the indicators in Equation 43, retained all those variables.

Cointegration

The cointegrating or long-run relation was derived from that equation after transforming the indicators as noted. When mapping to a non-integrated specification that reparametrizes levels variables to first differences, step indicators should be included in the equilibrium correction mechanism (EqCM) to avoid cumulating to trends. However, they need to be led one period, as the EqCM will be lagged in the I(0) formulation. Impulse indicators and differenced step indicators are left unrestricted (see the survey articles by Hendry & Juselius, 2000, 2001; and Hendry & Pretis, 2016). Applications of cointegration analysis in climate econometrics include Kaufmann and Juselius (2010), Kaufmann et al. (2013), and Pretis (2020). This yields

$$\tilde{E}_{LR} = \underset{(0.06)}{2.0}\,C + \underset{(0.19)}{1.4}\,O + \underset{(0.28)}{1.25}\,k - \underset{(0.29)}{0.35}\,g + \underset{(7)}{61}\,S_{1924} - \underset{(14)}{62.0}\,S_{1968} + \underset{(13)}{70.0}\,S_{2009} - \underset{(170)}{234}. \qquad (44)$$
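The long-run coefficients in Equation 44 follow from the static solution of Equation 43: each is the sum of the corresponding impact and lagged coefficients divided by one minus the coefficient on $E_{t-1}$ (the retained step indicators are rescaled in the same way and led one period, as noted above). For example, for coal,

$$\frac{1.86 - 0.84}{1 - 0.49} \approx 2.0,$$

matching the coefficient on $C$, with $(1.70 - 1.01)/0.51 \approx 1.4$ for oil, $(7.97 - 7.32)/0.51 \approx 1.25$ for $k$, and $(0.94 - 1.12)/0.51 \approx -0.35$ for $g$.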

All variables are significant at 1% other than $g$, which enters negatively, so given the other variables, the long-run effect of higher GDP is slightly lower emissions, perhaps reflecting the role of services in the United Kingdom. The coefficients of coal and oil are close to their values in Table 1. With four observations on $1 - S_{2010}$, the coefficient of $S_{2010}$ is now precisely estimated and so it can be included in $\tilde{E}_{LR}$. Figure 24 records the long-run estimated emissions from Equation 44, with actual emissions in Panel (a) and the mean-zero long-run equilibrium error, $\tilde{Q}_t = E_t - \tilde{E}_{LR,t}$, in Panel (b).

Figure 24. (a) $E_t$ and $\tilde{E}_{LR,t}$; (b) $\tilde{Q}_t = E_t - \tilde{E}_{LR,t}$ without impulse indicators, centered on a mean of zero.

Transforming to a model in first differences and the lagged EqCM from Equation 44, then reestimating, revealed a significant nonnormality test, so IIS was reapplied at 0.5%, which yielded (for 1861–2013, testing constancy over 2014–2017):

$$
\begin{aligned}
\Delta\hat{E}_t = {}& \underset{(0.10)}{1.88}\,\Delta C_t + \underset{(0.21)}{1.72}\,\Delta O_t + \underset{(1.10)}{7.16}\,\Delta k_t + \underset{(0.28)}{0.88}\,\Delta g_t - \underset{(0.05)}{0.50}\,\tilde{Q}_{t-1} - \underset{(9.2)}{59.2} \\
& - \underset{(8.77)}{79.4}\,\Delta 1_{1926} + \underset{(6.42)}{50.3}\,\Delta 1_{1947} - \underset{(11.1)}{45.9}\,1_{1921} - \underset{(8.93)}{28.3}\,1_{1912} + \underset{(8.95)}{26.8}\,1_{1978} + \underset{(8.94)}{27.5}\,1_{1996}
\end{aligned}
\qquad (45)
$$

$\hat{\sigma} = 8.88 \quad R^2 = 0.94 \quad F_{ar}(2,139) = 0.49 \quad \chi^2_{nd}(2) = 1.68 \quad F_{Het}(14,134) = 1.02 \quad F_{arch}(1,151) = 0.54 \quad F_{Reset}(2,139) = 1.5 \quad F_{nl}(15,126) = 1.34 \quad F_{Chow}(4,141) = 1.76$

Changes in coal, oil, $k$, and $g$ all lead to changes in the same direction in emissions, which then equilibrate back to the long-run relation in Equation 44. Figure 25 provides a graphical description of the selected model. “Forecasts” are in inverted commas, since they are for the past, although the data points were outside the sample period used for selection and estimation and were not available when the previous study was undertaken. As Panel (a) is dominated by the fluctuations in the 1920s, Figure 26 plots the levels outcome from modeling, with the impulse and step indicator dates shown.

Figure 25. (a) Actual, fitted, and forecast values for $\Delta E_t$ from Equation 45; (b) residuals and forecast errors scaled by the equation standard error; (c) residual density and histogram with a Normal density; (d) residual autocorrelation.

Figure 26. Actual, fitted, and forecast values for U.K. CO2 emissions.

Forecast Evaluation

Figure 27(a) records the changes in CO2 emissions and the fitted values from Equation 45, along with the 1-step conditional forecasts, denoted $\Delta\hat{E}_{T+h|T+h-1}$, with $\pm 2\hat{\sigma}_f$ shown as bars, estimating the model up to 2014. In the same panel, robust forecasts from Hendry (2006) (differencing the EqCM but without smoothing) are also reported. The forecasts are similar. Panel (b) integrates the forecasts to obtain 1-step ahead forecasts of the level of CO2 emissions from both the model in Equation 45 (with $\pm 2\hat{\sigma}_f$ bands) and the robust forecasts. Again, the forecasts are similar, with no clear advantage to using either the econometric model or its robustified form. Panel (c) shifts the forecast origin back to 2009, so only data up to 2008 are available to estimate the model, although it was selected from the longer period. Again, the 1-step conditional forecasts from the econometric model, denoted $\Delta\hat{E}_{T+h|T+h-1}$ with $\pm 2\hat{\sigma}_f$ bars, are recorded along with the robust forecasts $\Delta\tilde{E}_{T+h|T+h-1}$. There is a clear benefit to using the robust forecasts over the longer forecast period. The conditional forecasts commence before the implementation of the CCA2008 could have an effect, leading to forecasts that are too high from 2012 onward. The robust device remains accurate even when the CCA2008 is not explicitly modeled, as it differences out the previous in-sample mean for the change in CO2 emissions, which was higher prior to the CCA2008 and would lead to biased 1-step ahead forecasts if not handled.

Figure 27. Conditional 1-step forecasts for $\Delta E_t$ (a) from Equation 45 over 2014–2017 (denoted $\Delta\hat{E}_{T+h|T+h-1}$) with $\pm 2$ forecast standard error bars and the robust forecasts $\Delta\tilde{E}_{T+h|T+h-1}$; (b) the derived forecasts in levels with $\pm 2$ forecast standard error bands; (c) same as (a), but commencing in 2009.
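To make the robust device concrete, the sketch below contrasts a conventional EqCM 1-step forecast with a forecast from the differenced EqCM, in which the possibly shifted equilibrium mean cancels. It is a simplified illustration loosely in the spirit of Hendry (2006), not the exact device behind Figure 27; the helper names `eqcm_forecast` and `robust_forecast` and all numerical values are hypothetical.

```python
# Conventional versus differenced ("robust") EqCM 1-step forecasts of the change
# in the target variable (simplified sketch; all numbers are hypothetical).
import numpy as np

def eqcm_forecast(dx_next, q_last, gamma, alpha, mu):
    """Standard EqCM: Dy_{T+1|T} = gamma'Dx_{T+1} - alpha*(q_T - mu)."""
    return gamma @ dx_next - alpha * (q_last - mu)

def robust_forecast(dy_last, dx_next, dx_last, dq_last, gamma, alpha):
    """Differenced EqCM: the in-sample equilibrium mean mu drops out entirely."""
    return dy_last + gamma @ (dx_next - dx_last) - alpha * dq_last

gamma = np.array([1.9, 1.7])        # hypothetical short-run coefficients
alpha = 0.5                         # hypothetical feedback coefficient
mu = 0.0                            # in-sample equilibrium mean (pre-shift)
dx_last, dx_next = np.array([-1.0, 0.2]), np.array([-1.2, 0.3])
dy_last, q_last, dq_last = -6.0, -40.0, -2.0   # after an unmodeled location shift

print("EqCM forecast:  ", eqcm_forecast(dx_next, q_last, gamma, alpha, mu))
print("robust forecast:", robust_forecast(dy_last, dx_next, dx_last, dq_last, gamma, alpha))
```

With the equilibrium-correction term far from its old in-sample mean after a shift, the conventional EqCM forecast keeps trying to correct back to the outdated equilibrium, whereas the differenced form responds only to the latest change in that term, at the cost of noisier forecasts when there is no shift to offset.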

The 1-step ahead forecasts highlight the impact of the CCA2008, which needs to be either modeled or accounted for (via robustification) for accurate forecasts. Modeling CO2 emissions within a system allows for unconditional forecasts to be produced, so does not rely on data over the forecast period and hence is feasible ex ante. A vector autoregression (VAR) with two lags is sufficient to model the dynamics. Figure 28 plots the unconditional system 1-step and dynamic forecasts from a VAR in all five variables, either including the indicators found in Equation 45 or without the indicators. In the former case, all outliers and shifts will be captured in the VAR, whereas in the latter they will be ignored, which is a common approach in the economics literature that uses VARs as forecasting benchmarks. Panel (a) plots the 1-step ahead forecasts from the VAR without and with indicators, and Panel (b) records the dynamic forecasts. The importance of the step indicators is readily apparent, yielding huge reductions in RMSFEs and correcting the bias evident in the VAR without the steps. The role of the CCA2008 in producing a level shift reduction in CO2 emissions is clear, so it needs to be modeled for accurate forecasting.

Figure 28. (a) Outcomes and 1-step forecasts ±2 forecast standard errors as bars and fans, without and with indicators; (b) outcomes and h-step forecasts and a comparison with Cardt forecasts.

Figure 28 also records the forecasts for U.K. CO2 emissions, comparing Cardt forecasts to the VAR forecasts, where the Cardt forecasting device is described in Doornik et al. (2020) and Castle et al. (2021). The sample estimation period for the Cardt forecasts is kept short (2005–2011) to avoid contaminating the persistence parameter estimates with earlier in-sample breaks. As a consequence, the uncertainty bands around the Cardt forecasts are very wide and thus are not shown. Extending the in-sample estimation period reduces the uncertainty bands but results in a larger bias due to unmodeled in-sample shifts, especially in 2010. The Cardt forecasts are close to the VAR forecasts using SIS, although the VAR forecasts with SIS are preferred: The VAR with SIS has a mean forecast error of −3, compared to a mean error of −20 for the Cardt forecasts, whereas the VAR model without SIS has a mean error of −83. The key characteristics of the Cardt forecasting device are that it dampens trends and growth rates, averages across predictors, and robustifies against breaks by over-differencing. As such, it has similar characteristics to the forecasts from the VAR with SIS, which models step shifts explicitly with step indicators and is superior to the VAR forecasts that do not model location shifts explicitly.

Super Exogeneity Tests

Parameter invariance is essential in policy models because without it, a model will mispredict under regime shifts. The concept of super exogeneity combines parameter invariance with valid conditioning, so it is crucial for policy (see Engle & Hendry, 1993; Hendry & Massmann, 2007; Hendry & Santos, 2010; Krolzig & Toro, 2002). To test the joint super exogeneity of the regressors, a natural procedure here is to check if any of the indicators in the conditional model enter the equations for the marginal variables: If so, the impacts of large changes in marginal variables are different from small changes. This test requires no ex ante knowledge of the timings or magnitudes of breaks or the data-generating processes (DGPs) of the marginal variables. The test has the correct size under the null of super exogeneity for a range of sizes of marginal-model saturation tests, and it has power to detect failures of super exogeneity when location shifts occur in the marginal models and the conditioning on them is invalid.
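A stylized version of this test is sketched below (an assumed illustration, not the exact system used in the article): the indicators retained in the conditional model are added to an autoregression for a marginal variable, and their joint significance is assessed by a likelihood-ratio test; the indicator dates and the synthetic marginal process are hypothetical.

```python
# Sketch of the indicator-based super-exogeneity test: do the indicators from
# the conditional model also enter a marginal-model equation? (Illustration
# only, with a synthetic marginal variable generated under the null.)
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
T = 150

impulse = np.zeros(T); impulse[60] = 1.0      # indicators assumed retained in the
step = np.zeros(T); step[:100] = 1.0          # conditional model (hypothetical dates)
indicators = np.column_stack([impulse, step])

z = np.zeros(T)                               # marginal variable, no shifts (the null)
for t in range(1, T):
    z[t] = 0.7 * z[t - 1] + rng.normal()

X_r = sm.add_constant(z[:-1])                                     # AR(1) only
X_u = sm.add_constant(np.column_stack([z[:-1], indicators[1:]]))  # plus indicators
lr = 2.0 * (sm.OLS(z[1:], X_u).fit().llf - sm.OLS(z[1:], X_r).fit().llf)
print(f"LR statistic {lr:.2f}; compare with chi^2(2) critical values")
```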

Having created a VAR, super exogeneity can be tested by a likelihood ratio test of the VAR with the indicators only entering the equation for $E_t$ against entering every equation. This yields $\chi^2(37) = 161^{**}$, which strongly rejects. However, the rejection is due to the indicators for the 1920s also occurring in the equations for GDP and coal, which is not surprising as the postwar crash and the 1926 General Strike affected both. Retaining $1_{1921}$, $1_{1926}$, $S_{1925}$, and $S_{1927}$ in the coal equation and $1_{1921}$ and $1_{1926}$ in GDP delivers $\chi^2(31) = 25$, which is insignificant, implying the regressors in the CO2 emissions model are super exogenous, so they are weakly exogenous and the parameters of the conditional model (Equation 45) are invariant to structural breaks in the marginal models.

Evaluating the U.K.’s 2008 Climate Change Act

The most important implication of the previously stated evidence is that substantial CO2 reductions have been feasible, so far with little impact on GDP. The U.K.’s CCA2008 established the world’s first legally binding climate change target to reduce the U.K.’s GHG emissions (six gases, including CO2, which is approximately 80% of the total) by at least 80% by 2050 from a 1990 baseline. The policy produced a series of 5-year carbon budgets, which are mapped to annual targets by starting 20 Mt above and ending 20 Mt below them in each period. Allowing 20% for other GHG emissions, these are called the Targets for CO2. Figure 29(a) plots the Targets and CO2. TargDiff denotes the difference between these targets and CO2 emissions, and Equation 46 records the result from selecting step indicators by SIS to describe it over 2008–2020:

$$\widehat{\mathrm{TargDiff}}_t = \underset{(9.1)}{52.3}\,S_{2013} + \underset{(17)}{49.9}\,S_{2019} - \underset{(16)}{101} \qquad (46)$$

$\hat{\sigma} = 15.7 \quad R^2 = 0.85 \quad F_{ar}(1,9) = 0.38 \quad \chi^2_{nd}(2) = 0.48 \quad F_{arch}(1,11) = 0.02 \quad F_{Reset}(2,8) = 0.00 \quad T = 2008\text{–}2020$

Thus, emissions were approximately 52 Mt below target after 2013 and fell another 50 Mt further below after 2019, part of which is undoubtedly due to the impacts from pandemic lockdowns. Nevertheless, these are large reductions. Farmer et al. (2019) suggest exploiting “sensitive intervention points” (SIPs) to accelerate the postcarbon transition and include the U.K.’s CCA2008 as a timely SIP with a large effect, corroborated here.

Figure 29. (a) U.K. CO2 emissions and CCA2008 CO2 targets; (b) deviations from the target values with step indicators.
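The regression behind Equation 46 is straightforward to reproduce once the target-gap series is constructed. In the sketch below the series is synthetic and merely mimics the qualitative pattern described above; the actual TargDiff data come from the sources listed in the Appendix, and the step indicators follow the article's convention of equaling 1 up to the named year.

```python
# Step-indicator regression for the target gap, as in Equation 46. The series
# here is synthetic and purely illustrative: roughly on target to 2013, about
# 52 Mt below to 2019, and about 101 Mt below in 2020.
import numpy as np
import statsmodels.api as sm

years = np.arange(2008, 2021)
S2013 = (years <= 2013).astype(float)     # 1 up to and including 2013, 0 after
S2019 = (years <= 2019).astype(float)

rng = np.random.default_rng(5)
targdiff = np.where(years <= 2013, 0.0, np.where(years <= 2019, -52.0, -101.0))
targdiff = targdiff + rng.normal(0.0, 10.0, years.size)   # illustrative noise

X = sm.add_constant(np.column_stack([S2013, S2019]))
res = sm.OLS(targdiff, X).fit()
print(res.params.round(1))    # intercept and step coefficients, cf. Equation 46
```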

A similar approach could be used to evaluate the extent to which countries met their Paris Accord Nationally Determined Contributions (NDCs), given the appropriate data. The NDCs agreed at COP21 in Paris are insufficient to keep temperatures below 2ºC, so they must be enhanced, and common time frames must be adopted to avoid a lack of transparency in existing NDCs (see Rowan, 2019). Since the baseline dates from which NDCs are calculated are crucial, 5-year NDC reviews and evaluation intervals are needed.

In 2019, the U.K. government revised its target to one of net zero GHG emissions by 2050, entailing no use of coal, oil, and natural gas, with no emissions coming from agriculture, construction, and waste (about 100 Mt p.a. in 2019) beyond what can be captured or extracted from the atmosphere. Increases in the capital stock could make the target harder to achieve unless they were carbon neutral. As capital embodies the technology at the time of its construction and is long-lived, transition to zero carbon has to be gradual and necessitates that new capital, and indeed new infrastructure in general, must be zero carbon producing. “Stranded assets” in carbon-producing industries face a problematic future as legislation imposes ever lower CO2 emissions targets to achieve zero net emissions (see Pfeiffer et al., 2016), threatening “stranded workers,” as has happened historically for coal miners in the United Kingdom.

Implications of the U.K.’s CO2 Emissions Model

Despite more candidate variables than observations, the econometric approach presented in this article developed a model to explain the United Kingdom’s extremely nonstationary CO2 emissions time series data over 1860–2017 in terms of coal and oil usage, capital stock, and GDP. It was essential to take account of both stochastic trends and distributional shifts. Detection of major policy interventions by indicator saturation estimators yielded a congruent model of CO2 emissions and accurate forecasts since the CCA2008 came into force. The policy implications highlight that CO2 emissions are reducing rapidly, but far greater reductions are needed if the United Kingdom is to achieve its net-zero emissions target requiring all coal, oil, and natural gas to be eliminated or their GHG emissions sequestrated. The United Kingdom was a net CO2 exporter through embodied CO2 in the 19th century, but is now a net importer, although this component will decrease with falls in the GHG emissions of exporting countries.11

The aggregate data provided little evidence of high costs to the large domestic reductions in CO2 emissions—dropping by 202 Mt from 554 Mt in 2000 to 352 Mt (39%) by the end of 2019, before the pandemic—whereas real GDP rose by 39% over that period despite the “Great Recession.” As new “green” technologies are implemented, careful attention must be paid to local costs of lost jobs. Mitigating the inequality impacts of climate-induced changes has to be achieved to retain public support.

Applying Model Discovery Techniques to Climate Change Data

The role of human behavior in climate change entails that methods developed to model human behavior in the economic sphere are applicable to modeling climate phenomena as well. There is a two-way interaction between the climate and human actions, characterized by wide-sense nonstationary data interacting in complex and nonconstant ways. As such, the tools needed to understand these interactions must be able to handle the complex, evolving, and shifting interactions over time due to changing human behavior. An approach to modeling such phenomena using time series econometric tools designed to handle wide-sense nonstationary data from unknown generating processes is outlined. The key role that anthropogenic forces play in determining climate can be drawn out by careful modeling of the relationships, embedding an understanding of the climate with climate theory but allowing for data-based search to handle the nonconstant distributions. Such an approach allows for testing climate and economic theories, forecasting, and policy studies without contaminating the analyses by unmodeled phenomena. This is essential to provide reliable guidance on how countries can achieve net zero emissions to maintain stable global land surface temperatures.

Climate science provides the background to climate econometrics. Past climate changes can be related to the “great extinctions” seen in the geological record, emphasizing that it is climate change that matters, and the rapidity of change over the most recent past is dramatically faster than any previous changes experienced, except perhaps after the meteor impact 65 million years ago. The mid-18th century Industrial Revolution brought huge benefits but led to a global explosion in anthropogenic GHG emissions. Emissions are subject to shifts from wars, crises, resource discoveries, technological innovations, pandemics, and policy interventions, and the resulting stochastic trends, large shifts, and numerous outliers must be handled for viable empirical models of climate phenomena.

The assumption of stationarity is violated by every climate and economics time series imaginable but is often the premise for climate econometric modeling. Ignoring shifting distributions results in biased and inconsistent parameter estimates along with incorrect inference, poor forecasts, and mistaken policy implications. Indicator saturation estimators (ISEs) provide a route to handling wide-sense nonstationary data, allowing for very general forms of change. ISEs work well despite creating more candidate variables to select over than observations, where there is little loss of efficiency in selecting over T indicators using a tight nominal significance level, even in dynamic models, but the gains under the alternative can be large.

This leads to a more general framework aimed at model discovery rather than model building or theory testing. The theory of reduction underpins economic modeling, where the data-generating process (DGP) for the variables under analysis is the target, while the theory is retained as the object. As the DGP is unknown, it must be discovered and so automatic model selection is essential. Such discovery needs to be able to handle all the nonstationarities due to outliers, shifting distributions, changing trends, possible nonlinearities, and omitted variables. Commencing from very general models, models with no losses on reduction are congruent and those that explain rival models are encompassing. Autometrics, a multipath search machine-learning algorithm, provides a route to implementing such a search.

The econometric techniques allow for direct linking of climate models with empirical data to further improve econometric research on human responses to climate variability. The approach to jointly addressing all aspects of wide-sense nonstationarity for an unknown DGP seems most appropriate in climate modeling, where the theory is incomplete, the data are evolving and subject to sudden shifts, there are huge measurement issues, and feedbacks generate nonconstant and nonlinear relationships. Such an approach should improve the ability to test climate change mitigation proposals and the role of human behavior within the climate system, produce more accurate forecasts and scenarios based on differing emissions paths, and provide useful policy analysis to guide the policy response to this most imperative of issues.

Acknowledgments

The authors thank the editor and external reviewers for their helpful feedback and suggestions. Financial support from the Robertson Foundation (award 9907422) and Nuffield College is gratefully acknowledged, as are important contributions to the background research from Susana Campos-Martins, Jurgen A. Doornik, Luke P. Jackson, Søren Johansen, Andrew B. Martinez, Bent Nielsen, and Felix Pretis. All calculations and graphs use PcGive (Doornik & Hendry, 2018) and OxMetrics (Doornik, 2018).

Further Reading

Introductory Reading
Climate Change
Modeling Methodology

References

  • Asseng, S., Guarin, J. R., Raman, M., Monje, O., Kiss, G., Despommier, D. D., Meggers, F. M., & Gauthier, P. P. G. (2020). Wheat yield potential in controlled-environment vertical farms. Proceedings of the National Academy of Sciences, 117(32), 19131–19135.
  • Beerling, D. J., Kantzas, E. P., Lomas, M. R., Wade, P., Eufrasio, R. M., Renforth, P., Sarkar, B., Andrews, M. G., James, R. H., Pearce, C. R., Mercure, J.-P., Pollitt, H., Holden, P. B., Edwards, N. R., Khanna, M., Koh, L., Quegan, S., Pidgeon, N. F., Janssens, I. V. . . . Banwart, S. A. (2020). Potential for large-scale CO2 removal via enhanced rock weathering with croplands. Nature, 583, 242–248.
  • Bontemps, C., & Mizon, G. E. (2008). Encompassing: Concepts and implementation. Oxford Bulletin of Economics and Statistics, 70, 721–750.
  • Brock, W. A., & Miller, J. I. (2020). Beyond RCP8.5: Marginal mitigation using quasi-representative concentration pathways. Working Paper 21-05. Economics Department, University of Missouri.
  • Burke, M., Davis, W. M., & Diffenbaugh, N. S. (2018). Large potential reduction in economic damages under UN mitigation targets. Nature, 557, 549–553.
  • Burke, M., Hsiang, S. M., & Miguel, E. (2015). Global nonlinear effect of temperature on economic production. Nature, 527, 235–239.
  • Campos-Martins, S., & Hendry, D. F. (2020). Geo-climate, geopolitics and the geo-volatility of carbon-intensive asset returns. Working Paper. Nuffield College, Oxford University.
  • Castle, J. L., Clements, M. P., & Hendry, D. F. (2015). Robust approaches to forecasting. International Journal of Forecasting, 31, 99–112.
  • Castle, J. L., Clements, M. P., & Hendry, D. F. (2016). An overview of forecasting facing breaks. Journal of Business Cycle Research, 12, 3–23.
  • Castle, J. L., Clements, M. P., & Hendry, D. F. (2019). Forecasting: An essential introduction. Yale University Press.
  • Castle, J. L., Doornik, J. A., & Hendry, D. F. (2011). Evaluating automatic model selection. Journal of Time Series Econometrics, 3(1), Article 8.
  • Castle, J. L., Doornik, J. A., & Hendry, D. F. (2020). Multiplicative-indicator saturation. Working Paper. Nuffield College, Oxford University.
  • Castle, J. L., Doornik, J. A., & Hendry, D. F. (2021). Forecasting principles from experience with forecasting competitions. Forecasting, 3(1), 138–165.
  • Castle, J. L., Doornik, J. A., & Hendry, D. F. (in press). Robust discovery of regression models. Econometrics and Statistics. Advance online publication.
  • Castle, J. L., Doornik, J. A., Hendry, D. F., & Pretis, F. (2019). Trend-indicator saturation. Working Paper. Nuffield College, Oxford University.
  • Castle, J. L., & Hendry, D. F. (2010). A low-dimension portmanteau test for nonlinearity. Journal of Econometrics, 158, 231–245.
  • Castle, J. L., & Hendry, D. F. (2014). Semi-automatic nonlinear model selection. In N. Haldrup, M. Meitz, & P. Saikkonen (Eds.), Essays in nonlinear time series econometrics (pp. 163–197). Oxford University Press.
  • Castle, J. L., & Hendry, D. F. (2020). Climate econometrics: An overview. Foundations and Trends in Econometrics, 10, 145–322.
  • Castle, J. L., & Shephard, N. (Eds.). (2009). The methodology and practice of econometrics. Oxford University Press.
  • Chow, G. C. (1960). Tests of equality between sets of coefficients in two linear regressions. Econometrica, 28, 591–605.
  • Clements, M. P., & Hendry, D. F. (1998). Forecasting economic time series. Cambridge University Press.
  • Clements, M. P., & Hendry, D. F. (1999). Forecasting non-stationary economic time series. MIT Press.
  • Croll, J. (1875). Climate and time in their geological relations: A theory of secular changes of the Earth’s climate. D. Appleton.
  • Davis, W. M. (2019). Dispersion of the temperature exposure and economic growth: Panel evidence with implications for global inequality [Master’s thesis, Oxford University].
  • Dethier, E. N., Sartain, S. L., Renshaw, C. E., & Magilligan, F. J. (2020). Spatially coherent regional changes in seasonal extreme streamflow events in the United States and Canada since 1950. Science Advances, 6(49), eaba5939.
  • Doornik, J. A. (2008). Encompassing and automatic model selection. Oxford Bulletin of Economics and Statistics, 70, 915–925.
  • Doornik, J. A. (2009). Autometrics. In J. L. Castle & N. Shephard (Eds.), The methodology and practice of econometrics (pp. 88–121). Oxford University Press.
  • Doornik, J. A. (2018). OxMetrics: An interface to empirical modelling (8th ed.). Timberlake Consultants Press.
  • Doornik, J. A., Castle, J. L., & Hendry, D. F. (2020). Card forecasts for M4. International Journal of Forecasting, 36, 129–134.
  • Doornik, J. A., & Hansen, H. (2008). An omnibus test for univariate and multivariate normality. Oxford Bulletin of Economics and Statistics, 70, 927–939.
  • Doornik, J. A., & Hendry, D. F. (2018). Empirical econometric modelling using PcGive (Vol. I, 8th ed.). Timberlake Consultants Press.
  • Eglinton, T. I., Galy, V. V., Hemingway, J. D., Feng, X., Bao, H., Blattmann, T. M., Dickens, A. F., Gies, H., Giosan, L., Haghipour, N., Hou, P., Lupker, M., McIntyre, C. P., Montlucon, D. B., Peucher-Ehrenbrink, B., Ponton, C., Schefuß, E., Schwab, M. S., Voss, B. M. . . . Zhao, M. (2021). Climate control on terrestrial biospheric carbon turnover. Proceedings of the National Academy of Sciences, 118(8), e2011585118.
  • Engle, R. F. (1982). Autoregressive conditional heteroscedasticity, with estimates of the variance of United Kingdom inflation. Econometrica, 50, 987–1007.
  • Engle, R. F., & Campos-Martins, S. (2020). Measuring and hedging geopolitical risk. NYU Stern School of Business.
  • Engle, R. F., & Hendry, D. F. (1993). Testing super exogeneity and invariance in regression models. Journal of Econometrics, 56, 119–139. Reprinted in Ericsson, N. R., & Irons, J. S. (Eds.). (1994). Testing exogeneity. Oxford University Press.
  • Ericsson, N. R., & MacKinnon, J. G. (2002). Distributions of error correction tests for cointegration. Econometrics Journal, 5, 285–318.
  • Ericsson, N. R., & Reisman, E. L. (2012). Evaluating a global vector autoregression for forecasting. International Advances in Economic Research, 18, 247–258.
  • Estrada, F., Perron, P., & Martinez-Lopez, B. (2013). Statistically derived contributions of diverse human influences to twentieth-century temperature changes. Nature Geoscience, 6, 1050–1055.
  • Farmer, J. D., Hepburn, C., Ives, M. C., Hale, T., Wetzer, T., Mealy, P., Rafaty, R., Srivastav, S., & Way, R. (2019). Sensitive intervention points in the post-carbon transition. Science, 364(6436), 132–134.
  • Feinstein, C. H. (1972). National income, expenditure and output of the United Kingdom, 1855–1965. Cambridge University Press.
  • Foote, E. (1856). Circumstances affecting the heat of the sun’s rays. The American Journal of Science and Arts, 22, 382–383.
  • Godfrey, L. G. (1978). Testing for higher order serial correlation in regression equations when the regressors include lagged dependent variables. Econometrica, 46, 1303–1313.
  • Hendry, D. F. (1984). Monte Carlo experimentation in econometrics. In Z. Griliches & M. D. Intriligator (Eds.), Handbook of econometrics (Vol. 2, Chapter 16, pp. 937–976). North-Holland.
  • Hendry, D. F. (2006). Robustifying forecasts from equilibrium-correction models. Journal of Econometrics, 135, 399–426.
  • Hendry, D. F., & Doornik, J. A. (2014). Empirical model discovery and theory evaluation. MIT Press.
  • Hendry, D. F., & Johansen, S. (2015). Model discovery and Trygve Haavelmo’s legacy. Econometric Theory, 31, 93–114.
  • Hendry, D. F., & Juselius, K. (2000). Explaining cointegration analysis: Part I. Energy Journal, 21, 1–42.
  • Hendry, D. F., & Juselius, K. (2001). Explaining cointegration analysis: Part II. Energy Journal, 22, 75–120.
  • Hendry, D. F., & Krolzig, H.-M. (2003). New developments in automatic general-to-specific modelling. In B. P. Stigum (Ed.), Econometrics and the philosophy of economics (pp. 379–419). Princeton University Press.
  • Hendry, D. F., & Krolzig, H.-M. (2005). The properties of automatic Gets modelling. Economic Journal, 115, C32–C61.
  • Hendry, D. F., & Massmann, M. (2007). Co-breaking: Recent advances and a synopsis of the literature. Journal of Business and Economic Statistics, 25, 33–51.
  • Hendry, D. F., & Mizon, G. E. (2011). Econometric modelling of time series with outlying observations. Journal of Time Series Econometrics, 3(1), Article 6.
  • Hendry, D. F., & Mizon, G. E. (2012). Open-model forecast-error taxonomies. In X. Chen & N. R. Swanson (Eds.), Recent advances and future directions in causality, prediction, and specification analysis (pp. 219–240). Springer.
  • Hendry, D. F., & Mizon, G. E. (2014). Unpredictability in economic analysis, econometric modelling and forecasting. Journal of Econometrics, 182, 186–195.
  • Hendry, D. F., & Pretis, F. (2016). All change! The implications of non-stationarity for empirical modelling, forecasting and policy. Oxford Martin School Policy Paper.
  • Hendry, D. F., & Santos, C. (2010). An automatic test of super exogeneity. In M. W. Watson, T. Bollerslev, & J. Russell (Eds.), Volatility and time series econometrics (pp. 164–193). Oxford University Press.
  • Hepburn, C., & Schwarz, M. (2020). Climate change: Answers to common questions. Pictet Report. Smith School of Enterprise and the Environment, University of Oxford.
  • Hillebrand, E., & Proietti, T. (2017). Phase changes and seasonal warming in early instrumental temperature records. Journal of Climate, 30, 6795–6821.
  • Hofer, S., Lang, C., Amory, C., Kittel, C., Delhasse, A., Tedstone, A., & Fettweis, X. (2020). Greater Greenland ice sheet contribution to global sea level rise in CMIP6. Nature Communications, 11. Advance online publication.
  • Hoffman, P. F., & Schrag, D. P. (2000). Snowball Earth. Scientific American, 282, 68–75.
  • Hsiang, S. (2016). Climate econometrics. Annual Review of Resource Economics, 8(1), 43–75.
  • Johansen, S., & Nielsen, B. (2009). An analysis of the indicator saturation estimator as a robust regression estimator. In J. L. Castle & N. Shephard (Eds.), The methodology and practice of econometrics (pp. 1–36). Oxford University Press.
  • Jouzel, J., Masson-Delmotte, V., Cattani, O., Dreyfus, G., Falourd, S., Hoffmann, G., Minster, B., Nouet, J., Barnola, J. M., Chappellaz, J., Fischer, H., Gallet, J. C., Johnsen, S., Leuenberger, M., Loulergue, L., Luethi, D., Oerter, H., Parrenin, F., Raisbeck, G. . . . Wolff, E. W. (2007). Orbital and millennial Antarctic climate variability over the past 800,000 years. Science, 317, 793–797.
  • Kaufmann, R. K., & Juselius, K. (2010). Glacial cycles and solar insolation: The role of orbital, seasonal, and spatial variations. Climate of the Past Discussions, 6, 2557–2591.
  • Kaufmann, R. K., Kauppi, H., Mann, M. L., & Stock, J. H. (2011). Reconciling anthropogenic climate change with observed temperature 1998–2008. Proceedings of the National Academy of Sciences, 108, 11790–11793.
  • Kaufmann, R. K., Kauppi, H., Mann, M. L., & Stock, J. H. (2013). Does temperature contain a stochastic trend?: Linking statistical results to physical mechanisms. Climatic Change, 118, 729–743.
  • Kitov, O. I., & Tabor, M. N. (2015). Detecting structural changes in linear models: A variable selection approach using multiplicative indicator saturation [Unpublished paper]. University of Oxford.
  • Krolzig, H.-M., & Toro, J. (2002). Testing for super-exogeneity in the presence of common deterministic shifts. Annales d’Economie et de Statistique, 67/68, 41–71.
  • Kulp, S. A., & Strauss, B. H. (2019). New elevation data triple estimates of global vulnerability to sea-level rise and coastal flooding. Nature Communications, 10, 4844.
  • Kurle, J. K. (2019). Essays in climate econometrics [Unpublished master’s thesis, University of Oxford].
  • Levendis, Y. A., Kowalski, G., Lu, Y., & Baldassarre, G. (2020). A simple experiment on global warming. Royal Society Open Science. Advance online publication.
  • Lisiecki, L. E., & Raymo, M. E. (2005). A Pliocene-Pleistocene stack of 57 globally distributed benthic δ18O records. Paleoceanography, 20(1).
  • Lüthi, D., Le Floch, M., Bereiter, B., Blunier, T., Barnola, J.-M., Siegenthaler, U., Raynaud, D., Jouzel, J., Fischer, H., Kawamura, K., & Stocker, T. F. (2008). High-resolution carbon dioxide concentration record 650,000–800,000 years before present. Nature, 453, 379–382.
  • Manley, G. (1974). Central England temperatures: Monthly means 1659 to 1973. Quarterly Journal of the Royal Meteorological Society, 100, 389–405.
  • Martinez, A. B., Castle, J. L., & Hendry, D. F. (2021). Smooth robust multi-step forecasting methods. Paper No. 2021-W01. Nuffield College Economics Discussion Papers.
  • Milankovitch, M. (1969). Canon of insolation and the ice-age problem. National Science Foundation. English translation by the Israel Program for Scientific Translations of Kanon der Erdbestrahlung und seine Anwendung auf das Eiszeitenproblem. (Original work published 1941)
  • Mitchell, B. R. (1988). British historical statistics. Cambridge University Press.
  • Mizon, G. E., & Richard, J. F. (1986). The encompassing principle and its application to non-nested hypothesis tests. Econometrica, 54, 657–678.
  • Noel, L., Zarazua de Rubens, G., Kester, J., & Sovacool, B. (Eds.). (2019). Vehicle-to-grid: A sociotechnical transition beyond electric mobility. Palgrave Macmillan.
  • Nunes, J., Kautzmann, R., & Oliveira, C. (2014). Evaluation of the natural fertilizing potential of basalt dust wastes from the mining district of Nova Prata (Brazil). Journal of Cleaner Production, 84, 649–656.
  • Ortiz, J. D., & Jackson, R. (2020). Understanding Eunice Foote’s 1856 experiments: Heat absorption by atmospheric gases. Notes and Records of the Royal Society.
  • Parker, D., Legg, T., & Folland, C. (1992). A new daily Central England Temperature series, 1772–1991. International Journal of Climatology, 12, 317–342.
  • Parrenin, F., Barnola, J.-M., Beer, J., Blunier, T., Castellano, E., Chappellaz, J., Dreyfus, G., Fischer, H., Fujita, S., Jouzel, J., Kawamura, K., Lemieux-Dudon, B., Loulergue, L., Masson-Delmotte, V., Narcisi, B., Petit, F.-R., Raisbeck, G., Raynaud, D., Ruth, U. . . . Wolff, E. (2007). The EDC3 chronology for the EPICA Dome C ice core. Climate of the Past, 3, 485–497.
  • Parrenin, F., Petit, J.-R., Masson-Delmotte, V., Wolff, E., Basile-Doelsch, I., Jouzel, J., Lipenkov, V., Rasmussen, S. O., Schwander, J., Severi, M., Udisti, R., Veres, D., & Vinther, B. M. (2012). Volcanic synchronisation between the EPICA Dome C and Vostok ice cores (Antarctica) 0–145 kyr BP. Climate of the Past, 8, 1031–1045.
  • Pesaran, M. H., Pick, A., & Pranovich, M. (2013). Optimal forecasts in the presence of structural breaks. Journal of Econometrics, 177, 134–152.
  • Pesaran, M. H., & Timmermann, A. (2007). Selection of estimation window in the presence of breaks. Journal of Econometrics, 137, 134–161.
  • Pfeiffer, A., Millar, R., Hepburn, C., & Beinhocker, E. (2016). The “2°C capital stock” for electricity generation: Committed cumulative carbon emissions from the electricity generation sector and the transition to a green economy. Applied Energy, 179, 1395–1408.
  • Pretis, F. (2021). Exogeneity in climate econometrics. Energy Economics, 96, 105122.
  • Pretis, F., & Hendry, D. F. (2013). Comment on “Polynomial cointegration tests of anthropogenic impact on global warming” by Beenstock et al. (2012): Some hazards in econometric modelling of climate change. Earth System Dynamics, 4, 375–384.
  • Pretis, F., Reade, J. J., & Sucarrat, G. (2018). Automated general-to-specific (GETS) regression modelling and indicator saturation for outliers and structural breaks. Journal of Statistical Software, 68, 4.
  • Pretis, F., Schneider, L., Smerdon, J. E., & Hendry, D. F. (2016). Detecting volcanic eruptions in temperature reconstructions by designed break-indicator saturation. Journal of Economic Surveys, 30, 403–429.
  • Pretis, F., Schwarz, M., Tang, K., Haustein, K., & Allen, M. R. (2018). Uncertain impacts on economic growth when stabilizing global temperatures at 1.5°C or 2°C warming. Philosophical Transactions of the Royal Society A, 376, 20160460.
  • Qin, H. (2020). Machine learning and serving of discrete field theories. Scientific Reports, 10, 19329.
  • Ramsey, J. B. (1969). Tests for specification errors in classical linear least squares regression analysis. Journal of the Royal Statistical Society B, 31, 350–371.
  • Rowan, S. S. (2019). Pitfalls in comparing Paris pledges. Climatic Change, 155, 455–467.
  • Shehab, A. A., Yao, L., Wei, L., Wang, D., Li, Y., Zhang, X., & Guo, Y. (2020). The increased hydrocyanic acid in drought-stressed sorghums could be alleviated by plant growth regulators. Crop and Pasture Science, 71(5), 459–468.
  • Siddall, M., Rohling, E. J., Almogi-Labin, A., Hemleben, C., Meischner, D., Schmelzer, I., & Smeed, D. A. (2003). Sea-level fluctuations during the last glacial cycle. Nature, 423, 853–858.
  • Stein, K., Timmermann, A., Kwon, E. Y., & Friedrich, T. (2020). Timing and magnitude of Southern Ocean sea ice/carbon cycle feedbacks. Proceedings of the National Academy of Sciences, 117(9), 4498–4504.
  • Tian, H., Xu, R., Pan, N., Pan, S., Shi, H., Yao, Y., Canadell, J. G., Thompson, R. L., Winiwarter, W., Suntharalingam, P., Buitenhuis, E. T., Davidson, E. A., Ciais, P., Chang, J., Lauerwald, R., Li, W., Vuichard, N., Jackson, R. B., Janssens-Maenhout, G.. . . Yang, J. (2020). A comprehensive quantification of global nitrous oxide sources and sinks. Nature, 586, 248–256.
  • Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society B, 58, 267–288.
  • Tyndall, J. (1859). Note on the transmission of radiant heat through gaseous bodies. Proceedings of the Royal Society of London, 10, 37–39.
  • Walker, A., Pretis, F., Powell-Smith, A., & Goldacre, B. (2019). Variation in responsiveness to warranted behaviour change among NHS clinicians: A novel implementation of change-detection methods in longitudinal prescribing data. British Medical Journal, 367, l5205.
  • White, H. (1980). A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica, 48, 817–838.
  • Woolf, D., Amonette, J. E., Street-Perrott, F. A., Lehmann, J., & Joseph, S. (2010). Sustainable biochar to mitigate global climate change. Nature Communications, 1(5), 56.

Appendix

  • Et = CO2 emissions in millions of tonnes (Mt). Sources: 1, 2.
  • Ot = Net oil usage, millions of tonnes. Source: 3 (1 tonne = 0.984 imperial tons).
  • Ct = Coal volumes in millions of tonnes. Source: 4.
  • Gt = Real GDP, £10 billions, 1985 prices. Sources: 5; 7, p. 836; 8a, 1993; 10, code: YBHH.
  • Kt = Total capital stock, £billions, 1985 prices. Sources: 6; 7, p. 864; 8c, 1972, 1979, 1988, 1992.
  • Pt = Implicit deflator of GDP (1860 = 1). Sources: 7, p. 836; 8a, 1993; 10, code: ABML.
  • Po,t = Price index, raw materials and fuels. Source: 9.
  • 1abcd = Impulse indicator equal to unity in year abcd.
  • Sabcd = Step indicator equal to unity up to year abcd.
  • Δxt = xt − xt−1 for any variable xt.
  • Δ²xt = Δxt − Δxt−1.
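To make these indicator and difference definitions concrete, the sketch below constructs impulse indicators, step indicators, and first and second differences for a placeholder annual series in Python. It is only an illustration under assumed names (impulse, step) and a simulated series, not the Autometrics/PcGive software cited in the references.

```python
# Minimal illustrative sketch of the indicator and difference variables defined above.
# The sample period and the series x are placeholders, not the article's data.
import numpy as np
import pandas as pd

years = pd.Index(range(1860, 2018), name="year")
x = pd.Series(np.random.default_rng(0).normal(size=len(years)), index=years)

def impulse(year, index=years):
    """Impulse indicator 1_abcd: unity in year abcd, zero elsewhere."""
    return pd.Series((index == year).astype(int), index=index, name=f"I{year}")

def step(year, index=years):
    """Step indicator S_abcd: unity up to and including year abcd, zero after."""
    return pd.Series((index <= year).astype(int), index=index, name=f"S{year}")

# First and second differences: Δx_t = x_t − x_{t−1}; Δ²x_t = Δx_t − Δx_{t−1}.
dx = x.diff()
d2x = dx.diff()

# A saturating set of step indicators (one per observation bar the last) would be
# added to the candidate regressor set before multipath model selection.
steps = pd.concat([step(y) for y in years[:-1]], axis=1)
```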

Sources. 1. World Resources Institute, Climate Watch, and Final UK greenhouse gas emissions national statistics.

2. Office for National Statistics (ONS)

3. Crude oil and petroleum products: Production, imports, and exports 1890 to 2015; Department for Business, Energy, and Industrial Strategy (BEIS).

4. BEIS and Carbon Brief.

5. ONS: UK sector accounts.

6. ONS: Capital stocks and fixed capital consumption.

7. Feinstein (1972); Mitchell (1988).

8. Charles Bean, from (a) Economic Trends Annual Supplements, (b) Annual Abstract of Statistics, (c) Department of Employment Gazette, and (d) National Income and Expenditure.

9. UN Statistical Yearbook and Christopher Gilbert (personal communication).

10. ONS, Blue Book and Annual Abstract of Statistics and Economic Trends Annual Supplement.

Notes

  • 1. See reports by the Intergovernmental Panel on Climate Change (IPCC).

  • 2. See Climate Econometrics.

  • 3. For XLModeler.

  • 4. Creating these data series has taken a massive international effort in collecting raw observations: for example, drilling in East Antarctica was completed to just a few meters above bedrock (see Parrenin et al., 2007), and the data were then adjusted to the common time scale and frequency of the European Project for Ice Coring in Antarctica (EPICA) Dome C chronology (EDC3). Synchronization between the EPICA Dome C and Vostok ice-core measures over the period from 145,000 years ago to the present was based on matching residues from volcanic eruptions (see Parrenin et al., 2012). Ice volume estimates are from Lisiecki and Raymo (2005), based on δ18O as a proxy measure. Antarctic-based land surface temperature proxies were taken from Jouzel et al. (2007), and the paleo record of atmospheric CO2 from deep ice cores from Lüthi et al. (2008). Sea level data, based on sediments, can be obtained from Siddall et al. (2003).

  • 5. See discussion at UK Parliament website.

  • 6. See The Conversation.

  • 7. The Boston time series is from NOAA’s National Centers for Environmental Information Global Historical Climatology Network (GHCN) Station Number 42572509000.

  • 8. Monthly GDP is measured as a chained volume index of gross value added. See Office for National Statistics.

  • 9. The shift did not quite match the node at the break point, so it is picked up by the implicit difference of the indicators at t = 58: the timing in PcNaive starts at 0, so dates are shifted back, and the trend indicators end at the date shown.

  • 10. See the Climate Econometrics newsletter for summaries of many contributions from members of the Climate Econometrics network.

  • 11. See Exploring the UK’s Carbon Footprint: Consumption emissions over time; Biogeosciences.