
Data Revisions and Real-Time Forecasting

Summary and Keywords

At a given point in time, a forecaster will have access to data on macroeconomic variables that have been subject to different numbers of rounds of revisions, leading to varying degrees of data maturity. Observations referring to the very recent past will be first-release data, or data which has as yet been revised only a few times. Observations referring to a decade ago will typically have been subject to many rounds of revisions. How should the forecaster use the data to generate forecasts of the future? The conventional approach would be to estimate the forecasting model using the latest vintage of data available at that time, implicitly ignoring the differences in data maturity across observations.

The conventional approach for real-time forecasting treats the data as given, that is, it ignores the fact that it will be revised. In some cases, the costs of this approach are point predictions and assessments of forecasting uncertainty that are less accurate than approaches to forecasting that explicitly allow for data revisions. There are several ways to “allow for data revisions,” including modeling the data revisions explicitly, an agnostic or reduced-form approach, and using only largely unrevised data. The choice of method partly depends on whether the aim is to forecast an earlier release or the fully revised values.

Keywords: data revisions, news, noise, unobserved components, vector autoregressions, point predictions, density forecasts

Introduction

The majority of the macroeconomic time series of interest to policymakers and forecasters are subject to data revisions. Monetary policy decisions require accurate measurement of the current state of the economy and the likely future paths for inflation and economic activity. Fiscal policy decisions also rely on current and predicted values for economic activity to enable the calculation of the size of future government deficits. Data revisions to key economic activity aggregates, such as gross domestic product (GDP), add an extra layer of uncertainty to the measurement of current and future economic conditions and to the conduct of economic policy.

Both forecasting and policy analysis have to be undertaken in real time. We look at different approaches to forecasting in real time, when the available information set is restricted to only the vintages of data that were available at each historical period. With the advent of readily available real-time data sets of key macroeconomic indicators (e.g., Croushore & Stark, 2001), we are able to look back and gauge how well different approaches would have fared over the past 50 years or so, which can serve as a guide to what might be expected to work for forecasters today. As an example, by forecasting data revisions we are able to improve the reliability of output gap estimates in real time (see, e.g., Garratt, Lee, Mise, & Shields, 2008, 2009; Clements & Galvão, 2012).

This article considers the implications of the nature of the estimates of macroeconomic data published by national statistics agencies, and the subsequent revisions to these estimates, for forecasting the future values of those variables. Excellent general surveys on data revisions and real-time analysis that go beyond our emphasis on forecasting are provided by Croushore (2006, 2011a, 2011b).

We begin in section “Why Data Revisions?” by explaining why statistical offices revise macroeconomic data, including details of the nature and timing of their updates to initial estimates (i.e., data revisions) for the United States, and for other Organisation for Economic Cooperation and Development (OECD) member countries.

One approach to forecasting when data are subject to revision is to attempt to model the behavior of the statistics office (henceforth, SO) and incorporate it into the forecasting model. Section “Forecasting Methods and a Model of the Behavior of the Statistical Office” discusses some simple models of the behavior of the SO in terms of its processing of source data to generate initial estimates of macroeconomic variables, and how these models lead to published data inheriting the properties of news or noise revisions. Initially, we assume a very simple revisions process, namely that the second estimate (equivalently, first revision) reveals the truth. Later on, we describe the practical implementation of the approach suggested by Kishor and Koenig (2012) for data subject to multiple revisions. This approach requires a model for the true process, data revisions, and the application of the Kalman Filter. Section “Data Revisions and Forecasting With DSGE Models” considers data revisions in the context of forecasting with dynamic stochastic general equilibrium (DSGE) models. DSGE models are typically central bankers’ model of choice, especially for policy analysis and increasingly for forecasting.

The data revisions modeling approaches in sections “Forecasting Methods and a Model of the Behavior of the Statistical Office” and “Data Revisions and Forecasting With DSGE Models” rely on state-space models and a set of unobserved components. In Section “Vintage-Based VARs: Models in Terms of Observables” we survey approaches for modeling data revisions and forecasting that do not require the estimation of unobserved components. Section “Single-Equation Approaches: EOS and RTV” considers simple single-equation approaches to forecasting and includes the “traditional approach,” which amounts to effectively ignoring revisions. The scope of our survey is also widened to consider density forecasts in addition to point forecasts.

With the aim of providing guidance on which modeling approach to choose when forecasting with data subject to revision, section “Evidence on the Performance of Alternative Methods of Forecasting” reviews some of the empirical evidence on the impact of data revisions on forecasting performance and on the relative performance of some of the approaches discussed in this article. Section “Conclusion” offers some concluding remarks.

Why Data Revisions?

Why are macroeconomic data revised? This can perhaps be best understood with reference to the US Bureau of Economic Analysis (BEA) timetable for data releases.1 The BEA publishes its first or “advance” estimates of quarterly national accounts data about a month after the end of the quarter in question. These estimates are necessarily based on only partial source data, and are subject to revision as more complete data becomes available. They are then revised twice more at monthly intervals. These two monthly revisions were formerly known as the “preliminary” and “final” estimates, but are now simply the second and third estimates. As described by Landefeld, Seskin, and Fraumeni (2008), 25% of the GDP components at the time of the release of the first estimate are trend-based data obtained from extrapolations supported by related indicator series. The proportion of trend-based data in the second and third estimates is 23% and 13%, respectively. Hence these revisions typically reflect the availability of more complete source data. The series is then subject to three annual rounds of revisions (in the third quarters of the year)2 to incorporate new annual source data into the estimates. Finally, comprehensive or benchmark revisions make use of major periodic source data as well as methodological and conceptual improvements.3

In terms of the question we posed—why data revisions?—the need for timely estimates of recent economic developments means that estimates must be produced long before all the underlying data have been collected. As indicated, the collection of source data and the refinement of official statistics ought perhaps to be viewed as an ongoing process.

Table 1. Real GDP Data Vintages—Snapshot

| DATE  | 13Q1    | 13Q2    | 13Q3    | 13Q4    | 14Q1    | 14Q2    | 14Q3    | 14Q4    | 15Q1    |
|-------|---------|---------|---------|---------|---------|---------|---------|---------|---------|
| 47:Q1 | 1770.7  | 1770.7  | 1932.6  | 1932.6  | 1932.6  | 1932.6  | 1934.5  | 1934.5  | 1934.5  |
| 47:Q2 | 1768.0  | 1768.0  | 1930.4  | 1930.4  | 1930.4  | 1930.4  | 1932.3  | 1932.3  | 1932.3  |
| 47:Q3 | 1766.5  | 1766.5  | 1928.4  | 1928.4  | 1928.4  | 1928.4  | 1930.3  | 1930.3  | 1930.3  |
| 47:Q4 | 1793.3  | 1793.3  | 1958.8  | 1958.8  | 1958.8  | 1958.8  | 1960.7  | 1960.7  | 1960.7  |
| 48:Q1 | 1821.8  | 1821.8  | 1987.6  | 1987.6  | 1987.6  | 1987.6  | 1989.5  | 1989.5  | 1989.5  |
| 48:Q2 | 1855.3  | 1855.3  | 2019.9  | 2019.9  | 2019.9  | 2019.9  | 2021.9  | 2021.9  | 2021.9  |
| 48:Q3 | 1865.3  | 1865.3  | 2031.2  | 2031.2  | 2031.2  | 2031.2  | 2033.2  | 2033.2  | 2033.2  |
| 48:Q4 | 1868.2  | 1868.2  | 2033.3  | 2033.3  | 2033.3  | 2033.3  | 2035.3  | 2035.3  | 2035.3  |
| ...   | ...     | ...     | ...     | ...     | ...     | ...     | ...     | ...     | ...     |
| 12:Q4 | 13647.6 | 13665.4 | 15539.6 | 15539.6 | 15539.6 | 15539.6 | 15433.7 | 15433.7 | 15433.7 |
| 13:Q1 | #N/A    | 13750.1 | 15583.9 | 15583.9 | 15583.9 | 15583.9 | 15538.4 | 15538.4 | 15538.4 |
| 13:Q2 | #N/A    | #N/A    | 15648.7 | 15679.7 | 15679.7 | 15679.7 | 15606.6 | 15606.6 | 15606.6 |
| 13:Q3 | #N/A    | #N/A    | #N/A    | 15790.1 | 15839.3 | 15839.3 | 15779.9 | 15779.9 | 15779.9 |
| 13:Q4 | #N/A    | #N/A    | #N/A    | #N/A    | 15965.6 | 15942.3 | 15916.2 | 15916.2 | 15916.2 |
| 14:Q1 | #N/A    | #N/A    | #N/A    | #N/A    | #N/A    | 15946.6 | 15831.7 | 15831.7 | 15831.7 |
| 14:Q2 | #N/A    | #N/A    | #N/A    | #N/A    | #N/A    | #N/A    | 15985.7 | 16010.4 | 16010.4 |
| 14:Q3 | #N/A    | #N/A    | #N/A    | #N/A    | #N/A    | #N/A    | #N/A    | 16150.6 | 16205.6 |
| 14:Q4 | #N/A    | #N/A    | #N/A    | #N/A    | #N/A    | #N/A    | #N/A    | #N/A    | 16311.6 |

Consider now the quarterly vintages recorded in Croushore and Stark's (2001) Real Time Data Set for Macroeconomists (RTDSM). Table 1 illustrates an excerpt of the real-time data set for US real GDP.4 This resource has greatly facilitated real-time data analysis. The first available estimate for any quarter in the RTDSM is the "advance" estimate, denoted by $y_t^{t+1}$. The second quarterly estimate for quarter $t$ is denoted by $y_t^{t+2}$, and so on. The subscript indicates the reference quarter and the superscript the quarterly vintage (or estimate). In the case of US BEA data, the data will typically then remain unchanged from one vintage to the next unless the vintage contains an annual revision (a third-quarter vintage) or a benchmark revision. In practice, a small modification to this simple seasonal pattern may occur: when benchmark revisions are anticipated to be published in January, annual revisions may not be published in the previous July.5

Table 1 depicts only the vintages from 2013Q1 to 2015Q1. Quarterly vintages for real GDP are available back to 1965. The first period for which data exist is 1947Q1, and for each vintage data are available up to one quarter before the vintage, reflecting the delay of one quarter in publishing data (when the vintages are recorded at the quarterly frequency, as here). Note that we drop the observations from 1949Q1 to 2012Q3, inclusive, to save space. To illustrate the patterns of revision described in the preceding paragraph, consider the 2013Q4 vintage estimate of the 2013Q3 observation. This is the first estimate of reference quarter 2013Q3, which is 15790.1. In 2014Q1 this estimate is revised up to 15839.3 (a 0.3% increase). The estimate remains unaltered in 2014Q2 and is then revised down to 15779.9 in the 2014Q3 annual revision.
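To make the revision arithmetic concrete, the percentage revision between two vintage estimates of the same reference quarter can be computed directly from the levels in Table 1. A minimal sketch in Python (the figures are the 2013Q3 entries quoted above):

```python
# Percentage revision to the 2013Q3 observation between the 2013Q4 and
# 2014Q1 vintages, using the Table 1 entries (levels of real GDP).
first_estimate = 15790.1   # 2013Q4-vintage estimate of 2013Q3
second_estimate = 15839.3  # 2014Q1-vintage estimate of 2013Q3

revision_pct = 100 * (second_estimate / first_estimate - 1)
print(f"Revision: {revision_pct:.2f}%")   # roughly 0.3%, as noted in the text
```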

Our focus will be on quarterly revisions to quarterly data, primarily to keep the notation simple. But as shown by Clements and Galvão (2017b), monthly vintages can also be modeled and forecasted. It is generally the case that the models we discuss here are of the growth rates (or differences of logs) of the variable (such as GDP), and the revisions are then the differences between vintage estimates of the growth rates. This means that we are dealing with variables that are integrated of order zero, assuming that (the log of) real GDP is integrated of order one. An exception arises when we discuss the model of Garratt et al. (2008) in section “The Garratt et al. Model.”

We have described the institutional nature of the revisions to US national accounts data. The task of providing timely estimates based on incomplete source data is common to all government statistics offices, so data revisions are not specific to the United States. Zwijnenburg (2015) analyzed the revisions to national accounts data in 18 OECD countries in terms of regular revisions and benchmark revisions. Regular revisions are as described earlier, and are normally the focus of interest as they are likely to be more amenable to modeling. These revisions result from the updating of the data set used to compute the earlier estimates and are present in all 18 OECD countries. The mean revisions for most countries are not statistically significantly different from zero—data revisions normally have no effect on the unconditional mean of quarterly growth rates. However, Zwijnenburg (2015) has shown that revisions can be sizable, with a mean absolute revision of 0.18 (averaged across countries over the 1993–2014 period) for revisions made up to 5 months after the initial release, and of 0.40 for revisions made up to 3 years.

Forecasting Methods and a Model of the Behavior of the Statistical Office

As mentioned earlier, regular data revisions can be related to the operational behavior of the government statistical office (SO). A model of this behavior is provided by Kishor and Koenig (2012), who drew on earlier contributions by Howrey (1978) and Sargent (1989). Kishor and Koenig provide models of the behavior of the SO that directly suggest ways of forecasting the true values of the series. Before considering models of SO behavior, it will be useful to make a distinction between news and noise revisions.

News and Noise Data Revisions

Following Mankiw and Shapiro (1986), data revisions are sometimes characterized as news or noise. Data revisions are news when they add new information and noise when they reduce measurement error. If data revisions are noise, they can be predicted based on the current estimate. Mankiw and Shapiro (1986) and Faust, Rogers, and Wright (2005) provided empirical evidence that data revisions to US real GDP are largely news. Aruoba (2008) and Corradi, Fernandez, and Swanson (2009) provided recent extensions to testing for the properties of data revisions, which may lead to more nuanced findings, but the broad classification of each series as being subject to news or noise revisions in the simple setting below is a good starting point.

The standard tests for news and noise regress a revision on either the earlier estimate or on the later estimate. For example, consider the revision between the first estimate, $y_t^{t+1}$, and the data available some three-and-a-half years later, $y_t^{t+14}$ (chosen to include the three rounds of annual revisions). One can test for news and noise revisions using, respectively:

$$y_t^{t+14} - y_t^{t+1} = \alpha + \beta_{ne}\, y_t^{t+1} + \omega_t$$

and

$$y_t^{t+14} - y_t^{t+1} = \alpha + \beta_{no}\, y_t^{t+14} + \omega_t.$$

If we reject $\beta_{ne}=0$, then we reject the null hypothesis that data revisions are news, because we are able to predict revisions from knowledge of the earlier estimate, $y_t^{t+1}$. If we reject $\beta_{no}=0$, then we reject the null hypothesis that data revisions are noise, because the revisions are correlated with the final estimate, $y_t^{t+14}$. If we reject both null hypotheses, then data revisions are neither news nor noise. If we fail to reject both hypotheses, we conclude that the data are not informative regarding the news/noise dichotomy.
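These regressions are straightforward to run with standard tools. The sketch below is purely illustrative: the "first" and "mature" estimates are simulated (here with pure noise revisions), HAC standard errors are one reasonable choice for inference, and none of the variable names come from the literature.

```python
# Illustrative news and noise regressions on simulated estimates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 200
mature = rng.normal(size=T)                        # stand-in for y_t^{t+14}
first = mature + rng.normal(scale=0.5, size=T)     # noisy first estimate y_t^{t+1}
revision = mature - first                          # y_t^{t+14} - y_t^{t+1}

# News test: regress the revision on the earlier estimate.
news = sm.OLS(revision, sm.add_constant(first)).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
# Noise test: regress the revision on the later estimate.
noise = sm.OLS(revision, sm.add_constant(mature)).fit(cov_type="HAC", cov_kwds={"maxlags": 4})

print("beta_ne, p-value:", news.params[1], news.pvalues[1])
print("beta_no, p-value:", noise.params[1], noise.pvalues[1])
# Revisions are pure noise by construction, so beta_ne = 0 should be rejected
# while beta_no = 0 should not be.
```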

Of course, the properties of revisions may depend on the choice of the earlier and later estimates, may vary over time, and need not give a definitive classification. The definition of efficiency underlying news is in terms of an information set consisting only of the initial estimate, leaving open the possibility that revisions may be predictable using initial estimates of other related variables (or other sources of information, as in Clements & Galvão, 2017b) so that the property of revisions being news, narrowly defined, may be less useful in multivariate settings.

To illustrate, using the standard tests, Clements (2017, p. 427, Table 2) compared the 1st estimates to the 15th estimates for the observations 1970Q2 to 2007Q2 for a number of variables. He found that 9 of the 25 variables analyzed had data revisions that appeared to be news, in that $\beta_{ne}=0$ was not rejected, but $\beta_{no}=0$ was rejected. Seven of the 25 variables were found to have noise revisions ($\beta_{ne}=0$ rejected, but $\beta_{no}=0$ not rejected).

Rather than considering the revisions between an early estimate and a later estimate (such as the preliminary and fully revised data), Swanson and van Dijk (2006) considered the entire revision history and identified the point at which revisions become “rational” (or unpredictable). They also considered the properties of revisions separately in expansions and contractions.

Model of Statistical Office Behavior

Suppose that $\tilde{y}_t$ is the true period-$t$ value of the quarterly growth rate of a macroeconomic variable subject to revisions and that it follows a simple AR(1):

$$\tilde{y}_t = f\tilde{y}_{t-1} + v_t. \quad (1)$$

There is, however, a delay in the publication of the true/revised values $\tilde{y}_t$, such that we assume $\tilde{y}_t$ is not observed or published until $t+2$, that is, $y_t^{t+2} = \tilde{y}_t$. We relax this assumption when implementing the Kishor and Koenig (2012) modeling approach in practice in section "The Kishor and Koenig Approach in Practice."

In period $t+1$, the SO observes source data $w_t^{t+1} = \tilde{y}_t + \eta_t$, where $\eta_t$ is $iid(0, \sigma_\eta^2)$. Based on $w_t^{t+1}$ (and possibly $\tilde{y}_{t-1}, \tilde{y}_{t-2}, \ldots; w_{t-1}^{t}, \ldots$), the SO produces its first estimate of $y_t$, denoted $y_t^{t+1}$. Private forecasters do not observe $w_t^{t+1}$, unless the SO sets $y_t^{t+1} = w_t^{t+1}$.

From the perspective of a professional forecaster (PF) using only data published by the SO, the goal is to forecast future true values $\tilde{y}_T, \tilde{y}_{T+1}, \tilde{y}_{T+2}, \ldots$ based on period $T+1$ information, $y_T^{T+1}$, $y_{T-1}^{T+1}$ ($=\tilde{y}_{T-1}$), and so on. Note that the superscripts denote when the value is published, while the subscripts refer to the reference quarter, or the time period to which the data refer.6

The PF considers a possible strategy that consists of the following stages:

  1. Estimate the model in Equation 1 over $t=2$ to $T-1$ using $\{\tilde{y}_t\}$. The model is estimated solely on the true data, and because of the publication delay assumed above, the estimation sample runs only to $T-1$ (as the observation for reference period $T$ is a first estimate).

  2. Obtain an estimate $\hat{y}_T$. This is an estimate of the true value of the variable at period $T$, and how this should be done depends on how the SO chooses $y_T^{T+1}$, as discussed below.

  3. Then compute forecasts as $\tilde{y}_{T+h|T} = \hat{f}^h \hat{y}_T$ for $h=1,2,\ldots$

This strategy avoids what Kishor and Koenig (2012) refer to as the mixing of “apples and oranges.” They state that:

The problem with conventional practice is that it mixes apples and oranges. Data toward the tail end of the sample (oranges) have undergone little or no revision. Data early in the sample (apples) are heavily revised. For most series and typical sample sizes, the heavily revised data dominate estimation. Consequently, the VAR approximates the dynamic relationship between apples and apples. However, the data that are substituted in to the VAR equations to generate a forecast are end-of-sample oranges. Essentially, conventional practice constructs a cider press and then feeds oranges into it, expecting somehow to get cider. (From the Discussion Paper version of Kishor & Koenig, 2012; Federal Reserve Bank of Dallas, Working Paper 0501, 2005, p. 1)

The strategy estimates the model on fully revised data (or "apples") and also conditions the forecast on (an estimate of) a fully revised data point. The conventional approach estimates the model on $\{y_t^{T+1}\}$ for $t=2,\ldots,T$, so that all but the last observation is fully revised, and then sets $\hat{y}_T = y_T^{T+1}$, that is, conditions on a first estimate, so falling afoul of the dictum not to mix apples and oranges.

For Step 2 we need an estimate $\hat{y}_T$. Suppose the SO simply sets $y_T^{T+1} = w_T^{T+1}$, so that:

$$y_T^{T+1} = \tilde{y}_T + \eta_T = f\tilde{y}_{T-1} + v_T + \eta_T,$$

and the PF assumes this is the case. That is, the SO makes no attempt to process the source data in order to generate an efficient first estimate, and the subsequent revision is noise, as defined in the previous section. To see this, note that

$$\mathrm{Cov}\left(\tilde{y}_t,\ \tilde{y}_t - y_t^{t+1}\right) = 0,$$

but that

$$\mathrm{Cov}\left(y_t^{t+1},\ \tilde{y}_t - y_t^{t+1}\right) = -\sigma_\eta^2 \neq 0,$$

which implies that the revision $\tilde{y}_t - y_t^{t+1}$ is uncorrelated with the true value but is predictable from $y_t^{t+1}$.

Suppose now that the SO actively processes the source data. Intuitively, the release of a high value $y_T^{T+1}$ could be because of a large positive $\eta_T$ (noise) or because of a large positive $v_T$ (i.e., signaling a high true value $\tilde{y}_T$), and some weight ought to be attributed to both of these possibilities. If $v_t$ and $\eta_t$ are normally distributed, then the standard signal extraction problem gives the optimal estimate of $\tilde{y}_T$ as:

$$E\left(\tilde{y}_T \mid y_T^{T+1}, y_{T-1}^{T}, \ldots; \tilde{y}_{T-1}, \tilde{y}_{T-2}, \ldots\right) = f\tilde{y}_{T-1} + \gamma_N\left(y_T^{T+1} - f\tilde{y}_{T-1}\right) \quad (2)$$

where $\gamma_N = \sigma_v^2/(\sigma_v^2 + \sigma_\eta^2)$; $\gamma_N$ is the weight on the first estimate, $y_T^{T+1}$, and $(1-\gamma_N)$ is the weight on the model-predicted value, $f\tilde{y}_{T-1}$. When normality does not hold, $\gamma_N$ still has the justification of minimizing

$$E\left[\left(\tilde{y}_T - \left[f\tilde{y}_{T-1} + \gamma\left(y_T^{T+1} - f\tilde{y}_{T-1}\right)\right]\right)^2\right]$$

over $\gamma$.

Setting $\hat{y}_T$ equal to the right-hand side of (2) results in an efficient estimate, in the sense that the induced revision $\tilde{y}_T - \hat{y}_T$ is uncorrelated with $\hat{y}_T$: $\mathrm{Cov}\left(\hat{y}_T,\ \tilde{y}_T - \hat{y}_T\right) = 0$.

Hence the release of the unprocessed source data by the SO leads to inefficient initial estimates, and subsequent revisions are predictable. In our setup, defining $\hat{y}_T$ by (2) in the second step of the three-stage forecasting strategy would result in unpredictable revisions.
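A minimal numerical sketch of the three-step strategy under the simplifying assumptions of this section (an AR(1) truth, the truth revealed one quarter after the first estimate, and a noise variance treated as known) might look as follows; all names and parameter values are illustrative rather than taken from the literature.

```python
# Three-step real-time forecast with signal extraction on the latest observation.
import numpy as np

rng = np.random.default_rng(1)
f_true, sigma_v, sigma_eta = 0.5, 1.0, 0.7
T = 300

# Simulate the true process and noisy first estimates y_t^{t+1} = y~_t + eta_t.
truth = np.zeros(T)
for t in range(1, T):
    truth[t] = f_true * truth[t - 1] + rng.normal(scale=sigma_v)
first = truth + rng.normal(scale=sigma_eta, size=T)

# Step 1: estimate the AR(1) on revised (true) data only, which in real time
# is available up to period T-1 (the last array element is excluded).
y, x = truth[1:-1], truth[:-2]
f_hat = (x @ y) / (x @ x)

# Step 2: signal-extract the true value at T, weighting the first estimate
# against the model prediction with gamma_N (treated as known here).
gamma_N = sigma_v**2 / (sigma_v**2 + sigma_eta**2)
y_T_hat = f_hat * truth[-2] + gamma_N * (first[-1] - f_hat * truth[-2])

# Step 3: iterate the AR(1) forward from the filtered estimate.
forecasts = [f_hat**h * y_T_hat for h in (1, 2, 3, 4)]
print(f_hat, y_T_hat, forecasts)
```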

Extensions

A number of extensions and generalizations are possible. For example, Howrey (1978) allows revisions to be serially correlated:

$$y_t^{t+1} - \tilde{y}_t = h\left(y_{t-1}^{t} - \tilde{y}_{t-1}\right) + w_t \quad (3)$$

and then the estimator of $\tilde{y}_T$ is given by:

$$E\left(\tilde{y}_T \mid y_T^{T+1}, y_{T-1}^{T}, \ldots; \tilde{y}_{T-1}, \tilde{y}_{T-2}, \ldots\right) = f\tilde{y}_{T-1} + \gamma_H\left(y_T^{T+1} - h\left(y_{T-1}^{T} - \tilde{y}_{T-1}\right) - f\tilde{y}_{T-1}\right)$$

with $\gamma_H = \sigma_v^2/(\sigma_v^2 + \sigma_w^2)$.

Suppose instead, as suggested by Sargent (1989), that the SO does not announce $y_t^{t+1} = w_t^{t+1}$, but filters the source data itself, and in so doing inadvertently introduces an additive random error, so that the first announcement $y_t^{t+1}$ is given by:

$$y_t^{t+1} = f\tilde{y}_{t-1} + g\left(w_t^{t+1} - f\tilde{y}_{t-1}\right) + \xi_t,$$

where $g = \gamma_N$ and $\xi_t$ is the idiosyncratic error induced by the filtering.

From the perspective of the PF, who now does not observe $w_t^{t+1}$, we obtain:

$$y_t^{t+1} = f\tilde{y}_{t-1} + g\left(\tilde{y}_t - f\tilde{y}_{t-1}\right) + \xi_t + g\eta_t = f\tilde{y}_{t-1} + g\left(\tilde{y}_t - f\tilde{y}_{t-1}\right) + \varepsilon_t = f\tilde{y}_{t-1} + gv_t + \varepsilon_t,$$

setting $\varepsilon_t = \xi_t + g\eta_t$.

This suggests the optimal forecast of $\tilde{y}_T$ is given by:

$$E\left(\tilde{y}_T \mid y_T^{T+1}, y_{T-1}^{T}, \ldots; \tilde{y}_{T-1}, \tilde{y}_{T-2}, \ldots\right) = f\tilde{y}_{T-1} + \gamma_S\left(y_T^{T+1} - f\tilde{y}_{T-1}\right) \quad (4)$$

where $\gamma_S = g\sigma_v^2/(g^2\sigma_v^2 + \sigma_\varepsilon^2)$ and, as before, $\gamma_S$ minimizes the expected squared deviation between $\tilde{y}_T$ and a linear combination of $f\tilde{y}_{T-1}$ and $y_T^{T+1}$.

Clearly, from (4), the PF should only set $\hat{y}_T = y_T^{T+1}$ when $\gamma_S = 1$. This condition requires that the government filters the source data (as in Sargent, 1989) and does so without error. Then $\xi_t = 0$, so $\sigma_\varepsilon^2 = g^2\sigma_\eta^2$, and using $g = \sigma_v^2/(\sigma_v^2 + \sigma_\eta^2)$, we get $\gamma_S = 1$.

We can write the equations for the true process and the first estimate for the Sargent setup as:

$$\tilde{y}_t = f\tilde{y}_{t-1} + v_t$$
$$y_t^{t+1} = f\tilde{y}_{t-1} + g\left(\tilde{y}_t - f\tilde{y}_{t-1}\right) + (\xi_t + g\eta_t).$$

Combining these, revisions are given by:

$$y_t^{t+1} - \tilde{y}_t = (g-1)v_t + \varepsilon_t$$

so that the error in the revisions equation is correlated with that in the equation for the true process, but revisions are not serially correlated. By contrast, in Howrey’s (1978) setup the errors in the revisions and true process equations are not correlated, but revisions are serially correlated.

Kishor and Koenig (2012) suggest combining these two models by allowing correlated errors and serially correlated revisions and specifying the revisions process as:

$$y_t^{t+1} - \tilde{y}_t = k\left(y_{t-1}^{t} - \tilde{y}_{t-1}\right) + (g-1)v_t + \varepsilon_t. \quad (5)$$

Then the best estimate of $\tilde{y}_T$ is given by:

$$\hat{y}_T = f\tilde{y}_{T-1} + \gamma_K\left(y_T^{T+1} - k\left(y_{T-1}^{T} - \tilde{y}_{T-1}\right) - f\tilde{y}_{T-1}\right), \quad (6)$$

with $\gamma_K = \gamma_S$ and "best" defined following (2) or (4).

As a consequence, the PF could use a model that encompasses the Howrey (1978) and Sargent (1989) specifications and combines Equation 1 for the true process and Equation 5 for the revisions process, written for convenience as:

$$\tilde{y}_t = f\tilde{y}_{t-1} + v_t$$
$$y_t^{t+1} - \tilde{y}_t = k\left(y_{t-1}^{t} - \tilde{y}_{t-1}\right) + \varepsilon_t - (1-g)v_t. \quad (7)$$

Because of the cross-equation correlation in the error terms, efficient estimation of the two-equation system requires the seemingly unrelated regressions estimator (SURE). This is carried out on observations from $t=2$ up to $T-1$. The covariance matrix of $\left(v_t,\ \varepsilon_t - (1-g)v_t\right)$ is given by:

$$Q = \begin{bmatrix} \sigma_v^2 & -(1-g)\sigma_v^2 \\ -(1-g)\sigma_v^2 & \sigma_\varepsilon^2 + (1-g)^2\sigma_v^2 \end{bmatrix}.$$

From the estimates of the elements of $Q$, the parameter $\gamma_K = g\sigma_v^2/(g^2\sigma_v^2 + \sigma_\varepsilon^2)$ in Equation 6 can be computed. Combining this with the estimates of $f$ and $k$ from the system in Equation 7 provides an estimate $\hat{y}_T$, as required by Step 2 of the forecasting strategy.
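The following sketch illustrates the mechanics for the simple two-equation system (7): the two equations are estimated by equation-by-equation OLS as a simple stand-in for SURE, $g$ and the variances are recovered from the residual covariance matrix, and $\hat{y}_T$ is formed from Equation 6. The data are simulated and all names are ours.

```python
# Kishor-Koenig forecast-origin adjustment under the "truth revealed after one
# revision" simplification. OLS is used equation by equation as a simple
# stand-in for SURE; data and parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(2)
f, k, g = 0.5, 0.3, 0.8
sigma_v, sigma_eps = 1.0, 0.5
T = 400

truth = np.zeros(T)
first = np.zeros(T)                    # first estimates y_t^{t+1}
for t in range(1, T):
    v = rng.normal(scale=sigma_v)
    truth[t] = f * truth[t - 1] + v
    # Revision equation (5): y_t^{t+1} - y~_t = k(y_{t-1}^t - y~_{t-1}) + (g-1)v + eps
    first[t] = truth[t] + k * (first[t - 1] - truth[t - 1]) \
               + (g - 1) * v + rng.normal(scale=sigma_eps)

# At forecast origin T+1: the truth is observed up to T-1, first estimates up to T.
y_tilde = truth[: T - 1]
rev = first[: T - 1] - truth[: T - 1]

f_hat = (y_tilde[:-1] @ y_tilde[1:]) / (y_tilde[:-1] @ y_tilde[:-1])
k_hat = (rev[:-1] @ rev[1:]) / (rev[:-1] @ rev[:-1])
res_v = y_tilde[1:] - f_hat * y_tilde[:-1]
res_r = rev[1:] - k_hat * rev[:-1]
Q_hat = np.cov(np.vstack([res_v, res_r]))

# Recover g, sigma_v^2, sigma_eps^2 from Q, then gamma_K and y_T_hat (Eq. 6).
s_v2 = Q_hat[0, 0]
g_hat = 1 + Q_hat[0, 1] / s_v2                    # Q[0,1] = -(1-g) sigma_v^2
s_eps2 = Q_hat[1, 1] - Q_hat[0, 1] ** 2 / s_v2
gamma_K = g_hat * s_v2 / (g_hat**2 * s_v2 + s_eps2)
y_T_hat = f_hat * truth[T - 2] + gamma_K * (
    first[T - 1] - k_hat * (first[T - 2] - truth[T - 2]) - f_hat * truth[T - 2])
print(f_hat, k_hat, g_hat, gamma_K, y_T_hat)
```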

For ease of exposition we have assumed that the first revision reveals the true value, but this clearly needs to be generalized, given that such data typically experience many rounds of revisions. This suggests that at time $T+1$ estimates are required of the true values of not just $\tilde{y}_T$ but of earlier periods, $\tilde{y}_{T-1}, \tilde{y}_{T-2}, \ldots$ as well. The following section describes a practical implementation of the Kishor and Koenig (2012) approach allowing for multiple revisions.

The Kishor and Koenig Approach in Practice

For expositional purposes, we assumed in the previous two sections that the true value $\tilde{y}_t$ is observed two quarters after the observational quarter, that is, $\tilde{y}_t = y_t^{t+2}$. In practice, data are subject to annual revisions that may shape the true values, such that in general $\tilde{y}_t = y_t^{t+q}$. For US data, accommodating the three rounds of annual revisions would require setting $q=14$. The model described in this section is motivated by, and generalizes, the simpler cases described previously.

To implement the Kishor and Koenig (2012) approach in practice, we assume an AR($p$) for the "final data," and equations for $q-1$ rounds of revisions. The key components are the $t+1$-vintage vector $\mathbf{y}^{t+1}$ and the vector of true values $\tilde{\mathbf{y}}_t$:

$$\mathbf{y}^{t+1} = \begin{bmatrix} y_t^{t+1} \\ y_{t-1}^{t+1} \\ y_{t-2}^{t+1} \\ \vdots \\ y_{t-q+1}^{t+1} \end{bmatrix} \quad\text{and}\quad \tilde{\mathbf{y}}_t = \begin{bmatrix} \tilde{y}_t \\ \tilde{y}_{t-1} \\ \tilde{y}_{t-2} \\ \vdots \\ \tilde{y}_{t-q+1} \end{bmatrix}.$$

We assume that $y_{t-q+1}^{t+1} = \tilde{y}_{t-q+1}$, that is, $y_{t-q+1}^{t+1}$ is an efficient estimate of the true value $\tilde{y}_{t-q+1}$. The model can then be written succinctly in state-space form, with measurement and state equations given by:

$$\mathbf{y}^{t+1} = \begin{bmatrix} I_q & I_q \end{bmatrix}\begin{bmatrix} \tilde{\mathbf{y}}_t \\ \mathbf{y}^{t+1} - \tilde{\mathbf{y}}_t \end{bmatrix}, \quad (8)$$

and

$$\begin{bmatrix} \tilde{\mathbf{y}}_t \\ \mathbf{y}^{t+1} - \tilde{\mathbf{y}}_t \end{bmatrix} = \begin{bmatrix} \mathbf{c}_1 \\ \mathbf{c}_2 \end{bmatrix} + \begin{bmatrix} F & 0_{q\times q} \\ 0_{q\times q} & K \end{bmatrix}\begin{bmatrix} \tilde{\mathbf{y}}_{t-1} \\ \mathbf{y}^{t} - \tilde{\mathbf{y}}_{t-1} \end{bmatrix} + \begin{bmatrix} \mathbf{v}_t \\ \boldsymbol{\varepsilon}_t \end{bmatrix}. \quad (9)$$

In Equation 8, $I_q$ denotes an identity matrix of order $q$. The disturbance vectors are $\mathbf{v}_t = (v_{1t}, 0, \ldots, 0)'$ and $\boldsymbol{\varepsilon}_t = (\varepsilon_{1t}, \ldots, \varepsilon_{q-1,t}, 0)'$. The errors in the data revision equations, $\boldsymbol{\varepsilon}_t$, are allowed to be correlated with the disturbance to the true values, $v_{1t}$, as well as being contemporaneously correlated. Stacking the disturbances as $(\mathbf{v}_t', \boldsymbol{\varepsilon}_t')'$, we let $E\left[(\mathbf{v}_t', \boldsymbol{\varepsilon}_t')'(\mathbf{v}_t', \boldsymbol{\varepsilon}_t')\right] = Q$.

The true values $\tilde{\mathbf{y}}_t$ follow an autoregression of order $p$, defined by the first block of Equation 9 with:

$$F = \begin{bmatrix} \mathbf{f} & 0_{1\times(q-p)} \\ I_{q-1} & 0_{(q-1)\times 1} \end{bmatrix},$$

where $\mathbf{f} = (f_1, \ldots, f_p)$ is the $1\times p$ coefficient vector ($p<q$). The matrix $K$ describes the dynamics of the $q-1$ data revisions $\mathbf{y}^{t+1} - \tilde{\mathbf{y}}_t$:

$$K = \begin{bmatrix} k_{1,1} & \cdots & k_{1,q-1} & 0 \\ \vdots & & \vdots & \vdots \\ k_{q-1,1} & \cdots & k_{q-1,q-1} & 0 \\ 0 & \cdots & 0 & 0 \end{bmatrix}.$$

The $q\times 1$ vectors $\mathbf{c}_1$ and $\mathbf{c}_2$ are $\mathbf{c}_1 = (c_1, 0, \ldots, 0)'$ and $\mathbf{c}_2 = (c_{2,1}, \ldots, c_{2,q-1}, 0)'$.

This provides a complete specification of the model and allows for multiple revisions. The unknown parameters $\mathbf{c}_1$, $\mathbf{c}_2$, $F$, $K$, and $Q$ are estimated by SURE. Because $\tilde{\mathbf{y}}_t$ consists solely of final values, given the assumption that $y_{t-q+1}^{t+1+i} = \tilde{y}_{t-q+1}$ for $i \geq 0$, the estimation sample has to end at $t = T-q+1$ when $T+1$ is the forecast-origin data vintage. Application of the Kalman filter then provides estimates of the fully revised values of past observations (i.e., of the post-revision values of current and past observations, $\tilde{y}_{T-q+2}$ up to $\tilde{y}_T$). Forecasts of post-revision future observations $\tilde{y}_{T+1}, \ldots, \tilde{y}_{T+h}$ are obtained by iterating the state equation forward from the estimates obtained from Kalman filtering using all data through vintage $T+1$.
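The block structure of Equations 8 and 9 is straightforward to assemble in code. The sketch below builds the measurement and transition matrices only (estimation and Kalman filtering would sit on top of it); the function name and parameter values are ours.

```python
# Assemble the measurement and transition matrices of Equations 8-9 for the
# multiple-revision Kishor-Koenig model. All values are illustrative.
import numpy as np

def build_kk_system(f, K_block, q):
    """f: length-p AR coefficients for the true process (p < q).
    K_block: (q-1) x (q-1) matrix of revision dynamics.
    Returns (H, Phi): measurement and transition matrices of Eqs. 8-9."""
    p = len(f)
    F = np.zeros((q, q))
    F[0, :p] = f                      # first row: AR coefficients, zero-padded
    F[1:, :-1] = np.eye(q - 1)        # remaining rows shift the state down
    K = np.zeros((q, q))
    K[: q - 1, : q - 1] = K_block     # last row and column are zero
    Phi = np.block([[F, np.zeros((q, q))], [np.zeros((q, q)), K]])
    H = np.hstack([np.eye(q), np.eye(q)])   # vintage vector = truth + revisions
    return H, Phi

H, Phi = build_kk_system(f=[0.4, 0.2], K_block=0.3 * np.eye(3), q=4)
print(H.shape, Phi.shape)   # (4, 8) and (8, 8)
```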

Alternative approaches to dealing with many rounds of data revisions are proposed by Cunningham, Eklund, Jeffery, Kapetanios, and Labhard (2009) and Jacobs and van Norden (2011). The model of Jacobs and van Norden (2011) is discussed in section “A News and Noise Model of Data Revisions.”

Data Revisions and Forecasting With DSGE Models

Dynamic stochastic general equilibrium (DSGE) models are now routinely used for forecasting (see, e.g., Del Negro & Schorfheide, 2013). Just as with the autoregressive model for the true process in section “The Kishor and Koenig Approach in Practice” (or vector autoregressive models more generally), forecasts will need to be generated when only early estimates of the data for the more recent time periods are available.

Galvão (2017) showed how to accomplish this by explicitly modeling data revisions while estimating and forecasting DSGE models. The approach aims to predict revised values of macroeconomic variables and computes density forecasts while making an allowance for data uncertainty. As in Kishor and Koenig (2012), Galvão (2017) estimates the forecasting model—a DSGE model in her case—with final (true, or fully revised) data. Galvão (2017) assumes that $y_t^{t+q} = \tilde{y}_t$, that is, after $q-1$ revisions the true value is revealed. This means that at time $t=T+1$ we only observe true values up to $t=T-q+1$, and consequently a model of data revisions is used to provide estimates of the true values of the data still subject to revision.

Galvão (2017) proposed a one-step method to jointly estimate the parameters of the DSGE model with revised data and the parameters describing the data revisions process. Based on the model, one can compute backcasts and forecasts for the data subject to revision, including their underlying predictive density. A description of her approach follows.

Define $\mathbf{y}_t$ as an $n\times 1$ vector of the endogenous DSGE variables written as deviations from their steady-state values. In practice, $\mathbf{y}_t$ may also include lagged variables. The elements of the vector $\mathbf{y}_t$ need not be observable, and the absence of superscripts is deliberate. Define $\theta$ as the vector of structural parameters. The solution of the DSGE model for a given vector of parameters $\theta$ is written as:

$$\mathbf{y}_t = F(\theta)\mathbf{y}_{t-1} + G(\theta)\mathbf{v}_t, \quad (10)$$

where $\mathbf{v}_t$ is an $r\times 1$ vector of structural shocks, and thus the matrix $G(\theta)$ is $n\times r$. Note also that $\mathbf{v}_t \sim iid\,N(0, Q)$ and that $Q$ is a diagonal matrix because the shocks are regarded as being structural. Equation 10 is the state equation of the state-space representation of the DSGE model.

Define $\tilde{Y}_t = (\tilde{y}_{1,t}, \ldots, \tilde{y}_{m,t})'$ as the $m\times 1$ vector of true values of the endogenous variables in $\mathbf{y}_t$ that are assumed to be observable; typically $m<n$ and $m\leq r$. The Smets and Wouters (2007) medium-sized model has $m=r=7$. The measurement equation is:

$$\tilde{Y}_t = d(\theta) + H(\theta)\mathbf{y}_t, \quad (11)$$

that is, the set of observable variables, such as inflation and output growth, are measured without error, although Galvão (2017) also showed that the approach would work if there were measurement errors.

The DSGE model is estimated using the values observed after $q-1$ rounds of revisions, that is, assuming that $\tilde{Y}_{t-q+1} = Y_{t-q+1}^{t+1}$. Then the measurement equations are:

$$Y_{t-q+1}^{t+1} = d(\theta) + H(\theta)\mathbf{y}_{t-q+1} \quad \text{for } t=1,\ldots,T, \quad (12)$$

and the last $q-1$ observations (that is, $Y_{t-q+2}^{t+1}, \ldots, Y_t^{t+1}$) have to be excluded.

Define the demeaned observed revisions between first releases $Y_t^{t+1}$ and true values $Y_t^{t+q}$ as:

$$rev_t^{t+q,1} = \left(Y_t^{t+1} - Y_t^{t+q}\right) - M_1 \quad \text{for } t=1,\ldots,T-q+1,$$

where $M_1$ is an $m\times 1$ vector of mean revisions. This implies that we observe $T-q+1$ values of the full revision process to a first release at $T+1$, and that the full revision process for observation $t$ is only observed at $t+q$ because of the statistics office data release schedule. In general, for the $k$th release, the (demeaned) remaining revisions up to the true values are:

$$rev_t^{t+q+1-k,\,k} = \left(Y_t^{t+k} - Y_t^{t+q}\right) - M_k \quad \text{for } t=1,\ldots,T-q+1 \text{ and } k=1,\ldots,q-1.$$

The release-based approach in Galvão (2017) augments the measurement Equation 12 to include a time series of first releases, second releases, and so on, as:

$$\begin{bmatrix} Y_t^{t+1} \\ Y_{t-1}^{t+1} \\ \vdots \\ Y_{t-q+1}^{t+1} \end{bmatrix} = \begin{bmatrix} d(\theta)+M_1 \\ d(\theta)+M_2 \\ \vdots \\ d(\theta) \end{bmatrix} + \begin{bmatrix} H(\theta) & 0_m & \cdots & 0_m & I_m & 0_m & \cdots & 0_m \\ 0_m & H(\theta) & \cdots & 0_m & 0_m & I_m & \cdots & 0_m \\ \vdots & & \ddots & \vdots & \vdots & & \ddots & \vdots \\ 0_m & 0_m & \cdots & H(\theta) & 0_m & 0_m & \cdots & 0_m \end{bmatrix}\begin{bmatrix} \mathbf{y}_t \\ \mathbf{y}_{t-1} \\ \vdots \\ \mathbf{y}_{t-q+1} \\ rev_t^1 \\ rev_{t-1}^2 \\ \vdots \\ rev_{t-q+2}^{q-1} \end{bmatrix} \quad (13)$$

for $t=1,\ldots,T$ and:

$$rev_t^k = \left(Y_t^{t+k} - \tilde{Y}_t\right) - M_k \quad \text{for } k=1,\ldots,q-1,$$

where the $m\times 1$ vectors $M_k$ allow for nonzero mean data revisions.

The state equations are augmented by data revision processes as:

$$rev_t^k = K_k\, rev_{t-1}^k + \xi_t^k + A_k\mathbf{v}_t, \qquad \xi_t^k \sim N(0, R_k), \quad (14)$$

where the serial correlation allows for predictable revisions if the $m\times m$ matrix $K_k$ is nonzero. The innovation term $\xi_t^k$ allows for data revisions that are caused by a reduction of measurement errors; these innovations are assumed to be uncorrelated across variables, so $R_k$ is diagonal. The last term, $A_k\mathbf{v}_t$, implies that the data revisions may be caused by new information not available at the time of the current release but included in the revised data used to compute the complete effects of the structural shocks. Because the same vector of structural shocks $\mathbf{v}_t$ may lead to revisions to each of the variables in $Y_t^{t+k}$, the processes $rev_{1,t}^k, \ldots, rev_{m,t}^k$ may be correlated.

Galvão (2017) proposed a Metropolis-in-Gibbs approach to jointly estimate the DSGE parameters $\{\theta, Q\}$ and the parameters of the revision process:

$$\{M_1, \ldots, M_{q-1},\ K_1, \ldots, K_{q-1},\ A_1, \ldots, A_{q-1},\ R_1, \ldots, R_{q-1}\}.$$

She showed that forecasts of future values of consumption and investment growth using a Smets and Wouters (2007) DSGE model are improved by using the release-based instead of the conventional approach, which employs only the last vintage of data.

Vintage-Based VARs: Models in Terms of Observables

Whereas the Kishor and Koenig (2012) and Galvão (2017) models include unobserved components (namely, the true values of variables at the time when only earlier estimates are available), other modeling approaches consider the relationships between the observables directly. A natural way to model a range of vintages and not just a single release is to use a vector autoregression (VAR; see Sims, 1980). A number of VAR-type models based only on observables have been proposed for modeling revisions.

VARs were originally used to model the dynamic relationships among interconnected macro variables without the need for “incredible” identifying restrictions and hard-to-defend assumptions about exogeneity (see Sims, 1980). It was soon recognized that such models would have many parameters to estimate for reasonable numbers of series and lagged values, so Bayesian methods were sometimes adopted (see, e.g., Doan, Litterman, & Sims, 1984; Litterman, 1986; Bańbura, Giannone, & Reichlin, 2010, for a more recent contribution). In terms of modeling data subject to revision, the observable variables are the data defined by both the release data and the reference quarter, and the potential for having highly parameterized models arises.

The Garratt et al. Model

The first VAR-type model we consider is that of Garratt et al. (2008). Following Patterson (1995), Garratt et al. (2008) worked in terms of the level (of the log) of a variable (e.g., output, denoted by $Y$). The variable is assumed to be integrated of order one (I(1); see, e.g., Banerjee, Dolado, Galbraith, & Hendry, 1993), and it is assumed that different vintage estimates are cointegrated, such that data revisions are integrated of order zero (written I(0)). So, for example, $Y_{t-1}^{t+1}$ and $Y_{t-1}^{t}$ are both I(1)—these are the second and first estimates of reference quarter $t-1$, respectively. But $Y_{t-1}^{t+1} - Y_{t-1}^{t}$, the revision between the first and second estimates of the period $t-1$ value, is I(0). Garratt et al. model a vector of variables comprising three elements, $Z^{t+1} = \left(Y_t^{t+1} - Y_{t-1}^{t},\ Y_{t-1}^{t+1} - Y_{t-1}^{t},\ Y_{t-2}^{t+1} - Y_{t-2}^{t}\right)'$. The first element of $Z^{t+1}$ is a difference across vintage and observation, and the subsequent terms are revisions to past data. The inclusion of two revisions reflects the view that a revision horizon of 2 is appropriate, in the sense that revisions such as $Y_{t-2-j}^{t+1} - Y_{t-2-j}^{t}$ for $j>0$ are supposed to be largely unpredictable and hence are not included in the vector of variables to be modeled. This assumption also serves to limit the dimensionality of the system to be estimated.

Garratt et al. (2008) relate Zt+1 to two lags of itself, where the lagging is applied to both the vintage and reference quarter scripts:

$$Z^{t+1} = c + \Phi_1 Z^{t} + \Phi_2 Z^{t-1} + \varepsilon_t, \quad (15)$$

where $Z^{t+1-i} = \left(Y_{t-i}^{t+1-i} - Y_{t-1-i}^{t-i},\ Y_{t-1-i}^{t+1-i} - Y_{t-1-i}^{t-i},\ Y_{t-2-i}^{t+1-i} - Y_{t-2-i}^{t-i}\right)'$, for $i=0,1,2$. The $\Phi_i$ are $3\times 3$ matrices of coefficients with third columns consisting solely of zeros (see Garratt et al., 2008, for details); that is, their approach implies specific restrictions on a VAR.

A disadvantage of the formulation in Equation 15 is that shifts in the levels of Y due to base-year changes (or to other definitional changes) at times of benchmark revisions cannot be easily handled by the Garratt et al. model. These arise because the model is based on differences in Y across time periods and vintages. Differencing-across-vintages means that level shifts between vintages affect the resulting series. The level-shift components of the benchmark revisions are removed from the real-time data set prior to the formulation and estimation of models such as Equation 15.

The Vintage-Based VARs

The problems created by re-basings of the data can be circumvented by instead specifying the model in same-vintage growth rates (see, e.g., Clements & Galvão, 2012, 2013a; Carriero, Clements, & Galvão, 2015). For example, let $y_t^{t+1} = 400\left(Y_t^{t+1} - Y_{t-1}^{t+1}\right)$ be the (approximate) quarterly percentage change (at an annual rate) for period $t$ computed using data vintage $t+1$. Because the growth rate is calculated between two data points from the same vintage, level shifts or base-year changes will have no effect (to the extent that the change is simply a rescaling of the data).

Suppose that in addition to modeling $y_t^{t+1}$, the first estimate of the growth rate for period $t$, we also wish to model the revisions for the next $q-1$ quarters. This can be accomplished by modeling the vintage $t+1$ values of observations $t-q+1$ through $t$ as a vintage VAR (V-VAR):

$$\mathbf{y}^{t+1} = c + \sum_{i=1}^{p}\Gamma_i\,\mathbf{y}^{t+1-i} + \varepsilon^{t+1}, \quad (16)$$

where $\mathbf{y}^{t+1} = \left(y_t^{t+1}, y_{t-1}^{t+1}, \ldots, y_{t-q+1}^{t+1}\right)'$, $\mathbf{y}^{t+1-i} = \left(y_{t-i}^{t+1-i}, y_{t-1-i}^{t+1-i}, \ldots, y_{t-q+1-i}^{t+1-i}\right)'$, $c$ is $q\times 1$, $\Gamma_i$ is $q\times q$, and $\varepsilon^{t+1}$ is a $q\times 1$ vector of disturbances. The first element of $\mathbf{y}^{t+1}$ is the new observation $y_t^{t+1}$, and the subsequent elements are the revised estimates of past observations, $y_{t-1}^{t+1}, \ldots, y_{t-q+1}^{t+1}$.

When q is relatively large, the autoregressive order p may be set to a low value, such as p=1 (see, e.g., Clements & Galvão, 2013a). Nevertheless, if q is large, say, q=14, in order to capture the three rounds of annual revisions to which US national accounts data are subject, there will be 14 free coefficients to estimate in each of the 14 equations when p=1 (plus an intercept in each).

The number of coefficients to be estimated can be reduced by restricting the model, by supposing that, after a small number of revisions, further revisions are unpredictable. Suppose that after $n-1$ revisions, the next estimate $y_t^{t+n+1}$ is an efficient forecast, in the sense that the revision from $y_t^{t+n}$ to $y_t^{t+n+1}$ is unpredictable, that is, $E\left[\left(y_t^{t+n+1} - y_t^{t+n}\right)\mid \mathbf{y}^{t+n}\right] = 0$, whereas $E\left[\left(y_t^{t+i+1} - y_t^{t+i}\right)\mid \mathbf{y}^{t+i}\right] \neq 0$ for $i<n$. We can impose this restriction on the VAR, where it translates to $E\left(y_{t-n}^{t+1}\mid y_{t-n}^{t}\right) = y_{t-n}^{t}$. This is achieved by specifying $\Gamma_1$ and $\Gamma_i$ ($i=2,\ldots,p$) in Equation 16 as:

$$\tilde{\Gamma}_1 = \begin{bmatrix} \gamma_{n\times q} \\ 0_{(q-n)\times(n-1)} \;\; I_{(q-n)\times(q-n)} \;\; 0_{(q-n)\times 1} \end{bmatrix}, \qquad \tilde{\Gamma}_i = \begin{bmatrix} \gamma_{i,\,n\times q} \\ 0 \end{bmatrix}. \quad (17)$$

If $n$ is set equal to 2, for example, then values after the first revision are assumed to be efficient forecasts (for the United States, this would correspond to the BEA estimate published two quarters after the reference quarter constituting an efficient forecast). An unrestricted intercept is included in each equation to accommodate nonzero mean revisions. We refer to this model as the news-restricted vintage-based VAR, or RV-VAR.
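The restriction in Equation 17 is also easy to impose programmatically; a minimal sketch (names and values are ours):

```python
# Build the news-restricted first-lag matrix of Equation 17: equations n+1,...,q
# simply carry forward the previous vintage's estimate of the same observation.
import numpy as np

def restricted_gamma1(gamma_top, q):
    """gamma_top: n x q unrestricted block for the first n equations."""
    n = gamma_top.shape[0]
    G = np.zeros((q, q))
    G[:n, :] = gamma_top
    # Lower block: [0_{(q-n)x(n-1)}  I_{q-n}  0_{(q-n)x1}]
    G[n:, n - 1 : q - 1] = np.eye(q - n)
    return G

q, n = 6, 2
G1 = restricted_gamma1(np.full((n, q), 0.1), q)
print(G1)
```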

Clements and Galvão (2013a) compared the forecasting performance of the RV-VAR with that of the V-VAR. They also considered a periodic specification that captures the seasonal nature of some data revisions: the annual rounds of revisions that occur in July of each year (for US data).

Clements and Galvão (2013a) discussed the interpretation of the forecasts generated by the V-VAR and related models. Consider the forecast-origin vintage $T+1$. At this time, the information set will include all the data vintages up to and including the time-$T+1$ vintage, that is, $\mathbf{Y}^{t+1}$ for $t=1,2,\ldots,T$, where $\mathbf{Y}^{t+1} = \{\ldots, y_{t-1}^{t+1}, y_t^{t+1}\}$. The $h$-step-ahead forecast of the vector $\mathbf{y}^{T+1+h}$ is defined as the conditional expectation, given the model and the information set:

$$\mathbf{y}^{T+1+h|T+1} \equiv E\left(\mathbf{y}^{T+1+h}\mid \mathbf{Y}^{T+1}, \mathbf{Y}^{T}, \ldots\right).$$

The elements of the vector $\mathbf{y}^{T+1+h|T+1}$ are $\left(y_{T+h}^{T+1+h|T+1}, \ldots, y_{T+h-q+1}^{T+1+h|T+1}\right)$, and thus provide forecasts of the first estimate of $y_{T+h}$, of the second estimate of $y_{T+h-1}$, and so on, down to a forecast of the $q$th estimate of reference period $T+h-q+1$.

Suppose we require forecasts of the "true" (revised) values, that is, $\tilde{y}_{T-1}, \tilde{y}_T, \tilde{y}_{T+1}, \tilde{y}_{T+2}, \ldots$. As before, we assume that for a reasonably large $q$ (e.g., $q=14$, chosen to include the annual revisions), we have $\tilde{y}_t = y_t^{t+q}$. That is, we can equate the true values with the values available after $q-1$ revisions.7 We need then to consider forecasts of a subset of the elements of $\mathbf{y}^{T+1+h|T+1}$, namely, the last element of each vector for $h=1,2,3,\ldots$. For the forecast origin $T+1$, this gives the following set of forecasts: $y_{T+2-q}^{T+2|T+1}, y_{T+3-q}^{T+3|T+1}, \ldots, y_{T+1+h-q}^{T+1+h|T+1}$, all of which are $q$th estimates. Some of these estimates will relate to the past and others to the present or future, relative to the vintage origin $T+1$. When $h+(1-q)\leq 0$, we have forecasts of past observations (or backcasts), $\tilde{y}_T, \tilde{y}_{T-1}, \tilde{y}_{T-2}$, and so on, and for $h+(1-q)>0$ forecasts of fully revised future observations: $\tilde{y}_{T+1}, \tilde{y}_{T+2}$, etc.
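A short sketch of this bookkeeping: iterate a V-VAR(1) forward from the forecast-origin vintage and read off the last element of each forecast vector, labeling it a backcast or a forecast of a fully revised value. The coefficient matrix and the origin vector below are placeholders; in practice they would come from estimation and from the latest data vintage.

```python
# Iterate a V-VAR(1) forward and extract the q-th (assumed fully revised)
# estimates. Coefficients and the forecast-origin vector are placeholders.
import numpy as np

q, H = 4, 6
rng = np.random.default_rng(3)
c = np.zeros(q)
Gamma1 = 0.3 * np.eye(q) + 0.05 * rng.normal(size=(q, q))   # placeholder
y = rng.normal(size=q)    # vintage T+1 vector (y_T^{T+1}, ..., y_{T-q+1}^{T+1})

for h in range(1, H + 1):
    y = c + Gamma1 @ y                  # forecast of the vintage T+1+h vector
    qth_estimate = y[-1]                # forecast of y_{T+1+h-q}^{T+1+h}
    ref = h + 1 - q                     # reference period relative to T
    label = "backcast of a past observation" if ref <= 0 else "forecast of a revised future value"
    print(f"h={h}: q-th estimate of period T{ref:+d} ({label}): {qth_estimate:.3f}")
```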

Single-Equation Approaches: EOS and RTV

Single-equation approaches have also been considered, and empirically have been found to work relatively well. We use the statistical framework of Jacobs and van Norden (2011) to show that, in principle, the traditional approach is not the best way to estimate an autoregressive model for forecasting. The statistical framework of Jacobs and van Norden (2011) separately identifies news and noise data revisions, as defined in section “News and Noise Data Revisions.” Their model can be used to estimate the importance of the news and noise contributions to the data revisions of a particular series, and generally this would be facilitated by specifying a relatively small number of revisions. Otherwise it may be difficult to identify the separate news and noise components with any precision. In this section we use their model as a coherent statistical framework for deriving the properties of single-equation approaches, including ignoring data revisions when forecasting. The clear demarcation of revisions into news and noise allows us to determine the implications of each for the relative forecast performance of the single-equation approaches, at least in population (that is, abstracting from model selection and parameter estimation issues).

Section “A News and Noise Model of Data Revisions” describes the statistical framework. Sections “Estimating AR Forecasting Models Using EOS Data” and “Estimating AR Forecasting Models Using RTV Data” provide the properties of the traditional approach and an approach suggested by Koenig, Dolmas, and Piger (2003) and show that the latter is optimal, at least in population. Section “Density Forecasting” discusses the properties of interval and density forecasts derived from the two single-equation models.

A News and Noise Model of Data Revisions

The Jacobs and van Norden (2011) statistical framework for modeling data revisions is based directly on the news/noise distinction. Each release is set equal to the true value plus an error, or errors, where the errors correspond to news or noise and are unobserved. So, for example, at period $t+s$ the SO releases an estimate of the value of $y$ for reference period $t$, which we denote $y_t^{t+s}$, written as:

$$y_t^{t+s} = \tilde{y}_t + v_t^{t+s} + \varepsilon_t^{t+s},$$

where $\tilde{y}_t$ is the true value and $v_t^{t+s}$ and $\varepsilon_t^{t+s}$ are the news and noise components. We allow for up to $q$ releases, with $s=1,\ldots,q$. For any given $s$, one or other of the news and noise components may be absent.

Jacobs and van Norden (2011) stack the $q$ releases of $y_t$, namely $y_t^{t+1}, \ldots, y_t^{t+q}$, in the vector $\mathbf{y}_t = \left(y_t^{t+1}, \ldots, y_t^{t+q}\right)'$, and similarly $\boldsymbol{\varepsilon}_t = \left(\varepsilon_t^{t+1}, \ldots, \varepsilon_t^{t+q}\right)'$ and $\mathbf{v}_t = \left(v_t^{t+1}, \ldots, v_t^{t+q}\right)'$, so that:

$$\mathbf{y}_t = \mathbf{i}\tilde{y}_t + \mathbf{v}_t + \boldsymbol{\varepsilon}_t, \quad (18)$$

where $\mathbf{i}$ is a $q$-vector of ones. In order that the news revisions are not correlated with the earlier release, namely that $\mathrm{Cov}\left(v_t^{t+s}, y_t^{t+s}\right) = 0$, where $v_t^{t+s} = y_t^{t+s} - \tilde{y}_t$ in the absence of noise, we need to assume a process for $\tilde{y}_t$ that includes the news components. For example, if $\tilde{y}_t$ is assumed to follow an AR($p$), say, with iid disturbances $R_1\eta_{1t}$, then we need to add in the sum of $q$ news components $v_{i,t}$:

$$\tilde{y}_t = \rho_0 + \sum_{i=1}^{p}\rho_i\tilde{y}_{t-i} + R_1\eta_{1t} + \sum_{i=1}^{q} v_{i,t}. \quad (19)$$

The $v_{i,t}$ are specified as $v_{i,t} = \sigma_{vi}\eta_{2t,i}$ (for $i=1,\ldots,q$), and both $\eta_{1t}$ and $\eta_{2t,i}$ are $iid(0,1)$.

The news and noise components of each vintage in $\mathbf{y}_t$ are:

$$\mathbf{v}_t = \begin{bmatrix} v_t^{t+1} \\ v_t^{t+2} \\ \vdots \\ v_t^{t+q} \end{bmatrix} = -\begin{bmatrix} \sum_{i=1}^{q} v_{i,t} \\ \sum_{i=2}^{q} v_{i,t} \\ \vdots \\ v_{q,t} \end{bmatrix}, \qquad \boldsymbol{\varepsilon}_t = \begin{bmatrix} \varepsilon_t^{t+1} \\ \varepsilon_t^{t+2} \\ \vdots \\ \varepsilon_t^{t+q} \end{bmatrix} = \begin{bmatrix} \sigma_{\varepsilon 1}\eta_{3t,1} \\ \sigma_{\varepsilon 2}\eta_{3t,2} \\ \vdots \\ \sigma_{\varepsilon q}\eta_{3t,q} \end{bmatrix}, \quad (20)$$

where $\eta_{3t,i}$ is $iid(0,1)$ (the minus sign ensures that early releases exclude news that has yet to arrive). The shocks are also mutually independent, that is, if $\eta_t = \left[\eta_{1t}, \boldsymbol{\eta}_{2t}', \boldsymbol{\eta}_{3t}'\right]'$, then $E(\eta_t)=0$, with $E(\eta_t\eta_t')=I$.

To see how this setup delivers appropriately defined news and noise revisions, consider a few illustrative cases. The first estimate of $y_t$ is $y_t^{t+1} = \tilde{y}_t + v_t^{t+1} + \varepsilon_t^{t+1} = \rho_0 + \sum_{i=1}^{p}\rho_i\tilde{y}_{t-i} + R_1\eta_{1t} + \sigma_{\varepsilon 1}\eta_{3t,1}$. This does not include any news component. The second estimate is $y_t^{t+2} = \tilde{y}_t + v_t^{t+2} + \varepsilon_t^{t+2} = \rho_0 + \sum_{i=1}^{p}\rho_i\tilde{y}_{t-i} + R_1\eta_{1t} + \sigma_{v1}\eta_{2t,1} + \sigma_{\varepsilon 2}\eta_{3t,2}$. Suppose there is no noise, so $\eta_{3t,1} = \eta_{3t,2} = 0$. Then clearly $y_t^{t+2}$ is a more accurate estimate of $\tilde{y}_t$ than $y_t^{t+1}$, as it includes the news $\sigma_{v1}\eta_{2t,1}$. Further, the revision between $y_t^{t+2}$ and the true value $\tilde{y}_t$ is uncorrelated with $y_t^{t+2}$, so that $y_t^{t+2}$ is an efficient estimate:

$$\mathrm{Cov}\left(y_t^{t+2} - \tilde{y}_t,\ y_t^{t+2}\right) = \mathrm{Cov}\left(-\sum_{i=2}^{q} v_{i,t},\ v_{1,t}\right) = 0.$$

Suppose now that there is only noise, so $v_t^{t+2} = 0$ but $\varepsilon_t^{t+2} \neq 0$. It follows immediately that the revisions induced by the second estimate are predictable using $y_t^{t+2}$:

$$\mathrm{Cov}\left(y_t^{t+2} - \tilde{y}_t,\ y_t^{t+2}\right) = \sigma_{\varepsilon 2}^2.$$

News revisions imply that $\mathrm{var}\left(y_t^{t+1}\right) < \mathrm{var}\left(y_t^{t+q}\right)$, while noise revisions imply that $\mathrm{var}\left(y_t^{t+1}\right) > \mathrm{var}\left(y_t^{t+q}\right)$, assuming that later estimates are less "noisy" ($\sigma_{\varepsilon 1} > \sigma_{\varepsilon q}$). If $\sigma_{vq}=0$ and $\sigma_{\varepsilon q}=0$, the $q$th released value is the true value, $y_t^{t+q} = \tilde{y}_t$. The assumption that $\tilde{y}_t$ is an I(0) stationary process ensures that $\mathbf{y}_t$ is a stationary process from Equation 18, as both the news and noise terms are stationary. This is a reasonable assumption when the model is applied to an I(0) transformation of the data, such as growth rates.
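These variance orderings are easy to verify by simulation. The sketch below uses a stripped-down version of the framework with $q=2$ releases (one news and one noise component) and illustrative parameter values.

```python
# Simulate the news/noise framework with q = 2 releases and check the variance
# ordering of first vs. final estimates. Parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(4)
T, rho1, R1 = 50_000, 0.5, 1.0

def simulate(sigma_v, sigma_e1):
    v = sigma_v * rng.normal(size=T)        # news incorporated at release 2
    e1 = sigma_e1 * rng.normal(size=T)      # noise in the first release
    eta = R1 * rng.normal(size=T)
    truth = np.zeros(T)
    for t in range(1, T):
        truth[t] = rho1 * truth[t - 1] + eta[t] + v[t]
    first = truth - v + e1      # first release: truth minus unseen news, plus noise
    final = truth               # release q reveals the truth in this stripped-down case
    return first.var(), final.var()

print("pure news :", simulate(sigma_v=1.0, sigma_e1=0.0))   # var(first) < var(final)
print("pure noise:", simulate(sigma_v=0.0, sigma_e1=1.0))   # var(first) > var(final)
```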

This model implies that both noise and news revisions are zero mean, so that the unconditional means of the underlying series $\{\tilde{y}_t\}$ and the observed data $\{\mathbf{y}_t\}$ are equal, at $\rho_0\left(1-\rho(1)\right)^{-1}$, where $\rho(1)=\sum_{i=1}^{p}\rho_i$. Nonzero mean revisions can easily be accommodated. Assume that each news term is instead $v_{i,t} = \mu_{vi} + \sigma_{vi}\eta_{2t,i}$ and the noise components are $\varepsilon_t^{t+i} = \mu_{\varepsilon i} + \sigma_{\varepsilon i}\eta_{3t,i}$. The true process is now:

$$\tilde{y}_t = \left[\rho_0 + \sum_{i=1}^{q}\mu_{vi}\right] + \sum_{i=1}^{p}\rho_i\tilde{y}_{t-i} + R_1\eta_{1t} + \sum_{i=1}^{q}\sigma_{vi}\eta_{2t,i}, \quad (21)$$

since now $\sum_{i=1}^{q} v_{i,t} = \sum_{i=1}^{q}\mu_{vi} + \sum_{i=1}^{q}\sigma_{vi}\eta_{2t,i}$. The news and noise processes of each vintage are:

$$\mathbf{v}_t = -\begin{bmatrix} \sum_{i=1}^{q}\mu_{vi} \\ \sum_{i=2}^{q}\mu_{vi} \\ \vdots \\ \mu_{vq} \end{bmatrix} - \begin{bmatrix} \sum_{i=1}^{q}\sigma_{vi}\eta_{2t,i} \\ \sum_{i=2}^{q}\sigma_{vi}\eta_{2t,i} \\ \vdots \\ \sigma_{vq}\eta_{2t,q} \end{bmatrix}, \qquad \boldsymbol{\varepsilon}_t = \begin{bmatrix} \mu_{\varepsilon 1} \\ \mu_{\varepsilon 2} \\ \vdots \\ \mu_{\varepsilon q} \end{bmatrix} + \begin{bmatrix} \sigma_{\varepsilon 1}\eta_{3t,1} \\ \sigma_{\varepsilon 2}\eta_{3t,2} \\ \vdots \\ \sigma_{\varepsilon q}\eta_{3t,q} \end{bmatrix}. \quad (22)$$

The statistical model can be cast in state-space form using Equation 18 as the measurement equation and combining Equations 21 and 22 to obtain the transition equation. The parameters can be estimated by maximum likelihood using the Kalman filter as described by Jacobs and van Norden (2011).

Estimating AR Forecasting Models Using EOS Data

Assuming the variable we wish to forecast can be described by the model set out in the previous section, we begin with the standard or conventional approach, which effectively ignores the data revisions. We suppose the forecasting model is an autoregression. The conventional approach estimates this model on the vintage of data available at the forecast origin. From the discussion in section “Forecasting Methods and a Model of the Behavior of the Statistical Office,” such an approach is expected to be non-optimal, as it mixes apples and oranges. The framework outlined in the previous section allows us to determine why the conventional approach is not able to deliver optimal forecasts in real time.

Consider forecasting at time $T+1$. The $T+1$ vintage of data contains data up to $T$, $\{\ldots, y_{T-1}^{T+1}, y_T^{T+1}\}$, and the model is estimated on these data, termed end-of-sample (EOS) data by Koenig et al. (2003). For an AR($p$) we have:

$$y_t^{T+1} = \alpha_0 + \sum_{i=1}^{p}\alpha_i y_{t-i}^{T+1} + e_{t,EOS}, \quad (23)$$

where the unknown parameters are estimated on the observations $t=p+1,\ldots,T$.

Writing the model in matrix notation:

$$Y^{T+1} = \mathbf{i}\alpha_0 + \mathbf{Y}_{-1}^{T+1}\boldsymbol{\alpha} + \text{error}, \quad (24)$$

where $\mathbf{Y}_{-1}^{T+1} = \left[Y_{-1}^{T+1}, \ldots, Y_{-p}^{T+1}\right]$ stacks the lag vectors as columns, $\mathbf{i}$ is a $(T-p)$-vector of 1's, and the vectors of observations $Y^{T+1}$ and $Y_{-i}^{T+1}$, $i=1,\ldots,p$, are:

$$Y^{T+1} = \left[y_{p+1}^{T+1}, \ldots, y_{T-1}^{T+1}, y_T^{T+1}\right]', \qquad Y_{-i}^{T+1} = \left[y_{p+1-i}^{T+1}, \ldots, y_{T-i-1}^{T+1}, y_{T-i}^{T+1}\right]'$$

for $i=1,\ldots,p$.

Clements and Galvão (2013b) derive the population values of the least-squares estimator of the parameters in Equation 23, when the data are generated by Equations 18–20, as:

$$\boldsymbol{\alpha} = \left(\Sigma_{\tilde{y}} + \Sigma_{\underline{v}} + \Sigma_{\tilde{y}\underline{v}} + \Sigma_{\tilde{y}\underline{v}}' + \Sigma_{\underline{\varepsilon}}\right)^{-1}\left(\Sigma_{\tilde{y}} + \Sigma_{\tilde{y}\underline{v}}'\right)\boldsymbol{\rho}, \qquad \alpha_0 = \left(1 - \boldsymbol{\alpha}'\mathbf{i}\right)\mu_{\tilde{y}}, \quad (25)$$

where $\Sigma_{\underline{v}}$ and $\Sigma_{\underline{\varepsilon}}$ are second-moment matrices of the news and noise components, $\Sigma_{\tilde{y}\underline{v}}$ is the second-moment matrix between the news and the underlying process $\tilde{y}_t$, and $\mu_{\tilde{y}} \equiv E(\tilde{y}_t)$ (see Clements & Galvão, 2013b, for details).

Clements and Galvão (2013b) also showed that these parameter values are not optimal, that is, they are not the values that minimize the real-time out-of-sample expected squared forecast error when the forecast is conditioned upon the forecast-origin vintage values of the data (as is standard practice). That is, if the forecast is given by $\phi_0 + \boldsymbol{\phi}'\mathbf{y}_T^{T+1}$, where $\mathbf{y}_T^{T+1} = \left(y_T^{T+1}, \ldots, y_{T-p+1}^{T+1}\right)'$, then setting $\phi_0 = \alpha_0$ and $\boldsymbol{\phi} = \boldsymbol{\alpha}$ does not minimize the expected squared forecast error, whether the aim is to forecast $y_{T+1}^{T+2}$ (the first estimate) or some later vintage estimate. This is perhaps not surprising, given that the majority of the data underlying the estimation are mature or revised data, whereas $\mathbf{y}_T^{T+1}$ contains the first estimate of $y_T$, the second estimate of $y_{T-1}$, and so on. In terms of the cider press analogy mentioned earlier, Equation 23 is estimated on (mostly) mature data—the "apples"—but the forecast is conditioned on $\mathbf{y}_T^{T+1}$—the "oranges."

We let $\phi_0$ and $\boldsymbol{\phi}$ denote the values of the parameters in the forecast function $\phi_0 + \boldsymbol{\phi}'\mathbf{y}_T^{T+1}$ that minimize the expected squared error for forecasting $y_{T+1}^{T+2}$.

Estimating AR Forecasting Models Using RTV Data

Building on Koenig et al. (2003), Clements and Galvão (2013b) showed that estimating the AR model using real-time-vintage (RTV) data delivers optimal estimators of the forecasting model, in population, when the forecast is to be conditioned on $\mathbf{y}_T^{T+1}$. RTV estimates the model:

$$y_t^{t+1} = \beta_0 + \sum_{i=1}^{p}\beta_i y_{t-i}^{t} + e_{t,RTV} \quad (26)$$

on observations $t=p+1,\ldots,T$, where, in contrast to EOS, the superscript denoting the vintage is not fixed at the latest-available vintage. In matrix notation:

$$Y^{m} = \mathbf{i}\beta_0 + \mathbf{Y}_{-1}^{m}\boldsymbol{\beta} + \text{error},$$

where $Y^{m}$ and $\mathbf{Y}_{-1}^{m} = \left[Y_{-1}^{m}, \ldots, Y_{-p}^{m}\right]$ are given by:

$$Y^{m} = \left[y_{p+1}^{p+2}, \ldots, y_{T-1}^{T}, y_T^{T+1}\right]', \qquad Y_{-i}^{m} = \left[y_{p+1-i}^{p+1}, \ldots, y_{T-i-1}^{T-1}, y_{T-i}^{T}\right]', \quad i=1,\ldots,p.$$

$Y^{m}$ and $Y_{-i}^{m}$ contain data of a constant maturity, as indicated by the superscript $m$. All the observations in $Y^{m}$ are first estimates, and those in $Y_{-i}^{m}$ are $i$th estimates. This means that, for example, the first row of $\left[Y^{m} : \mathbf{Y}_{-1}^{m}\right]$ contains the first estimate of $y_{p+1}$, the first estimate of $y_p$, the second estimate of $y_{p-1}$, and so on. The last row has the same maturities: the first estimate of $y_T$, the first estimate of $y_{T-1}$, the second estimate of $y_{T-2}$, and so on.

Estimation of Equation 26 results in estimates of the $\beta$ parameters that minimize the expected squared error, at least in terms of forecasting the first estimate, $y_{T+1}^{T+2}$, and simple adjustments can be applied for forecasting later estimates when revisions have nonzero means; that is, $\beta_0 = \phi_0$ and $\boldsymbol{\beta} = \boldsymbol{\phi}$.

Clements and Galvão (2013b) provided the general formula for the parameter estimator. We illustrate with the special case of an AR(1) for the true process and for general revisions that are a combination of news and noise when an AR(1) forecasting model is used. The optimal value of the AR parameter is:

$$\beta_1^* = \frac{\rho_1\left(\sigma_{\tilde{y}}^2 - \sigma_v^2\right)}{\sigma_{\tilde{y}}^2 - \sigma_v^2 + \sigma_{\varepsilon 1}^2},$$

where $\sigma_v^2 \equiv \sum_{i=1}^{q}\sigma_{vi}^2$, $\sigma_{\tilde{y}}^2 = \mathrm{Var}(\tilde{y}_t)$, and $\beta_1^*$ is the population value of the parameter from RTV. When revisions are pure news ($\sigma_{\varepsilon 1}^2 = 0$):

$$\beta_{1,news} = \rho_1, \qquad \beta_{0,news} = \rho_0,$$

so RTV returns the population parameters of the true process. However, $\beta_{1,news} = \rho_1$ only holds for $p=1$. Generally speaking, when there are news revisions the parameter vector of the underlying process $\tilde{y}_t$ (i.e., $\boldsymbol{\rho}$) is not optimal from a forecasting perspective when the forecasts are conditioned on early estimates.

For pure noise ($\sigma_v^2 = 0$):

$$\beta_{1,noise} = \rho_1\frac{\sigma_{\tilde{y}}^2}{\sigma_{\tilde{y}}^2 + \sigma_{\varepsilon 1}^2}, \quad (27)$$

so that $\beta_{1,noise} < \rho_1$ when $\sigma_{\varepsilon 1}^2 \neq 0$.

Consider now EOS. When revisions are news, we can show that the EOS estimator simplifies such that $\alpha_1 = \rho_1$, matching the optimal value, but this is true only for the special case of $p=1$.

Under noise:

$$\alpha_1 = \rho_1\frac{\sigma_{\tilde{y}}^2}{\sigma_{\tilde{y}}^2 + \sigma_{\varepsilon q}^2}. \quad (28)$$

An immediate implication is that $|\alpha_1| > |\phi_1|$ if earlier revisions are larger than later revisions, as might be expected (compare Equations 28 and 27 when $\sigma_{\varepsilon 1}^2 > \sigma_{\varepsilon q}^2$). Note that if $\sigma_{\varepsilon q}^2 = 0$, so that the truth is eventually revealed when there is noise, then $\alpha_1 = \rho_1$ for a large estimation sample. Even so, $\rho_1$ is not the parameter value that minimizes the real-time squared forecast loss ($\phi_1 \neq \rho_1$).
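A quick numerical check of Equations 27 and 28 with illustrative values makes the comparison concrete:

```python
# Population AR(1) coefficients under pure noise revisions (Eqs. 27-28),
# with illustrative parameter values.
rho1, var_true = 0.5, 1.0
var_e1, var_eq = 0.2, 0.05      # first-release noise larger than final-release noise

beta1_noise = rho1 * var_true / (var_true + var_e1)   # RTV / optimal value (Eq. 27)
alpha1 = rho1 * var_true / (var_true + var_eq)        # EOS population value (Eq. 28)
print(beta1_noise, alpha1)      # about 0.417 vs. 0.476: |alpha_1| > |phi_1|
```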

The difference between EOS and RTV can be visualized in terms of Table 1. Before estimation, the data are converted to quarterly growth rates within each column. EOS uses the column of data corresponding to the latest vintage of data available at the forecast origin. By way of contrast, RTV uses the elements in the diagonals of the data array for the left-hand-side and explanatory variables.

For example, for forecasting in 2015Q1 with an AR(2), the last RTV observation is $y_T^{T+1} = 100\left[\ln\left(Y_{14Q4}^{15Q1}\right) - \ln\left(Y_{14Q3}^{15Q1}\right)\right]$, while the lags are taken from the previous vintage as $y_{T-1}^{T} = 100\left[\ln\left(Y_{14Q3}^{14Q4}\right) - \ln\left(Y_{14Q2}^{14Q4}\right)\right]$ and $y_{T-2}^{T} = 100\left[\ln\left(Y_{14Q2}^{14Q4}\right) - \ln\left(Y_{14Q1}^{14Q4}\right)\right]$. A similar approach is followed for all $t=p+1,\ldots,T$. EOS would use data only from the 2015Q1 vintage for both the left-side values and the lags.
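The construction of the two data sets can be sketched as a function of a real-time data array. The array layout and names below are ours: rt[v, t] holds the growth rate for reference quarter t as published in vintage v, with NaN where the observation has not yet been published.

```python
# Build EOS and RTV regression data from a real-time array of growth rates.
import numpy as np

def eos_rtv_data(rt, p):
    V = rt.shape[0]          # vintages 0, ..., V-1; latest vintage is V-1
    T = V - 1                # the latest vintage publishes quarters 0, ..., T-1
    # EOS: everything from the latest vintage (a single column of Table 1).
    y_eos = np.array([rt[V - 1, t] for t in range(p, T)])
    X_eos = np.array([[rt[V - 1, t - i] for i in range(1, p + 1)] for t in range(p, T)])
    # RTV: first estimates on the left, lags from the preceding vintage
    # (the diagonals of Table 1), so maturities are constant across rows.
    y_rtv = np.array([rt[t + 1, t] for t in range(p, T)])
    X_rtv = np.array([[rt[t, t - i] for i in range(1, p + 1)] for t in range(p, T)])
    return (y_eos, X_eos), (y_rtv, X_rtv)

# Tiny artificial example: 8 vintages of a single series, one-quarter delay.
rng = np.random.default_rng(5)
V = 8
rt = np.full((V, V), np.nan)
for v in range(V):
    rt[v, :v] = rng.normal(size=v)     # vintage v publishes quarters 0, ..., v-1
(eos_y, eos_X), (rtv_y, rtv_X) = eos_rtv_data(rt, p=2)
print(eos_y.shape, rtv_y.shape)
```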

Finally, RTV is related to the vintage-based VAR model. Equation 26 corresponds to the first equation of the system of equations given by Equation 16. When the lag order of the VAR is one and the dimension of the VAR vector is $p$, there is an exact equivalence between the RTV model and the first equation of the VAR. RTV directly models the first release, $y_t^{t+1}$, as does the first equation of the VAR; in addition, the VAR models the later estimates beyond the first.

Density Forecasting

Most of the literature has looked at first-moment prediction, but a few papers consider the impact on second-moment prediction and the calculation of prediction intervals and density forecasts. Clements (2017) and Clements and Galvão (2017a) compared RTV and EOS in the context of computing predictive interval and predictive densities for short-horizon forecasting. They were mainly interested in correctly measuring the forecasting uncertainty around one-step-ahead forecasts computed in real time, with the aim of predicting the first-release value.

A simple model of data revisions suffices to show that RTV delivers predictive densities that match the true underlying densities, while EOS delivers predictive densities that are too wide when data revisions are news but too narrow when they are noise.

The model for data revisions is a simplified version of the one in section "A News and Noise Model of Data Revisions." It assumes the true (i.e., fully revised) values $\tilde{y}_t$ follow an AR(1):

$$\tilde{y}_t = \rho_1\tilde{y}_{t-1} + \eta_t + v_t, \qquad |\rho_1| < 1, \quad (29)$$

where $\eta_t$ is the underlying disturbance, $v_t$ is a news revision with variance $\sigma_v^2$, and the first estimate is given by:

$$y_t^{t+1} = \tilde{y}_t - v_t + \varepsilon_t, \quad (30)$$

with $y_t^{t+n} = \tilde{y}_t$ for $n=2,3,\ldots$. Here $\varepsilon_t$ is a noise revision with variance $\sigma_\varepsilon^2$. Then the revision $y_t^{t+2} - y_t^{t+1} \equiv \tilde{y}_t - y_t^{t+1} = v_t - \varepsilon_t$ consists of a noise component (when $\sigma_\varepsilon^2 \neq 0$) and a news component (when $\sigma_v^2 \neq 0$). $\eta_t$, $v_t$, and $\varepsilon_t$ are assumed to be mutually uncorrelated, zero-mean random variables.

Clements (2017) supposed the $\eta_t$ are homoscedastic, $\mathrm{var}(\eta_t) = \sigma_\eta^2$. Clements and Galvão (2017a) also allowed for conditional heteroscedasticity—$\mathrm{var}(\eta_t)$ follows an ARCH(1), or a GARCH(1,1), or a stochastic volatility AR(1) process. That macroeconomic volatility may be time-varying has been reported by Clark (2011), Clark and Ravazzolo (2014), and Diebold, Schorfheide, and Shin (2016), inter alia.

In the homoscedastic case, Clements (2017) showed that prediction intervals calculated in the standard way (i.e., following Box and Jenkins, 1970) using EOS are too wide when revisions are news, because the predictive variance is overestimated, and that the reverse situation holds when revisions are noise. RTV delivers correctly sized intervals. Clements and Galvão (2017a) extended these results to the heteroscedastic case. When there is conditional heteroscedasticity, they showed that estimating the forecasting model by RTV with an appropriate model for the variance of the errors will result in well-calibrated one-step-ahead predictive densities.
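A small Monte Carlo experiment illustrates the interval result for the news case, using the simple model of Equations 29 and 30 (no noise, truth revealed at the second estimate) and nominal 90% one-step intervals computed in the standard residual-based way. All parameter values are illustrative.

```python
# Coverage of nominal 90% one-step intervals for the first release under pure
# news revisions: EOS vs. RTV estimation of an AR(1) without intercept.
import numpy as np

rng = np.random.default_rng(6)
rho1, s_eta, s_v = 0.5, 1.0, 0.8
T, reps, z90 = 200, 2000, 1.645
cover = {"EOS": 0, "RTV": 0}

for _ in range(reps):
    eta = s_eta * rng.normal(size=T + 2)
    v = s_v * rng.normal(size=T + 2)
    truth = np.zeros(T + 2)
    for t in range(1, T + 2):
        truth[t] = rho1 * truth[t - 1] + eta[t] + v[t]
    first = truth - v                        # Eq. 30 with news only (no noise)

    # Vintage T+1: truth known up to T-1, first estimate of T available.
    eos_y = np.r_[truth[1:T], first[T]]      # latest-vintage column (EOS)
    eos_x = truth[0:T]
    rtv_y = first[1:T + 1]                   # first releases (constant maturity)
    rtv_x = first[0:T]                       # lags from the preceding vintage

    for name, (y, x) in {"EOS": (eos_y, eos_x), "RTV": (rtv_y, rtv_x)}.items():
        b = (x @ y) / (x @ x)                # AR(1) slope, no intercept
        s = np.std(y - b * x, ddof=1)        # standard residual-based interval width
        fc = b * first[T]                    # both condition on the latest release
        if abs(first[T + 1] - fc) <= z90 * s:
            cover[name] += 1

print({k: round(c / reps, 3) for k, c in cover.items()})
# Under news revisions, EOS coverage exceeds the nominal 90% (intervals too
# wide), while RTV coverage is close to the nominal level.
```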

Evidence on the Performance of Alternative Methods of Forecasting

The relative performance of the different models and methods surveyed in this article will likely depend on the properties of the series under consideration, and in particular on the nature of the data revisions to that series. It may also depend on whether the aim is to forecast an early release or a more mature vintage, such as the fully revised value. Finally, relative performance may depend on whether point forecasts, density forecasts, or prediction intervals are required. In this section we briefly summarize some of the findings in the literature to provide guidance on which method might be better in a particular instance.

Clements and Galvão (2013a), Carriero et al. (2015), and Galvão (2017) provided evaluations of different approaches when the goal is to forecast the revised values, denoted $\tilde{y}_{T+1}, \tilde{y}_{T+2}, \ldots, \tilde{y}_{T+h}$. Clements and Galvão (2013a) compared the forecasting performance of the Kishor and Koenig (2012) approach and the Garratt et al. (2008) approach with the vintage-based VAR (VB-VAR) for US GDP growth and inflation. The models are univariate in the sense of modeling a single variable, but allow up to 14 releases of the variable in question. The findings favor the VB-VAR, which delivers more accurate point forecasts for both variables. Carriero et al. (2015) showed that forecasting accuracy can be improved further by their Bayesian approach, which better controls the adverse effects of parameter uncertainty in such large VAR models. Their approach also accommodates more than one variable, allowing for cross-equation dynamics between revisions to different macroeconomic variables (such as output growth and inflation in their application).

While these papers considered point forecasts of fully revised values, Galvão (2017) evaluated the density forecasts of the Smets and Wouters (2007) DSGE model when data revisions are modeled as in Kishor and Koenig (2012), and compared the findings to those obtained using the conventional approach. Galvão found gains in terms of logarithmic scores from the release-based approach for predicting the revised values of macroeconomic variables such as consumption and investment growth.

As well as forecasting the revised values of future outcomes, there are times when forecasts of the revised values of current and past observations are required. As an example, Clements and Galvão (2012) showed that improved real-time estimates of output and inflation gaps result from the use of VB-VAR model “backcasts.”

Clements and Galvão (2013b, 2017b) and Clements (2017) presented forecast comparisons in terms of predicting initial releases, that is, $y_{T+1}^{T+2}, y_{T+2}^{T+3}, \ldots, y_{T+h}^{T+h+1}$. Clements and Galvão (2013b) described some of the circumstances under which RTV might be expected to outperform EOS. In particular, their findings suggested that larger gains might occur when there are explanatory variables (i.e., in autoregressive-distributed lag models) as opposed to purely autoregressive models. They also suggested that ignoring data revisions, by using EOS rather than RTV, might attract a greater penalty in terms of accuracy when the estimation sample is small, the process is more persistent, and revisions are news. Clements and Galvão (2013b) also found that more elaborate approaches, such as Kishor and Koenig (2012), do not outperform RTV for modeling US output growth and inflation. This parallels findings in the (non-real-time) forecasting literature that more elaborate, complicated models do not necessarily outperform their simpler rivals.

Clements (2017) suggested that larger relative gains might accrue to RTV when the goal is to provide well-calibrated prediction intervals. Clements and Galvão (2017a) extended the results to variables subject to time-varying conditional variance and found that RTV provides more accurate density forecasts for nominal national account variables.

Conclusion

We have not attempted an exhaustive survey of the literature on forecasting in real time, and our main focus has been on US data. Nevertheless, we have attempted to give the reader an introduction to the types of approaches that have been proposed, and have worked through some of these approaches in sufficient detail to lay their workings bare. We have summarized some of the evidence on the relative forecasting performance of the different approaches, but as in the macro-forecasting literature more generally, rankings of methods are unlikely to remain the same across different variables, sample periods, and so on. In any specific instance it would seem sensible to consider a number of approaches.

The more complex models, which attempt to jointly model the true (or revised) values along with the revisions process, are not necessarily superior to simpler approaches in terms of forecasting. This is not surprising, given that simple models are often found to fare well in the forecasting literature. There are a number of possible explanations. The potential of the more complex models might be negated by the need to specify and estimate them on relatively small historical samples. As an example, Clements and Galvão (2013b) provided a Monte Carlo study in which data were simulated from either a vintage-based VAR model or the Kishor and Koenig (2012) model, and the forecasting performance of RTV and EOS was compared to that of an estimated version of the model that generated the data. The authors found that the estimation sample has to be relatively large for forecasts from the model assumed as the data-generating process to beat the simpler models. Another possible explanation stresses nonconstancy or structural breaks in the process being forecast, and suggests that simpler models might exhibit greater adaptivity or be more robust (see, e.g., Castle, Clements, & Hendry, 2016). Although the model of Kishor and Koenig (2012) assumes the processes for the data revisions are constant over time, this may not hold in practice, in which case the potential advantages of such models may dissipate, especially given the difficulty of estimating the revision dynamics precisely unless the sample is large.

References

Aruoba, S. B. (2008). Data revisions are not well-behaved. Journal of Money, Credit and Banking, 40, 319–340.

Bańbura, M., Giannone, D., & Reichlin, L. (2010). Large Bayesian vector autoregressions. Journal of Applied Econometrics, 25(1), 71–92.

Banerjee, A., Dolado, J. J., Galbraith, J. W., & Hendry, D. F. (1993). Co-integration, error correction and the econometric analysis of non-stationary data. Oxford, U.K.: Oxford University Press.

Box, G. E. P., & Jenkins, G. M. (1970). Time series analysis, forecasting and control. San Francisco, CA: Holden-Day.

Carriero, A., Clements, M. P., & Galvão, A. B. (2015). Forecasting with Bayesian multivariate vintage-based VARs. International Journal of Forecasting, 31(3), 757–768.

Castle, J. L., Clements, M. P., & Hendry, D. F. (2016). An overview of forecasting facing breaks. Journal of Business Cycle Research, 12(1), 3–23.

Clark, T. E. (2011). Real-time density forecasts from Bayesian vector autoregressions with stochastic volatility. Journal of Business and Economic Statistics, 29, 327–341.

Clark, T. E., & Ravazzolo, F. (2014). Macroeconomic forecasting performance under alternative specifications of time-varying volatility. Journal of Applied Econometrics, 30(4), 551–575.

Clements, M. P. (2017). Assessing macro uncertainty in real-time when data are subject to revision. Journal of Business & Economic Statistics, 35(3), 420–433.

Clements, M. P., & Galvão, A. B. (2012). Improving real-time estimates of output gaps and inflation trends with multiple-vintage VAR models. Journal of Business & Economic Statistics, 30(4), 554–562.

Clements, M. P., & Galvão, A. B. (2013a). Forecasting with vector autoregressive models of data vintages: US output growth and inflation. International Journal of Forecasting, 29(4), 698–714.

Clements, M. P., & Galvão, A. B. (2013b). Real-time forecasting of inflation and output growth with autoregressive models in the presence of data revisions. Journal of Applied Econometrics, 28(3), 458–477.

Clements, M. P., & Galvão, A. B. (2017a). Data revisions and real-time probabilistic forecasting of macroeconomic variables (Discussion paper ICM-2017-01). Reading, U.K.: Henley Business School, Reading University.

Clements, M. P., & Galvão, A. B. (2017b). Predicting early data revisions to US GDP and the effects of releases on equity markets. Journal of Business and Economic Statistics, 35(3), 389–406.

Corradi, V., Fernandez, A., & Swanson, N. R. (2009). Information in the revision process of real-time datasets. Journal of Business and Economic Statistics, 27, 455–467.

Croushore, D. (2006). Forecasting with real-time macroeconomic data. In G. Elliott, C. Granger, & A. Timmermann (Eds.), Handbook of economic forecasting (Vol. 1, pp. 961–982). Amsterdam, The Netherlands: North-Holland.

Croushore, D. (2011a). Forecasting with real-time data vintages. In M. P. Clements & D. F. Hendry (Eds.), The Oxford handbook of economic forecasting (pp. 247–267). Oxford, NY: Oxford University Press.

Croushore, D. (2011b). Frontiers of real-time data analysis. Journal of Economic Literature, 49, 72–100.

Croushore, D., & Stark, T. (2001). A real-time data set for macroeconomists. Journal of Econometrics, 105(1), 111–130.

Cunningham, A., Eklund, J., Jeffery, C., Kapetanios, G., & Labhard, V. (2009). A state space approach to extracting the signal from uncertain data. Journal of Business & Economic Statistics, 30, 173–180.

Del Negro, M., & Schorfheide, F. (2013). DSGE model-based forecasting. In G. Elliott & A. Timmermann (Eds.), Handbook of economic forecasting (Vol. 2, pp. 57–140). Amsterdam, The Netherlands: North-Holland.

Diebold, F. X., Schorfheide, F., & Shin, M. (2016). Real-time forecast evaluation of DSGE models with stochastic volatility. Mimeo, University of Pennsylvania.

Doan, T., Litterman, R., & Sims, C. A. (1984). Forecasting and conditional projection using realistic prior distributions. Econometric Reviews, 3, 1–100.

Faust, J., Rogers, J. H., & Wright, J. H. (2005). News and noise in G-7 GDP announcements. Journal of Money, Credit and Banking, 37(3), 403–417.

Fixler, D. J., & Grimm, B. T. (2005). Reliability of the NIPA estimates of U.S. economic activity. Survey of Current Business, 85, 9–19.

Fixler, D. J., & Grimm, B. T. (2008). The reliability of the GDP and GDI estimates. Survey of Current Business, 88, 16–32.

Galvão, A. B. (2017). Data revisions and DSGE models. Journal of Econometrics, 196(1), 215–232.

Garratt, A., Lee, K., Mise, E., & Shields, K. (2008). Real time representations of the output gap. Review of Economics and Statistics, 90, 792–804.

Garratt, A., Lee, K., Mise, E., & Shields, K. (2009). Real time representations of the UK output gap in the presence of model uncertainty. International Journal of Forecasting, 25, 81–102.

Howrey, E. P. (1978). The use of preliminary data in economic forecasting. Review of Economics and Statistics, 60, 193–201.

Jacobs, J. P. A. M., & van Norden, S. (2011). Modeling data revisions: Measurement error and dynamics of “true” values. Journal of Econometrics, 161, 101–109.

Kishor, N. K., & Koenig, E. F. (2012). VAR estimation and forecasting when data are subject to revision. Journal of Business and Economic Statistics, 30(2), 181–190.

Koenig, E. F., Dolmas, S., & Piger, J. (2003). The use and abuse of real-time data in economic forecasting. Review of Economics and Statistics, 85(3), 618–628.

Landefeld, J. S., Seskin, E. P., & Fraumeni, B. M. (2008). Taking the pulse of the economy. Journal of Economic Perspectives, 22, 193–216.

Litterman, R. (1986). Forecasting with Bayesian vector autoregressions: Five years of experience. Journal of Business and Economic Statistics, 4, 25–38.

Mankiw, N. G., & Shapiro, M. D. (1986). News or noise: An analysis of GNP revisions. Survey of Current Business (May 1986), US Department of Commerce, Bureau of Economic Analysis, 20–25.

Patterson, K. D. (1995). An integrated model of the data measurement and data generation processes with an application to consumers’ expenditure. Economic Journal, 105, 54–76.

Sargent, T. J. (1989). Two models of measurements and the investment accelerator. Journal of Political Economy, 97, 251–287.

Sims, C. A. (1980). Macroeconomics and reality. Econometrica, 48, 1–48.

Smets, F., & Wouters, R. (2007). Shocks and frictions in US business cycles: A Bayesian DSGE approach. American Economic Review, 97(3), 586–606.

Swanson, N. R., & van Dijk, D. (2006). Are statistical reporting agencies getting it right? Data rationality and business cycle asymmetry. Journal of Business and Economic Statistics, 24, 240–242.

Zwijnenburg, J. (2015). Revisions of quarterly GDP in selected OECD countries. Paris, France: OECD Statistics Briefing no. 22, 1–12.

Notes:

(1.) We focus on real GDP, given its importance and its widespread use in the literature on real-time forecasting. We also primarily consider the United States, although similar considerations apply for other countries.

(2.) The GNP/GDP data of the BEA are subject to three annual revisions in July of each year; see, e.g., Fixler and Grimm (2005, 2008) and Landefeld et al. (2008).

(3.) In the case of the United States, the Bureau of Economic Analysis provides descriptions of the methodologies employed at bea.gov.

(4.) Taken from the Federal Reserve Bank of Philadelphia. See Croushore and Stark (2001).

(5.) For example, there are eight benchmark revisions in the data vintages from 1965Q3 up to 2010Q1. Because a benchmark revision takes the place of the annual revision in those years, there are 36 annual (Q3) revisions rather than the 44 that would otherwise have occurred. Hence there are 44 combined benchmark and annual revisions: the 8 benchmark revisions plus the 36 annual revisions.

(6.) Throughout, $T+1$ will be used to denote the forecast-origin vintage. Hence the latest-vintage values of data the forecaster will have access to are $\{y_{T-1}^{T+1}, y_T^{T+1}\}$.

(7.) This will not be literally true. More precisely, we suppose that any differences between the two are unpredictable.