# Data Revisions and Real-Time Forecasting

## Summary and Keywords

At a given point in time, a forecaster will have access to data on macroeconomic variables that have been subject to different numbers of rounds of revisions, leading to varying degrees of data maturity. Observations referring to the very recent past will be first-release data, or data that have so far been revised only a few times. Observations referring to a decade ago will typically have been subject to many rounds of revisions. How should the forecaster use the data to generate forecasts of the future? The conventional approach is to estimate the forecasting model using the latest vintage of data available at the time, implicitly ignoring the differences in data maturity across observations.

The conventional approach for real-time forecasting treats the data as given, that is, it ignores the fact that it will be revised. In some cases, the costs of this approach are point predictions and assessments of forecasting uncertainty that are less accurate than approaches to forecasting that explicitly allow for data revisions. There are several ways to “allow for data revisions,” including modeling the data revisions explicitly, an agnostic or reduced-form approach, and using only largely unrevised data. The choice of method partly depends on whether the aim is to forecast an earlier release or the fully revised values.

Keywords: data revisions, news, noise, unobserved components, vector autoregressions, point predictions, density forecasts

## Introduction

The majority of the macroeconomic time series of interest to policymakers and forecasters are subject to data revisions. Monetary policy decisions require accurate measurement of the current state of the economy and the likely future paths for inflation and economic activity. Fiscal policy decisions also rely on current and predicted values for economic activity to enable the calculation of the size of future government deficits. Data revisions to key economic activity aggregates, such as gross domestic product (GDP), add an extra layer of uncertainty to the measurement of current and future economic conditions and to the conduct of economic policy.

Both forecasting and policy analysis have to be undertaken in real time. We look at different approaches to forecasting in real time, when the available information set is restricted to only the vintages of data that were available at each historical period. With the advent of readily available real-time data sets of key macroeconomic indicators (e.g., Croushore & Stark, 2001), we are able to look back and gauge how well different approaches would have fared over the past 50 years or so, which can serve as a guide to what might be expected to work for forecasters today. As an example, by forecasting data revisions we are able to improve the reliability of output gap estimates in real time (see, e.g., Garratt, Lee, Mise, & Shields, 2008, 2009; Clements & Galvão, 2012).

This article considers the implications of the nature of the estimates of macroeconomic data published by national statistics agencies, and the subsequent revisions to these estimates, for forecasting the future values of those variables. Excellent general surveys on data revisions and real-time analysis that go beyond our emphasis on forecasting are provided by Croushore (2006, 2011a, 2011b).

We begin in section “Why Data Revisions?” by explaining why statistical offices revise macroeconomic data, including details of the nature and timing of their updates to initial estimates (i.e., data revisions) for the United States, and for other Organisation for Economic Cooperation and Development (OECD) member countries.

One approach to forecasting when data are subject to revision is to attempt to model the behavior of the statistics office (henceforth, SO) and incorporate it into the forecasting model. Section “Forecasting Methods and a Model of the Behavior of the Statistical Office” discusses some simple models of the behavior of the SO in terms of its processing of source data to generate initial estimates of macroeconomic variables, and how these models lead to published data inheriting the properties of news or noise revisions. Initially, we assume a very simple revisions process, namely that the second estimate (equivalently, first revision) reveals the truth. Later on, we describe the practical implementation of the approach suggested by Kishor and Koenig (2012) for data subject to multiple revisions. This approach requires a model for the true process, data revisions, and the application of the Kalman Filter. Section “Data Revisions and Forecasting With DSGE Models” considers data revisions in the context of forecasting with dynamic stochastic general equilibrium (DSGE) models. DSGE models are typically central bankers’ model of choice, especially for policy analysis and increasingly for forecasting.

The data revisions modeling approaches in sections “Forecasting Methods and a Model of the Behavior of the Statistical Office” and “Data Revisions and Forecasting With DSGE Models” rely on state-space models and a set of unobserved components. In Section “Vintage-Based VARs: Models in Terms of Observables” we survey approaches for modeling data revisions and forecasting that do not require the estimation of unobserved components. Section “Single-Equation Approaches: EOS and RTV” considers simple single-equation approaches to forecasting and includes the “traditional approach,” which amounts to effectively ignoring revisions. The scope of our survey is also widened to consider density forecasts in addition to point forecasts.

With the aim of providing guidance on which modeling approach to choose when forecasting with data subject to revision, section “Evidence on the Performance of Alternative Methods of Forecasting” reviews some of the empirical evidence on the impact of data revisions on forecasting performance and on the relative performance of some of the approaches discussed in this article. Section “Conclusion” offers some concluding remarks.

## Why Data Revisions?

Why are macroeconomic data revised? This can perhaps be best understood with reference to the US Bureau of Economic Analysis (BEA) timetable for data releases.^{1} The BEA publishes its first or “advance” estimates of quarterly national accounts data about a month after the end of the quarter in question. These estimates are necessarily based on only partial source data, and are subject to revision as more complete data becomes available. They are then revised twice more at monthly intervals. These two monthly revisions were formerly known as the “preliminary” and “final” estimates, but are now simply the second and third estimates. As described by Landefeld, Seskin, and Fraumeni (2008), 25% of the GDP components at the time of the release of the first estimate are trend-based data obtained from extrapolations supported by related indicator series. The proportion of trend-based data in the second and third estimates is 23% and 13%, respectively. Hence these revisions typically reflect the availability of more complete source data. The series is then subject to three annual rounds of revisions (in the third quarters of the year)^{2} to incorporate new annual source data into the estimates. Finally, comprehensive or benchmark revisions make use of major periodic source data as well as methodological and conceptual improvements.^{3}

In terms of the question we posed—why data revisions?—the need for *timely* estimates of recent economic developments necessitates that estimates are produced long before all the data have been collected. As indicated, data collection and the refinement of official statistics ought perhaps to be viewed as an ongoing process.

Table 1. Real GDP Data Vintages—Snapshot

| DATE | 13Q1 | 13Q2 | 13Q3 | 13Q4 | 14Q1 | 14Q2 | 14Q3 | 14Q4 | 15Q1 |
|---|---|---|---|---|---|---|---|---|---|
| 47:Q1 | 1770.7 | 1770.7 | 1932.6 | 1932.6 | 1932.6 | 1932.6 | 1934.5 | 1934.5 | 1934.5 |
| 47:Q2 | 1768.0 | 1768.0 | 1930.4 | 1930.4 | 1930.4 | 1930.4 | 1932.3 | 1932.3 | 1932.3 |
| 47:Q3 | 1766.5 | 1766.5 | 1928.4 | 1928.4 | 1928.4 | 1928.4 | 1930.3 | 1930.3 | 1930.3 |
| 47:Q4 | 1793.3 | 1793.3 | 1958.8 | 1958.8 | 1958.8 | 1958.8 | 1960.7 | 1960.7 | 1960.7 |
| 48:Q1 | 1821.8 | 1821.8 | 1987.6 | 1987.6 | 1987.6 | 1987.6 | 1989.5 | 1989.5 | 1989.5 |
| 48:Q2 | 1855.3 | 1855.3 | 2019.9 | 2019.9 | 2019.9 | 2019.9 | 2021.9 | 2021.9 | 2021.9 |
| 48:Q3 | 1865.3 | 1865.3 | 2031.2 | 2031.2 | 2031.2 | 2031.2 | 2033.2 | 2033.2 | 2033.2 |
| 48:Q4 | 1868.2 | 1868.2 | 2033.3 | 2033.3 | 2033.3 | 2033.3 | 2035.3 | 2035.3 | 2035.3 |
| $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ |
| 12:Q4 | 13647.6 | 13665.4 | 15539.6 | 15539.6 | 15539.6 | 15539.6 | 15433.7 | 15433.7 | 15433.7 |
| 13:Q1 | #N/A | 13750.1 | 15583.9 | 15583.9 | 15583.9 | 15583.9 | 15538.4 | 15538.4 | 15538.4 |
| 13:Q2 | #N/A | #N/A | 15648.7 | 15679.7 | 15679.7 | 15679.7 | 15606.6 | 15606.6 | 15606.6 |
| 13:Q3 | #N/A | #N/A | #N/A | 15790.1 | 15839.3 | 15839.3 | 15779.9 | 15779.9 | 15779.9 |
| 13:Q4 | #N/A | #N/A | #N/A | #N/A | 15965.6 | 15942.3 | 15916.2 | 15916.2 | 15916.2 |
| 14:Q1 | #N/A | #N/A | #N/A | #N/A | #N/A | 15946.6 | 15831.7 | 15831.7 | 15831.7 |
| 14:Q2 | #N/A | #N/A | #N/A | #N/A | #N/A | #N/A | 15985.7 | 16010.4 | 16010.4 |
| 14:Q3 | #N/A | #N/A | #N/A | #N/A | #N/A | #N/A | #N/A | 16150.6 | 16205.6 |
| 14:Q4 | #N/A | #N/A | #N/A | #N/A | #N/A | #N/A | #N/A | #N/A | 16311.6 |

Consider now the quarterly vintages recorded in Croushore and Stark’s (2001) Real Time Data Set for Macroeconomists (RTDSM). Table 1 illustrates an excerpt of the real-time data set for US real GDP.^{4} This resource has greatly facilitated real-time data analysis. The first available estimate for any quarter in the RTDSM is the “advance” estimate denoted by ${y}_{t}^{t+1}$. The second quarterly estimate for quarter $t$ is denoted by ${y}_{t}^{t+2}$, and so on. The subscript indicates the reference quarter and the superscript the quarterly vintage (or estimate). In the case of US BEA data, the data will typically then remain unchanged unless the $t+3$-vintage is an annual revision ($t+3\in Q3$) or a benchmark revision. In practice, a small modification to this simple seasonal pattern may occur. When benchmark revisions are anticipated to be published in January, annual revisions may not be published in the previous July.^{5}

Table 1 depicts only the vintages from 2013Q1 to 2015Q1. Quarterly vintages for real GDP are available back to 1965. The first period for which data exist is 1947Q1, and for each vintage data are available up to one quarter before the vintage, reflecting the delay of one quarter in publishing data (when the vintages are recorded at the quarterly frequency, as here). Note that we drop the observations from 1949Q1 to 2012Q3, inclusive, to save space. To illustrate the patterns of revision described in the preceding paragraph, consider the 2013Q4 vintage estimate of the 2013Q3 observation. This first estimate of reference quarter 2013Q3 is 15790.1. In 2014Q1 it is revised up to 15839.3 (a $0.3\%$ increase). The estimate remains unaltered in 2014Q2 and is then revised down to 15779.9 in the 2014Q3 annual revision.
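To make the vintage notation concrete, the following sketch lays out a toy real-time data set as a reference-quarter-by-vintage array (all numbers are made up, not BEA data). The first releases ${y}_{t}^{t+1}$ sit on a diagonal, and the latest vintage mixes data maturities in exactly the way the text describes:

```python
import numpy as np

# A real-time data set as a (reference quarter) x (vintage) array.
# Rows: reference quarters t; columns: vintages v. Entry [t, v] is y_t^v,
# defined only when v >= t + 1 (one-quarter publication delay), NaN otherwise.
T = 6
rtdsm = np.full((T, T + 1), np.nan)
for t in range(T):
    for v in range(t + 1, T + 1):
        # hypothetical level: each later vintage nudges the estimate up a little
        rtdsm[t, v] = 100.0 + t + 0.1 * (v - t - 1)

# First releases y_t^{t+1} lie on the first diagonal above the delay:
first_release = np.array([rtdsm[t, t + 1] for t in range(T)])

# The latest vintage (last column) mixes maturities: early rows are heavily
# revised, while the final row is still a first estimate.
latest_vintage = rtdsm[:, -1]
```

The conventional approach described in the Summary estimates a model on `latest_vintage`, even though its last entries are first releases.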

Our focus will be on quarterly revisions to quarterly data, primarily to keep the notation simple. But as shown by Clements and Galvão (2017b), monthly vintages can also be modeled and forecasted. It is generally the case that the models we discuss here are of the growth rates (or differences of logs) of the variable (such as GDP), and the revisions are then the differences between vintage estimates of the growth rates. This means that we are dealing with variables that are integrated of order zero, assuming that (the log of) real GDP is integrated of order one. An exception arises when we discuss the model of Garratt et al. (2008) in section “The Garratt et al. Model.”

We have described the institutional nature of the revisions to US national accounts data. The task of providing timely estimates based on incomplete source data is common to all government statistics offices, so data revisions are not specific to the United States. Zwijnenburg (2015) analyzed the revisions to national accounts data in 18 OECD countries in terms of regular revisions and benchmark revisions. Regular revisions are as described earlier, and are normally the focus of interest as they are likely to be more amenable to modeling. These revisions result from the updating of the data set used to compute the earlier estimates and are present in all 18 OECD countries. The mean revisions for most countries are not statistically significantly different from zero—data revisions normally have no effect on the unconditional mean of quarterly growth rates. However, Zwijnenburg (2015) has shown that revisions can be sizable, with a mean absolute error of 0.18 (the average across countries for the observations for the 1993–2014 period) for revisions made up to 5 months, and of 0.40 for revisions made up to 3 years.
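Summary statistics of the kind Zwijnenburg (2015) reports can be sketched as follows; the data here are simulated (the variances are made up), but the calculation of the mean revision, mean absolute revision, and a t-test of a zero mean revision carries over to real vintages:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
truth = rng.normal(0.6, 0.5, n)           # hypothetical quarterly growth rates
first = truth + rng.normal(0.0, 0.3, n)   # first release = truth + sampling error
mature = truth + rng.normal(0.0, 0.1, n)  # estimate after later rounds of revision

rev = mature - first
mean_rev = rev.mean()                      # typically close to zero
mean_abs_rev = np.abs(rev).mean()          # can be sizable even with a zero mean
# t-statistic for the null of a zero mean revision
t_stat = mean_rev / (rev.std(ddof=1) / np.sqrt(n))
```

A small `t_stat` alongside a nontrivial `mean_abs_rev` reproduces the pattern in the text: revisions leave the unconditional mean unchanged but are far from negligible observation by observation.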

## Forecasting Methods and a Model of the Behavior of the Statistical Office

As mentioned earlier, regular data revisions can be related to the operational behavior of the government statistical office (SO). A model of this behavior is provided by Kishor and Koenig (2012), who drew on earlier contributions by Howrey (1978) and Sargent (1989). Kishor and Koenig provide models of the behavior of the SO that directly suggest ways of forecasting the true values of the series. Before considering models of SO behavior, it will be useful to make a distinction between news and noise revisions.

### News and Noise Data Revisions

Following Mankiw and Shapiro (1986), data revisions are sometimes characterized as news or noise. Data revisions are news when they add *new* information and noise when they reduce measurement error. If data revisions are noise, they can be predicted based on the current estimate. Mankiw and Shapiro (1986) and Faust, Rogers, and Wright (2005) provided empirical evidence that data revisions to US real GDP are largely news. Aruoba (2008) and Corradi, Fernandez, and Swanson (2009) provided recent extensions to testing for the properties of data revisions, which may lead to more nuanced findings, but the broad classification of each series as being subject to news or noise revisions in the simple setting below is a good starting point.

The standard tests for news and noise regress a revision on either the earlier estimate or on the later estimate. For example, consider the revision between the first estimate ${y}_{t}^{t+1}$ and the data available some three-and-a-half years later, ${y}_{t}^{t+14}$ (chosen to include the three rounds of annual revisions). One can test for news and noise revisions using, respectively:

$$y_{t}^{t+14}-y_{t}^{t+1}=\alpha_{ne}+\beta_{ne}y_{t}^{t+1}+u_{t}$$

and

$$y_{t}^{t+14}-y_{t}^{t+1}=\alpha_{no}+\beta_{no}y_{t}^{t+14}+u_{t}.$$

If we reject ${\beta}_{ne}=0$, then we reject the null hypothesis that data revisions are news, because we are able to predict revisions from knowledge of the earlier estimate, ${y}_{t}^{t+1}$. If we reject ${\beta}_{no}=0$, then we reject the null hypothesis that data revisions are noise, because they are correlated with the final estimate, ${y}_{t}^{t+14}$. If we reject both null hypotheses, then data revisions are neither news nor noise. If we fail to reject both hypotheses, we conclude that the data are not informative regarding the news/noise dichotomy.
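The two test regressions can be run with a few lines of OLS. The sketch below uses simulated data in which revisions are pure noise (the variances are made up), so the news null should be rejected while the noise null should not:

```python
import numpy as np

def ols_slope_tstat(y, x):
    """Slope and t-statistic from an OLS regression of y on a constant and x."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1], beta[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(2)
n = 300
truth = rng.normal(0.6, 0.5, n)          # stand-in for the later estimate y_t^{t+14}
first = truth + rng.normal(0.0, 0.3, n)  # noise revisions: first = truth + error
rev = truth - first                       # the revision being tested

b_ne, t_ne = ols_slope_tstat(rev, first)  # news test: revision on early estimate
b_no, t_no = ols_slope_tstat(rev, truth)  # noise test: revision on later estimate
# With pure noise revisions, beta_ne is significantly negative (news rejected),
# while beta_no is statistically indistinguishable from zero (noise not rejected).
```

Simulating news revisions instead (first release as an efficient but incomplete signal of the truth) flips the pattern of rejections.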

Of course, the properties of revisions may depend on the choice of the earlier and later estimates, may vary over time, and need not give a definitive classification. The definition of efficiency underlying news is in terms of an information set consisting only of the initial estimate, leaving open the possibility that revisions may be predictable using initial estimates of other related variables (or other sources of information, as in Clements & Galvão, 2017b) so that the property of revisions being news, narrowly defined, may be less useful in multivariate settings.

To illustrate, using the standard tests, Clements (2017, p. 427, Table 2) compared the 1st estimates to the 15th estimates for the observations 1970Q2 to 2007Q2 for a number of variables. He found that 9 of the 25 variables analyzed had data revisions that appeared to be news, in that ${\beta}_{ne}=0$ was not rejected, but ${\beta}_{no}=0$ was rejected. Seven of the 25 variables were found to have noise revisions (${\beta}_{ne}=0$ rejected, but ${\beta}_{no}=0$ not rejected).

Rather than considering the revisions between an early estimate and a later estimate (such as the preliminary and fully revised data), Swanson and van Dijk (2006) considered the entire revision history and identified the point at which revisions become “rational” (or unpredictable). They also considered the properties of revisions separately in expansions and contractions.

### Model of Statistical Office Behavior

Suppose that ${\tilde{y}}_{t}$ is the true period $t$ value of the quarterly growth rate of a macroeconomic variable subject to revisions and that it follows a simple AR$\left(1\right)$:

$$\tilde{y}_{t}=f\tilde{y}_{t-1}+v_{t},\qquad v_{t}\sim iid\left(0,\sigma_{v}^{2}\right).\tag{1}$$

There is, however, a delay in the publication of the true/revised values: we assume ${\tilde{y}}_{t}$ is not observed or published until $t+2$, i.e., ${y}_{t}^{t+2}={\tilde{y}}_{t}$. We relax this assumption when implementing the Kishor and Koenig (2012) modeling approach in practice in section “The Kishor and Koenig Approach in Practice.”

In period $t+1$, the SO observes source data ${w}_{t}^{t+1}={\tilde{y}}_{t}+{\eta}_{t}$, where ${\eta}_{t}$ is iid$\left(0,{\sigma}_{\eta}^{2}\right)$. Based on ${w}_{t}^{t+1}$ (and possibly ${\tilde{y}}_{t-1},{\tilde{y}}_{t-2},\dots ;{w}_{t-1}^{t},\dots$), the SO produces its first estimate of ${\tilde{y}}_{t}$, denoted ${y}_{t}^{t+1}$. Private forecasters do not observe ${w}_{t}^{t+1}$, unless the SO sets ${y}_{t}^{t+1}={w}_{t}^{t+1}$.

From the perspective of a professional forecaster (PF) using only data published by the SO, the goal is to forecast future true values ${\tilde{y}}_{T}$, ${\tilde{y}}_{T+1}$, ${\tilde{y}}_{T+2}$, . . . based on period $T+1$ information, ${y}_{T}^{T+1}$, ${y}_{T-1}^{T+1}$ ($={\tilde{y}}_{T-1}$), and so on. Note that the superscripts denote when the value is published, while the subscripts refer to the reference quarter, or the time period to which the data refers.^{6}

The PF considers a possible strategy that consists of the following stages:

1. Estimate the model in Equation 1 over $t=2$ to $T-1$ using $\left\{{\tilde{y}}_{t}\right\}$. The model is estimated solely on the true data, and because of the publication delay assumed above the estimation sample runs only to $T-1$ (as the observation for reference period $T$ is a first estimate).

2. Obtain the estimate ${\widehat{y}}_{T}$. This is an estimate of the true value of the variable at period $T$, and how it should be constructed depends on how the SO chooses ${y}_{T}^{T+1}$, as discussed below.

3. Then compute forecasts as ${\tilde{y}}_{T+h|T}={\widehat{f}}^{h}{\widehat{y}}_{T}$ for $h=1,2,\dots$
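The three steps above can be sketched in a short simulation. All parameter values are made up, the SO is assumed to release the unprocessed source data, and the noise variance is treated as known so that the signal-extraction weight discussed below can be used in Step 2:

```python
import numpy as np

rng = np.random.default_rng(3)
f_true, sig_v, sig_eta, T = 0.5, 1.0, 0.5, 500

# Simulate the truth and (unprocessed) first estimates; all values hypothetical.
y_true = np.zeros(T + 1)
for t in range(1, T + 1):
    y_true[t] = f_true * y_true[t - 1] + rng.normal(0, sig_v)
first = y_true + rng.normal(0, sig_eta, T + 1)

# Step 1: estimate the AR(1) coefficient on revised data only (t = 2, ..., T-1).
y, x = y_true[2:T], y_true[1:T - 1]
f_hat = (x @ y) / (x @ x)

# Step 2: signal-extraction estimate of the true value at T from first[T],
# weighting the release against the model prediction f_hat * y_true[T-1].
resid = y - f_hat * x
sv2 = resid.var()
gamma = sv2 / (sv2 + sig_eta ** 2)   # gamma_N, with the noise variance known
y_T_hat = gamma * first[T] + (1 - gamma) * f_hat * y_true[T - 1]

# Step 3: iterate the AR(1) forward to forecast the truth h steps ahead.
forecasts = [f_hat ** h * y_T_hat for h in (1, 2, 3)]
```

In practice the noise variance would itself have to be estimated; the sketch only illustrates the logic of estimating on "apples" and conditioning on an apples-like estimate of the latest observation.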

This strategy avoids what Kishor and Koenig (2012) refer to as the mixing of “apples and oranges.” They state that:

The problem with conventional practice is that it mixes apples and oranges. Data toward the tail end of the sample (oranges) have undergone little or no revision. Data early in the sample (apples) are heavily revised. For most series and typical sample sizes, the heavily revised data dominate estimation. Consequently, the VAR approximates the dynamic relationship between apples and apples. However, the data that are substituted in to the VAR equations to generate a forecast are end-of-sample oranges. Essentially, conventional practice constructs a cider press and then feeds oranges into it, expecting somehow to get cider. (From the Discussion Paper version of Kishor & Koenig, 2012; Federal Reserve Bank of Dallas, Working Paper 0501, 2005, p. 1)

The strategy estimates the model on fully revised data (or “apples”) and also conditions the forecast on (an estimate of) a fully revised data point. The conventional approach estimates the model on $\left\{{y}_{t}^{T+1}\right\}$ for $t=\mathrm{2,}\dots \mathrm{,}T$, so that all but the last observation is fully revised, and then sets ${\widehat{y}}_{T}={y}_{T}^{T+1}$, that is, conditions on a first estimate, so falling afoul of the dictum not to mix apples and oranges.

For Step 2 we need an estimate ${\widehat{y}}_{T}$. Suppose the SO simply sets ${y}_{T}^{T+1}={w}_{T}^{T+1}$, so that:

$$y_{T}^{T+1}=\tilde{y}_{T}+\eta_{T},$$

and the PF assumes this is the case. That is, the SO makes no attempt to process the source data in order to generate an efficient first estimate, and the subsequent revision is noise, as defined in the previous section. To see this, note that

$$Cov\left(\tilde{y}_{t},\tilde{y}_{t}-y_{t}^{t+1}\right)=Cov\left(\tilde{y}_{t},-\eta_{t}\right)=0,$$

but that

$$Cov\left(y_{t}^{t+1},\tilde{y}_{t}-y_{t}^{t+1}\right)=Cov\left(\tilde{y}_{t}+\eta_{t},-\eta_{t}\right)=-\sigma_{\eta}^{2}\ne 0,$$

which implies that the revision ${\tilde{y}}_{t}-{y}_{t}^{t+1}$ is uncorrelated with the true value but is predictable from ${y}_{t}^{t+1}$.

Suppose now that the SO actively processes the source data. Intuitively, the release of a high value ${y}_{T}^{T+1}$ could be due to a large positive ${\eta}_{T}$ (noise) or to a large positive ${v}_{T}$ (i.e., signaling a high true value ${\tilde{y}}_{T}$), and some weight ought to be attributed to both of these possibilities. If ${v}_{t}$ and ${\eta}_{t}$ are normally distributed, then the standard signal-extraction problem gives the optimal estimate of ${\tilde{y}}_{T}$ as:

$$\widehat{y}_{T}=\gamma_{N}y_{T}^{T+1}+\left(1-\gamma_{N}\right)f\tilde{y}_{T-1},\tag{2}$$

where ${\gamma}_{N}={\sigma}_{v}^{2}/\left({\sigma}_{v}^{2}+{\sigma}_{\eta}^{2}\right)$ is the weight on the first estimate, ${y}_{T}^{T+1}$, and $\left(1-{\gamma}_{N}\right)$ is the weight on the model-predicted value, $f\tilde{y}_{T-1}$. When normality does not hold, ${\gamma}_{N}$ still has the justification of minimizing

$$E\left[\left(\tilde{y}_{T}-\gamma y_{T}^{T+1}-\left(1-\gamma \right)f\tilde{y}_{T-1}\right)^{2}\right]$$

over $\gamma$.

Setting ${\widehat{y}}_{T}$ equal to the right-hand side of (2) results in an efficient estimate, in the sense that the induced revision ${\tilde{y}}_{T}-{\widehat{y}}_{T}$ is uncorrelated with ${\widehat{y}}_{T}$: $Cov\left({\widehat{y}}_{T},{\tilde{y}}_{T}-{\widehat{y}}_{T}\right)=0$.

Hence the release of the unprocessed source data by the SO leads to inefficient initial estimates, and subsequent revisions are predictable. In our setup, defining ${\widehat{y}}_{T}$ by (2) in the second step of the three-stage forecasting strategy would result in unpredictable revisions.
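The efficiency claim can be checked numerically. A short simulation (parameter values made up) shows that the revision implied by the unprocessed release is correlated with the release itself, while the revision to the signal-extraction estimate is uncorrelated with that estimate:

```python
import numpy as np

rng = np.random.default_rng(4)
f, sig_v, sig_eta, n = 0.5, 1.0, 0.5, 200_000

# Stationary AR(1) truth, plus noisy unprocessed first estimates.
y_lag = rng.normal(0, sig_v / np.sqrt(1 - f ** 2), n)
v = rng.normal(0, sig_v, n)
y = f * y_lag + v                       # true value
y1 = y + rng.normal(0, sig_eta, n)      # unprocessed first estimate (noise)

gamma = sig_v ** 2 / (sig_v ** 2 + sig_eta ** 2)
y_hat = gamma * y1 + (1 - gamma) * f * y_lag   # signal-extraction estimate

cov_unprocessed = np.cov(y1, y - y1)[0, 1]      # approx -sigma_eta^2: predictable
cov_efficient = np.cov(y_hat, y - y_hat)[0, 1]  # approx 0: efficient estimate
```

Analytically, $Cov(\widehat{y}_{T},\tilde{y}_{T}-\widehat{y}_{T})=\gamma_{N}(1-\gamma_{N})\sigma_{v}^{2}-\gamma_{N}^{2}\sigma_{\eta}^{2}$, which is exactly zero at $\gamma_{N}=\sigma_{v}^{2}/(\sigma_{v}^{2}+\sigma_{\eta}^{2})$.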

### Extensions

A number of extensions and generalizations are possible. For example, Howrey (1978) allows revisions to be serially correlated, with the measurement error in ${y}_{t}^{t+1}={\tilde{y}}_{t}+{\eta}_{t}$ following an AR$\left(1\right)$:

$$\eta_{t}=k\eta_{t-1}+w_{t},\qquad w_{t}\sim iid\left(0,\sigma_{w}^{2}\right),$$

and then the estimator of ${\tilde{y}}_{T}$ is given by:

$$\widehat{y}_{T}=\gamma_{H}\left(y_{T}^{T+1}-k\left(y_{T-1}^{T}-\tilde{y}_{T-1}\right)\right)+\left(1-\gamma_{H}\right)f\tilde{y}_{T-1},$$

with ${\gamma}_{H}={\sigma}_{v}^{2}/\left({\sigma}_{v}^{2}+{\sigma}_{w}^{2}\right)$.

Suppose instead, as suggested by Sargent (1989), that the SO does not announce ${y}_{t}^{t+1}={w}_{t}^{t+1}$, but filters the source data itself, and in so doing inadvertently introduces an additive random error, so that the first announcement ${y}_{t}^{t+1}$ is given by:

$$y_{t}^{t+1}=\left(1-g\right)f\tilde{y}_{t-1}+gw_{t}^{t+1}+\xi_{t},$$

where $g={\gamma}_{N}$ and ${\xi}_{t}$ is the idiosyncratic error induced by the filtering.

From the perspective of the PF, who now does not observe ${w}_{t}^{t+1}$, we obtain:

$$y_{t}^{t+1}=\left(1-g\right)f\tilde{y}_{t-1}+g\tilde{y}_{t}+\epsilon_{t},$$

setting ${\epsilon}_{t}={\xi}_{t}+g{\eta}_{t}$.

This suggests the optimal forecast of ${\tilde{y}}_{T}$ is given by:

$$\widehat{y}_{T}=\gamma_{S}y_{T}^{T+1}+\left(1-\gamma_{S}\right)f\tilde{y}_{T-1},\tag{4}$$

where ${\gamma}_{S}=g{\sigma}_{v}^{2}/\left({g}^{2}{\sigma}_{v}^{2}+{\sigma}_{\epsilon}^{2}\right)$ and, as before, ${\gamma}_{S}$ minimizes the expected squared deviation between ${\tilde{y}}_{T}$ and a linear combination of $f\tilde{y}_{T-1}$ and ${y}_{T}^{T+1}$.

Clearly, from (4), the PF should only set ${\widehat{y}}_{T}={y}_{T}^{T+1}$ when ${\gamma}_{S}=1$. This condition requires that the government filters the source data (as in Sargent, 1989) and does so without error. Then, ${\xi}_{t}=0$, so ${\sigma}_{\epsilon}^{2}={g}^{2}{\sigma}_{\eta}^{2}$, and using $g={\sigma}_{v}^{2}/\left({\sigma}_{v}^{2}+{\sigma}_{\eta}^{2}\right)$, we get ${\gamma}_{S}=1$.

We can write the equations for the true process and the first estimate for the Sargent setup as:

$$\begin{aligned}\tilde{y}_{t}&=f\tilde{y}_{t-1}+v_{t}\\ y_{t}^{t+1}&=g\tilde{y}_{t}+\left(1-g\right)f\tilde{y}_{t-1}+\epsilon_{t}.\end{aligned}$$

Combining these, revisions are given by:

$$y_{t}^{t+1}-\tilde{y}_{t}=\epsilon_{t}-\left(1-g\right)v_{t},$$

so that the error in the revisions equation is correlated with that in the equation for the true process, but revisions are not serially correlated. By contrast, in Howrey’s (1978) setup the errors in the revisions and true process equations are not correlated, but revisions are serially correlated.

Kishor and Koenig (2012) suggest combining these two models by allowing correlated errors and serially correlated revisions, specifying the revisions process as:

$$y_{t}^{t+1}-\tilde{y}_{t}=k\left(y_{t-1}^{t}-\tilde{y}_{t-1}\right)+\epsilon_{t}-\left(1-g\right)v_{t}.\tag{5}$$

Then the best estimate of ${\tilde{y}}_{T}$ is given by:

$$\widehat{y}_{T}=\gamma_{K}\left(y_{T}^{T+1}-k\left(y_{T-1}^{T}-\tilde{y}_{T-1}\right)\right)+\left(1-\gamma_{K}\right)f\tilde{y}_{T-1},\tag{6}$$

with ${\gamma}_{K}={\gamma}_{S}$ and “best” defined following (2) or (4).

As a consequence, the PF could use a model that encompasses the Howrey (1978) and Sargent (1989) specifications and combines Equation 1 for the true process and Equation 5 for the revisions process, written for convenience as:

$$\begin{aligned}\tilde{y}_{t}&=f\tilde{y}_{t-1}+v_{t}\\ y_{t}^{t+1}-\tilde{y}_{t}&=k\left(y_{t-1}^{t}-\tilde{y}_{t-1}\right)+\epsilon_{t}-\left(1-g\right)v_{t}.\end{aligned}\tag{7}$$

Because of the cross-equation correlation in the error terms, efficient estimation of the two-equation system requires the seemingly unrelated regressions estimator (SURE). This is carried out on observations from $t=2$ up to $T-1$. The covariance matrix of $\left({v}_{t},{\epsilon}_{t}-\left(1-g\right){v}_{t}\right)$ is given by:

$$Q=\left[\begin{array}{cc}\sigma_{v}^{2}&-\left(1-g\right)\sigma_{v}^{2}\\ -\left(1-g\right)\sigma_{v}^{2}&\sigma_{\epsilon}^{2}+\left(1-g\right)^{2}\sigma_{v}^{2}\end{array}\right].$$

From the estimates of the elements of $Q$, the parameter ${\gamma}_{K}=g{\sigma}_{v}^{2}/\left({g}^{2}{\sigma}_{v}^{2}+{\sigma}_{\epsilon}^{2}\right)$ in Equation 6 can be computed. Combining this with the estimates of $f$ and $k$ from the system in Equation 7 provides the estimate ${\widehat{y}}_{T}$ required by Step 2 of the forecasting strategy.
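A minimal numerical sketch of this estimation step follows, on simulated data with made-up parameter values. For simplicity the two equations are estimated by equation-by-equation OLS, which is consistent here, although the SURE estimator described in the text is more efficient given the correlated errors:

```python
import numpy as np

rng = np.random.default_rng(5)
f, g, k = 0.5, 0.8, 0.3                # hypothetical "true" parameters
sig_v, sig_xi, sig_eta = 1.0, 0.2, 0.5
T = 2000

y = np.zeros(T)                         # true values ytilde_t
r = np.zeros(T)                         # revisions y_t^{t+1} - ytilde_t
for t in range(1, T):
    v = rng.normal(0, sig_v)
    eps = rng.normal(0, sig_xi) + g * rng.normal(0, sig_eta)  # eps = xi + g*eta
    y[t] = f * y[t - 1] + v
    r[t] = k * r[t - 1] + eps - (1 - g) * v

# Equation-by-equation OLS for f and k, then the error covariance matrix Q.
f_hat = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])
k_hat = (r[:-1] @ r[1:]) / (r[:-1] @ r[:-1])
v_res = y[1:] - f_hat * y[:-1]
e_res = r[1:] - k_hat * r[:-1]
Q = np.cov(v_res, e_res)                # 2x2 covariance of (v, eps - (1-g)v)

# Back out g and sigma_eps^2 from Q (using Cov(v, e) = -(1-g)*sigma_v^2),
# then gamma_K = g*sigma_v^2 / (g^2*sigma_v^2 + sigma_eps^2).
sv2 = Q[0, 0]
g_hat = 1 + Q[0, 1] / sv2
se2 = Q[1, 1] - (1 - g_hat) ** 2 * sv2
gamma_K = g_hat * sv2 / (g_hat ** 2 * sv2 + se2)
```

With the parameter values above, the implied weight on the (revision-adjusted) first release is close to one, because the filtering error variance is small relative to the signal.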

For ease of exposition we have assumed that the first revision reveals the true value, but this clearly needs to be generalized given that such data typically experience many rounds of revisions. This suggests that at time $T+1$ estimates are required of the true values of not just ${\tilde{y}}_{T}$ but of earlier periods, ${\tilde{y}}_{T-1}$, ${\tilde{y}}_{T-2}\mathrm{,}\dots $ as well. The following section describes a practical implementation of the Kishor and Koenig (2012) approach allowing for multiple revisions.

### The Kishor and Koenig Approach in Practice

For expositional purposes, we assumed in the previous two sections that the true value ${\tilde{y}}_{t}$ is observed two quarters after the observational quarter, that is, ${\tilde{y}}_{t}={y}_{t}^{t+2}$. In practice, data are subject to annual revisions that may shape the true values, such that in general ${\tilde{y}}_{t}={y}_{t}^{t+q}$. For US data, accommodating the three rounds of annual revisions would require setting $q=14$. The model described in this section is motivated by, and generalizes, the simpler cases described previously.

To implement the Kishor and Koenig (2012) approach in practice, we assume an AR$\left(p\right)$ for the “final data,” and equations for $q-1$ rounds of revisions. The key components are the $t+1$-vintage vector ${y}^{t+1}$ and the true-values vector ${\tilde{y}}_{t}$:

$$y^{t+1}=\left(y_{t}^{t+1},y_{t-1}^{t+1},\dots ,y_{t-q+1}^{t+1}\right)^{\prime},\qquad \tilde{y}_{t}=\left(\tilde{y}_{t},\tilde{y}_{t-1},\dots ,\tilde{y}_{t-q+1}\right)^{\prime}.$$

If we assume that ${y}_{t-q+1}^{t+1}={\tilde{y}}_{t-q+1}$, that is, that ${y}_{t-q+1}^{t+1}$ is an efficient estimate of the true value ${\tilde{y}}_{t-q+1}$, the model can be written succinctly in state-space form with measurement and state equations given by:

$$y^{t+1}=\left[\begin{array}{cc}I_{q}&I_{q}\end{array}\right]\left[\begin{array}{c}\tilde{y}_{t}\\ y^{t+1}-\tilde{y}_{t}\end{array}\right]\tag{8}$$

and

$$\left[\begin{array}{c}\tilde{y}_{t}\\ y^{t+1}-\tilde{y}_{t}\end{array}\right]=\left[\begin{array}{c}c_{1}\\ c_{2}\end{array}\right]+\left[\begin{array}{cc}F&0\\ 0&K\end{array}\right]\left[\begin{array}{c}\tilde{y}_{t-1}\\ y^{t}-\tilde{y}_{t-1}\end{array}\right]+\left[\begin{array}{c}v_{t}\\ \epsilon_{t}\end{array}\right].\tag{9}$$

In Equation 8, ${I}_{q}$ denotes an identity matrix of order $q$. The disturbance vectors are ${v}_{t}=({v}_{1t},0,...,0{)}^{\prime}$ and ${\epsilon}_{t}=({\epsilon}_{1t},...,{\epsilon}_{q-1,t},0{)}^{\prime}$. The errors in the data revision equations, ${\epsilon}_{t}$, are allowed to be correlated with the disturbances to the true values, ${v}_{1t}$, as well as being contemporaneously correlated. Defining ${u}_{t}=({v}_{t}^{\prime},{\epsilon}_{t}^{\prime}{)}^{\prime}$, we let $E({u}_{t}{u}_{t}^{\prime})=Q$.

The true values ${\tilde{y}}_{t}$ follow an autoregression of order $p$, defined by the first block of Equation 9 with:

$$F=\left[\begin{array}{cc}f&0_{1\times \left(q-p\right)}\\ I_{q-1}&0_{\left(q-1\right)\times 1}\end{array}\right],$$

where $f=\left({f}_{1},\dots ,{f}_{p}\right)$ is the $1\times p$ coefficient vector ($p<q$). The matrix $K$ describes the dynamics of the $q-1$ data revisions ${y}^{t+1}-{\tilde{y}}_{t}$:

The $q\times 1$ vectors ${c}_{1}$ and ${c}_{2}$ are ${c}_{1}=({c}_{11},0,...,0{)}^{\prime}$ and ${c}_{2}=({c}_{21},...,{c}_{2,q-1},0{)}^{\prime}$.

This provides a complete specification of the model and allows for multiple revisions. The unknown parameters ${c}_{1}$, ${c}_{2}$, $F$, $K$, and $Q$ are estimated by SURE. Because ${\tilde{y}}_{t}$ consists solely of final values, given the assumption that ${y}_{t-q+1}^{t+1+i}={\tilde{y}}_{t-q+1}$ for $i\ge 0$, the estimation sample has to end at $t=T-q+1$ when $T+1$ is the forecast-origin data vintage. Application of the Kalman filter then provides estimates of the post-revision values of current and past observations, ${\tilde{y}}_{T-q+2}$ up to ${\tilde{y}}_{T}$. Forecasts of post-revision future observations ${\tilde{y}}_{T+1}$, . . ., ${\tilde{y}}_{T+h}$ are obtained by iterating the state equation forward from the estimate ${\widehat{y}}_{T}$ delivered by the Kalman filter applied to all data through vintage $T+1$.
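The filtering step can be illustrated in the simplest scalar case (AR(1) truth observed with noise; all parameter values made up). The same predict/update recursion, applied to the stacked state vector above, delivers the estimates of the not-yet-revised observations:

```python
import numpy as np

def kalman_filter(obs, f, sv2, sh2):
    """Scalar Kalman filter for state ytilde_t = f*ytilde_{t-1} + v_t (var sv2),
    observed as y_t = ytilde_t + eta_t (var sh2). Returns filtered state means."""
    a, p = 0.0, sv2 / (1 - f ** 2)      # start from the stationary distribution
    means = []
    for y in obs:
        a, p = f * a, f ** 2 * p + sv2  # predict
        kgain = p / (p + sh2)           # update on the newly observed vintage
        a = a + kgain * (y - a)
        p = (1 - kgain) * p
        means.append(a)
    return np.array(means)

rng = np.random.default_rng(6)
f, sv2, sh2, T = 0.5, 1.0, 1.0, 1000
truth = np.zeros(T)
for t in range(1, T):
    truth[t] = f * truth[t - 1] + rng.normal(0, np.sqrt(sv2))
obs = truth + rng.normal(0, np.sqrt(sh2), T)

filtered = kalman_filter(obs, f, sv2, sh2)
rmse_raw = np.sqrt(np.mean((obs - truth) ** 2))        # error in raw estimates
rmse_filt = np.sqrt(np.mean((filtered - truth) ** 2))  # error after filtering
```

The filtered estimates track the truth more closely than the raw releases, which is exactly the service the filter performs for the unrevised tail of the sample.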

Alternative approaches to dealing with many rounds of data revisions are proposed by Cunningham, Eklund, Jeffery, Kapetanios, and Labhard (2009) and Jacobs and van Norden (2011). The model of Jacobs and van Norden (2011) is discussed in section “A News and Noise Model of Data Revisions.”

## Data Revisions and Forecasting With DSGE Models

Dynamic stochastic general equilibrium (DSGE) models are now routinely used for forecasting (see, e.g., Del Negro & Schorfheide, 2013). Just as with the autoregressive model for the true process in section “The Kishor and Koenig Approach in Practice” (or vector autoregressive models more generally), forecasts will need to be generated when only early estimates of the data for the more recent time periods are available.

Galvão (2017) showed how to accomplish this by explicitly modeling data revisions while estimating and forecasting DSGE models. The approach aims to predict revised values of macroeconomic variables and computes density forecasts while making an allowance for data uncertainty. As in Kishor and Koenig (2012), Galvão (2017) estimates the forecasting model—a DSGE model in her case—with final true or fully revised data. Galvão (2017) assumes that ${y}_{t}^{t+q}={\tilde{y}}_{t}$, that is, after $q-1$ revisions the true value is revealed. This means that at time $t=T+1$ we only observe true values up to $t=T-q+1$, and consequently a model of data revisions is used to provide estimates of the true values of the data still subject to revision.

Galvão (2017) proposed a one-step method to jointly estimate the parameters of the DSGE model with revised data and the parameters describing the data revisions process. Based on the model, one can compute backcasts and forecasts for the data subject to revision, including their underlying predictive density. A description of her approach follows.

Define ${y}_{t}$ as an $n\times 1$ vector of the endogenous DSGE variables written as deviations from their steady state values. In practice, ${y}_{t}$ may also include lagged variables. The elements of the vector ${y}_{t}$ need not be observable and the absence of superscripts is deliberate. Define $\theta$ as the vector of structural parameters. The solution of the DSGE model for a given vector of parameters $\theta$ is written as:

$$y_{t}=F\left(\theta \right)y_{t-1}+G\left(\theta \right)v_{t},\tag{10}$$

where ${v}_{t}$ is an $r\times 1$ vector of structural shocks, and thus the matrix $G(\theta )$ is $n\times r$. Note also that ${v}_{t}\sim iidN(0,Q)$ and that $Q$ is a diagonal matrix because the shocks are regarded as being structural. Equation 10 is the state equation of the state-space representation of the DSGE model.

Define ${\tilde{Y}}_{t}=({\tilde{y}}_{\mathrm{1,}t}\mathrm{,...,}{\tilde{y}}_{m\mathrm{,}t}{)}^{\text{'}}$ as the $m\times 1$ vector of true values of the endogenous variables in ${y}_{t}$ that are assumed to be observable; typically $m<n$ and $m\le r$. The Smets and Wouters (2007) medium-sized model has $m=r=7$. The measurement equation is:

that is, the set of observable variables, such as inflation and output growth, are measured without error, although Galvão (2017) also showed that the approach would work if there were measurement errors.

The DSGE model is estimated using the values observed after $q-1$ rounds of revisions, that is, assuming that ${\tilde{Y}}_{t-q+1}={Y}_{t-q+1}^{t+1}$. Then the measurement equations are:

and the last $q-1$ observations (that is, ${Y}_{t-q+2}^{t+1}\mathrm{,}\dots \mathrm{,}{Y}_{t}^{t+1}$) have to be excluded.

Define the demeaned observed revisions between first releases ${Y}_{t}^{t+1}$ and true values ${Y}_{t}^{t+q}$ as:
$re{v}_{t}^{1}={Y}_{t}^{t+q}-{Y}_{t}^{t+1}-{M}_{1}$
where ${M}_{1}$ is $m\times 1$ vector of mean revisions. This implies that we observe $T-q+1$ values of the full revision process to a first release at $T+1$ and that the full revision process for observation $t$ is only observed at $t+q$ because of the statistics office data release schedule. In general, for the ${k}^{th}$ release, the (demeaned) remaining revisions up to the true values are:
$re{v}_{t}^{k}={Y}_{t}^{t+q}-{Y}_{t}^{t+k}-{M}_{k}$
The release-based approach in Galvão (2017) augments the measurement Equation 12 to include a time series of first releases, second releases, and so on, as:

for $t=\mathrm{1,...,}T$ and:

where the $m\times 1$ vectors ${M}_{v}$ allow for nonzero mean data revisions.

The state equations are augmented by data revision processes as:
$re{v}_{t}^{k}={K}_{k}re{v}_{t-1}^{k}+{\xi}_{t}^{k}+{A}_{k}{v}_{t},\qquad {\xi}_{t}^{k}\sim iidN(0,{R}_{k})$
where the serial correlation allows for predictable revisions if the $m\times m$ matrix ${K}_{k}$ is nonzero. The innovation term ${\xi}_{t}^{k}$ allows for data revisions that are caused by a reduction of measurement errors and they are assumed to be uncorrelated across variables, so ${R}_{k}$ is diagonal. The last term, ${A}_{k}{v}_{t}$, implies that the data revisions may be caused by new information not available at the time of the current release but included in the revised data used to compute the complete effects of the structural shocks. Because the same vector of structural shocks ${v}_{t}$ may lead to revisions for each one of the variables in ${Y}_{t}^{t+k}$, then the processes $re{v}_{\mathrm{1,}t}^{k}\mathrm{,...,}re{v}_{m\mathrm{,}t}^{k}$ may be correlated.
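As a concrete illustration, a revision process of this form can be simulated directly. The sketch below (Python) uses made-up parameter values—$m=2$ variables, $q-1=2$ revision rounds, and illustrative $K_{k}$, $A_{k}$, $R_{k}$ matrices that are not taken from Galvão (2017)—to generate revisions that are serially correlated (hence partly predictable) and loaded on the structural shocks:

```python
import numpy as np

rng = np.random.default_rng(0)
m, q, T = 2, 3, 500              # m variables; truth revealed after q-1 = 2 revisions

# Illustrative (assumed) parameter matrices, one set per revision round k:
K = [np.diag([0.3, 0.1]), np.diag([0.1, 0.05])]   # K_k: nonzero => predictable revisions
A = [0.2 * np.eye(m), 0.1 * np.eye(m)]            # A_k: loadings on structural shocks
R = [np.diag([0.5, 0.5]), np.diag([0.25, 0.25])]  # R_k: diagonal => uncorrelated noise

v = rng.normal(size=(T, m))      # structural shocks v_t (here with Q = I)
rev = np.zeros((q - 1, T, m))    # rev_t^k for k = 1, ..., q-1
for k in range(q - 1):
    for t in range(1, T):
        xi = rng.multivariate_normal(np.zeros(m), R[k])   # xi_t^k ~ N(0, R_k)
        rev[k, t] = K[k] @ rev[k, t - 1] + xi + A[k] @ v[t]
```

Because the same shock vector `v[t]` enters every variable's revision through $A_{k}$, the simulated revisions are cross-correlated, mirroring the discussion above.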

Galvão (2017) proposed a Metropolis-in-Gibbs approach to jointly estimate the DSGE parameters $\left\{\theta \mathrm{,}Q\right\}$ and the parameters of the revision process:
$\left\{{K}_{k},{A}_{k},{R}_{k},{M}_{k}\right\}$ for $k=1,\dots ,q-1$.
She showed that forecasts of future values of consumption and investment growth using a Smets and Wouters (2007) DSGE model are improved by using the release-based instead of the conventional approach, which employs only the last vintage of data.

Vintage-Based VARs: Models in Terms of Observables

Whereas the Kishor and Koenig (2012) and Galvão (2017) models include unobserved components (namely, the true values of variables at the time when only earlier estimates are available), other modeling approaches consider the relationships between the observables directly. A natural way to model a range of vintages and not just a single release is to use a vector autoregression (VAR; see Sims, 1980). A number of VAR-type models based only on observables have been proposed for modeling revisions.

VARs were originally used to model the dynamic relationships among interconnected macro variables without the need for “incredible” identifying restrictions and hard-to-defend assumptions about exogeneity (see Sims, 1980). It was soon recognized that such models would have many parameters to estimate for reasonable numbers of series and lagged values, so Bayesian methods were sometimes adopted (see, e.g., Doan, Litterman, & Sims, 1984; Litterman, 1986; Bańbura, Giannone, & Reichlin, 2010, for a more recent contribution). In terms of modeling data subject to revision, the observable variables are the data defined by both the release data and the reference quarter, and the potential for having highly parameterized models arises.

## The Garratt et al. Model

The first VAR-type model we consider is that of Garratt et al. (2008). Following Patterson (1995), Garratt et al. (2008) worked in terms of the level (of the log) of a variable (e.g., output, denoted by $Y$). The variable is assumed to be integrated of order one ($I\left(1\right)$; see, e.g., Banerjee, Dolado, Galbraith, & Hendry, 1993) and it is assumed that different vintage estimates are cointegrated such that data revisions are integrated of order zero (written $I\left(0\right)$). So, for example, ${Y}_{t-1}^{t+1}$ and ${Y}_{t-1}^{t}$ are both $I(1)$—these are the second and first estimates of reference quarter $t-1$, respectively. But ${Y}_{t-1}^{t+1}-{Y}_{t-1}^{t}$, the revision between the first and second estimates of the period $t-1$ value, is $I\left(0\right)$. Garratt et al. model a vector of variables comprising three elements, ${Z}^{t+1}={\left({Y}_{t}^{t+1}-{Y}_{t-1}^{t}\mathrm{,}{Y}_{t-1}^{t+1}-{Y}_{t-1}^{t}\mathrm{,}{Y}_{t-2}^{t+1}-{Y}_{t-2}^{t}\right)}^{\text{'}}$. The first element of ${Z}^{t+1}$ is a difference across vintage and observation, and the subsequent terms are revisions to past data. The inclusion of two revisions reflects the view that a revision horizon of 2 is appropriate in the sense that revisions such as ${Y}_{t-2-j}^{t+1}-{Y}_{t-2-j}^{t}$ for $j>0$ are supposed to be largely unpredictable and hence are not included in the vector of variables to be modeled. This assumption also serves to limit the dimensionality of the system to be estimated.

Garratt et al. (2008) relate ${Z}^{t+1}$ to two lags of itself, where the lagging is applied to both the vintage and reference quarter scripts:
${Z}^{t+1}={\text{\Phi}}_{1}{Z}^{t}+{\text{\Phi}}_{2}{Z}^{t-1}+{\varepsilon}^{t+1}$ (15)
where ${Z}^{t+1-i}={\left({Y}_{t-i}^{t+1-i}-{Y}_{t-1-i}^{t-i}\mathrm{,}{Y}_{t-1-i}^{t+1-i}-{Y}_{t-1-i}^{t-i}\mathrm{,}{Y}_{t-2-i}^{t+1-i}-{Y}_{t-2-i}^{t-i}\right)}^{\text{'}}$, for $i=\mathrm{0,1,2}$. The ${\text{\Phi}}_{i}$ are $3$-by-$3$ matrices of coefficients with third columns consisting solely of zeros (see Garratt et al., 2008, for details), that is, their approach implies specific restrictions on a VAR.
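To make the timing conventions concrete, the sketch below constructs the ${Z}^{t+1}$ vectors from a hypothetical triangular array of log-level vintages. The array, its noise structure, and all numerical values are illustrative assumptions, not taken from Garratt et al. (2008):

```python
import numpy as np

# Hypothetical triangular array of log-level vintages: Y[v][t] is the vintage-v
# estimate of the period-t (log) level, available for t = 0, ..., v-1.
rng = np.random.default_rng(6)
T = 20
final = np.cumsum(rng.normal(0.5, 1.0, T + 2))            # "true" log levels
Y = {v: final[:v] + rng.normal(0, 0.1, v) for v in range(1, T + 2)}

def Z(v):
    """Garratt et al. vector for vintage v (reference quarter t = v - 1):
    (Y_t^{t+1} - Y_{t-1}^{t}, Y_{t-1}^{t+1} - Y_{t-1}^{t}, Y_{t-2}^{t+1} - Y_{t-2}^{t})'."""
    t = v - 1
    return np.array([Y[v][t] - Y[v - 1][t - 1],       # growth across vintage and period
                     Y[v][t - 1] - Y[v - 1][t - 1],   # revision to the t-1 estimate
                     Y[v][t - 2] - Y[v - 1][t - 2]])  # revision to the t-2 estimate

vectors = np.array([Z(v) for v in range(3, T + 2)])   # stacked Z^{t+1} series
```

Each row mixes one across-vintage growth term with two pure revision terms, which is exactly why level shifts between vintages (discussed next) contaminate the series.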

A disadvantage of the formulation in Equation 15 is that shifts in the levels of $Y$ due to base-year changes (or to other definitional changes) at times of benchmark revisions cannot be easily handled by the Garratt et al. model. These arise because the model is based on differences in $Y$ across time periods and vintages. Differencing-across-vintages means that level shifts between vintages affect the resulting series. The level-shift components of the benchmark revisions are removed from the real-time data set prior to the formulation and estimation of models such as Equation 15.

## The Vintage-Based VARs

The problems created by re-basings of the data can be circumvented by instead specifying the model in same-vintage-growth rates (see, e.g., Clements & Galvão, 2012, 2013a; Carriero, Clements, & Galvão, 2015). For example, let ${y}_{t}^{t+1}=400\left({Y}_{t}^{t+1}-{Y}_{t-1}^{t+1}\right)$ be the (approximate) quarterly percentage change (at an annual rate) for period $t$ computed using data vintage $t+1$. Because the growth rate is calculated between two data points from the same vintage, level shifts or base-year changes will have no effect (to the extent that the change is simply a rescaling of the data).

Suppose that in addition to modeling ${y}_{t}^{t+1}$, the first estimate of the growth rate for period $t$, we also wish to model the revisions for the next $q-1$ quarters. This can be accomplished by modeling the vintage $t+1$ values of observations $t-q+1$ through $t$ as a vintage VAR (V-VAR):
${y}^{t+1}=c+\sum _{i=1}^{p}{\text{\Gamma}}_{i}{y}^{t+1-i}+{\epsilon}^{t+1}$ (16)
where ${y}^{t+1}={\left({y}_{t}^{t+1}\mathrm{,}{y}_{t-1}^{t+1}\mathrm{,}\dots \mathrm{,}{y}_{t-q+1}^{t+1}\right)}^{\text{'}}$, ${y}^{t+1-i}={\left({y}_{t-i}^{t+1-i}\mathrm{,}{y}_{t-1-i}^{t+1-i}\mathrm{,}\dots \mathrm{,}{y}_{t-q+1-i}^{t+1-i}\right)}^{\text{'}}$, and $c$ is $q\times 1$, ${\text{\Gamma}}_{i}$ is $q\times q$, and ${\epsilon}^{t+1}$ is a $q\times 1$ vector of disturbances. The first element of ${y}^{t+1}$ is the new observation ${y}_{t}^{t+1}$, and subsequent estimates are the revised estimates of past observations, ${y}_{t-1}^{t+1}\mathrm{,}\dots \mathrm{,}{y}_{t-q+1}^{t+1}$.

When $q$ is relatively large, the autoregressive order $p$ may be set to a low value, such as $p=1$ (see, e.g., Clements & Galvão, 2013a). Nevertheless, if $q$ is large, say, $q=14$, in order to capture the three rounds of annual revisions to which US national accounts data are subject, there will be 14 free coefficients to estimate in each of the 14 equations when $p=1$ (plus an intercept in each).
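A minimal sketch of how the V-VAR data vectors are assembled and the model estimated by least squares follows. The real-time array and its revision noise are simulated assumptions, and $q=4$ is used rather than 14 to keep the display small:

```python
import numpy as np

# Hypothetical real-time array of growth rates: growth[v][t] is the estimate of
# period t's growth rate as published in vintage v (v = t + 1 is the first release).
rng = np.random.default_rng(1)
T, q = 40, 4
truth = rng.normal(2.0, 1.0, size=T + 1)
growth = {v: {t: truth[t] + rng.normal(0, 1.0 / (v - t))     # revision noise shrinks
              for t in range(max(0, v - 12), v)}             # as the estimate matures
          for v in range(1, T + 2)}

def vintage_vector(v, q):
    """y^{v} = (y_{v-1}^{v}, y_{v-2}^{v}, ..., y_{v-q}^{v})': the newest observation
    and its q-1 predecessors, all taken from the same vintage v."""
    return np.array([growth[v][v - 1 - j] for j in range(q)])

# Stack (y^{v}, y^{v-1}) pairs and estimate a V-VAR(1) by least squares.
Y = np.array([vintage_vector(v, q) for v in range(q + 2, T + 2)])
X = np.array([vintage_vector(v - 1, q) for v in range(q + 2, T + 2)])
X1 = np.column_stack([np.ones(len(X)), X])
Gamma_hat = np.linalg.lstsq(X1, Y, rcond=None)[0]    # (q+1) x q: intercepts + Gamma_1
```

Even with $p=1$ and $q=4$, each equation has five free coefficients; the parameter count grows with $q^{2}$, which is the motivation for the restrictions discussed next.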

The number of coefficients to be estimated can be reduced by restricting the model by supposing that, after a small number of revisions, further revisions are unpredictable. Suppose that after $n-1$ revisions, the next estimate ${y}_{t}^{t+n+1}$ is an efficient forecast in the sense that the revision from ${y}_{t}^{t+n}$ to ${y}_{t}^{t+n+1}$ is unpredictable, that is, $E\left[\left({y}_{t}^{t+n+1}-{y}_{t}^{t+n}\right)|{y}^{t+n}\right]=0$, whereas $E\left[\left({y}_{t}^{t+i+1}-{y}_{t}^{t+i}\right)|{y}^{t+i}\right]\ne 0$ for $i<n$. We can impose this restriction on the VAR, where it translates to $E\left({y}_{t-n}^{t+1}|{y}_{t-n}^{t}\right)={y}_{t-n}^{t}$. This is achieved by specifying ${\mathrm{\Gamma}}_{1}$ and ${\mathrm{\Gamma}}_{i}$ ($i=\mathrm{2,}\mathrm{\dots}\mathrm{,}p$) in Equation 16 as:

If $n$ is set equal to $2$, for example, then values after the first revision are assumed to be efficient forecasts (for the United States, this would correspond to the BEA estimate published two quarters after the reference quarter constituting an efficient forecast). An unrestricted intercept is included in each equation to accommodate nonzero mean revisions. We refer to this model as the news-restricted vintage-based VAR, or RV-VAR.
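One way to encode the news restriction in code is to fix the restricted entries of the ${\text{\Gamma}}_{i}$ matrices and leave the remaining entries free. The helper below is an illustrative sketch of that encoding (the NaN-marks-free-entry convention is ours, not from Clements & Galvão, 2013a):

```python
import numpy as np

def restricted_gammas(q, n, p):
    """Encode the RV-VAR news restriction: rows for estimates already of
    maturity n or more satisfy E(y_{t-j}^{t+1} | y^t) = y_{t-j}^{t}, so their
    further revisions are unpredictable news. Free (estimated) entries are
    marked np.nan; restricted entries are fixed numbers."""
    G1 = np.full((q, q), np.nan)
    G1[n:, :] = 0.0
    for j in range(n, q):
        # element j of y^{t+1} is y_{t-j}^{t+1}; element j-1 of y^{t} is y_{t-j}^{t}
        G1[j, j - 1] = 1.0
    Gs = [G1]
    for _ in range(1, p):
        Gi = np.full((q, q), np.nan)
        Gi[n:, :] = 0.0          # no other lags enter the restricted rows
        Gs.append(Gi)
    return Gs

G = restricted_gammas(q=5, n=2, p=2)
```

With $q=5$ and $n=2$, only the first two equations (the new observation and the first revision) keep free dynamics; the remaining rows reduce to random walks across vintages plus an unrestricted intercept.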

Clements and Galvão (2013a) compared the forecasting performance of the RV-VAR with that of the V-VAR. They also considered a periodic specification that captures the seasonal nature of some data revisions: the annual rounds of revisions that occur in July of each year (for US data).

Clements and Galvão (2013a) discussed the interpretation of the forecasts generated by the V-VAR and related models. Consider the forecast-origin-vintage $T+1$. At this time, the information set will include all the data vintages up to and including the time-$T+1$ vintage, that is, ${Y}^{t+1}$ for $t=\mathrm{1,2,}\dots T$, where ${Y}^{t+1}=\left\{\dots \mathrm{,}{y}_{t-1}^{t+1}\mathrm{,}{y}_{t}^{t+1}\right\}$. The $h$-step ahead forecast of the vector ${y}^{T+1+h}$ is defined as the conditional expectation, given the model and the information set:
${y}^{T+1+h|T+1}=E\left({y}^{T+1+h}|{Y}^{T+1}\right)$
The elements of vector ${y}^{T+1+h|T+1}$ are $({y}_{T+h}^{T+1+h|T+1}\mathrm{,...,}{y}_{T+h-q+1}^{T+1+h|T+1})$, and thus provide forecasts of the first estimate of ${y}_{T+h}\mathrm{,}$ of the second estimate of ${y}_{T+h-1}$, and so on down to a forecast of the ${q}^{\text{th}}$ estimate of $T+h-q+1$.

Suppose we require forecasts of the “true” (revised) values, that is, ${\tilde{y}}_{T-1}\mathrm{,}{\tilde{y}}_{T}\mathrm{,}{\tilde{y}}_{T+1}\mathrm{,}{\tilde{y}}_{T+2}$. As before, we assume that for a reasonably large $q$ (e.g., $q=14$, chosen to include the annual revisions), we have ${\tilde{y}}_{t}={y}_{t}^{t+q}$. That is, we can equate the true values with the values available after $q-1$ revisions.^{7} We need then to consider forecasts of a subset of the elements of ${y}^{T+1+h|T+1}$, namely, the last elements of each vector for $h=\mathrm{1,2,3,}\dots \mathrm{,}{h}^{\ast}$. For the forecast origin $T+1$, this gives the following set of forecasts: ${y}_{T+2-q}^{T+2|T+1}$, ${y}_{T+3-q}^{T+3|T+1}$, . . ., ${y}_{T+1+{h}^{\ast}-q}^{T+1+{h}^{\ast}|T+1}$, all of which are the $q$-th estimates. Some of these estimates will relate to the past and others to the present, relative to the vintage-origin $T+1$. When ${h}^{\ast}+(1-q)\le 0$, we have forecasts of past observations (or backcasts), ${\tilde{y}}_{T}\mathrm{,}{\tilde{y}}_{T-1}$, ${\tilde{y}}_{T-2}$, and so on, and for ${h}^{\ast}+(1-q)>0$ forecasts of fully revised future observations: ${\tilde{y}}_{T+1}\mathrm{,}$${\tilde{y}}_{T+2}$, etc.

Single-Equation Approaches: EOS and RTV

Single-equation approaches have also been considered, and empirically have been found to work relatively well. We use the statistical framework of Jacobs and van Norden (2011) to show that, in principle, the traditional approach is not the best way to estimate an autoregressive model for forecasting. The statistical framework of Jacobs and van Norden (2011) separately identifies news and noise data revisions, as defined in section “News and Noise Data Revisions.” Their model can be used to estimate the importance of the news and noise contributions to the data revisions of a particular series, and generally this would be facilitated by specifying a relatively small number of revisions. Otherwise it may be difficult to identify the separate news and noise components with any precision. In this section we use their model as a coherent statistical framework for deriving the properties of single-equation approaches, including ignoring data revisions when forecasting. The clear demarcation of revisions into news and noise allows us to determine the implications of each for the relative forecast performance of the single-equation approaches, at least in population (that is, abstracting from model selection and parameter estimation issues).

Section “A News and Noise Model of Data Revisions” describes the statistical framework. Sections “Estimating AR Forecasting Models Using EOS Data” and “Estimating AR Forecasting Models Using RTV Data” provide the properties of the traditional approach and an approach suggested by Koenig, Dolmas, and Piger (2003) and show that the latter is optimal, at least in population. Section “Density Forecasting” discusses the properties of interval and density forecasts derived from the two single-equation models.

## A News and Noise Model of Data Revisions

The Jacobs and van Norden (2011) statistical framework for modeling data revisions is based directly on the news/noise distinction. Each release is set equal to the true value plus an error, or errors, where the errors correspond to news or noise and are unobserved. So, for example, at period $t+s$, the SO releases an estimate of the value of $y$ for reference period $t$, which we denote ${y}_{t}^{t+s}$, written as:
${y}_{t}^{t+s}={\tilde{y}}_{t}+{v}_{t}^{t+s}+{\epsilon}_{t}^{t+s}$ (17)
where ${\tilde{y}}_{t}$ is the true value and ${v}_{t}^{t+s}$ and ${\epsilon}_{t}^{t+s}$ are the news and noise components. We allow for up to $q$ releases, with $s=\mathrm{1,}\dots \mathrm{,}q$. For any given $s$, one or other of the news and noise components may be absent.

Jacobs and van Norden (2011) stack the $q$ releases of ${y}_{t}$, namely, ${y}_{t}^{t+1}\mathrm{,}\dots \mathrm{,}{y}_{t}^{t+q}$, in the vector ${y}_{t}={\left({y}_{t}^{t+1}\mathrm{,}\dots \mathrm{,}{y}_{t}^{t+q}\right)}^{\text{'}}$, and similarly ${\epsilon}_{t}={\left({\epsilon}_{t}^{t+1}\mathrm{,}\dots \mathrm{,}{\epsilon}_{t}^{t+q}\right)}^{\text{'}}$ and ${v}_{t}={\left({v}_{t}^{t+1}\mathrm{,}\dots \mathrm{,}{v}_{t}^{t+q}\right)}^{\text{'}}$, so that:
${y}_{t}=i{\tilde{y}}_{t}+{v}_{t}+{\epsilon}_{t}$ (18)
where $i$ is a $q$ vector of ones. In order that the news revisions are not correlated with the earlier release, namely that $Cov\left({v}_{t}^{t+s}\mathrm{,}{y}_{t}^{t+s}\right)=0$, where ${v}_{t}^{t+s}={y}_{t}^{t+s}-{\tilde{y}}_{t}$, we need to assume a process for ${\tilde{y}}_{t}$ that includes the news components. For example, if ${\tilde{y}}_{t}$ is assumed to follow an AR$\left(p\right)$, say, with iid disturbances ${R}_{1}{\eta}_{1t}$, then we need to add in the sum of $q$ news components ${v}_{i\mathrm{,}t}$:
${\tilde{y}}_{t}={\rho}_{0}+\sum _{i=1}^{p}{\rho}_{i}{\tilde{y}}_{t-i}+{R}_{1}{\eta}_{1t}+\sum _{i=1}^{q}{v}_{i,t}$ (19)
The ${v}_{i\mathrm{,}t}$ are specified as ${v}_{i\mathrm{,}t}={\sigma}_{{v}_{i}}{\eta}_{2t\mathrm{,}i}$ (for $i=\mathrm{1,...,}q$), and both ${\eta}_{1t}$ and ${\eta}_{2t\mathrm{,}i}$ are $iid(\mathrm{0,1})$.

The news and noise components of each vintage in ${y}_{t}$ are:
${v}_{t}^{t+s}=-\sum _{i=s}^{q}{\sigma}_{{v}_{i}}{\eta}_{2t,i},\qquad {\epsilon}_{t}^{t+s}={\sigma}_{{\epsilon}_{s}}{\eta}_{3t,s}$ (20)
where ${\eta}_{3t\mathrm{,}i}$ is $iid(\mathrm{0,1})$. The shocks are also mutually independent, that is, if ${\eta}_{t}=\left[{\eta}_{1t}\mathrm{,}{\eta}_{2t}^{\text{'}}\mathrm{,}{\eta}_{3t}^{\text{'}}\right]$, then $E\left({\eta}_{t}\right)=0$, with $E\left({\eta}_{t}{\eta}_{t}^{\text{'}}\right)=I$.

To see how this setup delivers appropriately defined news and noise revisions, consider a few illustrative cases. The first estimate of ${y}_{t}$, ${y}_{t}^{t+1}$, is: ${y}_{t}^{t+1}={\tilde{y}}_{t}+{v}_{t}^{t+1}+{\epsilon}_{t}^{t+1}={\rho}_{0}+{\sum}_{i=1}^{p}{\rho}_{i}{\tilde{y}}_{t-i}+{R}_{1}{\eta}_{1t}+{\sigma}_{{\epsilon}_{1}}{\eta}_{3t,1}$. This does not include any news component. The second estimate is: ${y}_{t}^{t+2}={\tilde{y}}_{t}+{v}_{t}^{t+2}+{\epsilon}_{t}^{t+2}={\rho}_{0}+{\sum}_{i=1}^{p}{\rho}_{i}{\tilde{y}}_{t-i}+{R}_{1}{\eta}_{1t}+{\sigma}_{{\epsilon}_{2}}{\eta}_{3t,2}+{\sigma}_{{v}_{1}}{\eta}_{2t,1}$. Suppose there is no noise, so ${\eta}_{3t,1}={\eta}_{3t,2}=0$. Then clearly ${y}_{t}^{t+2}$ is a more accurate estimate of ${\tilde{y}}_{t}$ than ${y}_{t}^{t+1}$, as it includes the news ${\sigma}_{{v}_{1}}{\eta}_{2t,1}$. Further, the revision between ${y}_{t}^{t+2}$ and the true value ${\tilde{y}}_{t}$ is uncorrelated with ${y}_{t}^{t+2}$, so that ${y}_{t}^{t+2}$ is an efficient estimate:
$Cov\left({\tilde{y}}_{t}-{y}_{t}^{t+2},{y}_{t}^{t+2}\right)=Cov\left(\sum _{i=2}^{q}{v}_{i,t},{y}_{t}^{t+2}\right)=0.$
Suppose now that there is only noise, so ${v}_{t}^{t+2}=0$ but ${\epsilon}_{t}^{t+2}\ne 0$. It follows immediately that the revisions induced by the second estimate are predictable using ${y}_{t}^{t+2}$:
$Cov\left({\tilde{y}}_{t}-{y}_{t}^{t+2},{y}_{t}^{t+2}\right)=Cov\left(-{\epsilon}_{t}^{t+2},{y}_{t}^{t+2}\right)=-{\sigma}_{{\epsilon}_{2}}^{2}\ne 0.$
News revisions imply that $var({y}_{t}^{t+1})\mathrm{<}var({y}_{t}^{t+q})$, while noise revisions imply that $var({y}_{t}^{t+1})>var({y}_{t}^{t+q})$, assuming that later estimates are less “noisy” (${\sigma}_{{\epsilon}_{1}}\mathrm{>}{\sigma}_{{\epsilon}_{q}}$). If ${\sigma}_{{v}_{q}}=0$ and ${\sigma}_{{\epsilon}_{q}}=0$, the ${q}^{th}$ released value is the true value: ${y}_{t}^{t+q}={\tilde{y}}_{t}$. The assumption that ${\tilde{y}}_{t}$ is a stationary $I\left(0\right)$ process ensures that ${y}_{t}$ is a stationary process from Equation 18, as both the news and noise terms are stationary. This is a reasonable assumption when the model is applied to an $I\left(0\right)$ transformation of the data, such as growth rates.
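This variance ranking is easy to verify by simulation. In the sketch below (all parameter values illustrative), the early estimate under news is a smoothed version of the truth, while under noise it is the truth plus measurement error:

```python
import numpy as np

rng = np.random.default_rng(2)
T, rho = 200_000, 0.5

eta = rng.normal(0, 1.0, T)      # regular AR(1) innovation
v = rng.normal(0, 0.8, T)        # news component, incorporated only in later releases
e = rng.normal(0, 0.8, T)        # noise component in the early release

truth = np.zeros(T)
for t in range(1, T):
    truth[t] = rho * truth[t - 1] + eta[t] + v[t]   # the truth embodies the news

first_news = truth - v           # news: early estimate omits v_t, so it is smoother
first_noise = truth + e          # noise: early estimate adds measurement error
```

Computing the sample variances confirms var(first_news) < var(truth) < var(first_noise), the ordering stated above.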

This model implies that both noise and news revisions are zero mean, so that the unconditional mean of the underlying series $\left\{{\tilde{y}}_{t}\right\}$ and the observed data $\left\{{y}_{t}\right\}$ are equal at ${\rho}_{0}{\left(1-\rho \left(1\right)\right)}^{-1}$. Nonzero mean revisions can easily be accommodated. Assume that each news term is instead ${v}_{i\mathrm{,}t}={\mu}_{{v}_{i}}+{\sigma}_{{v}_{i}}{\eta}_{2t\mathrm{,}i}$ and the noise components are ${\epsilon}_{t}^{t+i}=-{\mu}_{{\epsilon}_{i}}+{\sigma}_{{\epsilon}_{i}}{\eta}_{3t\mathrm{,}i}$. The true process is now:
${\tilde{y}}_{t}={\rho}_{0}+\sum _{i=1}^{p}{\rho}_{i}{\tilde{y}}_{t-i}+{R}_{1}{\eta}_{1t}+\sum _{i=1}^{q}{\mu}_{{v}_{i}}+\sum _{i=1}^{q}{\sigma}_{{v}_{i}}{\eta}_{2t,i}$ (21)
since now $\sum_{i=1}^{q}{v}_{i,t}=\sum_{i=1}^{q}{\mu}_{{v}_{i}}+\sum_{i=1}^{q}{\sigma}_{{v}_{i}}{\eta}_{2t,i}$. The news and noise processes of each vintage are:
${v}_{t}^{t+s}=-\sum _{i=s}^{q}\left({\mu}_{{v}_{i}}+{\sigma}_{{v}_{i}}{\eta}_{2t,i}\right),\qquad {\epsilon}_{t}^{t+s}=-{\mu}_{{\epsilon}_{s}}+{\sigma}_{{\epsilon}_{s}}{\eta}_{3t,s}$ (22)
The statistical model can be cast in state-space form using Equation 18 as the measurement equation and combining Equations 21 and 22 to obtain the transition equation. The parameters can be estimated by maximum likelihood using the Kalman filter as described by Jacobs and van Norden (2011).

## Estimating AR Forecasting Models Using EOS Data

Assuming the variable we wish to forecast can be described by the model set out in the previous section, we begin with the standard or conventional approach, which effectively ignores the data revisions. We suppose the forecasting model is an autoregression. The conventional approach estimates this model on the vintage of data available at the forecast origin. From the discussion in section “Forecasting Methods and a Model of the Behavior of the Statistical Office,” such an approach is expected to be non-optimal, as it mixes apples and oranges. The framework outlined in the previous section allows us to determine why the conventional approach is not able to deliver optimal forecasts in real time.

Consider forecasting at time $T+1$. The $T+1$ vintage of data contains data up to $T$, $\left\{\dots \mathrm{,}{y}_{T-1}^{T+1}\mathrm{,}{y}_{T}^{T+1}\right\}$, and the model is estimated on this data, termed end of sample (EOS) by Koenig et al. (2003). For an AR$\left(p\right)$ we have:
${y}_{t}^{T+1}={\alpha}_{0}+\sum _{i=1}^{p}{\alpha}_{i}{y}_{t-i}^{T+1}+{u}_{t}$ (23)
where the unknown parameters are estimated on the observations $t=p+\mathrm{1,}\dots \mathrm{,}T$.

Writing the model in matrix notation:
${Y}^{T+1}={\alpha}_{0}i+{Y}_{-1}^{T+1}\alpha +u$
where ${Y}_{-1}^{T+1}=\left[{Y}_{-1}^{T+1}\mathrm{,}\dots \mathrm{,}{Y}_{-p}^{T+1}\right]$, $i$ is a $T-p$ vector of $1$‘s, and the vectors of observations ${Y}^{T+1}$ and ${Y}_{-i}^{T+1}$, $i=\mathrm{1,}\dots \mathrm{,}p$, are:
${Y}^{T+1}={\left({y}_{p+1}^{T+1},\dots ,{y}_{T}^{T+1}\right)}^{\text{'}},\qquad {Y}_{-i}^{T+1}={\left({y}_{p+1-i}^{T+1},\dots ,{y}_{T-i}^{T+1}\right)}^{\text{'}}$
for $i=\mathrm{1,}\dots \mathrm{,}p$.

Clements and Galvão (2013b) derive the population values of the least-squares estimator of the parameters in Equation 23, when the data are generated by Equations 18–20 as:

where ${\mathrm{\Sigma}}_{\underset{\_}{v}}$ and ${\mathrm{\Sigma}}_{\underset{\_}{\epsilon}}$ are second moment matrices of the news and noise components and ${\text{\Sigma}}_{\tilde{y}\underset{\_}{v}}$ is the second moment matrix between the news and the underlying process, ${\tilde{y}}_{t}$, and ${\mu}_{\tilde{y}}\equiv E\left({\tilde{y}}_{t}\right)$ (see Clements & Galvão, 2013b, for details).

Clements and Galvão (2013b) also showed that these parameter values are not optimal, that is, they are not the values that minimize the real-time out-of-sample expected squared forecast error when the forecast is conditioned upon the forecast-origin vintage values of the data (as is standard practice). That is, if the forecast is given by ${\varphi}_{0}+{\varphi}^{\text{'}}{y}_{T}^{T+1}$, where ${y}_{T}^{T+1}={\left({y}_{T}^{T+1}\mathrm{,}\dots {y}_{T-p+1}^{T+1}\right)}^{\text{'}}$, then setting ${\varphi}_{0}={\alpha}_{0}^{\ast}$, and $\varphi ={\alpha}^{\ast}$, does not minimize the expected squared forecast error whether the aim is to forecast ${y}_{T+1}^{T+2}$ (the first estimate) or some later vintage estimate. This is perhaps not surprising, given that the majority of the data underlying the estimation are mature or revised data, whereas ${y}_{T}^{T+1}$ contains the first estimate of ${y}_{T}$, the second estimate of ${y}_{T-1}$, and so on. In terms of the cider press analogy mentioned earlier, Equation 23 is estimated on (mostly) mature data—the “apples”—but the forecast is conditioned on ${y}_{T}^{T+1}$—the “oranges.”

We let ${\varphi}_{0}^{\ast}$ and ${\varphi}^{\ast}$ denote the values of the parameters in the forecast function ${\varphi}_{0}+{\varphi}^{\text{'}}{y}_{T}^{T+1}$ that minimize the expected squared error when forecasting ${y}_{T+1}^{T+2}$.

## Estimating AR Forecasting Models Using RTV Data

Building on Koenig et al. (2003), Clements and Galvão (2013b) showed that estimating the AR model using real-time-vintage (RTV) data delivers optimal estimators of the forecasting model, in population, when the forecast is to be conditioned on ${y}_{T}^{T+1}$. RTV estimates the model:
${y}_{t}^{t+1}={\beta}_{0}+\sum _{i=1}^{p}{\beta}_{i}{y}_{t-i}^{t}+{e}_{t}$ (26)
on observations $t=p+1,\dots ,T$, where, in contrast to EOS, the superscript denoting the vintage is not fixed at the latest-available vintage. In matrix notation:
${Y}^{m}={\beta}_{0}i+{Y}_{-1}^{m}\beta +e$
where ${Y}^{m}$ and ${Y}_{-1}^{m}=\left[{Y}_{-1}^{m}\mathrm{,}\dots \mathrm{,}{Y}_{-p}^{m}\right]$ are given by:
${Y}^{m}={\left({y}_{p+1}^{p+2},\dots ,{y}_{T}^{T+1}\right)}^{\text{'}},\qquad {Y}_{-i}^{m}={\left({y}_{p+1-i}^{p+1},\dots ,{y}_{T-i}^{T}\right)}^{\text{'}}$
${Y}^{m}$ and ${Y}_{-i}^{m}$ contain data of a constant maturity, as indicated by the superscript $m$. All the observations in ${Y}^{m}$ are first estimates, and those in ${Y}_{-i}^{m}$ are ${i}^{th}$ estimates. This means that, for example, the first row of $\left[{Y}^{m}\text{:}{Y}_{-1}^{m}\right]$ contains the first estimate of ${y}_{p+1}$, the first estimate of ${y}_{p}$, the second estimate of ${y}_{p-1}$, and so on. The last row has the same maturities: the first estimate of ${y}_{T}$, the first estimate of ${y}_{T-1}$, the second estimate of ${y}_{T-2}$, and so on.

Estimation of Equation 26 results in estimates of the $\beta $ parameters, which minimize the expected squared error, at least in terms of forecasting the first estimate, ${y}_{T+1}^{T+2}$, and simple adjustments can be applied for forecasting later estimates when revisions have nonzero means; that is, ${\beta}_{0}^{\ast}={\varphi}_{0}^{\ast}$, and ${\beta}^{\ast}={\varphi}^{\ast}$.

Clements and Galvão (2013b) provided the general formula for the parameter estimator. We illustrate with the special case of an AR$\left(1\right)$ for the true process and for general revisions that are a combination of news and noise when an AR$\left(1\right)$ forecasting model is used. The optimal value of the AR parameter is:
${\beta}_{1}^{\ast}={\rho}_{1}\frac{{\sigma}_{\tilde{y}}^{2}-{\sigma}_{v}^{2}}{{\sigma}_{\tilde{y}}^{2}-{\sigma}_{v}^{2}+{\sigma}_{{\epsilon}_{1}}^{2}}$ (27)
where ${\sigma}_{v}^{2}\equiv {{\displaystyle \sum}}_{i=1}^{q}{\sigma}_{{v}_{i}}^{2}$, and ${\sigma}_{\tilde{y}}^{2}=Var\left({\tilde{y}}_{t}\right)$ and ${\beta}_{1}^{\ast}$ is the population value of the parameter from RTV. When revisions are pure news (${\sigma}_{{\epsilon}_{1}}^{2}=0$):

${\beta}_{\mathrm{1,}news}^{\ast}={\rho}_{1}\mathrm{,}\phantom{\rule{0.2em}{0ex}}\phantom{\rule{0.2em}{0ex}}\phantom{\rule{0.2em}{0ex}}{\beta}_{\mathrm{0,}news}^{\ast}={\rho}_{0}\mathrm{,}$

so RTV returns the population parameters of the true process. However, ${\beta}_{\mathrm{1,}news}^{\ast}={\rho}_{1}$ only holds for $p=1$. Generally speaking, when there are news revisions the parameter vector of the underlying process ${\tilde{y}}_{t}$ (i.e., $\rho $) is not optimal from a forecasting perspective when the forecasts are conditioned on early estimates.

For pure noise (${\sigma}_{v}^{2}=0$):
${\beta}_{1,noise}^{\ast}={\rho}_{1}\frac{{\sigma}_{\tilde{y}}^{2}}{{\sigma}_{\tilde{y}}^{2}+{\sigma}_{{\epsilon}_{1}}^{2}},$
so that ${\beta}_{\mathrm{1,}noise}^{\ast}<{\rho}_{1}$ when ${\sigma}_{{\epsilon}_{1}}^{2}\ne 0$.
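This attenuation of the slope below ${\rho}_{1}$ under noise can be checked by simulation: regressing first releases on lagged first releases recovers, in large samples, the familiar errors-in-variables attenuation factor. All parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
T, rho1, sig_eta, sig_eps = 400_000, 0.6, 1.0, 0.7   # illustrative values

truth = np.zeros(T)
eta = rng.normal(0, sig_eta, T)
for t in range(1, T):
    truth[t] = rho1 * truth[t - 1] + eta[t]

first = truth + rng.normal(0, sig_eps, T)   # pure-noise first releases

# RTV-style regression: first release on the lagged first release.
y, x = first[1:], first[:-1]
beta1_hat = np.cov(y, x)[0, 1] / x.var()

# Population counterpart: attenuation of rho_1 by the noise in the regressor.
sig2_y = sig_eta**2 / (1 - rho1**2)
beta1_pop = rho1 * sig2_y / (sig2_y + sig_eps**2)
```

With these values the population slope is well below ${\rho}_{1}=0.6$, and the simulated estimate converges to it as $T$ grows.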

Consider now EOS. When revisions are news, we can show that the EOS estimator simplifies such that ${\alpha}_{1}^{\ast}={\rho}_{1}$, matching the optimal value, but this is true only for the special case of $p=1$.

Under noise:
${\alpha}_{1}^{\ast}={\rho}_{1}\frac{{\sigma}_{\tilde{y}}^{2}}{{\sigma}_{\tilde{y}}^{2}+{\sigma}_{{\epsilon}_{q}}^{2}}$ (28)
An immediate implication is that $\left|{\alpha}_{1}^{\ast}\right|\mathrm{>}\left|{\varphi}_{1}^{\ast}\right|$ if earlier revisions are larger than later revisions, as might be expected (compare Equations 28 and 27 when ${\sigma}_{{\epsilon}_{1}}^{2}\mathrm{>}{\sigma}_{{\epsilon}_{q}}^{2}$). Note that if ${\sigma}_{{\epsilon}_{q}}^{2}=0$, so that the truth is eventually revealed when there is noise, then ${\alpha}_{1}^{\ast}={\rho}_{1}$ for a large estimation sample. Even so, ${\rho}_{1}$ is not the parameter vector that minimizes the real-time squared forecast loss (${\varphi}_{1}^{\ast}\ne {\rho}_{1}$).

The difference between EOS and RTV can be visualized in terms of Table 1. Before estimation, the data are converted to quarterly growth rates within each column. EOS uses the column of data corresponding to the latest vintage available at the forecast origin. By way of contrast, RTV uses the elements in the diagonals of the data array for the left-hand-side and explanatory variables.

For example, for forecasting in 2015Q1 with an AR$\left(2\right)$, the last RTV observation is ${y}_{T}^{T+1}=100[\ln ({Y}_{\text{14Q4}}^{\text{15Q1}})-\ln ({Y}_{\text{14Q3}}^{\text{15Q1}})]$, while the lags are taken from the previous vintage as ${y}_{T-1}^{T}=100[\ln ({Y}_{\text{14Q3}}^{\text{14Q4}})-\ln ({Y}_{\text{14Q2}}^{\text{14Q4}})]$ and ${y}_{T-2}^{T}=100[\ln ({Y}_{\text{14Q2}}^{\text{14Q4}})-\ln ({Y}_{\text{14Q1}}^{\text{14Q4}})]$. A similar approach is followed for all $t=p+\mathrm{1,...,}T$. EOS would use data only from the 2015Q1 vintage for both the left-hand-side variable and the lags.
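The difference in how the two estimation datasets are assembled can be sketched as follows, using a simulated triangular array of log-level vintages (all numbers illustrative):

```python
import numpy as np

# Hypothetical vintage triangle: level[v] holds the (log-level) series available
# in vintage v, i.e., observations 0, ..., v-1.
rng = np.random.default_rng(4)
T = 30
final = np.cumsum(rng.normal(0.5, 1.0, T + 1))
level = {v: final[:v] + rng.normal(0, 0.2, v) for v in range(1, T + 2)}

def growth(v, t):
    """Quarterly growth of period t computed within vintage v (x100 log diff)."""
    return 100 * (level[v][t] - level[v][t - 1])

p = 2
# EOS: every observation, and every lag, from the latest vintage T+1.
y_eos = np.array([growth(T + 1, t) for t in range(p + 1, T)])
X_eos = np.array([[growth(T + 1, t - i) for i in range(1, p + 1)]
                  for t in range(p + 1, T)])

# RTV: the left-hand side is always a first release (vintage t+1); the lags are
# taken from the preceding vintage t, so each column has a constant maturity.
y_rtv = np.array([growth(t + 1, t) for t in range(p + 1, T)])
X_rtv = np.array([[growth(t, t - i) for i in range(1, p + 1)]
                  for t in range(p + 1, T)])
```

The two left-hand-side vectors differ observation by observation: EOS uses heavily revised values for most of the sample, whereas RTV uses only first releases.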

Finally, RTV is related to the vintage-based VAR model: Equation 26 corresponds to the first equation of the system of equations given by Equation 16. When the lag order of the VAR is one and the dimension of the VAR vector is $p$, there is an exact equivalence between the RTV model and the first equation of the VAR. Both directly model the first release, ${y}_{t}^{t+1}$; the VAR additionally models the later estimates beyond the first.

## Density Forecasting

Most of the literature has looked at first-moment prediction, but a few papers consider the impact on second-moment prediction and the calculation of prediction intervals and density forecasts. Clements (2017) and Clements and Galvão (2017a) compared RTV and EOS in the context of computing predictive interval and predictive densities for short-horizon forecasting. They were mainly interested in correctly measuring the forecasting uncertainty around one-step-ahead forecasts computed in real time, with the aim of predicting the first-release value.

A simple model of data revisions suffices to show that RTV delivers predictive densities that match the true underlying densities, while EOS delivers predictive densities that are too wide when data revisions are news but too narrow when they are noise.

The model for data revisions is a simplified version of the one in section “ A News and Noise Model of Data Revisions.” It assumes the true (i.e., fully revised) values ${\tilde{y}}_{t}$ follow an AR$\left(1\right)$:
${\tilde{y}}_{t}={\rho}_{1}{\tilde{y}}_{t-1}+{\eta}_{t}+{v}_{t}$
where ${\eta}_{t}$ is the underlying disturbance, ${v}_{t}$ is a news revision with variance ${\sigma}_{v}^{2}$, and the first estimate is given by:
${y}_{t}^{t+1}={\tilde{y}}_{t}-{v}_{t}+{\epsilon}_{t}$
with ${y}_{t}^{t+n}={\tilde{y}}_{t}$ for $n=\mathrm{2,3,}\dots $. Here ${\epsilon}_{t}$ is a noise revision with variance ${\sigma}_{\epsilon}^{2}$. Then the revision ${y}_{t}^{t+2}-{y}_{t}^{t+1}\equiv {\tilde{y}}_{t}-{y}_{t}^{t+1}={v}_{t}-{\epsilon}_{t}$ consists of a noise component (when ${\sigma}_{\epsilon}^{2}\ne 0$) and a news component (when ${\sigma}_{v}^{2}\ne 0$). ${\eta}_{t}$, ${v}_{t}$, and ${\epsilon}_{t}$ are assumed to be mutually uncorrelated, zero-mean random variables.

Clements (2017) supposed the ${\eta}_{t}$ are homoscedastic, $var({\eta}_{t})={\sigma}_{\eta}^{2}$. Clements and Galvão (2017a) also allowed for conditional heteroscedasticity—$var({\eta}_{t})$ follows an ARCH(1), or a GARCH(1,1), or a stochastic volatility AR(1) process. That macroeconomic volatility may be time-varying has been reported by Clark (2011), Clark and Ravazzolo (2014), and Diebold, Schorfheide, and Shin (2016), inter alia.

In the homoscedastic case, Clements (2017) showed that prediction intervals (calculated in the standard way; see Box & Jenkins, 1970) using EOS are too wide when revisions are news, because the predictive variance is overestimated, and that the reverse holds when revisions are noise. RTV delivers correctly sized intervals. Clements and Galvão (2017a) extended these results to the heteroscedastic case. When there is conditional heteroscedasticity, they showed that estimating the forecasting model by RTV with an appropriate model for the variance of the errors will result in well-calibrated one-step-ahead predictive densities.
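A small simulation conveys the intuition for the homoscedastic case: the EOS residual standard deviation (which governs the width of standard prediction intervals) overstates the realized real-time one-step error under news and understates it under noise. The data-generating process follows the simple model above with illustrative parameter values; the comparison of claimed versus realized error standard deviations is a sketch, not a replication of Clements (2017):

```python
import numpy as np

rng = np.random.default_rng(5)
T, rho = 100_000, 0.6

def interval_check(sig_v, sig_e):
    """Return (claimed, realized) one-step error s.d. for an EOS-style AR(1)."""
    eta = rng.normal(0, 1.0, T)
    v = rng.normal(0, sig_v, T)       # news revision
    eps = rng.normal(0, sig_e, T)     # noise revision
    truth = np.zeros(T)
    for t in range(1, T):
        truth[t] = rho * truth[t - 1] + eta[t] + v[t]
    first = truth - v + eps           # first releases

    # EOS in (near-)population: fit the AR(1) on mature, fully revised data...
    a1 = np.cov(truth[1:], truth[:-1])[0, 1] / truth[:-1].var()
    a0 = truth.mean() * (1 - a1)
    claimed = (truth[1:] - a0 - a1 * truth[:-1]).std()

    # ...but real-time forecasts condition on a first release and target the
    # next first release, so the realized error s.d. differs.
    realized = (first[1:] - a0 - a1 * first[:-1]).std()
    return claimed, realized

claimed_news, realized_news = interval_check(sig_v=0.8, sig_e=0.0)
claimed_noise, realized_noise = interval_check(sig_v=0.0, sig_e=0.8)
```

Under pure news the claimed standard deviation exceeds the realized one (intervals too wide); under pure noise the ranking reverses (intervals too narrow), in line with the results just described.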

Evidence on the Performance of Alternative Methods of Forecasting

The relative performance of the different models and methods surveyed in this article will likely depend on the properties of the series under consideration, and in particular on the nature of the data revisions to that series. It may also depend on whether the aim is to forecast an early release or a more mature vintage, such as the fully revised value. Finally, relative performance may depend on whether point forecasts, density forecasts, or prediction intervals are required. In this section we briefly summarize some of the findings in the literature to provide guidance on which method might be better in a particular instance.

Clements and Galvão (2013a), Carriero et al. (2015), and Galvão (2017) provided evaluations of different approaches when the goal is to forecast the *revised values*, denoted ${\tilde{y}}_{T+1}\mathrm{,}{\tilde{y}}_{T+2}\mathrm{,...,}{\tilde{y}}_{T+h}$. Clements and Galvão (2013a) compared the forecasting performance of the Kishor and Koenig (2012) approach and the Garratt et al. (2008) approach with the vintage-based VAR (V-VAR) for US GDP growth and inflation. The models are univariate in the sense of modeling a single variable, but allow up to 14 releases of the variable in question. The findings favor the V-VAR, which delivers more accurate point forecasts for both variables. Carriero et al. (2015) showed that forecasting accuracy can be improved by using their Bayesian approach. Their approach better controls the adverse effects of parameter uncertainty in such large VAR models. They were also able to allow for more than one variable, allowing for cross-equation dynamics between revisions to different macroeconomic variables (such as output growth and inflation in their application).

While these papers considered point forecasts of fully revised values, Galvão (2017) evaluated the density forecasts of the Smets and Wouters (2007) DSGE model when data revisions are modeled as in Kishor and Koenig (2012), and compared the findings to those obtained using the conventional approach. Galvão found gains in terms of logarithmic scores from the release-based approach for predicting the revised values of macroeconomic variables such as consumption and investment growth.

As well as forecasting the revised values of *future* outcomes, there are times when forecasts of the revised values of current and past observations are required. As an example, Clements and Galvão (2012) showed that improved real-time estimates of output and inflation gaps result from the use of VB-VAR model “backcasts.”

Clements and Galvão (2013b, 2017b) and Clements (2017) presented forecast comparisons in terms of predicting *initial releases*, that is, $y_{T+1}^{T+2}, y_{T+2}^{T+3}, \ldots, y_{T+h}^{T+h+1}$. Clements and Galvão (2013b) described some of the circumstances under which RTV might be expected to outperform EOS. In particular, their findings suggested that larger gains might occur when there are explanatory variables—i.e., autoregressive-distributed lag models—as opposed to purely autoregressive models. They also suggested that ignoring data revisions, by using EOS rather than RTV, attracts a greater accuracy penalty when the estimation sample is small, the process is more persistent, and revisions are news. Clements and Galvão (2013b) also found that more elaborate approaches, such as Kishor and Koenig (2012), do not outperform RTV for modeling US output growth and inflation. This parallels findings in the (nonreal-time) forecasting literature that more elaborate, complicated models do not necessarily outperform their simpler rivals.
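The difference between the EOS and RTV estimation samples can be made concrete. Suppose a real-time data set is arranged as a "vintage triangle": rows are observation dates, columns are publication vintages, and entries are NaN where an observation had not yet been published. A minimal sketch (the function name and data layout are illustrative, not from the papers cited):

```python
import numpy as np

def eos_and_rtv(triangle):
    """Split a real-time vintage matrix into the two estimation samples.

    triangle[t, v] is the value of observation t as published in
    vintage v, with NaN where the observation was not yet available.
    EOS takes the latest vintage (last column); RTV takes the first
    release of each observation (first non-NaN entry in its row)."""
    eos = triangle[:, -1]
    first_release_col = np.argmax(~np.isnan(triangle), axis=1)
    rtv = triangle[np.arange(triangle.shape[0]), first_release_col]
    return eos, rtv

# a 3-observation, 3-vintage triangle
triangle = np.array([[1.0,    1.2,    1.1],
                     [np.nan, 2.0,    2.3],
                     [np.nan, np.nan, 3.0]])
eos, rtv = eos_and_rtv(triangle)
# eos: [1.1, 2.3, 3.0]  (latest vintage, mixed maturities)
# rtv: [1.0, 2.0, 3.0]  (first releases, uniform maturity)
```

EOS estimates a model on the last column, mixing observations of very different maturities, whereas RTV estimates on the diagonal of first releases, so every observation has the same maturity as the initial-release forecast target.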

Clements (2017) suggested that larger relative gains might accrue to RTV when the goal is to provide well-calibrated prediction intervals. Clements and Galvão (2017a) extended the results to variables subject to time-varying conditional variance and found that RTV provides more accurate density forecasts for nominal national account variables.

Conclusion

We have not attempted an exhaustive survey of the literature on forecasting in real time, and our main focus has been on US data. Nevertheless, we have sought to give the reader an introduction to the types of approaches that have been proposed, and have worked through some of them in sufficient detail to lay bare their workings. We have summarized some of the evidence on the relative forecasting performance of the different approaches, but, as in the macro-forecasting literature more generally, rankings across methods are unlikely to remain the same across different variables, sample periods, and so on. In any specific instance it would seem sensible to consider a number of approaches.

The more complex models, which attempt to jointly model the true (or revised) values along with the revisions process, are not necessarily superior to simpler approaches in terms of forecasting. This is not surprising, given that simple models are often found to fare well in the forecasting literature. There are a number of possible explanations. The potential of the more complex models may be negated by the need to specify and estimate them on relatively small historical samples. As an example, Clements and Galvão (2013b) provided a Monte Carlo study in which data were simulated from a vintage-based VAR model, or from the Kishor and Koenig (2012) model, and the forecasting performance of RTV and EOS was compared with that of an estimated version of the model that generated the data. The authors found that the estimation sample has to be relatively large before forecasts from the model assumed to be the data-generating process beat those of the simpler models. Another possible explanation stresses nonconstancy, or structural breaks, in the process being forecast, and suggests that simpler models may exhibit greater adaptivity or be more robust (see, e.g., Castle, Clements, & Hendry, 2016). The model of Kishor and Koenig (2012), for example, assumes the processes generating the data revisions are constant over time. If they are not, the potential advantages of such models may dissipate, especially once one takes into account the difficulty of estimating these channels precisely unless the sample is large.

## References

Aruoba, S. B. (2008). Data revisions are not well-behaved. *Journal of Money, Credit and Banking*, *40*, 319–340.

Bańbura, M., Giannone, D., & Reichlin, L. (2010). Large Bayesian vector autoregressions. *Journal of Applied Econometrics*, *25*(1), 71–92.

Banerjee, A., Dolado, J. J., Galbraith, J. W., & Hendry, D. F. (1993). *Co-integration, error correction and the econometric analysis of non-stationary data*. Oxford, U.K.: Oxford University Press.

Box, G. E. P., & Jenkins, G. M. (1970). *Time series analysis, forecasting and control*. San Francisco, CA: Holden-Day.

Carriero, A., Clements, M. P., & Galvão, A. B. (2015). Forecasting with Bayesian multivariate vintage-based VARs. *International Journal of Forecasting*, *31*(3), 757–768.

Castle, J. L., Clements, M. P., & Hendry, D. F. (2016). An overview of forecasting facing breaks. *Journal of Business Cycle Research*, *12*(1), 3–23.

Clark, T. E. (2011). Real-time density forecasts from Bayesian vector autoregressions with stochastic volatility. *Journal of Business and Economic Statistics*, *29*, 327–341.

Clark, T. E., & Ravazzolo, F. (2014). Macroeconomic forecasting performance under alternative specifications of time-varying volatility. *Journal of Applied Econometrics*, *30*(4), 551–575.

Clements, M. P. (2017). Assessing macro uncertainty in real-time when data are subject to revision. *Journal of Business & Economic Statistics*, *35*(3), 420–433.

Clements, M. P., & Galvão, A. B. (2012). Improving real-time estimates of output gaps and inflation trends with multiple-vintage VAR models. *Journal of Business & Economic Statistics*, *30*(4), 554–562.

Clements, M. P., & Galvão, A. B. (2013a). Forecasting with vector autoregressive models of data vintages: US output growth and inflation. *International Journal of Forecasting*, *29*(4), 698–714.

Clements, M. P., & Galvão, A. B. (2013b). Real-time forecasting of inflation and output growth with autoregressive models in the presence of data revisions. *Journal of Applied Econometrics*, *28*(3), 458–477.

Clements, M. P., & Galvão, A. B. (2017a). *Data revisions and real-time probabilistic forecasting of macroeconomic variables* (Discussion paper ICM-2017-01). Reading, U.K.: Henley Business School, Reading University.

Clements, M. P., & Galvão, A. B. (2017b). Predicting early data revisions to US GDP and the effects of releases on equity markets. *Journal of Business and Economic Statistics*, *35*(3), 389–406.

Corradi, V., Fernandez, A., & Swanson, N. R. (2009). Information in the revision process of real-time datasets. *Journal of Business and Economic Statistics*, *27*, 455–467.

Croushore, D. (2006). Forecasting with real-time macroeconomic data. In G. Elliott, C. Granger, & A. Timmermann (Eds.), *Handbook of economic forecasting* (Vol. 1, pp. 961–982). Amsterdam, The Netherlands: North-Holland.

Croushore, D. (2011a). Forecasting with real-time data vintages. In M. P. Clements & D. F. Hendry (Eds.), *The Oxford handbook of economic forecasting* (pp. 247–267). New York, NY: Oxford University Press.

Croushore, D. (2011b). Frontiers of real-time data analysis. *Journal of Economic Literature*, *49*, 72–100.

Croushore, D., & Stark, T. (2001). A real-time data set for macroeconomists. *Journal of Econometrics*, *105*(1), 111–130.

Cunningham, A., Eklund, J., Jeffery, C., Kapetanios, G., & Labhard, V. (2009). A state space approach to extracting the signal from uncertain data. *Journal of Business & Economic Statistics*, *30*, 173–180.

Del Negro, M., & Schorfheide, F. (2013). DSGE model-based forecasting. In G. Elliott & A. Timmermann (Eds.), *Handbook of economic forecasting* (Vol. 2, pp. 57–140). Amsterdam, The Netherlands: North-Holland.

Diebold, F. X., Schorfheide, F., & Shin, M. (2016). *Real-time forecast evaluation of DSGE models with stochastic volatility*. Mimeo, University of Pennsylvania.

Doan, T., Litterman, R., & Sims, C. A. (1984). Forecasting and conditional projection using realistic prior distributions. *Econometric Reviews*, *3*, 1–100.

Faust, J., Rogers, J. H., & Wright, J. H. (2005). News and noise in G-7 GDP announcements. *Journal of Money, Credit and Banking*, *37*(3), 403–417.

Fixler, D. J., & Grimm, B. T. (2005). Reliability of the NIPA estimates of U.S. economic activity. *Survey of Current Business*, *85*, 9–19.

Fixler, D. J., & Grimm, B. T. (2008). The reliability of the GDP and GDI estimates. *Survey of Current Business*, *88*, 16–32.

Galvão, A. B. (2017). Data revisions and DSGE models. *Journal of Econometrics*, *196*(1), 215–232.

Garratt, A., Lee, K., Mise, E., & Shields, K. (2008). Real time representations of the output gap. *Review of Economics and Statistics*, *90*, 792–804.

Garratt, A., Lee, K., Mise, E., & Shields, K. (2009). Real time representations of the UK output gap in the presence of model uncertainty. *International Journal of Forecasting*, *25*, 81–102.

Howrey, E. P. (1978). The use of preliminary data in economic forecasting. *Review of Economics and Statistics*, *60*, 193–201.

Jacobs, J. P. A. M., & van Norden, S. (2011). Modeling data revisions: Measurement error and dynamics of “true” values. *Journal of Econometrics*, *161*, 101–109.

Kishor, N. K., & Koenig, E. F. (2012). VAR estimation and forecasting when data are subject to revision. *Journal of Business and Economic Statistics*, *30*(2), 181–190.

Koenig, E. F., Dolmas, S., & Piger, J. (2003). The use and abuse of real-time data in economic forecasting. *Review of Economics and Statistics*, *85*(3), 618–628.

Landefeld, J. S., Seskin, E. P., & Fraumeni, B. M. (2008). Taking the pulse of the economy. *Journal of Economic Perspectives*, *22*, 193–216.

Litterman, R. (1986). Forecasting with Bayesian vector autoregressions: Five years of experience. *Journal of Business and Economic Statistics*, *4*, 25–38.

Mankiw, N. G., & Shapiro, M. D. (1986). News or noise: An analysis of GNP revisions. *Survey of Current Business* (May 1986), US Department of Commerce, Bureau of Economic Analysis, 20–25.

Patterson, K. D. (1995). An integrated model of the data measurement and data generation processes with an application to consumers’ expenditure. *Economic Journal*, *105*, 54–76.

Sargent, T. J. (1989). Two models of measurements and the investment accelerator. *Journal of Political Economy*, *97*, 251–287.

Sims, C. A. (1980). Macroeconomics and reality. *Econometrica*, *48*, 1–48.

Smets, F., & Wouters, R. (2007). Shocks and frictions in US business cycles: A Bayesian DSGE approach. *American Economic Review*, *97*(3), 586–606.

Swanson, N. R., & van Dijk, D. (2006). Are statistical reporting agencies getting it right? Data rationality and business cycle asymmetry. *Journal of Business and Economic Statistics*, *24*, 240–242.

Zwijnenburg, J. (2015). *Revisions of quarterly GDP in selected OECD countries* (OECD Statistics Briefing No. 22, pp. 1–12). Paris, France: OECD.

## Notes:

(1.) We focus on real GDP, given its importance and its widespread use in the literature on real-time forecasting. We also primarily consider the United States, although similar considerations apply for other countries.

(2.) The GNP/GDP data of the BEA are subject to three annual revisions in July of each year; see, e.g., Fixler and Grimm (2005, 2008) and Landefeld et al. (2008).

(3.) In the case of the United States, the Bureau of Economic Analysis provides descriptions of the methodologies employed at bea.gov.

(4.) Taken from the Federal Reserve Bank of Philadelphia. See Croushore and Stark (2001).

(5.) For example, the data vintages from 1965Q3 up to 2010Q1 contain eight benchmark revisions. Because a benchmark revision takes the place of the annual revision in the year it occurs, there are 36 annual Q3 revisions rather than the 44 that would otherwise have occurred, for a total of 44 combined benchmark and annual revisions (8 benchmark plus 36 annual).

(6.) Throughout, $T+1$ will be used to denote the forecast-origin vintage. Hence the latest-vintage values of data the forecaster will have access to are $\{\ldots, y_{T-1}^{T+1}, y_{T}^{T+1}\}$.

(7.) This will not be literally true. More precisely, we suppose that any differences between the two are unpredictable.