PRINTED FROM the Oxford Research Encyclopedia, Economics and Finance (© Oxford University Press USA, 2019. All Rights Reserved. Personal use only; commercial use is strictly prohibited (for details see Privacy Policy and Legal Notice).

date: 22 September 2019

Mixed Frequency Models

Summary and Keywords

The majority of econometric models ignore the fact that many economic time series are sampled at different frequencies. A burgeoning literature pertains to econometric methods explicitly designed to handle data sampled at different frequencies. Broadly speaking these methods fall into two categories: (a) parameter driven, typically involving a state space representation, and (b) data driven, usually based on a mixed-data sampling (MIDAS)-type regression setting or related methods. The realm of applications of the class of mixed frequency models includes nowcasting—which is defined as the prediction of the present—as well as forecasting—typically the very near future—taking advantage of mixed frequency data structures. For multiple horizon forecasting, the topic of MIDAS regressions also relates to research regarding direct versus iterated forecasting.

Keywords: state space models, MIDAS regressions, nowcasting, forecasting, direct and iterated forecasting


Most econometric models ignore the fact that many economic time series are sampled at different frequencies. Indeed, the most convenient approach is to aggregate high-frequency data to obtain a balanced data set at the same low frequency.

This practice has at least two undesirable consequences: (a) potential distortions of dynamic relationships among economic variables and (b) forgoing the possibility to take advantage of the real-time flow of economic data releases.

For example, if a researcher has quarterly data on Gross Domestic Product (GDP) growth with monthly data on Industrial Production (IP), and wants to study the relationship between GDP and IP, common practice is to aggregate the IP data to the quarterly frequency, by either taking an average of the monthly data or using the last month of each quarter. While simple, in general temporal aggregation entails a loss of information. This implies that key econometric features, such as Granger causality or exogeneity, can be spuriously modified when working with aggregated data (see, e.g., Granger, 1980; Marcellino, 1999; Ghysels, Hill, & Motegi, 2015, 2016 for details).

Decision makers, and policymakers in particular, also need to assess in real time the current state of the economy and its expected developments. GDP is released quarterly (and with a substantial temporal delay), while a range of leading and coincident indicators is available in a timely manner at a monthly or even higher frequency. Hence, we may want to construct a forecast of the current quarter GDP growth based on the available higher frequency information. Mixed frequency data in a forecasting setting invariably relate to the notion of nowcasting.

A burgeoning literature pertains to econometric methods explicitly designed to handle data sampled at different frequencies. The realm of applications of the class of mixed frequency models includes nowcasting—which is defined as the prediction of the present—as well as forecasting—typically the very near future—taking advantage of mixed frequency data structures. For multiple horizon forecasting, the topic of mixed-data sampling (MIDAS) regressions also relates to research regarding direct versus iterated forecasting.

Continuing with GDP forecasting as an example, nowcasting in this context means that in a particular calendar month, GDP for the current quarter is not observed. It can even be the case that GDP is available only with considerable publication delay beyond the end of the quarter. The question then arises whether we can make a prediction about the current quarter GDP using monthly, weekly, or even daily economic time series.

Broadly speaking these methods fall into two categories: (a) settings involving a state space representation and (b) mixed-data sampling (MIDAS) type regressions or related methods. State space models involve latent processes, and therefore rely on filtering to extract hidden states that are used in order to predict future outcomes. State space models are, using the terminology of Cox (1981), parameter-driven models. The MIDAS regression models are, using again the same terminology, observation-driven models, as they are formulated exclusively in terms of observable data.

Parameter-Driven Models

State space models involving mixed frequency data have a long history. Early contributions include Harvey and Pierse (1984), Bernanke, Gertler, and Watson (1997), Zadrozny (1990), Mariano and Murasawa (2003), and Mittnik and Zadrozny (2004). More recent contributions include Nunes (2005), Giannone, Reichlin, and Small (2008), Aruoba, Diebold, and Scotti (2009), Ghysels and Wright (2009), Marcellino and Schumacher (2010), Foroni and Marcellino (2014), and Schorfheide and Song (2015); Eraker, Chiu, Foerster, Kim, and Seoane (2015), among others, have further explored state space models in a mixed frequency setting.

Two illustrative examples of state space models in a mixed frequency data setting are considered. First, Zadrozny (1990) looks at a state space representation of ARMA models, and studies a bivariate monthly model of unemployment and GDP growth:


where $A(L) = A_0 - \sum_{k=1}^{p} A_k L^k$ and $B(L) = \sum_{k=0}^{q} B_k L^k$, $e_t \sim \text{i.i.d. } N(0,\Sigma_e)$, $\Sigma_e = E\,e_t e_t'$. The matrices $A_0, \dots, B_q$ depend on a parameter vector $\Theta$. Moreover, $B_0 \Sigma_e B_0'$ is positive definite, $A_0$ is $I_n$, $B_0$ is lower triangular, and $\Sigma_e$ is $I_n$. Partition of stocks and flows results in:


with $n_1 + n_2 = n$. Let $w_t$ be a vector of “potential” observations on $u_t$,


Abstracting from observation errors, we have:


where the $C_k$ are $n_2 \times n_2$ diagonal indicator matrices with zeros and ones. Note that $C_{k,ii} = 0$ can produce publication-delay effects—treating stocks as “flows.” Next, define the state vector $\xi_t = [\xi_{1t}', \dots, \xi_{rt}']'$, where $r = \max(p, q+1, \nu)$ and $\xi_{kt}$ is $n^* \times 1$, with $n^* = n_1 + 2n_2$. Moreover, let $A_k^*$ be $n^* \times n^*$ and $B_k^*$ be $n^* \times n$:


where $A_k^{*ij}$ is quadrant $(i,j)$ of $A_k$, and $B_k^{*i}$ is block-row $i$ of $B_k$ conformable with the partition $\xi_t = [d_{1t}', d_{2t}']'$. Then the state equation can be written as:


with $A_k^* = 0$ for $k > \max(p,\nu)$ and $B_k^* = 0$ for $k > q$. The observation equation is constructed in two steps. Step 1: Observation or measurement equation errors $\zeta_t$ ($m \times 1$) are partitioned conformably with $w_t$, yielding $\zeta_t = [\zeta_{1t}', \zeta_{2t}']'$. In the absence of cross-sectional aggregation, error-corrupted stocks, and flows, then


with $\zeta_t \sim \text{i.i.d. } N(0,\Sigma_\zeta)$, $\Sigma_\zeta$ positive definite, $E(\zeta_\tau e_t') = 0$, and $E(\zeta_t \xi_1') = 0$ for $\tau, t \geq 1$. Step 2: $y_t$ is the $m(t) \times 1$ vector of values of $w_t$ for period $t$ that are actually observed, with $m(t) \leq m$.


and $D_t = \Lambda_t \Delta$, $v_t = \Lambda_t \zeta_t$. Zadrozny (1990) studies changes in employment, the high-frequency (monthly) series, treated as a stock variable, whereas the second, low-frequency series, GDP, is treated as a flow. More recent examples of mixed frequency VAR models include Kuzin, Marcellino, and Schumacher (2011), Schorfheide and Song (2015), and Eraker, Chiu, Foerster, Kim, and Seoane (2015), among others.

A second example was introduced by Mariano and Murasawa (2003). It contains a monthly AR factor, a static factor representation for monthly indicators, and an unobserved monthly GDP growth rate. They consider four monthly series: employment (EMP), industrial production (IP), personal income less transfers (INC), and manufacturing and trade sales (SLS). In addition, the model contains a very important time-aggregation constraint, which relates the observed quarter-on-quarter GDP growth to the unobserved month-on-month GDP rate of change. The model is therefore as follows:


where $f_t$ is a scalar latent factor with an AR($p$) structure:


where $L_m$ is the monthly lag operator, and the idiosyncratic shocks follow AR($q$) processes:




The process $y_{1t}^*$ is (hidden) monthly GDP, whereas the process $y_{2t}$ is a monthly observable process. Uppercase variables denote level series, and lowercase ones log growth rates. Superscript stars refer to latent processes. Then:


Moreover, $Y_{1t}$ is of dimension $N_1$ (typically equal to one) and $Y_{2t}$ is of dimension $N_2$, with $N_1 + N_2 = N$. The state space representation is then as follows:


Assuming $p, q \leq 4$, and noting that we need four lags of $f_t$ because $y_{1t}$ is a linear function of $y_{1t}^*$ through $y_{1t-4}^*$, yields:


We can then write the measurement equation as:




The Aruoba, Diebold, and Scotti (2009) model, yielding the so-called ADS index, expands on the aforementioned model proposed by Mariano and Murasawa (2003). They work with a dynamic factor model, treating business conditions as an unobserved variable related to observed indicators. Aruoba, Diebold, and Scotti (ADS) explicitly incorporate business conditions indicators measured at different frequencies. Important business conditions indicators do in fact arrive at a variety of frequencies, including quarterly (e.g., GDP), monthly (e.g., industrial production), weekly (e.g., employment), and continuously (e.g., asset prices). In particular, they explicitly incorporate indicators measured at high frequencies, given that their goal is to track the high-frequency evolution of real activity. Finally, ADS extract and forecast latent business conditions using linear yet statistically optimal procedures that do not involve approximations.

Observation-Driven Models

So-called bridge equations are linear regressions that link (“bridge”) high-frequency variables, such as industrial production or retail sales, to low-frequency ones, for example quarterly real GDP growth, providing estimates of current and short-term developments in advance of the official release. The bridge-model technique allows early estimates of the low-frequency variables to be computed from high-frequency indicators. Bridge equations are not standard macroeconometric models, because the inclusion of specific indicators is based not so much on causal relations as on the statistical fact that they contain timely updated information.

In principle, bridge equation models require that the whole set of regressors be known over the projection period, but this is rarely the case. Taking GDP forecasting as an example, because the monthly indicators are usually only partially available over the projection period, the predictions of quarterly GDP growth are obtained in two steps. First, monthly indicators are predicted over the remainder of the quarter, usually on the basis of univariate time series models (in some cases VARs have been implemented to obtain better forecasts of the monthly indicators), and then aggregated to obtain their quarterly counterparts. Second, the aggregated values are used as regressors in the bridge equation, which yields forecasts of GDP growth.

It will be convenient to focus on a mixture of two frequencies, respectively high and low. In terms of notation, $t = 1, \dots, T$ indexes the low-frequency time unit, and $m$ is the number of times the higher sampling frequency occurs within the same basic time unit (assumed fixed for simplicity). For example, for quarterly GDP growth and monthly indicators as explanatory variables, $m = 3$. The low-frequency variable will be denoted by $y_t^L$, whereas a generic high-frequency series will be denoted by $x_{t-j/m}^H$, where $t - j/m$ is the $j$th (past) high-frequency period, with $j = 0, \dots, m-1$. For a quarter/month mixture one has $x_t^H$, $x_{t-1/3}^H$, $x_{t-2/3}^H$ as the last, second-to-last, and first months of quarter $t$. Obviously, through some aggregation scheme, such as flow or stock sampling, we can always construct a low-frequency series $x_t^L$. We will simply assume that $x_t^L = \sum_{i=0}^{m-1} a_i x_{t-i/m}^H$.1
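To fix ideas, the aggregation scheme $x_t^L = \sum_{i=0}^{m-1} a_i x_{t-i/m}^H$ can be sketched in a few lines of Python; the series and the flow-aggregation weights below are illustrative assumptions, not data from the text:

```python
import numpy as np

# Hypothetical monthly series x^H covering T = 4 quarters (m = 3).
m = 3
x_H = np.arange(1.0, 13.0)           # 12 monthly observations

# Flow aggregation: a_i = 1/m averages the months within each quarter;
# stock sampling would instead set a_0 = 1, a_1 = a_2 = 0.
a = np.full(m, 1.0 / m)

# x_t^L = sum_{i=0}^{m-1} a_i x_{t-i/m}^H: a weighted sum of the m
# high-frequency observations belonging to quarter t (most recent first).
x_L = np.array([a @ x_H[q * m:(q + 1) * m][::-1]
                for q in range(len(x_H) // m)])
# flow-aggregated quarterly series: [2., 5., 8., 11.]
```

Replacing `a` with stock-sampling weights reproduces the end-of-quarter convention mentioned earlier.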

Let us start with a static regression model:


where $u_t^L$ is an error term assumed to be independently and identically distributed (i.i.d.), and the parameters can be estimated via ordinary least squares (OLS), yielding $\hat{a}_T$ and $\hat{b}_T$. Suppose now we want to predict the first out-of-sample (low-frequency) observation, namely:


Unfortunately, we do not have $x_{T+1}^L$, but recall that we do have high-frequency observations $x_{T+1-i/m}^H$ available. For example, if we have values of $x^H$ for the first two months of quarter $T+1$ ($x_{T+1-2/m}^H$ and $x_{T+1-1/m}^H$), then only $x_{T+1}^H$ is missing to complete the quarterly value $x_{T+1}^L$. Using an empirically suitable univariate time series model applied to the high-frequency observations, we can obtain:


where $\hat{\phi}(\cdot)$ is a polynomial lag operator for a one-high-frequency-period-ahead prediction, with parameter estimates obtained over a sample of size $T_H = T \times m + 2$ (as we also have two months of quarter $T+1$), and $L^{1/m} x_{T+1-i/m}^H = x_{T+1-(i+1)/m}^H$.

We can index $\hat{\phi}_h(\cdot)$ by $h$, the forecast horizon in high frequency, to emphasize that it depends on the forecast horizon (the index was dropped when $h = 1$ in the previous formula). Then, in general, for consecutive high-frequency observations we can replace the unknown regressor $x_{T+1}^L$ with partial realizations of the high-frequency observations, complemented with high-frequency-based predictions of the missing ones, namely:


for $i = 1, \dots, m-1$. These are a collection of so-called nowcasts (or flash estimates, in statistical terminology), using bridge equations.2
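The two-step bridge procedure can be illustrated with a minimal simulation, under assumed AR(1) monthly dynamics and a linear bridge equation (all numbers are illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
m, T = 3, 80                          # months per quarter, in-sample quarters

# Simulated monthly indicator (AR(1)); two months of quarter T+1 observed.
x_H = np.zeros(T * m + 2)
for t in range(1, len(x_H)):
    x_H[t] = 0.7 * x_H[t - 1] + rng.normal()
x_L = x_H[:T * m].reshape(T, m).mean(axis=1)       # quarterly aggregate
y_L = 0.5 + 2.0 * x_L + 0.1 * rng.normal(size=T)   # low-frequency target

# Step 1: fit an AR(1) to the monthly series by OLS and predict the
# missing third month of quarter T+1.
phi = np.linalg.lstsq(x_H[:-1, None], x_H[1:], rcond=None)[0][0]
x_L_next = (x_H[T * m] + x_H[T * m + 1] + phi * x_H[-1]) / m

# Step 2: bridge regression y_t^L = a + b x_t^L + u_t^L by OLS, then
# evaluate it at the completed quarterly regressor to get the nowcast.
X = np.column_stack([np.ones(T), x_L])
a_hat, b_hat = np.linalg.lstsq(X, y_L, rcond=None)[0]
nowcast = a_hat + b_hat * x_L_next
```

As the quarter progresses and the third month is observed, the AR-based prediction is simply replaced by the realized value.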

The discussion so far brings us to mixed-data sampling (MIDAS) regressions. Unlike bridge equations, MIDAS regressions do not require a two-step procedure that consists of estimating a low-frequency and high-frequency data model separately. Schumacher (2016) provides a detailed discussion of the connection between bridge and MIDAS regressions.

MIDAS regressions are essentially tightly parameterized, reduced form regressions that involve processes sampled at different frequencies. The response to the higher-frequency explanatory variable is modeled using highly parsimonious distributed lag polynomials, to prevent the proliferation of parameters that might otherwise result, as well as the issues related to lag-order selection.

The basic single high-frequency regressor MIDAS model for $h$-step-ahead (low-frequency) forecasting, with high-frequency data available up to $x_t^H$, is given by:


where $C(L^{1/m}; \theta) = \sum_{j=0}^{j^{\max}-1} c(j; \theta) L^{j/m}$ and $C(1; \theta) = \sum_{j=0}^{j^{\max}-1} c(j; \theta) = 1$.

The parsimonious parameterization of the lag coefficients $c(k; \theta)$ is one of the key MIDAS features. Various parsimonious polynomial specifications have been considered, including (a) beta polynomials, (b) Almon lag polynomial specifications, and (c) step functions, among others. Ghysels, Sinko, and Valkanov (2006) provide a detailed discussion.3 One of the most used parameterizations is known as the “exponential Almon lag,” because it is closely related to the smooth polynomial Almon lag functions used to reduce multicollinearity in the distributed lag literature. It is often expressed as


This function is known to be quite flexible and can take various shapes with only a few parameters, including decreasing, increasing, or hump-shaped patterns. Ghysels, Santa-Clara, and Valkanov (2006) use the functional form with two parameters, which allows great flexibility and determines how many lags are included in the regression.
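The weighting scheme is easy to compute; a sketch using the common two-parameter form $c(k;\theta) \propto \exp(\theta_1 k + \theta_2 k^2)$, normalized to sum to one:

```python
import numpy as np

def exp_almon_weights(theta1, theta2, K):
    """Exponential Almon lag weights, normalized to sum to one:
    c(k; theta) = exp(theta1*k + theta2*k^2) / sum_j exp(theta1*j + theta2*j^2)."""
    k = np.arange(K)
    raw = np.exp(theta1 * k + theta2 * k ** 2)
    return raw / raw.sum()

# theta2 < 0 produces declining weights; other parameter values can
# generate hump-shaped patterns. Values here are illustrative.
w = exp_almon_weights(0.05, -0.1, 12)
```

Because the weights sum to one by construction, the slope coefficient of the MIDAS regression retains a clear scale interpretation.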

Another possible parameterization, also with only two parameters, is the so-called Beta Lag, because it is based on the Beta function:


and $\Gamma(a) = \int_0^{\infty} e^{-x} x^{a-1}\,dx$. One attractive special case of the MIDAS beta polynomial involves only one parameter, namely setting $\theta_1 = 1$ and estimating the single parameter $\theta_2$ with the restriction that it be larger than one, which yields single-parameter downward-sloping weights more flexible than exponential or geometric decay patterns.
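The beta lag weights can be sketched similarly; mapping the lag grid into the open interval (0, 1) as below is one common implementation choice, not the only one:

```python
import numpy as np

def beta_weights(theta1, theta2, K, eps=1e-6):
    """Beta lag weights: a Beta(theta1, theta2) density evaluated on a
    lag grid in (0, 1) and normalized to sum to one. The eps offset keeps
    the density finite at the endpoints."""
    x = np.linspace(eps, 1 - eps, K)
    raw = x ** (theta1 - 1) * (1 - x) ** (theta2 - 1)
    return raw / raw.sum()

# The one-parameter case from the text: theta1 = 1, theta2 > 1 gives
# downward-sloping weights (parameter values illustrative).
w = beta_weights(1.0, 5.0, 12)
```

Since the weights are normalized, the Gamma-function constants of the Beta density cancel and need not be computed explicitly.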

These are among the most popular parameterizations besides U-MIDAS and MIDAS with step functions. The parameterizations described are all quite flexible. For different values of the parameters, they can take various shapes: weights attached to the different lags can decline slowly or fast, or even have a hump shape. Therefore, estimating the parameters from the data automatically determines the shape of the weights and, accordingly, the number of lags to be included in the regression.

Suppose now we want to predict the first out-of-sample (low-frequency) observation, namely considering equation (3.3) with $h = 1$:


where the MIDAS regression model can be estimated using nonlinear least squares (NLS); see Ghysels, Santa-Clara, and Valkanov (2004) and Andreou, Ghysels, and Kourtellos (2010) for more details. Nowcasting, or MIDAS with leads as coined by Andreou, Ghysels, and Kourtellos (2013), involving equation (3.6) can also be carried out. For example, with $i/m$ additional observations the horizon $h$ shrinks to $h - i/m$, and (3.6) becomes:


where we note that all the parameters are horizon specific. Therefore, the MIDAS regression needs to be re-estimated specifically for each forecast horizon. In other words, for a given choice of h, we will obtain different estimates of the model parameters, because we are projecting on a different information set (as usual in direct forecasting). Therefore MIDAS regressions always yield direct forecasts.
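Estimation can be illustrated with a profiled least-squares sketch on simulated data: grid over the exponential Almon parameters and run OLS for the intercept and slope given the implied weights. This is a stand-in for full NLS over all parameters, which is what dedicated software (e.g., Ghysels's Matlab toolbox or the midasr R package) performs; all names and numbers here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
m, T, K = 3, 120, 12                 # quarterly/monthly mix, 12 monthly lags
x_H = rng.normal(size=T * m)

def exp_almon(th1, th2, K):
    k = np.arange(K)
    w = np.exp(th1 * k + th2 * k ** 2)
    return w / w.sum()

# High-frequency lag blocks aligned with the end of each quarter t.
Xlags = np.array([x_H[(t + 1) * m - 1 - np.arange(K)]
                  for t in range(K // m, T)])
y = (1.0 + 2.0 * Xlags @ exp_almon(0.1, -0.2, K)
     + 0.05 * rng.normal(size=len(Xlags)))

# Profiled least squares: grid over (theta1, theta2), OLS for (a, b)
# given the implied weights, keep the parameter combination with the
# smallest sum of squared residuals.
best = (np.inf, None)
for th1 in np.linspace(-0.5, 0.5, 21):
    for th2 in np.linspace(-0.5, -0.01, 21):
        Z = np.column_stack([np.ones(len(y)), Xlags @ exp_almon(th1, th2, K)])
        coef, ssr = np.linalg.lstsq(Z, y, rcond=None)[:2]
        if ssr.size and ssr[0] < best[0]:
            best = (ssr[0], (coef[0], coef[1], th1, th2))
a_hat, b_hat, th1_hat, th2_hat = best[1]
```

Because the polynomial enters nonlinearly only through two parameters, the grid-plus-OLS profile is cheap and makes the horizon-specific nature of the estimates explicit: changing $h$ simply shifts the alignment between `Xlags` and `y`.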

Because autoregressive models often provide competitive forecasts compared to those obtained with static models that include explanatory variables, the introduction of an autoregressive term in the MIDAS model is a desirable extension.

Andreou, Ghysels, and Kourtellos (2013) introduce the class of autoregressive distributed lag MIDAS regressions, or ADL-MIDAS regressions, extending the structure of autoregressive distributed lag (ARDL) models to a mixed frequency setting. Assuming an autoregressive augmentation of order one, the model can be written as:


Hence, an ADL-MIDAS regression is a direct forecasting tool projecting a low-frequency series at some horizon $h$, namely $y_{t+h}^L$, onto $y_t^L$ (or more lags if we consider higher-order AR augmentations) and high-frequency data $x_t^H$. Nowcasting, or MIDAS with leads, can again be carried out by shifting the high-frequency data forward in $1/m$ increments. The parameters are horizon specific, and the forecast is a direct (instead of iterated) one.

Foroni, Marcellino, and Schumacher (2015) study the performance of a variant of MIDAS that does not resort to functional distributed lag polynomials. In the paper, the authors discuss how unrestricted MIDAS (U-MIDAS) regressions can be derived in a general linear dynamic framework, and under which conditions the parameters of the underlying high-frequency model can be identified; see also Koenig, Dolmas, and Piger (2003).

Suppose $m$ is small, say equal to three, as in quarterly/monthly data mixtures. Instead of estimating $b_h C(L^{1/m}; \theta_h)$ in equation (3.7), let us estimate the individual lag coefficients separately—hence the term unrestricted—yielding the following MIDAS regression:


which implies that in addition to the parameters $a_h$ and $\lambda_h$ we estimate $1 + m\tilde{K}$ additional parameters. With $m = 3$ and $\tilde{K}$ small, say up to four (annual lags), yet large enough to make the error term $\varepsilon_{t+h}^L$ uncorrelated, all the parameters in the U-MIDAS model can be estimated by simple OLS. From a practical point of view, the lag order $v$ could differ across variables, and $v_i$ and $c$ could be selected by an information criterion such as the Akaike information criterion (AIC), the Schwarz information criterion (SIC), or the Hannan-Quinn criterion.
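A minimal U-MIDAS sketch on simulated data, stacking the individual monthly lags as separate regressors and recovering their coefficients by OLS (lag coefficients and sample sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
m, K, T = 3, 6, 60                  # quarterly/monthly mix, six monthly lags
x_H = rng.normal(size=T * m)

# Stack the K most recent monthly observations available at the end of
# quarter t as separate regressors: the "unrestricted" part of U-MIDAS.
rows = []
for t in range(K // m, T):
    end = (t + 1) * m               # one past the last month of quarter t
    rows.append(x_H[end - K:end][::-1])   # most recent lag first
X_hf = np.array(rows)

# Simulate y_{t+1}^L from known unrestricted lag coefficients, then
# recover them by simple OLS, as the U-MIDAS approach prescribes.
c_true = np.array([0.5, 0.4, 0.3, 0.2, 0.1, 0.05])
y = X_hf[:-1] @ c_true + 0.01 * rng.normal(size=len(X_hf) - 1)
X = np.column_stack([np.ones(len(X_hf) - 1), X_hf[:-1]])
coef = np.linalg.lstsq(X, y, rcond=None)[0]   # intercept + mK~ lag coefficients
```

With $m = 3$ and $\tilde{K} = 2$ there are only six lag coefficients, so plain OLS is unproblematic; for large $m$ the same design matrix would become unmanageably wide, which is the parameter-proliferation problem discussed next.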

The U-MIDAS regression has all parameters unconstrained and therefore runs against the idea that parameter proliferation with high-frequency data has to be avoided. That is why U-MIDAS works only for small values of $m$. Is there an intermediate solution where we keep the appeal of simple OLS estimation, avoiding the nonlinear estimation setting of typical MIDAS regressions, and still keep the number of parameters small? The solution is called MIDAS with step functions, introduced by Ghysels, Sinko, and Valkanov (2006) and Forsberg and Ghysels (2006). A MIDAS regression with $S$ steps and $K$ lags can be written as:


where $a_0 = 0 < a_1 < \dots < a_{S-1} = K$. Hence, we only estimate $S$ parameters for the high-frequency data projection, with $S \leq K$. The indicator function $I_{k \in (a_{s-1}, a_s]}$ applies parameter $c_s$ to the segment of high-frequency data lags past $a_{s-1}$ and up to (and including) $a_s$. The appeal is obvious, as we approximate the smooth polynomial lags via discrete step functions. Model selection amounts to selecting the appropriate set of steps, which can again be guided by information criteria. A popular application of MIDAS with step functions is the so-called HAR model of Corsi (2009), involving daily, weekly, and monthly realized volatility.
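A sketch of a step-function MIDAS regression in the spirit of the HAR model, with segments ending at lags 1, 5, and 22; the HAR model itself uses overlapping daily/weekly/monthly averages, which is a linear reparameterization of these disjoint segments (data simulated for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
rv = np.abs(rng.normal(size=500))    # stand-in for daily realized volatility

# Step-function MIDAS: one coefficient per lag segment. Segments
# (0,1], (1,5], (5,22] mimic the daily/weekly/monthly HAR structure.
steps = [1, 5, 22]
X_rows, y = [], []
for t in range(max(steps) - 1, len(rv) - 1):
    lags = rv[t - np.arange(max(steps))]   # rv_t, rv_{t-1}, ..., rv_{t-21}
    X_rows.append([lags[lo:hi].sum()
                   for lo, hi in zip([0] + steps[:-1], steps)])
    y.append(rv[t + 1])

# Only 1 + S = 4 parameters are estimated, however many lags K we use.
X = np.column_stack([np.ones(len(y)), np.array(X_rows)])
beta = np.linalg.lstsq(X, np.array(y), rcond=None)[0]
```

Changing the step set amounts to model selection, so an information criterion can be computed for each candidate segmentation using the same OLS machinery.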

To allow for the inclusion of several additional explanatory variables into the MIDAS framework, it is necessary to extend the basic model (3.9) as follows:


where $I$ is the number of high-frequency series. Within this more general framework, it is also possible to include explanatory variables at different frequencies, given that each indicator is modeled with its own polynomial parameterization. As an example, quarterly GDP growth can be explained not only by monthly indicators but also by weekly financial variables, with the explanatory variables therefore sampled at two different frequencies. Generically, this includes regression models with different high frequencies, say $m_1, \dots, m_p$:


Obviously, this specification may be extended to allow for the presence of an autoregressive structure.

In practice, adding explanatory variables substantially complicates estimation. An alternative, easier procedure, is to work with single indicator MIDAS models and then pool the resulting forecasts. This approach works well, for example, in an empirical application on nowcasting U.S. GDP growth; see Kuzin, Marcellino, and Schumacher (2013). Alternatively, Andreou, Ghysels, and Kourtellos (2013) use time varying forecast combination rules to handle large data of daily financial market time series.

Breitung and Roling (2015) introduce a nonparametric MIDAS regression, which they use to forecast inflation with daily data. The model can be written as:


where, instead of imposing a polynomial specification, Breitung and Roling propose a nonparametric approach that does not impose a particular functional form but merely assumes that the coefficient $c_{hj}$ is a smooth function of $j$, in the sense that the absolute values of its second differences are small.

Various MIDAS regression models involving asymmetries or other nonlinearities have been proposed in the context of volatility forecasting and are covered in Ghysels and Marcellino (2018). Also originating in the volatility literature, but of general interest, is the semi-parametric MIDAS regression model of Chen and Ghysels (2011):


where $g(\cdot)$ is a function estimated via kernel-based nonparametric methods. Hence, the time series dependence is a standard MIDAS polynomial, and therefore parametric, in combination with the estimation of a generic function. The asymptotic distribution of the estimation procedure has a parametric and a nonparametric part. The latter is kernel-based, involves solving a so-called inverse problem, and is inspired by Linton and Mammen (2005). The mixed data sampling scheme in semi-parametric MIDAS regressions adds an extra term to the asymptotic variance compared to the result obtained by Linton and Mammen.

Galvão (2013) proposes a new regression model that combines a smooth transition regression with a mixed data sampling approach, which she calls STMIDAS. In particular, let us write equation (3.3) as follows:


where $x(\theta)_t^H = C(L^{1/m}; \theta_h) x_t^H = \sum_{k=0}^{K} c(k; \theta) L^{k/m} x_t^H$. Then the smooth transition MIDAS regression can be written as:




The transition function is a logistic function that depends on the weighted sum of the explanatory variable in the current quarter. The time-varying structure allows for changes in the predictive power of the indicators. When forecasting output growth with financial variables in real time, statistically significant improvements over a linear regression are more likely to arise from forecasting with STMIDAS than with MIDAS regressions, because changes in the predictive power of asset returns on economic activity may be related to business cycle regimes.

Guérin and Marcellino (2013) incorporate regime changes in the parameters of the MIDAS models, whereas Ghysels, Plazzi, and Valkanov (2016) propose conditional MIDAS quantile regression models.

So far, we have seen models that handle mixed-frequency data in a univariate approach. Now we focus on multivariate methods, and in particular VAR models. Contributions to this literature include Anderson et al. (2015) and Ghysels (2016). In either case, both classical and Bayesian estimation have been considered. Ghysels (2016) introduces a different mixed frequency VAR representation, in which the mixed frequency VAR process is constructed from stacked skip-sampled processes. We will call this approach MIDAS-VAR to distinguish it from the parameter-driven approach. An example of an order-one stacked VAR involving two series $x_t^H$ and $y_t^L$ with $m = 3$ would be (ignoring intercept terms):


Note that a bivariate system turns into a four-dimensional VAR due to the stacking. Moreover, the approach does not involve latent shocks/states or latent factors. This means there is no need for (Kalman) filtering. Technically speaking, the approach adapts techniques typically used to study seasonal time series with periodic structures (see, e.g., Gladyshev, 1961). The innovation vector is obviously also of dimension $4 \times 1$. This means that each entry of the VAR has its own shock. Note that there are no latent high-frequency shocks to the low-frequency series. One implication is that we can apply standard VAR techniques such as impulse response functions and variance decompositions.
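The stacking and equation-by-equation OLS estimation of such a MIDAS-VAR can be sketched as follows (simulated series, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)
m, T = 3, 200
x_H = rng.normal(size=T * m)                      # monthly series
y_L = x_H.reshape(T, m).mean(axis=1) + 0.1 * rng.normal(size=T)

# Stack each quarter into z_t = (x_{t-2/3}^H, x_{t-1/3}^H, x_t^H, y_t^L)',
# turning the bivariate mixed-frequency system into a 4-dimensional VAR.
Z = np.column_stack([x_H.reshape(T, m), y_L])

# Order-one stacked VAR: z_t = A z_{t-1} + u_t, estimated equation by
# equation with plain OLS; no Kalman filtering is required.
A = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)[0].T
```

The last row of `A` is exactly the U-MIDAS regression of $y_t^L$ on the stacked lags, while the first three rows show how the low-frequency series feeds back into the high-frequency one.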

When we examine the last equation in the system (3.10) we recognize a U-MIDAS regression model:


In contrast, the first equation (as well as the second and third) measures the impact of the low-frequency series on the high-frequency one, namely:


What about nowcasting? For this we need a structural VAR extension. Building on standard VAR analysis we can pre-multiply equation (3.10) with a lower triangular matrix and obtain the following system:


Reading from this system for the second equation, we have:


and for the last equation:


The latter is a MIDAS with leads equation we encountered in the discussions on nowcasting, whereas the former is a regression predicting high-frequency data in real time.

It should be clear by now that the MIDAS-VAR approach discussed here relies only on standard VAR techniques. Note that this also applies to estimation, which can be either classical or Bayesian, where the latter is appealing when the dimension of the VAR is large, which is easily the case with MIDAS-VAR systems due to the stacking of low- and high-frequency series.

McCracken, Owyang, and Sekhposyan (2015) assess point and density forecasts from an observation-driven MIDAS-VAR to obtain intra-quarter forecasts of output growth as new information becomes available, imposing restrictions on the MIDAS-VAR to account explicitly for the temporal ordering of the data releases. They show that the MIDAS-VAR, estimated via Bayesian shrinkage, performs well for GDP nowcasting: it outperforms the considered time series models and does comparably well relative to the Survey of Professional Forecasters. Bacchiocchi, Bastianin, Missale, and Rossi (2016) use a MIDAS-VAR to study how monetary policy, economic uncertainty, and economic policy uncertainty affect the dynamics of gross capital inflows in the United States. While no relation is found when using standard quarterly data, exploiting the within-quarter variability of the series shows that the effect of a monetary policy shock is greater the longer the time lag between the month of the shock and the end of the quarter. In general, the effects of economic and policy uncertainty on U.S. capital inflows are negative and significant. Finally, the effect of the three shocks differs when distinguishing between financial and bank capital inflows on one side, and FDI on the other.

State Space Models and MIDAS Regressions

It is worth mentioning that Bai, Ghysels, and Wright (2013) establish a connection between state space models and mixed-data sampling (MIDAS) regressions. Or, put differently, between parameter-driven and observation-driven models. In particular, they consider a stylized state space model:


In addition, at the end of each quarter $t$, a second observation is available:


Using both low- and high-frequency past data yields a prediction formula:


where the $\kappa_i$ are steady-state Kalman gains. This equation relates to multiplicative MIDAS regression models. Note that the MIDAS regression setting is a reduced form that does not identify the individual parameters of the full state space model equations. Nor does it recover the latent process; that is, it does not provide a straightforward solution to filtering. When combined with more sophisticated methods, like simulation-based indirect inference, it is possible to take advantage of MIDAS-type models to do filtering. See, for example, Gagliardini, Ghysels, and Rubin (2017).

Further Reading

The material presented here is in part based on recent surveys by Andreou, Ghysels, and Kourtellos (2011) and Foroni and Marcellino (2013). Bańbura, Giannone, and Reichlin (2011) and Bańbura, Giannone, Modugno, and Reichlin (2013) provide overviews with a stronger focus on Kalman filter-based factor modeling techniques.


Anderson, B. D., Deistler, M., Felsenstein, E., Funovits, B., Koelbl, L., & Zamani, M. (2015). Multivariate AR systems and mixed frequency data: G-identifiability and estimation. Econometric Theory, 31, 1–34.

Andreou, E., Ghysels, E., & Kourtellos, A. (2010). Regression models with mixed sampling frequencies. Journal of Econometrics, 158, 246–261.

Andreou, E., Ghysels, E., & Kourtellos, A. (2011). Forecasting with mixed-frequency data. In M. P. Clements & D. Hendry (Eds.), Oxford handbook of economic forecasting (pp. 225–245). New York: Oxford University Press.

Andreou, E., Ghysels, E., & Kourtellos, A. (2013). Should macroeconomic forecasters use daily financial data and how? Journal of Business and Economic Statistics, 31, 240–251.

Aruoba, S. B., Diebold, F. X., & Scotti, C. (2009). Real-time measurement of business conditions. Journal of Business and Economic Statistics, 27, 417–427.

Bacchiocchi, E., Bastianin, A., Missale, A., & Rossi, E. (2016). Monetary policy, uncertainty and gross capital flows: A mixed frequency approach (Discussion paper). University of Milan and University of Pavia.

Bai, J., Ghysels, E., & Wright, J. H. (2013). State space models and MIDAS regressions. Econometric Reviews, 32, 779–813.

Bańbura, M., Giannone, D., Modugno, M., & Reichlin, L. (2013). Now-casting and the real-time data flow. In G. Elliott & A. Timmermann (Eds.), Handbook of economic forecasting (Vol. 2, Part A, pp. 195–233). Amsterdam: Elsevier.

Bańbura, M., Giannone, D., & Reichlin, L. (2011). Nowcasting. In M. P. Clements & D. Hendry (Eds.), Oxford handbook of economic forecasting (pp. 193–224). New York: Oxford University Press.

Bernanke, B., Gertler, M., & Watson, M. (1997). Systematic monetary policy and the effects of oil price shocks. Brookings Papers on Economic Activity, 1, 91–157.

Breitung, J., & Roling, C. (2015). Forecasting inflation rates using daily data: A nonparametric MIDAS approach. Journal of Forecasting, 34, 588–603.

Chen, X., & Ghysels, E. (2011). News—good or bad—and its impact on volatility predictions over multiple horizons. Review of Financial Studies, 24, 46–81.

Corsi, F. (2009). A simple approximate long-memory model of realized volatility. Journal of Financial Econometrics, 7, 174–196.

Cox, D. (1981). Statistical analysis of time series: Some recent developments [with discussion and reply]. Scandinavian Journal of Statistics, 8, 93–115.

Eraker, B., Chiu, C. W. J., Foerster, A. T., Kim, T. B., & Seoane, H. D. (2015). Bayesian mixed frequency VARs. Journal of Financial Econometrics, 13, 698–721.

Foroni, C., & Marcellino, M. G. (2013). A survey of econometric methods for mixed-frequency data. Oslo: Norges Bank.

Foroni, C., & Marcellino, M. G. (2014). Mixed-frequency structural models: Identification, estimation, and policy analysis. Journal of Applied Econometrics, 29, 1118–1144.

Foroni, C., Marcellino, M. G., & Schumacher, C. (2015). Unrestricted mixed data sampling (MIDAS): MIDAS regressions with unrestricted lag polynomials. Journal of the Royal Statistical Society: Series A, 178, 57–82.

Forsberg, L., & Ghysels, E. (2006). Why do absolute returns predict volatility so well? Journal of Financial Econometrics, 6, 31–67.

Gagliardini, P., Ghysels, E., & Rubin, M. (2017). Indirect inference estimation of mixed frequency stochastic volatility state space models using MIDAS regressions and ARCH models. Journal of Financial Econometrics, 15, 509–560.

Galvão, A. B. (2013). Changes in predictive ability with mixed frequency data. International Journal of Forecasting, 29, 395–410.

Ghysels, E. (2013). Matlab toolbox for mixed sampling frequency data analysis using MIDAS regression models.

Ghysels, E. (2016). Macroeconomics and the reality of mixed frequency data. Journal of Econometrics, 193, 294–314.

Ghysels, E., Hill, J. B., & Motegi, K. (2015). Simple Granger causality tests for mixed frequency data. Journal of Econometrics (forthcoming).

Ghysels, E., Hill, J. B., & Motegi, K. (2016). Testing for Granger causality with mixed frequency data. Journal of Econometrics, 192, 207–230.

Ghysels, E., Kvedaras, V., & Zemlys, V. (2016). Mixed frequency data sampling regression models: The R package midasr. Journal of Statistical Software, 72, 1–35.

Ghysels, E., & Marcellino, M. (2018). Applied economic forecasting using time series methods. New York: Oxford University Press.Find this resource:

Ghysels, E., Plazzi, A., & Valkanov, R. (2016). Why invest in emerging markets? The role of conditional return asymmetry. Journal of Finance, 71, 2145–2194.Find this resource:

Ghysels, E., Santa-Clara, P., & Valkanov, R. (2004). The MIDAS touch: Mixed data sampling regressions (Discussion paper). UNC and UCLA.Find this resource:

Ghysels, E., Santa-Clara, P., & Valkanov, R. (2006). Predicting volatility: Getting the most out of return data sampled at different frequencies. Journal of Econometrics, 131, 59–95.Find this resource:

Ghysels, E., Sinko, A., & Valkanov, R. (2006). MIDAS regressions: Further results and new directions. Econometric Reviews, 26, 53–90.Find this resource:

Ghysels, E., & Wright, J. (2009). Forecasting professional forecasters. Journal of Business and Economic Statistics, 27, 504–516.Find this resource:

Giannone, D., Reichlin, L., & Small, D. (2008). Nowcasting: The real-time informational content of macroeconomic data. Journal of Monetary Economics, 55, 665–676.Find this resource:

Gladyshev, E. G. (1961). Periodically correlated random sequences. Soviet Mathematics, 2, 385–388.Find this resource:

Granger, C. W. (1980). Testing for causality: A personal viewpoint. Journal of Economic Dynamics and Control, 2, 329–352.Find this resource:

Guérin, P., & Marcellino, M. G. (2013). Markov-switching MIDAS models. Journal of Business and Economic Statistics, 31, 45–56.Find this resource:

Harvey, A. C., & Pierse, R. G. (1984). Estimating missing observations in economic time series. Journal of the American Statistical Association, 79, 125–131.Find this resource:

Koenig, E. F., Dolmas, S., & Piger, J. (2003). The use and abuse of real-time data in economic forecasting. Review of Economics and Statistics, 85, 618–628.Find this resource:

Kuzin, V., Marcellino, M. G., & Schumacher, C. (2011). MIDAS versus mixed-frequency VAR: Nowcasting GDP in the Euro area. International Journal of Forecasting, 27, 529–542.Find this resource:

Kuzin, V., Marcellino, M. G., & Schumacher, C. (2013). Pooling versus model selection for nowcasting GDP with many predictors: Empirical evidence for six industrialized countries. Journal of Applied Econometrics, 28, 392–411.Find this resource:

Linton, O., & Mammen, E. (2005). Estimating semiparametric ARCH (∞) models by kernel smoothing methods. Econometrica, 73, 771–836.Find this resource:

Lütkepohl, H. (2012). Forecasting aggregated vector ARMA processes. Berlin: Springer.Find this resource:

Marcellino, M. (1999). Some consequences of temporal aggregation in empirical analysis. Journal of Business and Economic Statistics, 17, 129–136.Find this resource:

Marcellino, M. G., & Schumacher, C. (2010). Factor MIDAS for nowcasting and forecasting with ragged-edge data: A model comparison for German GDP. Oxford Bulletin of Economics and Statistics, 72, 518–550.Find this resource:

Mariano, R. S., & Murasawa, Y. (2003). A new coincident index of business cycles based on monthly and quarterly series. Journal of Applied Econometrics, 18, 427–443.Find this resource:

McCracken, M. W., Owyang, M., & Sekhposyan, T. (2015). Real-time forecasting with a large, mixed frequency, Bayesian VAR. FRB St. Louis Paper No. FEDLWP2015-030.Find this resource:

Mittnik, S., & Zadrozny, P. A. (2004). Forecasting quarterly German GDP at monthly intervals using monthly IFO business conditions data. Munich: Center for Economic Studies.Find this resource:

Nunes, L. C. (2005). Nowcasting quarterly GDP growth in a monthly coincident indicator model. Journal of Forecasting, 24, 575–592.Find this resource:

Schorfheide, F., & Song, D. (2015). Real-time forecasting with a mixed-frequency VAR. Journal of Business and Economic Statistics, 33, 366–380.Find this resource:

Schumacher, C. (2016). A comparison of MIDAS and bridge equations. International Journal of Forecasting, 32, 257–270.Find this resource:

Stock, J. H., & Watson, M. W. (2002). Macroeconomic forecasting using diffusion indexes. Journal of Business and Economic Statistics, 20, 147–162.Find this resource:

Zadrozny, P. A. (1990). Forecasting US GNP at monthly intervals with an estimated bivariate time series model. Federal Reserve Bank of Atlanta Economic Review, 75, 2–15.Find this resource:


(1.) For further discussion of aggregation schemes see, for example, Lütkepohl (2012) or Stock and Watson (2002, Appendix).

(2.) A flash estimate, or nowcast, is a preliminary estimate produced or published as soon as possible after the end of the reference period, using a more incomplete information set than that used for the final estimates.

(3.) Several software packages cover a variety of polynomial specifications, including the MIDAS Matlab Toolbox (Ghysels, 2013), the R package midasr (Ghysels, Kvedaras, & Zemlys, 2016), EViews, and Gretl.