Long memory models are statistical models that describe strong correlation or dependence across time series data. This kind of phenomenon is often referred to as “long memory” or “long-range dependence.” It refers to persisting correlation between distant observations in a time series. For scalar time series observed at equal intervals of time that are covariance stationary, so that the mean, variance, and autocovariances (between observations separated by a lag j) do not vary over time, it typically implies that the autocovariances decay so slowly, as j increases, as not to be absolutely summable. However, it can also refer to certain nonstationary time series, including ones with an autoregressive unit root, that exhibit even stronger correlation at long lags. Evidence of long memory has often been found in economic and financial time series, where the noted extension to possible nonstationarity can cover many macroeconomic time series, as well as in such fields as astronomy, agriculture, geophysics, and chemistry.
As long memory is now a technically well developed topic, formal definitions are needed. But by way of partial motivation, long memory models can be thought of as complementary to the very well known and widely applied stationary and invertible autoregressive and moving average (ARMA) models, whose autocovariances are not only summable but decay exponentially fast as a function of lag j. Such models are often referred to as “short memory” models, because there is negligible correlation across distant time intervals. These models are often combined with the most basic long memory ones, however, because together they offer the ability to describe both short and long memory features in many time series.
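The contrast between the two decay rates can be sketched as follows (a stylized illustration; the notation, including the memory parameter d, is introduced here for exposition and is not taken from the text):

```latex
% Short memory (stationary, invertible ARMA): autocovariances \gamma_j
% decay exponentially in the lag j and are therefore absolutely summable:
\gamma_j \sim C \rho^{\,j}, \quad 0 < \rho < 1,
\qquad \sum_{j=0}^{\infty} \lvert \gamma_j \rvert < \infty .

% Long memory: autocovariances decay hyperbolically, so slowly that
% absolute summability fails (here d \in (0, 1/2) is a memory parameter,
% as in fractionally integrated models):
\gamma_j \sim C j^{\,2d-1},
\qquad \sum_{j=0}^{\infty} \lvert \gamma_j \rvert = \infty .
```

The hyperbolic rate is what allows correlation to persist between observations that are far apart in time, while the exponential rate makes such distant correlation negligible.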
Jesús Gonzalo and Jean-Yves Pitarakis
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Economics and Finance.
Predictive regressions refer to models whose aim is to assess the predictability of a typically noisy time series, such as stock or currency returns, with past values of a highly persistent predictor, such as valuation ratios, interest rates, or volatilities, among other variables. Obtaining reliable inferences through conventional methods can be challenging in such environments, mainly due to the joint interactions of predictor persistence, potential endogeneity, and other econometric complications. Numerous methods have been developed in the literature, ranging from adjustments to the test statistics used in significance testing to alternative instrumental variable based estimation methods specifically designed to make inferences robust to the stochastic properties of the predictor(s).
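A prototypical version of the setting described above can be written as follows (a standard textbook formulation, sketched here for concreteness; the specific symbols are illustrative):

```latex
% Predictive regression: a noisy series y_{t+1} (e.g., returns)
% regressed on the lagged value of a highly persistent predictor x_t:
y_{t+1} = \alpha + \beta x_t + u_{t+1},

% with the predictor itself following a nearly integrated process:
x_t = \mu + \rho x_{t-1} + v_t, \qquad \rho \approx 1 .

% When \rho is near (or at) unity and u_{t+1} is correlated with v_t
% (endogeneity), conventional t-tests of H_0 : \beta = 0 can be
% severely size-distorted, motivating the corrections discussed above.
```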
Early work in this area was mainly confined to linear, single-predictor settings, but recent research has raised the issue of how well existing estimation and inference methods adapt to more general environments, so as to extend the use of predictive regressions to a wider range of potential applications.
An important extension involves allowing predictability to enter nonlinearly, so as to capture time variation in the role of particular predictors. Economically interesting nonlinearities include, for instance, threshold effects that allow predictability to vanish or strengthen during particular episodes, creating pockets of predictability. Such effects may operate through the conditional mean, the conditional variance, or both, and may help uncover important phenomena such as the countercyclical nature of stock return predictability recently documented in the literature.
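One way such threshold effects in the conditional mean are commonly formalized is the following (a stylized two-regime sketch; the threshold variable q_t and parameter names are illustrative):

```latex
% Threshold predictive regression: predictability switches between
% regimes according to whether a threshold variable q_t (e.g., a
% business-cycle indicator) falls below or above a threshold \gamma:
y_{t+1} = (\alpha_1 + \beta_1 x_t)\,\mathbf{1}\{q_t \le \gamma\}
        + (\alpha_2 + \beta_2 x_t)\,\mathbf{1}\{q_t > \gamma\} + u_{t+1} .

% For example, \beta_1 = 0 with \beta_2 \neq 0 generates a "pocket of
% predictability" confined to the regime in which q_t > \gamma.
```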
Due to the frequent need to consider multiple rather than single predictors, it also becomes important to evaluate the validity and feasibility of inferences about linear and nonlinear predictability when multiple predictors of potentially different degrees of persistence are allowed to coexist in such settings.