1-12 of 12 Results

  • Keywords: forecasting

Article

The Evolution of Forecast Density Combinations in Economics  

Knut Are Aastveit, James Mitchell, Francesco Ravazzolo, and Herman K. van Dijk

Increasingly, professional forecasters and academic researchers in economics present model-based and subjective or judgment-based forecasts that are accompanied by some measure of uncertainty. In its most complete form this measure is a probability density function for future values of the variable or variables of interest. At the same time, combinations of forecast densities are being used in order to integrate information coming from multiple sources such as experts, models, and large micro-data sets. Given the increased relevance of forecast density combinations, this article explores their genesis and evolution both inside and outside economics. A fundamental density combination equation is specified, which shows that various frequentist as well as Bayesian approaches give different specific contents to this density. In its simplest case, it is a restricted finite mixture, giving fixed equal weights to the various individual densities. The specification of the fundamental density combination equation has been made more flexible in recent literature. It has evolved from using simple average weights to optimized weights to “richer” procedures that allow for time variation, learning features, and model incompleteness. The recent history and evolution of forecast density combination methods, together with their potential and benefits, are illustrated in the policymaking environment of central banks.
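
As a stylized illustration of the simplest case mentioned in this abstract (the notation below is generic, not the article's own), an equal-weight finite-mixture combination of N individual predictive densities can be written as

\[
p\left(y_{T+1}\mid I_T\right) \;=\; \sum_{i=1}^{N} w_i \, p_i\left(y_{T+1}\mid I_T\right),
\qquad w_i = \frac{1}{N}, \quad \sum_{i=1}^{N} w_i = 1,
\]

where p_i is the predictive density supplied by model or expert i and I_T is the information available at time T. The richer procedures described in the abstract replace the fixed weights w_i with weights that vary over time, adapt to past forecast performance, or acknowledge that no model in the set may be correctly specified.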

Article

Forecasting Electricity Prices  

Katarzyna Maciejowska, Bartosz Uniejewski, and Rafal Weron

Forecasting electricity prices is a challenging task and has been an active area of research since the 1990s and the deregulation of the traditionally monopolistic and government-controlled power sectors. It is interdisciplinary by nature and requires expertise in econometrics, statistics, or machine learning for developing well-performing predictive models; in finance for understanding market mechanics; and in electrical engineering for comprehending the fundamentals driving electricity prices. Although electricity price forecasting aims at predicting both spot and forward prices, the vast majority of research is focused on short-term horizons, which exhibit dynamics unlike those in any other market. The reason is that power system stability calls for a constant balance between production and consumption, while being dependent on weather (in terms of demand and supply) and business activity (in terms of demand only). Recent market innovations do not help in this respect: the rapid expansion of intermittent renewable energy sources is not offset by the costly increase of electricity storage capacities and modernization of the grid infrastructure. On the methodological side, this leads to three visible trends in electricity price forecasting research. First, there is a slow but increasingly noticeable tendency to consider not only point but also probabilistic (interval, density) or even path (also called ensemble) forecasts. Second, there is a clear shift from the relatively parsimonious econometric (or statistical) models toward more complex and harder-to-comprehend but more versatile and ultimately more accurate statistical and machine learning approaches. Third, statistical error measures are regarded as only the first evaluation step. Since they may not necessarily reflect the economic value of reducing prediction errors, in recent publications they tend to be complemented by case studies comparing profits from scheduling or trading strategies based on price forecasts obtained from different models.
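
As a minimal sketch of how interval or density forecasts of electricity prices can be scored (illustrative only; the prices and quantile forecasts below are made up, and this is just one common evaluation measure, not necessarily the one used in the article), the pinball (quantile) loss for a single quantile level can be computed as follows:

```python
import numpy as np

def pinball_loss(y_true, y_pred_quantile, q):
    """Pinball (quantile) loss for a forecast of the q-th quantile.

    Lower values indicate a better calibrated quantile forecast.
    """
    diff = y_true - y_pred_quantile
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

# Hypothetical day-ahead price observations (EUR/MWh) and a 90% quantile forecast.
prices = np.array([42.1, 55.3, 48.7, 61.0, 39.5])
q90_forecast = np.array([50.0, 58.0, 52.0, 65.0, 45.0])

print(pinball_loss(prices, q90_forecast, q=0.9))
```

Averaging this loss over a dense grid of quantile levels approximates the continuous ranked probability score, which is one way the economic and statistical evaluation steps described above can be connected.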

Article

Real-Time Transaction Data for Nowcasting and Short-Term Economic Forecasting  

John W. Galbraith

Transaction data from consumer purchases are used for monitoring, nowcasting, or short-term forecasting of important macroeconomic aggregates such as personal consumption expenditure and national income. Data on individual purchase transactions, recorded electronically at the point of sale or online, offer the potential for accurate and rapid estimation of retail sales expenditure, itself an important component of personal consumption expenditure and therefore of national income. Such data may therefore allow policymakers to base actions on more up-to-date estimates of the state of the economy. However, while transaction data may be obtained from a number of sources, such as national payments systems, individual banks, or financial technology companies, data from each of these sources have limitations. Data sets will differ in the forms of information contained in a record, the degree to which the samples are representative of the relevant population of consumers, and the types of payments that are observed and captured in the record. In addition, the commercial nature of the data may imply constraints on the researcher’s ability to make data sets available for replication. Regardless of the source, the data will generally require filtering and aggregation in order to provide a clear signal of changes in economic activity. The resulting series may be incorporated, along with other data, into any of a variety of model types for nowcasting and short-term forecasting.
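
As an illustrative sketch only (the records, field names, and category filter below are hypothetical, not drawn from any data source discussed in the article), the filtering and aggregation step mentioned above might reduce raw card transactions to a daily retail-spending series like this:

```python
import pandas as pd

# Hypothetical raw transaction records: one row per card purchase.
tx = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2023-01-02 09:15", "2023-01-02 13:40",
        "2023-01-03 10:05", "2023-01-03 18:22", "2023-01-04 11:30",
    ]),
    "amount": [25.40, 112.00, 8.99, 310.50, 47.25],
    "merchant_category": ["grocery", "electronics", "cafe", "furniture", "grocery"],
})

# Filter to retail-relevant categories, then aggregate to a daily total.
retail = tx[tx["merchant_category"].isin(["grocery", "electronics", "furniture"])]
daily_spend = (retail.set_index("timestamp")["amount"]
                     .resample("D").sum()
                     .rename("retail_spend"))

print(daily_spend)
```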

Article

Mixed Frequency Models  

Eric Ghysels

The majority of econometric models ignore the fact that many economic time series are sampled at different frequencies. A burgeoning literature pertains to econometric methods explicitly designed to handle data sampled at different frequencies. Broadly speaking these methods fall into two categories: (a) parameter driven, typically involving a state space representation, and (b) data driven, usually based on a mixed-data sampling (MIDAS)-type regression setting or related methods. The realm of applications of the class of mixed frequency models includes nowcasting—which is defined as the prediction of the present—as well as forecasting—typically the very near future—taking advantage of mixed frequency data structures. For multiple horizon forecasting, the topic of MIDAS regressions also relates to research regarding direct versus iterated forecasting.
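
As a minimal sketch of the data-driven, MIDAS-type regression mentioned above (simulated data, a simple exponential Almon weighting scheme, and nonlinear least squares; the specification is illustrative, not a reproduction of any particular study), a quarterly target can be regressed on a weighted average of monthly lags:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulated data: quarterly target y, monthly predictor x (3 monthly lags per quarter).
n_q = 200
x_lags = rng.normal(size=(n_q, 3))
true_w = np.array([0.6, 0.3, 0.1])
y = 0.5 + 2.0 * x_lags @ true_w + rng.normal(scale=0.5, size=n_q)

def exp_almon_weights(theta, n_lags=3):
    """Exponential Almon lag weights, normalized to sum to one."""
    j = np.arange(1, n_lags + 1)
    log_w = theta[0] * j + theta[1] * j**2
    log_w -= log_w.max()                      # numerical stabilization
    w = np.exp(log_w)
    return w / w.sum()

def objective(params):
    # Nonlinear least squares objective for intercept, slope, and weight parameters.
    b0, b1, t1, t2 = params
    w = exp_almon_weights(np.array([t1, t2]))
    resid = y - b0 - b1 * (x_lags @ w)
    return 0.5 * np.sum(resid**2)

res = minimize(objective, x0=np.array([0.0, 1.0, 0.0, 0.0]), method="BFGS")
b0, b1, t1, t2 = res.x
print("intercept, slope:", b0, b1)
print("estimated lag weights:", exp_almon_weights(np.array([t1, t2])))
```

The weighting function is what keeps the number of parameters small even when many high-frequency lags enter the regression, which is the practical appeal of the MIDAS approach relative to estimating a free coefficient on every lag.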

Article

Bayesian Vector Autoregressions: Estimation  

Silvia Miranda-Agrippino and Giovanni Ricco

Vector autoregressions (VARs) are linear multivariate time-series models able to capture the joint dynamics of multiple time series. Bayesian inference treats the VAR parameters as random variables, and it provides a framework to estimate the “posterior” probability distribution of the location of the model parameters by combining information provided by a sample of observed data with prior information derived from a variety of sources, such as other macro or micro datasets, theoretical models, other macroeconomic phenomena, or introspection. In empirical work in economics and finance, informative prior probability distributions are often adopted. These are intended to summarize stylized representations of the data generating process. For example, “Minnesota” priors, one of the most commonly adopted macroeconomic priors for the VAR coefficients, express the belief that an independent random-walk model for each variable in the system is a reasonable “center” for beliefs about their time-series behavior. Other commonly adopted priors, the “single-unit-root” and the “sum-of-coefficients” priors, are used to enforce beliefs about relations among the VAR coefficients, such as the existence of co-integrating relationships among variables or of independent unit roots. Priors for macroeconomic variables are often adopted as “conjugate prior distributions”—that is, distributions that yield a posterior distribution in the same family as the prior p.d.f.—in the form of Normal-Inverse-Wishart distributions, which are the conjugate prior for the likelihood of a VAR with normally distributed disturbances. Conjugate priors allow direct sampling from the posterior distribution and fast estimation. When this is not possible, numerical techniques such as Gibbs and Metropolis-Hastings sampling algorithms are adopted. Bayesian techniques allow for the estimation of an ever-expanding class of sophisticated autoregressive models, including conventional fixed-parameter VAR models; large VARs incorporating hundreds of variables; panel VARs, which permit analyzing the joint dynamics of multiple time series of heterogeneous and interacting units; and VAR models that relax the assumption of fixed coefficients, such as time-varying-parameter, threshold, and Markov-switching VARs.
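
As a minimal, per-equation sketch of how a Minnesota-style prior shrinks VAR coefficients toward an independent random walk (this is a simplified conjugate normal calculation with a known residual variance and made-up hyperparameters, not the article's full Normal-Inverse-Wishart treatment):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data for one VAR equation: y_t regressed on its own lag, another variable's lag, and a constant.
T = 120
Y = np.cumsum(rng.normal(size=(T + 1, 2)), axis=0)       # two persistent series
X = np.column_stack([Y[:-1, 0], Y[:-1, 1], np.ones(T)])   # own lag, cross lag, intercept
y = Y[1:, 0]

# Minnesota-style prior: center the own lag at 1 (random walk), everything else at 0,
# with a tighter prior variance on the cross-lag coefficient and a loose prior on the intercept.
prior_mean = np.array([1.0, 0.0, 0.0])
lam = 0.2                                                 # overall tightness (hyperparameter, illustrative)
prior_var = np.array([lam**2, (0.5 * lam)**2, 100.0])
sigma2 = 1.0                                              # residual variance, assumed known here

# Conjugate normal posterior: precision-weighted combination of prior and least-squares information.
prior_prec = np.diag(1.0 / prior_var)
post_prec = prior_prec + X.T @ X / sigma2
post_mean = np.linalg.solve(post_prec, prior_prec @ prior_mean + X.T @ y / sigma2)

print("posterior mean of (own lag, cross lag, intercept):", post_mean)
```

In the full conjugate Normal-Inverse-Wishart setup the residual covariance is also treated as unknown, but the same precision-weighting logic drives the posterior for the coefficients.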

Article

Bayesian Vector Autoregressions: Applications  

Silvia Miranda-Agrippino and Giovanni Ricco

Bayesian vector autoregressions (BVARs) are standard multivariate autoregressive models routinely used in empirical macroeconomics and finance for structural analysis, forecasting, and scenario analysis in an ever-growing number of applications. A preeminent field of application of BVARs is forecasting. BVARs with informative priors have often proved to be superior tools compared to standard frequentist/flat-prior VARs. In fact, VARs are highly parametrized autoregressive models, whose number of parameters grows with the square of the number of variables times the number of lags included. Prior information, in the form of prior distributions on the model parameters, helps in forming sharper posterior distributions of parameters, conditional on an observed sample. Hence, BVARs can be effective in reducing parameter uncertainty and improving forecast accuracy compared to standard frequentist/flat-prior VARs. This feature in particular has favored the use of Bayesian techniques to address “big data” problems, in what is arguably one of the most active frontiers in the BVAR literature. Large-information BVARs have in fact proven to be valuable tools for empirical analysis in data-rich environments. BVARs are also routinely employed to produce conditional forecasts and scenario analysis. Of particular interest for policy institutions, these applications permit evaluating the “counterfactual” time evolution of the variables of interest conditional on a predetermined path for some other variables, such as the path of interest rates over a certain horizon. The “structural interpretation” of estimated VARs as the data generating process of the observed data requires the adoption of strict “identifying restrictions.” From a Bayesian perspective, such restrictions can be seen as dogmatic prior beliefs about some regions of the parameter space that determine the contemporaneous interactions among variables and for which the data are uninformative. More generally, Bayesian techniques offer a framework for structural analysis through priors that incorporate uncertainty about the identifying assumptions themselves.
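
As a stylized illustration of the conditional forecasting idea (a one-step Gaussian conditioning exercise with made-up numbers, far simpler than the multi-horizon BVAR machinery the article describes): given a joint Gaussian forecast of two variables, fixing one of them at a scenario value updates the forecast of the other.

```python
import numpy as np

# Hypothetical one-step-ahead joint forecast of (GDP growth, policy rate) from an estimated VAR:
# unconditional forecast means and forecast-error covariance, in percent.
mu = np.array([2.0, 1.5])
Sigma = np.array([[0.40, -0.10],
                  [-0.10, 0.05]])

# Scenario: the policy rate is assumed fixed at 2.5 percent.
rate_path = 2.5

# Gaussian conditioning: E[growth | rate] = mu_g + Sigma_gr / Sigma_rr * (rate - mu_r).
cond_mean = mu[0] + Sigma[0, 1] / Sigma[1, 1] * (rate_path - mu[1])
cond_var = Sigma[0, 0] - Sigma[0, 1] ** 2 / Sigma[1, 1]

print(f"conditional growth forecast: {cond_mean:.2f}%  (variance {cond_var:.2f})")
```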

Article

Stochastic Volatility in Bayesian Vector Autoregressions  

Todd E. Clark and Elmar Mertens

Vector autoregressions with stochastic volatility (SV) are widely used in macroeconomic forecasting and structural inference. The SV component of the model conveniently allows for time variation in the variance-covariance matrix of the model’s forecast errors. In turn, that feature of the model generates time variation in predictive densities. The models are most commonly estimated with Bayesian methods, most typically Markov chain Monte Carlo methods, such as Gibbs sampling. Equation-by-equation methods developed since 2018 enable the estimation of models with large variable sets at much lower computational cost than the standard approach of estimating the model as a system of equations. The Bayesian framework also facilitates the accommodation of mixed frequency data, non-Gaussian error distributions, and nonparametric specifications. With advances made in the 21st century, researchers are also addressing some of the framework’s outstanding challenges, particularly the dependence of estimates on the ordering of variables in the model and reliable estimation of the marginal likelihood, which is the fundamental measure of model fit in Bayesian methods.
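
As a minimal simulation sketch (univariate and with arbitrary parameter values, not the full VAR-SV system): the stochastic volatility component lets the forecast-error variance drift over time, which is what generates the time variation in predictive densities described above.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 300

# Log-volatility follows an AR(1) process; the forecast error's variance at time t is exp(h_t).
phi, sigma_eta = 0.95, 0.15
h = np.zeros(T)
for t in range(1, T):
    h[t] = phi * h[t - 1] + sigma_eta * rng.normal()

# Forecast errors with time-varying volatility.
errors = np.exp(h / 2) * rng.normal(size=T)
cond_sd = np.exp(h / 2)

print(f"conditional std. dev. ranges from {cond_sd.min():.2f} to {cond_sd.max():.2f}")
```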

Article

Structural Breaks in Time Series  

Alessandro Casini and Pierre Perron

This article covers methodological issues related to estimation, testing, and computation for models involving structural changes. Our aim is to review developments as they relate to econometric applications based on linear models. Substantial advances have been made to cover models at a level of generality that allows a host of interesting practical applications. These include models with general stationary regressors and errors that can exhibit temporal dependence and heteroskedasticity, models with trending variables and possible unit roots, and cointegrated models, among others. Advances have been made pertaining to computational aspects of constructing estimates, their limit distributions, tests for structural changes, and methods to determine the number of changes present. A variety of topics are covered, including recent developments: testing for common breaks, models with endogenous regressors (emphasizing that simply using least squares is preferable to instrumental variables methods), quantile regressions, methods based on Lasso, panel data models, testing for changes in forecast accuracy, factor models, and methods of inference based on a continuous records asymptotic framework. Our focus is on the so-called off-line methods whereby one wants to retrospectively test for breaks in a given sample of data and form confidence intervals about the break dates. The aim is to provide readers with an overview of methods that are of direct use in practice, as opposed to issues mostly of theoretical interest.
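
As a minimal sketch of the least-squares approach to locating a single break (here a break in the mean of a simulated series with a 15% trimming window; the article covers far more general models and multiple breaks), the break date can be estimated by minimizing the sum of squared residuals over candidate dates:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated series with a mean shift at observation 120.
T, true_break = 200, 120
y = np.concatenate([rng.normal(0.0, 1.0, true_break),
                    rng.normal(1.5, 1.0, T - true_break)])

def ssr_at_break(y, k):
    """Sum of squared residuals when the mean is allowed to differ before and after date k."""
    return ((y[:k] - y[:k].mean()) ** 2).sum() + ((y[k:] - y[k:].mean()) ** 2).sum()

# Search over candidate break dates, trimming 15% of the sample at each end.
trim = int(0.15 * T)
candidates = range(trim, T - trim)
k_hat = min(candidates, key=lambda k: ssr_at_break(y, k))

print("estimated break date:", k_hat)
```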

Article

Data Revisions and Real-Time Forecasting  

Michael P. Clements and Ana Beatriz Galvão

At a given point in time, a forecaster will have access to data on macroeconomic variables that have been subject to different numbers of rounds of revisions, leading to varying degrees of data maturity. Observations referring to the very recent past will be first-release data, or data that have as yet been revised only a few times. Observations referring to a decade ago will typically have been subject to many rounds of revisions. How should the forecaster use the data to generate forecasts of the future? The conventional approach would be to estimate the forecasting model using the latest vintage of data available at that time, implicitly ignoring the differences in data maturity across observations. The conventional approach for real-time forecasting treats the data as given, that is, it ignores the fact that they will be revised. In some cases, the cost of this approach is point predictions and assessments of forecast uncertainty that are less accurate than those produced by approaches that explicitly allow for data revisions. There are several ways to “allow for data revisions,” including modeling the data revisions explicitly, an agnostic or reduced-form approach, and using only largely unrevised data. The choice of method partly depends on whether the aim is to forecast an earlier release or the fully revised values.
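
As a small illustration of the real-time data structure involved (all numbers and dates below are made up), each statistical release forms a "vintage," and a genuine real-time forecasting exercise uses only the vintage that was available at the forecast origin rather than today's fully revised figures:

```python
import pandas as pd

# Hypothetical real-time data set: rows are reference quarters, columns are data vintages.
# Entry (q, v) is the estimate of quarter q's GDP growth as published in vintage v.
vintages = pd.DataFrame(
    {"2023Q4 vintage": [0.3, 0.5, None],
     "2024Q1 vintage": [0.4, 0.6, 0.2],
     "2024Q2 vintage": [0.4, 0.7, 0.3]},
    index=["2023Q3", "2023Q4", "2024Q1"],
)

# A forecaster standing in 2024Q1 sees only the 2024Q1 vintage column;
# the conventional approach instead estimates on the latest (most revised) column.
real_time_data = vintages["2024Q1 vintage"]
latest_data = vintages["2024Q2 vintage"]
print(pd.concat([real_time_data, latest_data], axis=1))
```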

Article

Asset Pricing: Cross-Section Predictability  

Paolo Zaffaroni and Guofu Zhou

A fundamental question in finance is the study of why different assets have different expected returns, which is intricately linked to the issue of cross-section prediction in the sense of addressing the question “What explains the cross section of expected returns?” There is a vast literature on this topic. There are state-of-the-art methods used to forecast the cross section of stock returns with firm characteristics as predictors, and the same methods can be applied to other asset classes, such as corporate bonds and foreign exchange rates, and to managed portfolios such as mutual funds and hedge funds. First, there are the traditional ordinary least squares and weighted least squares methods, as well as various recently developed machine learning approaches such as neural networks and genetic programming. These are the main methods used today in applications. There are three measures that assess how the various methods perform. The first is the Sharpe ratio of a long–short portfolio that longs the assets with the highest predicted returns and shorts those with the lowest. This measure provides the economic value of one method versus another. The second measure is an out-of-sample R² that evaluates how the forecasts perform relative to a natural benchmark, the cross-section mean. This is important, as any method that fails to outperform the benchmark is questionable. The third measure is how well the predicted returns explain the realized ones. This provides an overall error assessment across all the stocks. Factor models are another tool used to understand cross-section predictability. This sheds light on whether the predictability is due to mispricing or risk exposure. There are three ways to consider these models: First, we can consider how to test traditional factor models and estimate the associated risk premia, where the factors are specified ex ante. Second, we can analyze similar problems for latent factor models. Finally, going beyond the traditional setup, we can consider recent studies on asset-specific risks. This analysis provides the framework to understand the economic driving forces of predictability.
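
As a minimal sketch of the first two evaluation measures described above (simulated returns in which the forecasts recover the true signal exactly, so the resulting numbers are unrealistically good; the point is only how the measures are computed):

```python
import numpy as np

rng = np.random.default_rng(3)
n_months, n_stocks = 60, 500

# Simulated panel: cross-sectional differences in expected returns plus idiosyncratic noise.
signal = rng.normal(scale=0.02, size=(n_months, n_stocks))
noise = rng.normal(scale=0.08, size=(n_months, n_stocks))
realized = signal + noise
predicted = signal            # pretend the forecasting method recovers the signal

# 1) Economic value: each month go long the top decile of predicted returns, short the bottom decile.
k = n_stocks // 10
long_short = np.empty(n_months)
for t in range(n_months):
    order = np.argsort(predicted[t])
    long_short[t] = realized[t, order[-k:]].mean() - realized[t, order[:k]].mean()
annualized_sharpe = np.sqrt(12) * long_short.mean() / long_short.std()

# 2) Statistical accuracy: out-of-sample R^2 against the cross-sectional mean benchmark.
benchmark = realized.mean(axis=1, keepdims=True)
oos_r2 = 1 - ((realized - predicted) ** 2).sum() / ((realized - benchmark) ** 2).sum()

print(f"annualized long-short Sharpe ratio: {annualized_sharpe:.2f}")
print(f"out-of-sample R^2: {oos_r2:.3f}")
```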

Article

Asset Pricing: Time-Series Predictability  

David E. Rapach and Guofu Zhou

Asset returns change over time with fundamentals and other factors, such as technical information and sentiment. In modeling time-varying expected returns, this article focuses on the out-of-sample predictability of the aggregate stock market return via extensions of the conventional predictive regression approach. The extensions are designed to improve out-of-sample performance in realistic environments characterized by large information sets and noisy data. Large information sets are relevant because there is a plethora of plausible stock return predictors. The information sets include variables typically associated with a rational time-varying market risk premium, as well as variables more likely to reflect market inefficiencies resulting from behavioral influences and information frictions. Noisy data stem from the intrinsically large unpredictable component in stock returns. When forecasting with large information sets and noisy data, it is vital to employ methods that incorporate the relevant information in the large set of predictors in a manner that guards against overfitting the data. Methods that improve out-of-sample market return prediction include forecast combination, principal component regression, partial least squares, the LASSO and elastic net from machine learning, and a newly developed C-ENet approach that relies on the elastic net to refine the simple combination forecast. Employing these methods, a number of studies provide statistically and economically significant evidence that the aggregate market return is predictable on an out-of-sample basis. Out-of-sample market return predictability based on a rich set of predictors thus appears to be a well-established empirical result in asset pricing.
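
As a minimal sketch of the simple combination forecast on which the richer methods above build (simulated monthly data with a deliberately weak signal; each predictor yields its own univariate predictive-regression forecast, and the forecasts are then averaged and compared with the historical-average benchmark):

```python
import numpy as np

rng = np.random.default_rng(5)
T, n_predictors = 240, 10

# Simulated data: only the first three predictors carry (weak) signal for next month's market return.
X = rng.normal(size=(T, n_predictors))
betas = np.concatenate([np.full(3, 0.01), np.zeros(n_predictors - 3)])
ret = np.empty(T)
ret[0] = 0.005 + 0.04 * rng.normal()
ret[1:] = 0.005 + X[:-1] @ betas + 0.04 * rng.normal(size=T - 1)

combo, bench, actual = [], [], []
for t in range(120, T - 1):
    forecasts = []
    for j in range(n_predictors):
        # Univariate predictive regression of the return at s+1 on predictor j at s, using data through t.
        xj = np.column_stack([np.ones(t), X[:t, j]])
        coef, *_ = np.linalg.lstsq(xj, ret[1:t + 1], rcond=None)
        forecasts.append(coef[0] + coef[1] * X[t, j])
    combo.append(np.mean(forecasts))          # equal-weight combination forecast
    bench.append(ret[:t + 1].mean())          # historical-average benchmark
    actual.append(ret[t + 1])

combo, bench, actual = map(np.array, (combo, bench, actual))
oos_r2 = 1 - ((actual - combo) ** 2).sum() / ((actual - bench) ** 2).sum()
print(f"out-of-sample R^2 of the combination forecast vs. the historical average: {oos_r2:.3f}")
```

The equal-weight average shrinks each noisy univariate forecast toward the cross-predictor mean, which is why combination forecasts tend to guard against overfitting in exactly the large-information, noisy-data setting the abstract describes.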

Article

The Growth of Health Spending in the United States From 1776 to 2026  

Thomas E. Getzen

During the 18th and 19th centuries, medical spending in the United States rose slowly, on average about 0.25% faster than gross domestic product (GDP), and varied widely between rural and urban regions. Accumulating scientific advances caused spending to accelerate by 1910. From 1930 to 1955, rapid per-capita income growth accommodated major medical expansion while keeping the health share of GDP almost constant. During the 1950s and 1960s, prosperity and investment in research, the workforce, and hospitals caused a rapid surge in spending and consolidated a truly national health system. Excess growth rates (above GDP growth) were above +5% per year from 1966 to 1970, which would have doubled the health-sector share in fifteen years had it not moderated, falling under +3% in the 1980s, +2% in the 1990s, and +1.5% since 2005. The question of when national health expenditure growth can be brought into line with GDP and made sustainable for the long run is still open. A review of historical data over three centuries forces confrontation with issues regarding what to include and how long events continue to affect national health accounting and policy. Empirical analysis at a national scale over multiple decades fails to support the position that many of the commonly discussed variables (obesity, aging, mortality rates, coinsurance) cause significant shifts in expenditure trends. What does become clear is that there are long and variable lags before macroeconomic and technological events affect spending: three to six years for business cycles and multiple decades for major recessions, scientific discoveries, and organizational change. Health-financing mechanisms, such as employer-based health insurance, Medicare, and the Affordable Care Act (Obamacare), are seen to be both cause and effect, taking years to develop and affecting spending for decades to come.