Article
Bayesian Vector Autoregressions: Applications
Silvia Miranda-Agrippino and Giovanni Ricco
Bayesian vector autoregressions (BVARs) are standard multivariate autoregressive models routinely used in empirical macroeconomics and finance for structural analysis, forecasting, and scenario analysis in an ever-growing number of applications.
A preeminent field of application of BVARs is forecasting. BVARs with informative priors have often proved to be superior tools compared to standard frequentist/flat-prior VARs. VARs are, in fact, highly parametrized autoregressive models whose number of parameters grows with the square of the number of variables times the number of lags included; for example, a 20-variable VAR with 4 lags has 1,600 autoregressive slope coefficients, before counting intercepts and covariance terms. Prior information, in the form of prior distributions on the model parameters, helps in forming sharper posterior distributions of the parameters, conditional on an observed sample. Hence, BVARs can be effective in reducing parameter uncertainty and improving forecast accuracy relative to standard frequentist/flat-prior VARs.
This feature in particular has favored the use of Bayesian techniques to address “big data” problems, in what is arguably one of the most active frontiers of the BVAR literature. Large-information BVARs have in fact proven to be valuable tools for empirical analysis in data-rich environments.
BVARs are also routinely employed to produce conditional forecasts and scenario analysis. Of particular interest for policy institutions, these applications permit evaluating the “counterfactual” time evolution of the variables of interest conditional on a pre-determined path for some other variables, such as the path of interest rates over a certain horizon.
The “structural interpretation” of estimated VARs as the data generating process of the observed data requires the adoption of strict “identifying restrictions.” From a Bayesian perspective, such restrictions can be seen as dogmatic prior beliefs about some regions of the parameter space that determine the contemporaneous interactions among variables and for which the data are uninformative. More generally, Bayesian techniques offer a framework for structural analysis through priors that incorporate uncertainty about the identifying assumptions themselves.
Article
Bayesian Vector Autoregressions: Estimation
Silvia Miranda-Agrippino and Giovanni Ricco
Vector autoregressions (VARs) are linear multivariate time-series models able to capture the joint dynamics of multiple time series. Bayesian inference treats the VAR parameters as random variables and provides a framework to estimate the “posterior” probability distribution of the model parameters by combining the information provided by a sample of observed data with prior information derived from a variety of sources, such as other macro or micro datasets, theoretical models, other macroeconomic phenomena, or introspection.
In empirical work in economics and finance, informative prior probability distributions are often adopted. These are intended to summarize stylized representations of the data generating process. For example, the “Minnesota” prior, one of the most commonly adopted macroeconomic priors for the VAR coefficients, expresses the belief that an independent random-walk model for each variable in the system is a reasonable “center” for beliefs about their time-series behavior. Other commonly adopted priors, such as the “single-unit-root” and “sum-of-coefficients” priors, enforce beliefs about relations among the VAR coefficients, such as the existence of co-integrating relationships among variables or of independent unit roots.
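As a purely illustrative sketch (the notation is not taken from the article), one common parameterization of the Minnesota prior sets, for the (i, j) element of the lag-ℓ autoregressive coefficient matrix A_ℓ,

```latex
\mathbb{E}\!\left[(A_{\ell})_{ij}\right] =
  \begin{cases}
    \delta_i & \text{if } i = j \text{ and } \ell = 1,\\
    0        & \text{otherwise,}
  \end{cases}
\qquad
\operatorname{Var}\!\left[(A_{\ell})_{ij}\right] = \frac{\lambda^{2}}{\ell^{2}}\,\frac{\sigma_i^{2}}{\sigma_j^{2}},
```

where δ_i = 1 centers variable i on a random walk (δ_i = 0 for series believed to be stationary), λ controls the overall tightness of the prior, and the scale ratio σ_i²/σ_j² is typically set from residual variances of univariate autoregressions. Exact formulations differ across studies.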
Priors for macroeconomic variables are often adopted as “conjugate prior distributions” (that is, distributions that yield a posterior distribution in the same family as the prior p.d.f.), in the form of Normal-Inverse-Wishart distributions, which are the conjugate prior for the likelihood of a VAR with normally distributed disturbances. Conjugate priors allow direct sampling from the posterior distribution and fast estimation. When this is not possible, numerical techniques such as Gibbs and Metropolis-Hastings sampling algorithms are adopted.
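As an illustration only (the function name, variable names, and the specific prior parameterization below are assumptions for the sketch, not the article’s), direct sampling from a conjugate Normal-Inverse-Wishart posterior for a Gaussian VAR might be coded along these lines:

```python
# Minimal sketch: direct posterior sampling for a Gaussian VAR under a
# conjugate Normal-Inverse-Wishart prior Sigma ~ IW(S0, nu0),
# vec(B) | Sigma ~ N(vec(B0), Sigma kron Omega0).
import numpy as np
from scipy.stats import invwishart

def niw_posterior_draws(Y, X, B0, Omega0, S0, nu0, n_draws=1000, seed=0):
    """Y: T x n observations; X: T x k regressors (constant and lags of Y)."""
    rng = np.random.default_rng(seed)
    T, n = Y.shape
    k = X.shape[1]
    Omega0_inv = np.linalg.inv(Omega0)
    # Posterior moments of the Normal-Inverse-Wishart distribution
    Omega_n = np.linalg.inv(Omega0_inv + X.T @ X)
    B_n = Omega_n @ (Omega0_inv @ B0 + X.T @ Y)
    nu_n = nu0 + T
    S_n = S0 + Y.T @ Y + B0.T @ Omega0_inv @ B0 \
        - B_n.T @ np.linalg.inv(Omega_n) @ B_n
    S_n = 0.5 * (S_n + S_n.T)  # enforce symmetry for numerical stability
    chol_Omega = np.linalg.cholesky(Omega_n)
    draws = []
    for _ in range(n_draws):
        Sigma = invwishart.rvs(df=nu_n, scale=S_n, random_state=rng)
        # Matrix-normal draw: vec(B) | Sigma ~ N(vec(B_n), Sigma kron Omega_n)
        Z = rng.standard_normal((k, n))
        B = B_n + chol_Omega @ Z @ np.linalg.cholesky(Sigma).T
        draws.append((B, Sigma))
    return draws
```

Because every draw comes directly from the exact posterior, no Markov chain Monte Carlo iterations or convergence checks are needed in this conjugate case.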
Bayesian techniques allow for the estimation of an ever-expanding class of sophisticated autoregressive models that includes conventional fixed-parameter VAR models; large VARs incorporating hundreds of variables; panel VARs, which permit analyzing the joint dynamics of multiple time series of heterogeneous and interacting units; and VAR models that relax the assumption of fixed coefficients, such as time-varying-parameter, threshold, and Markov-switching VARs.
Article
Bootstrapping in Macroeconometrics
Helmut Herwartz and Alexander Lange
Unlike traditional first-order asymptotic approximations, the bootstrap is a simulation method for solving inferential problems in statistics and econometrics conditional on the available sample information (e.g., constructing confidence intervals or generating critical values for test statistics). Even though econometric theory by now provides sophisticated central limit theory covering various data characteristics, bootstrap approaches are of particular appeal when establishing asymptotic pivotalness of (econometric) diagnostics is infeasible or requires rather complex assessments of estimation uncertainty. Moreover, empirical macroeconomic analysis is typically constrained by short- to medium-sized time windows of sample information, and convergence of macroeconometric model estimates toward their asymptotic limits is often slow. Consistent bootstrap schemes have the potential to improve empirical significance levels in macroeconometric analysis and, moreover, can avoid explicit assessments of estimation uncertainty. In addition, as time-varying (co)variance structures and unmodeled serial correlation patterns are frequently diagnosed in macroeconometric analysis, more advanced bootstrap techniques (e.g., the wild bootstrap and the moving-block bootstrap) have been developed to account for nonpivotalness as a result of such data characteristics.
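As a minimal sketch only (not taken from the article; the AR(1) setting and function name are illustrative assumptions), a recursive-design wild bootstrap with Rademacher multipliers, which preserves conditional heteroskedasticity in the residuals, might look as follows:

```python
# Minimal sketch: recursive-design wild bootstrap for the slope of an AR(1),
# using Rademacher multipliers on the estimated residuals.
import numpy as np

def ar1_wild_bootstrap(y, n_boot=999, seed=0):
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    x, z = y[:-1], y[1:]
    X = np.column_stack([np.ones_like(x), x])
    c_hat, phi_hat = np.linalg.lstsq(X, z, rcond=None)[0]   # OLS fit
    resid = z - c_hat - phi_hat * x
    phi_boot = np.empty(n_boot)
    for b in range(n_boot):
        eta = rng.choice([-1.0, 1.0], size=resid.size)       # Rademacher weights
        y_star = np.empty_like(y)
        y_star[0] = y[0]
        for t in range(1, y.size):                            # rebuild recursively
            y_star[t] = c_hat + phi_hat * y_star[t - 1] + eta[t - 1] * resid[t - 1]
        Xs = np.column_stack([np.ones(y.size - 1), y_star[:-1]])
        phi_boot[b] = np.linalg.lstsq(Xs, y_star[1:], rcond=None)[0][1]
    # Percentile confidence interval for the autoregressive coefficient
    return phi_hat, np.percentile(phi_boot, [2.5, 97.5])
```

For unmodeled serial correlation, a block scheme (e.g., resampling moving blocks of residuals) would replace the independent Rademacher draws.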
Article
The Cointegrated VAR Methodology
Katarina Juselius
The cointegrated VAR (CVAR) approach combines differences of variables with cointegration among them and, by doing so, allows the user to study both long-run and short-run effects in the same model. The CVAR describes an economic system where variables have been pushed away from long-run equilibria by exogenous shocks (the pushing forces) and where short-run adjustment forces pull them back toward long-run equilibria (the pulling forces). In this model framework, basic assumptions underlying a theory model can be translated into testable hypotheses on the order of integration and cointegration of key variables and their relationships. The set of hypotheses describes the empirical regularities we would expect to see in the data if the long-run properties of a theory model are empirically relevant.
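For concreteness (standard textbook notation, not quoted from the article), the CVAR is typically written in vector equilibrium-correction form,

```latex
\Delta y_t = \alpha \beta' y_{t-1} + \sum_{i=1}^{k-1} \Gamma_i \, \Delta y_{t-i} + \Phi D_t + \varepsilon_t ,
```

where the cointegrating relations β′y_{t-1} define the long-run equilibria, the adjustment coefficients α capture the pulling forces, the Γ_i capture short-run dynamics, and D_t collects deterministic terms.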
Article
Econometric Methods for Business Cycle Dating
Máximo Camacho Alonso and Lola Gadea
Over time, the reference cycle of an economy is determined by a sequence of unobservable business cycle turning points that partition the calendar into non-overlapping episodes of expansion and recession. Dating these turning points supports economic analysis and is useful for economic agents, whether policymakers, investors, or academics.
In the interest of transparency and reproducibility, determining the reference cycle with statistical frameworks that automatically date turning points from a set of coincident economic indicators has been the source of remarkable advances in this research context. These methods can be classified into several broad categories. Depending on the assumptions made about the data-generating process, dating methods are either parametric or non-parametric. There are two main approaches to dealing with multivariate data sets: average-then-date and date-then-average. The former focuses on computing a reference series for the aggregate economy, usually by averaging the indicators across the cross-sectional dimension; the global turning points are then dated on the aggregate indicator using one of the business cycle dating models available in the literature. The latter consists of dating the peaks and troughs in a set of coincident business cycle indicators separately and assessing the reference cycle in those periods where the individual turning points cohere.
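As a stylized illustration of the non-parametric route (a deliberately simplified rule in the spirit of Bry–Boschan, omitting the censoring rules on minimum phase and cycle length; the function name is hypothetical), a turning point can be marked wherever an indicator attains a local extremum within a two-sided window:

```python
# Minimal sketch: mark local peaks and troughs of a (log-level) coincident
# indicator within a two-sided window of k periods.
import numpy as np

def turning_points(y, k=5):
    """Return (peaks, troughs) as index arrays for the series y."""
    y = np.asarray(y, dtype=float)
    peaks, troughs = [], []
    for t in range(k, len(y) - k):
        window = y[t - k:t + k + 1]
        # the center observation must be the unique extremum of the window
        if y[t] == window.max() and np.argmax(window) == k:
            peaks.append(t)
        if y[t] == window.min() and np.argmin(window) == k:
            troughs.append(t)
    return np.array(peaks), np.array(troughs)
```

In the date-then-average approach, a rule of this kind would be applied to each coincident indicator separately before checking where the individual turning points cohere.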
In the early 21st century, the literature has shown that future work on dating the reference cycle will have to deal with a set of challenges. First, new tools have become available which, being increasingly sophisticated, may enlarge the existing academic–practitioner gap. Compiling the codes that implement the dating methods and facilitating their practical implementation may reduce this gap. Second, the pandemic shock that hit worldwide economies led most industrialized countries to record in 2020 both the largest fall and the largest rebound in national economic indicators since records began. In the presence of such influential observations, the outcomes of dating methods could misrepresent the actual reference cycle, especially in the case of parametric approaches. Exploring non-parametric approaches, big data sources, and the classification ability offered by machine learning methods could help improve the performance of dating analyses.
Article
The Evolution of Forecast Density Combinations in Economics
Knut Are Aastveit, James Mitchell, Francesco Ravazzolo, and Herman K. van Dijk
Increasingly, professional forecasters and academic researchers in economics present model-based and subjective or judgment-based forecasts that are accompanied by some measure of uncertainty. In its most complete form, this measure is a probability density function for future values of the variable or variables of interest. At the same time, combinations of forecast densities are used to integrate information coming from multiple sources such as experts, models, and large micro-data sets. Given the increased relevance of forecast density combinations, this article explores their genesis and evolution both inside and outside economics. A fundamental density combination equation is specified, which shows that various frequentist as well as Bayesian approaches give different specific contents to this density. In its simplest case, it is a restricted finite mixture, giving fixed equal weights to the various individual densities. The specification of the fundamental density combination equation has been made more flexible in recent literature. It has evolved from using simple average weights to optimized weights to “richer” procedures that allow for time variation, learning features, and model incompleteness. The recent history and evolution of forecast density combination methods, together with their potential and benefits, are illustrated in the policymaking environment of central banks.
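In its simplest restricted form, the combination is a finite mixture of the individual densities (the notation below is a generic illustration, not quoted from the article):

```latex
p\left(y_{T+h} \mid I_T\right) = \sum_{i=1}^{N} w_i \, p_i\left(y_{T+h} \mid I_T\right),
\qquad w_i \ge 0, \quad \sum_{i=1}^{N} w_i = 1,
```

with w_i = 1/N in the equal-weight case; richer specifications replace the fixed w_i with time-varying or learned weights and add components that acknowledge model incompleteness.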
Article
Financial Frictions in Macroeconomic Models
Alfred Duncan and Charles Nolan
In recent decades, macroeconomic researchers have looked to incorporate financial intermediaries explicitly into business-cycle models. These modeling developments have helped us to understand the role of the financial sector in the transmission of policy and external shocks into macroeconomic dynamics. They also have helped us to understand better the consequences of financial instability for the macroeconomy. Large gaps remain in our knowledge of the interactions between the financial sector and macroeconomic outcomes. Specifically, the effects of financial stability and macroprudential policies are not well understood.
Article
Human Capital Inequality: Empirical Evidence
Brant Abbott and Giovanni Gallipoli
This article focuses on the distribution of human capital and its implications for the accrual of economic resources to individuals and households. Human capital inequality can be thought of as measuring disparity in the ownership of labor factors of production, which are usually compensated in the form of wage income.
Earnings inequality is tightly related to human capital inequality. However, it measures disparity in payments to labor rather than dispersion in the market value of the underlying stocks of human capital. Hence, measures of earnings dispersion provide only a partial view of the underlying distribution of productive skills and of the income generated by way of them.
Despite its shortcomings, a fairly common way to gauge the distributional implications of human capital inequality is to examine the distribution of labor income. While it is not always obvious what accounts for returns to human capital, an established approach in the empirical literature is to decompose measured earnings into permanent and transitory components.
A second approach focuses on the lifetime present value of earnings. Lifetime earnings are, by definition, an ex post measure only observable at the end of an individual’s working lifetime. One limitation of this approach is that it assigns a value based on one of the many possible realizations of human capital returns. Arguably, this ignores the option value associated with alternative, but unobserved, potential earning paths that may be valuable ex ante. Hence, ex post lifetime earnings reflect both the genuine value of human capital and the impact of the particular realization of unpredictable shocks (luck).
A different but related measure focuses on the ex ante value of expected lifetime earnings, which differs from ex post (realized) lifetime earnings insofar as it accounts for the value of yet-to-be-realized payoffs along different potential earning paths. Ex ante expectations reflect how much an individual reasonably anticipates earning over the rest of their life based on their current stock of human capital, averaging over possible realizations of luck and other income shifters that may arise. The discounted value of different potential paths of future earnings can be computed using risk-less or state-dependent discount factors.
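One way to formalize this (the notation is illustrative rather than the authors’): the ex ante value of human capital at age t can be written as

```latex
V_t = \mathbb{E}_t\!\left[ \sum_{s=0}^{S-t} m_{t,t+s} \, e_{t+s} \right],
```

where e_{t+s} denotes labor earnings at age t+s, S is the end of working life, and m_{t,t+s} is either a risk-less discount factor or a state-dependent stochastic discount factor. Ex post lifetime earnings correspond to the realized, undiscounted-by-expectation sum along a single path, which is why they confound the value of human capital with luck.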
Article
Macroeconomic Aspects of Housing
Charles Ka Yui Leung and Cho Yiu Joe Ng
This article summarizes research on the macroeconomic aspects of the housing market. In terms of the macroeconomic stylized facts, this article demonstrates that, at the business cycle frequency, there was a general decrease in the association between macroeconomic variables (MVs), such as real GDP and the inflation rate, and housing market variables (HMVs), such as the housing price and the vacancy rate, following the global financial crisis (GFC). However, some macro-finance variables, such as different interest rate spreads, have exhibited a strong association with the HMVs following the GFC. At the medium-term business cycle frequency, some but not all of these patterns prevail. These “new stylized facts” suggest that a reconsideration and refinement of existing “macro-housing” theories would be appropriate. This article also provides a review of the corresponding academic literature, which may enhance our understanding of the evolving macro-housing–finance linkage.
Article
Methodology of Macroeconometrics
Aris Spanos
The current discontent with the dominant macroeconomic theory paradigm, known as Dynamic Stochastic General Equilibrium (DSGE) models, calls for an appraisal of the methods and strategies employed in studying and modeling macroeconomic phenomena using aggregate time series data. The appraisal pertains to the effectiveness of these methods and strategies in accomplishing the primary objective of empirical modeling: to learn from data about phenomena of interest. The co-occurring developments in macroeconomics and econometrics since the 1930s provide the backdrop for the appraisal, with the Keynes vs. Tinbergen controversy at center stage. The overall appraisal is that the DSGE paradigm gives rise to estimated structural models that are both statistically and substantively misspecified, yielding untrustworthy evidence that contributes very little, if anything, to real learning from data about macroeconomic phenomena. A primary contributor to the untrustworthiness of evidence is the traditional econometric perspective of viewing empirical modeling as curve-fitting (structural models), guided by impromptu error term assumptions and evaluated on goodness-of-fit grounds. Regrettably, excellent fit is neither necessary nor sufficient for the reliability of inference and the trustworthiness of the ensuing evidence. Recommendations on how to improve the trustworthiness of empirical evidence revolve around a broader, model-based (non-curve-fitting) modeling framework that attributes cardinal roles to both theory and data without undermining the credibility of either source of information. Two crucial distinctions hold the key to securing the trustworthiness of evidence. The first distinguishes between modeling (specification, misspecification testing, and respecification) and inference, and the second between a substantive (structural) model and a statistical model (the probabilistic assumptions imposed on the particular data). This enables one to establish statistical adequacy (the validity of these assumptions) before relating the statistical model to the structural model and posing questions of interest to the data. The greatest enemy of learning from data about macroeconomic phenomena is not the absence of an alternative and more coherent empirical modeling framework, but the illusion that foisting highly formal structural models on the data can give rise to such learning just because their construction and curve-fitting rely on seemingly sophisticated tools. Regrettably, applying sophisticated tools to a statistically and substantively misspecified DSGE model does nothing to restore the trustworthiness of the evidence stemming from it.
Article
Nonlinear Models in Macroeconometrics
Timo Teräsvirta
Many nonlinear time series models have been around for a long time and have originated outside of time series econometrics. The most popular stochastic models, univariate, dynamic single-equation, and vector autoregressive, are presented and their properties considered. Deterministic nonlinear models are not reviewed. The use of nonlinear vector autoregressive models in macroeconometrics seems to be increasing, and because this may be viewed as a rather recent development, they receive somewhat more attention than their univariate counterparts. Vector threshold autoregressive, smooth transition autoregressive, Markov-switching, and random coefficient autoregressive models are covered, along with nonlinear generalizations of vector autoregressive models with cointegrated variables. Two nonlinear panel models, although not typically macroeconometric models, have also frequently been applied to macroeconomic data. The use of all these models in macroeconomics is highlighted with applications in which model selection, an often difficult issue for nonlinear models, has received due attention. Given the large number of nonlinear time series models, no unique best method of choosing between them seems to be available.
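As one concrete example (standard notation, not quoted from the article), a two-regime logistic smooth transition autoregression for a scalar series y_t can be written as

```latex
y_t = \phi_1' x_t \left(1 - G(s_t; \gamma, c)\right) + \phi_2' x_t \, G(s_t; \gamma, c) + \varepsilon_t,
\qquad
G(s_t; \gamma, c) = \left(1 + \exp\{-\gamma (s_t - c)\}\right)^{-1},
```

where x_t = (1, y_{t-1}, …, y_{t-p})′, s_t is the transition variable (often a lag of y_t), and γ > 0 governs the smoothness of the switch between regimes. The threshold autoregression obtains in the limit γ → ∞, and replacing G with an unobserved Markov chain indicator yields the Markov-switching model.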
Article
Reduced Rank Regression Models in Economics and Finance
Gianluca Cubadda and Alain Hecq
Reduced rank regression (RRR) has been extensively employed for modelling economic and financial time series. The main goals of RRR are to specify and estimate models that are capable of reproducing the presence of common dynamics among variables, such as the serial correlation common feature and the multivariate autoregressive index models. Although cointegration analysis is likely the most prominent example of the use of RRR in econometrics, a large body of research is aimed at detecting and modelling co-movements in time series that are stationary or that have been stationarized after proper transformations. The motivations for the use of RRR in time series econometrics include dimension reduction, which simplifies complex dynamics and thus makes interpretation easier, as well as the pursuit of efficiency gains in both estimation and prediction. Via the final equation representation, RRR also provides the nexus between multivariate time series and parsimonious marginal ARIMA (autoregressive integrated moving average) models. RRR’s drawback, which is common to all dimension reduction techniques, is that the underlying restrictions may or may not be present in the data.
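In its generic form (standard notation, not quoted from the article), RRR imposes a rank restriction on the coefficient matrix of a multivariate regression,

```latex
y_t = C x_t + \varepsilon_t, \qquad \operatorname{rank}(C) = r < \min(n, m), \qquad C = A B',
```

where y_t is n×1, x_t is m×1, and A (n×r) and B (m×r) have full column rank. Cointegration analysis is the leading special case, with the reduced-rank matrix αβ′ attached to the lagged levels in a vector equilibrium-correction model, while serial correlation common feature and autoregressive index models apply analogous restrictions to stationary dynamics.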
Article
The Role of Wage Formation in Empirical Macroeconometric Models
Ragnar Nymoen
The specification of model equations for nominal wage setting has important implications for the properties of macroeconometric models and requires system thinking and multiple-equation modeling. The main model classes are the Phillips curve model (PCM), the wage–price equilibrium correction model (WP-ECM), and the New Keynesian Phillips curve model (NKPCM). The PCM was included in the macroeconometric models of the 1960s. The WP-ECM arrived in the late 1980s. The NKPCM is central in dynamic stochastic general equilibrium (DSGE) models. The three model classes can be interpreted as different specifications of the system of stochastic difference equations that define the supply side of a medium-term macroeconometric model. This calls for an appraisal of the different wage models, in particular in relation to the concept of the non-accelerating inflation rate of unemployment (NAIRU, or natural rate of unemployment), and of the methods and research strategies used. The construction of macroeconometric models used to be based on the combination of theoretical and practical skills in economic modeling. Wage formation was viewed as being forged between the forces of markets and national institutions. In the age of DSGE models, macroeconomics has become more of a theoretical discipline. Nevertheless, producers of DSGE models make use of hybrid forms if an initial theoretical specification fails to meet a benchmark for acceptable data fit. A common ground therefore exists between the NKPCM, WP-ECM, and PCM, and it is feasible to compare the model types empirically.
Article
Stochastic Volatility in Bayesian Vector Autoregressions
Todd E. Clark and Elmar Mertens
Vector autoregressions with stochastic volatility (SV) are widely used in macroeconomic forecasting and structural inference. The SV component of the model conveniently allows for time variation in the variance-covariance matrix of the model’s forecast errors. In turn, that feature of the model generates time variation in predictive densities. The models are most commonly estimated with Bayesian methods, typically Markov chain Monte Carlo methods such as Gibbs sampling. Equation-by-equation methods developed since 2018 enable the estimation of models with large variable sets at much lower computational cost than the standard approach of estimating the model as a system of equations. The Bayesian framework also facilitates the accommodation of mixed-frequency data, non-Gaussian error distributions, and nonparametric specifications. With advances made in the 21st century, researchers are also addressing some of the framework’s outstanding challenges, particularly the dependence of estimates on the ordering of variables in the model and the reliable estimation of the marginal likelihood, which is the fundamental measure of model fit in Bayesian methods.
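One common specification (a generic form, not necessarily the one used by the authors) lets the reduced-form errors have a time-varying covariance matrix driven by log-volatilities that follow random walks:

```latex
y_t = c + \sum_{\ell=1}^{p} A_\ell \, y_{t-\ell} + \varepsilon_t, \qquad
\varepsilon_t \sim N(0, \Sigma_t), \qquad
\Sigma_t = L^{-1} \Lambda_t \left(L^{-1}\right)',

\Lambda_t = \operatorname{diag}\!\left(e^{h_{1,t}}, \ldots, e^{h_{n,t}}\right), \qquad
h_{i,t} = h_{i,t-1} + \nu_{i,t},
```

where L is lower triangular with ones on the diagonal and the ν_{i,t} are the volatility innovations. The dependence of this triangular factorization on how the variables are ordered in y_t is one source of the ordering issue noted above.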