Silvia Miranda-Agrippino and Giovanni Ricco
Bayesian vector autoregressions (BVARs) are standard multivariate autoregressive models routinely used in empirical macroeconomics and finance for structural analysis, forecasting, and scenario analysis in an ever-growing number of applications.
A preeminent field of application of BVARs is forecasting. BVARs with informative priors have often proved to be superior tools compared to standard frequentist/flat-prior VARs. In fact, VARs are highly parameterized autoregressive models, whose number of parameters grows with the square of the number of variables times the number of lags included. Prior information, in the form of prior distributions on the model parameters, helps in forming sharper posterior distributions of parameters, conditional on an observed sample. Hence, BVARs can be effective in reducing parameter uncertainty and improving forecast accuracy compared to standard frequentist/flat-prior VARs.
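The dimensionality problem is easy to quantify; a minimal sketch (the counting formula is standard, the example sizes are illustrative):

```python
# Number of parameters in a VAR with n variables and p lags:
# each of the n equations has n*p lag coefficients plus an intercept,
# and the error covariance matrix adds n*(n+1)/2 free terms.
def var_param_count(n, p):
    mean_params = n * (n * p + 1)    # lag coefficients + intercepts
    cov_params = n * (n + 1) // 2    # symmetric covariance matrix
    return mean_params + cov_params

# A modest 7-variable VAR with 4 lags already has hundreds of parameters:
print(var_param_count(7, 4))   # 7*29 + 28 = 231
```

With typical macroeconomic samples of a few hundred observations, parameter counts of this order quickly exhaust the information in the data, which is where informative priors help.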
This feature in particular has favored the use of Bayesian techniques to address “big data” problems, in what is arguably one of the most active frontiers in the BVAR literature. Large-information BVARs have in fact proven to be valuable tools to handle empirical analysis in data-rich environments.
BVARs are also routinely employed to produce conditional forecasts and scenario analysis. Of particular interest to policy institutions, these applications permit evaluating the “counterfactual” time evolution of the variables of interest conditional on a pre-determined path for some other variables, such as the path of interest rates over a certain horizon.
The “structural interpretation” of estimated VARs as the data generating process of the observed data requires the adoption of strict “identifying restrictions.” From a Bayesian perspective, such restrictions can be seen as dogmatic prior beliefs about some regions of the parameter space that determine the contemporaneous interactions among variables and for which the data are uninformative. More generally, Bayesian techniques offer a framework for structural analysis through priors that incorporate uncertainty about the identifying assumptions themselves.
Silvia Miranda-Agrippino and Giovanni Ricco
Vector autoregressions (VARs) are linear multivariate time-series models able to capture the joint dynamics of multiple time series. Bayesian inference treats the VAR parameters as random variables and provides a framework to estimate the “posterior” probability distribution of the model parameters by combining the information provided by a sample of observed data with prior information derived from a variety of sources, such as other macro or micro datasets, theoretical models, other macroeconomic phenomena, or introspection.
In empirical work in economics and finance, informative prior probability distributions are often adopted. These are intended to summarize stylized representations of the data generating process. For example, “Minnesota” priors, among the most commonly adopted macroeconomic priors for the VAR coefficients, express the belief that an independent random-walk model for each variable in the system is a reasonable “center” for beliefs about their time-series behavior. Other commonly adopted priors, the “single-unit-root” and the “sum-of-coefficients” priors, are used to enforce beliefs about relations among the VAR coefficients, such as the existence of co-integrating relationships among variables, or of independent unit roots.
Priors for macroeconomic variables are often adopted as “conjugate prior distributions”—that is, distributions that yield a posterior distribution in the same family as the prior p.d.f.—in the form of Normal-Inverse-Wishart distributions, which are conjugate priors for the likelihood of a VAR with normally distributed disturbances. Conjugate priors allow direct sampling from the posterior distribution and fast estimation. When this is not possible, numerical techniques such as Gibbs and Metropolis-Hastings sampling algorithms are adopted.
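As a minimal sketch of why conjugacy enables direct sampling, the snippet below draws from the Normal-Inverse-Wishart posterior of a small VAR. For brevity it uses the flat-prior limiting case (the posterior is still Normal-Inverse-Wishart), and all data are simulated; it is an illustration, not the full informative-prior machinery described above.

```python
import numpy as np
from scipy.stats import invwishart, matrix_normal

rng = np.random.default_rng(0)

# Simulated data for a small VAR(1) in n = 2 variables (illustrative only)
T, n, p = 200, 2, 1
Y = rng.standard_normal((T, n)).cumsum(axis=0) * 0.1 + rng.standard_normal((T, n))
X = np.column_stack([np.ones(T - p), Y[:-1]])   # intercept + first lag
Yt = Y[p:]

# OLS quantities that pin down the conjugate posterior (flat-prior case)
XtX_inv = np.linalg.inv(X.T @ X)
B_hat = XtX_inv @ X.T @ Yt
resid = Yt - X @ B_hat
S = resid.T @ resid

# Direct sampling: Sigma ~ Inverse-Wishart, then B | Sigma ~ matrix normal
draws = []
for _ in range(100):
    Sigma = invwishart.rvs(df=T - p - X.shape[1], scale=S, random_state=rng)
    B = matrix_normal.rvs(mean=B_hat, rowcov=XtX_inv, colcov=Sigma,
                          random_state=rng)
    draws.append(B)
draws = np.array(draws)   # posterior draws of the VAR coefficient matrix
print(draws.shape)        # (100, 3, 2)
```

Because each draw is exact, no Markov chain convergence diagnostics are needed; Gibbs or Metropolis-Hastings steps enter only when the prior breaks this conjugate structure.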
Bayesian techniques allow for the estimation of an ever-expanding class of sophisticated autoregressive models: conventional fixed-parameter VAR models; large VARs incorporating hundreds of variables; panel VARs, which permit analyzing the joint dynamics of multiple time series of heterogeneous and interacting units; and VAR models that relax the assumption of fixed coefficients, such as time-varying-parameter, threshold, and Markov-switching VARs.
The cointegrated VAR (CVAR) approach combines differences of variables with cointegration among them and by doing so allows the user to study both long-run and short-run effects in the same model. The CVAR describes an economic system where variables have been pushed away from long-run equilibria by exogenous shocks (the pushing forces) and where short-run adjustment forces pull them back toward long-run equilibria (the pulling forces). In this model framework, basic assumptions underlying a theory model can be translated into testable hypotheses on the order of integration and cointegration of key variables and their relationships. The set of hypotheses describes the empirical regularities we would expect to see in the data if the long-run properties of a theory model are empirically relevant.
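The pushing and pulling forces map into the standard vector error-correction representation of the CVAR (the notation here is generic):

```latex
\Delta y_t = \alpha \beta' y_{t-1} + \sum_{i=1}^{p-1} \Gamma_i \, \Delta y_{t-i} + \mu + \varepsilon_t
```

where \(\beta' y_{t-1}\) collects the cointegrating (long-run equilibrium) relations, \(\alpha\) the short-run adjustment speeds (the pulling forces), and \(\varepsilon_t\) the exogenous shocks (the pushing forces).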
Carlos Garriga and Aaron Hedlund
The global financial crisis of 2007–2009 helped usher in a stronger consensus about the central role that housing plays in shaping economic activity, particularly during large boom and bust episodes. The latest research regards the causes, consequences, and policy implications of housing crises with a broad focus that includes empirical and structural analysis, insights from the 2000s experience in the United States, and perspectives from around the globe. Even with the significant degree of heterogeneity in legal environments, institutions, and economic fundamentals over time and across countries, several common themes emerge. Research indicates that fundamentals such as productivity, income, and demographics play an important role in generating sustained movements in house prices. While these forces can also contribute to boom-bust episodes, periods of large house price swings often reflect an evolving housing premium caused by financial innovation and shifts in expectations, which are in turn amplified by changes to the liquidity of homes. Regarding credit, the latest evidence indicates that expansions in lending to marginal borrowers via the subprime market may not be entirely to blame for the run-up in mortgage debt and prices that preceded the 2007–2009 financial crisis. Instead, the expansion in credit manifested by lower mortgage rates was broad-based and caused borrowers across a wide range of incomes and credit scores to dramatically increase their mortgage debt. To whatever extent changing beliefs about future housing appreciation may have contributed to higher realized house price growth in the 2000s, it appears that neither borrowers nor lenders anticipated the subsequent collapse in house prices. However, expectations about future credit conditions—including the prospect of rising interest rates—may have contributed to the downturn. 
For macroeconomists and those otherwise interested in the broader economic implications of the housing market, a growing body of evidence combining micro data and structural modeling finds that large swings in house prices can produce large disruptions to consumption, the labor market, and output. Central to this transmission is the composition of household balance sheets—not just the amount of net worth, but also how that net worth is allocated between short-term liquid assets, illiquid housing wealth, and long-term defaultable mortgage debt. By shaping the incentive to default, foreclosure laws have a profound ex-ante effect on the supply of credit as well as on the ex-post economic response to large shocks that affect households’ degree of financial distress. On the policy front, research finds mixed results for some of the crisis-related interventions implemented in the U.S. while providing guidance for future measures should another housing bust of similar or greater magnitude recur. Lessons are also provided for the development of macroprudential policy aimed at preventing such a future crisis without unduly constraining economic performance in good times.
Chao Gu, Han Han, and Randall Wright
The effects of news (i.e., information innovations) are studied in dynamic general equilibrium models where liquidity matters. As a leading example, news can be announcements about monetary policy directions. In three standard theoretical environments—an overlapping generations model of fiat currency, a new monetarist model accommodating multiple payment methods, and a model of unsecured credit—transition paths are constructed between an announcement and the date at which events are realized. Although the economics is different, in each case, news about monetary policy can induce volatility in financial and other markets, with transitions displaying booms, crashes, and cycles in prices, quantities, and welfare. This is not the same as volatility based on self-fulfilling prophecies (e.g., cyclic or sunspot equilibria) studied elsewhere. Instead, the focus is on the unique equilibrium that is stationary when parameters are constant but still delivers complicated dynamics in simple environments due to information and liquidity effects. This is true even for classically neutral policy changes. The induced volatility can be bad or good for welfare, but using policy to exploit this in practice seems difficult because outcomes are very sensitive to timing and parameters. The approach can be extended to include news of real factors, as seen in examples.
Knut Are Aastveit, James Mitchell, Francesco Ravazzolo, and Herman K. van Dijk
Increasingly, professional forecasters and academic researchers in economics present model-based and subjective or judgment-based forecasts that are accompanied by some measure of uncertainty. In its most complete form this measure is a probability density function for future values of the variable or variables of interest. At the same time, combinations of forecast densities are being used in order to integrate information coming from multiple sources such as experts, models, and large micro-data sets. Given the increased relevance of forecast density combinations, this article explores their genesis and evolution both inside and outside economics. A fundamental density combination equation is specified, which shows that various frequentist as well as Bayesian approaches give different specific contents to this density. In its simplest case, it is a restricted finite mixture, giving fixed equal weights to the various individual densities. The specification of the fundamental density combination equation has been made more flexible in recent literature. It has evolved from using simple average weights to optimized weights to “richer” procedures that allow for time variation, learning features, and model incompleteness. The recent history and evolution of forecast density combination methods, together with their potential and benefits, are illustrated in the policymaking environment of central banks.
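In its simplest equal-weight form, the combination is a finite mixture of the individual densities; a minimal sketch with two hypothetical Gaussian forecast densities (all numbers illustrative):

```python
import numpy as np
from scipy.stats import norm

# Two individual forecast densities for the same variable (illustrative):
# one source predicts N(1.8, 0.5^2), another N(2.4, 0.8^2).
densities = [norm(loc=1.8, scale=0.5), norm(loc=2.4, scale=0.8)]

def combined_pdf(y, weights=None):
    """Finite-mixture combination; fixed equal weights in the simplest case."""
    if weights is None:
        weights = np.full(len(densities), 1.0 / len(densities))
    return sum(w * d.pdf(y) for w, d in zip(weights, densities))

grid = np.linspace(-1, 5, 601)
pdf = combined_pdf(grid)
# A mixture of proper densities is itself a proper density (Riemann check)
area = float(np.sum(pdf) * (grid[1] - grid[0]))
print(area)
```

Richer procedures in the recent literature replace the fixed equal weights with optimized, time-varying, or learned weights, but the same combination equation is the starting point.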
Alfred Duncan and Charles Nolan
In recent decades, macroeconomic researchers have looked to incorporate financial intermediaries explicitly into business-cycle models. These modeling developments have helped us to understand the role of the financial sector in the transmission of policy and external shocks into macroeconomic dynamics. They also have helped us to understand better the consequences of financial instability for the macroeconomy. Large gaps remain in our knowledge of the interactions between the financial sector and macroeconomic outcomes. Specifically, the effects of financial stability and macroprudential policies are not well understood.
The geography of economic activity refers to the distribution of population, production, and consumption of goods and services in geographic space. The geography of growth and development refers to the local growth and decline of economic activity and the overall distribution of these local changes within and across countries. The pattern of growth in space can vary substantially across regions, countries, and industries. Ultimately, these patterns can help explain the role that spatial frictions (like transport and migration costs) can play in the overall development of the world economy.
The interaction of agglomeration and congestion forces determines the density of economic activity in particular locations. Agglomeration forces refer to forces that bring together agents and firms by conveying benefits from locating close to each other, or from locating in a particular area. Examples include local technology and institutions, natural resources and local amenities, infrastructure, as well as knowledge spillovers. Congestion forces refer to the disadvantages of locating close to each other. They include traffic, high land prices, as well as crime and other urban dis-amenities. The balance of these forces is mediated by the ability of individuals, firms, goods and services, as well as ideas and technology, to move across space: namely, migration, relocation, transport, commuting, and communication costs. These spatial frictions, together with the varying strength of congestion and agglomeration forces, determine the distribution of economic activity. Changes in these forces and frictions—some purposefully made by agents given the economic environment they face and some exogenous—determine the geography of growth and development.
The main developments in the forces that influence the geography of growth and development have been changes in transport technology, the diffusion of general-purpose technologies, and the structural transformation of economies from agriculture, to manufacturing, to service-oriented economies. There are many challenges in modeling and quantifying these forces and their effects. Nevertheless, doing so is essential to evaluate the impact of a variety of phenomena, from climate change to the effects of globalization and advances in information technology.
Sushant Acharya and Paolo Pesenti
Global policy spillovers can be defined as the effect of policy changes in one country on economic outcomes in other countries. The literature has mainly focused on monetary policy interdependencies and has identified three channels through which policy spillovers can materialize. The first is the expenditure-shifting channel—a monetary expansion in one country depreciates its currency, making its goods cheaper relative to those in other countries and shifting global demand toward domestic tradable goods. The second is the expenditure-changing channel—expansionary monetary policy in one country raises both domestic and foreign expenditure. The third is the financial spillovers channel—expansionary monetary policy in one country eases financial conditions in other economies. The literature generally finds that the net transmission effect is positive but small. However, estimated spillovers vary widely across countries and over time. In the aftermath of the Great Recession, the policy debate has devoted special attention to the possibility that the magnitude and sign of international spillovers might have changed in an environment of low interest rates worldwide, as the expenditure-shifting channel becomes more relevant when the effective lower bound reduces the effectiveness of conventional monetary policies.
David E. Bloom, Michael Kuhn, and Klaus Prettner
The strong observable correlation between health and economic growth is crucial for economic development and sustained well-being, but the underlying causality and mechanisms are difficult to conceptualize. Three issues are of central concern. First, assessing and disentangling causality between health and economic growth are empirically challenging. Second, the relation between health and economic growth changes over the process of economic development. In less developed countries, poor health often reduces labor force participation, particularly among women, and deters investments in education such that fertility stays high and the economy remains trapped in a stagnation equilibrium. By contrast, in more developed countries, health investments primarily lead to rising longevity, which may not significantly affect labor force participation and workforce productivity. Third, different dimensions of health (mortality vs. morbidity, children’s and women’s health, and health at older ages) relate to different economic effects. By changing the duration and riskiness of the life course, mortality affects individual investment choices, whereas morbidity relates more directly to work productivity and education. Children’s health affects their education and has long-lasting implications for labor force participation and productivity later in life. Women’s health is associated with substantial intergenerational spillover effects and influences women’s empowerment and fertility decisions. Finally, health at older ages has implications for retirement and care.
Home bias in international macroeconomics refers to the fact that investors around the world tend to allocate the majority of their portfolios to domestic assets, despite the potential benefits of international diversification. This phenomenon is observed across countries, over time, and across equity and bond portfolios. The bias toward domestic assets tends to be larger in developing countries than in developed economies, with Europe characterized by the lowest equity home bias and Central and South America by the highest. In addition, despite the secular decline in the level of equity home bias over time in all countries and regions, home bias remains a robust feature of the data.
Whether home bias is a puzzle depends on the portfolio allocation that one uses as a theoretical benchmark. For instance, home bias in equity portfolios is a puzzle when assessed through the lens of a simple international capital asset pricing model (CAPM) with homogeneous investors. This model predicts that investors should hold the world market portfolio, namely a portfolio with the share of domestic assets equal to the share of those assets in the world market portfolio. For instance, since the share of US equity in world capitalization in 2016 was 56%, US investors should allocate 56% of their equity portfolio to local assets, while investing the remaining 44% in foreign equities. Instead, foreign equity comprised just 23% of the US equity portfolio in 2016, hence the equity home bias.
An alternative portfolio benchmark comes from theories that emphasize costs of trading assets in international financial markets. These include transaction and information costs, differential tax treatments, and, more broadly, differences in institutional environments. This research, however, has so far been unable to reach a consensus on the explanatory power of such costs.
Yet another theory argues that equity home bias can arise due to the hedging properties of local equity. In particular, local equity can provide insurance from real exchange rate risk and non-tradable income risk (such as labor income risk), and thus a preference towards home equities is not a puzzle, but rather an optimal response to such risks.
This article discusses these theories and the main advances and results in the macroeconomic literature on home bias. It starts by presenting some empirical facts on the extent and dynamics of equity home bias in developed and developing countries. It then shows how home bias can arise as an equilibrium outcome of hedging demand in a model with real exchange rate and non-tradable labor income risk. Since solving models with portfolio choice is challenging, recent advances in solving such models are also outlined.
Integrating the portfolio dynamics into models that can generate realistic asset price and exchange rate dynamics remains a fruitful avenue for future research. A discussion of additional open questions in this research agenda and suggestions for further readings are also provided.
Brant Abbott and Giovanni Gallipoli
This article focuses on the distribution of human capital and its implications for the accrual of economic resources to individuals and households. Human capital inequality can be thought of as measuring disparity in the ownership of labor factors of production, which are usually compensated in the form of wage income.
Earnings inequality is tightly related to human capital inequality. However, it only measures disparity in payments to labor rather than dispersion in the market value of the underlying stocks of human capital. Hence, measures of earnings dispersion provide a partial and incomplete view of the underlying distribution of productive skills and of the income generated by way of them.
Despite its shortcomings, a fairly common way to gauge the distributional implications of human capital inequality is to examine the distribution of labor income. While it is not always obvious what accounts for returns to human capital, an established approach in the empirical literature is to decompose measured earnings into permanent and transitory components.
A second approach focuses on the lifetime present value of earnings. Lifetime earnings are, by definition, an ex post measure only observable at the end of an individual’s working lifetime. One limitation of this approach is that it assigns a value based on one of the many possible realizations of human capital returns. Arguably, this ignores the option value associated with alternative, but unobserved, potential earning paths that may be valuable ex ante. Hence, ex post lifetime earnings reflect both the genuine value of human capital and the impact of the particular realization of unpredictable shocks (luck).
A different but related measure focuses on the ex ante value of expected lifetime earnings, which differs from ex post (realized) lifetime earnings insofar as it accounts for the value of yet-to-be-realized payoffs along different potential earning paths. Ex ante expectations reflect how much an individual reasonably anticipates earning over the rest of their life based on their current stock of human capital, averaging over possible realizations of luck and other income shifters that may arise. The discounted value of different potential paths of future earnings can be computed using risk-less or state-dependent discount factors.
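The difference between the ex post and ex ante measures can be sketched with a small simulation; the earnings process, discount factor, and parameters below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
beta, T = 0.96, 40   # discount factor and working years (illustrative)

def earnings_path(y0=30_000.0):
    """One realized career: log earnings follow a random walk with drift,
    so each path embodies a particular sequence of luck."""
    shocks = rng.normal(loc=0.01, scale=0.1, size=T)
    return y0 * np.exp(np.cumsum(shocks))

discounts = beta ** np.arange(T)   # risk-less discounting, for simplicity

# Ex post measure: present value of ONE realized path (value + luck)
pv_ex_post = float(discounts @ earnings_path())

# Ex ante measure: average present value over many possible realizations
pv_ex_ante = float(np.mean([discounts @ earnings_path()
                            for _ in range(5_000)]))

print(round(pv_ex_post), round(pv_ex_ante))
```

The ex post number varies from draw to draw with the realized shocks, while the ex ante number averages over them, which is exactly the distinction the text draws between realized lifetime earnings and the value of the underlying human capital stock.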
The links of international reserves, exchange rates, and monetary policy can be understood through the lens of a modern incarnation of the “impossible trinity” (aka the “trilemma”), based on Mundell and Fleming’s hypothesis that a country may simultaneously choose any two, but not all, of the following three policy goals: monetary independence, exchange rate stability, and financial integration. The original economic trilemma was framed in the 1960s, during the Bretton Woods regime, as a binary choice of two out of the possible three policy goals. However, in the 1990s and 2000s, emerging markets and developing countries found that deeper financial integration comes with growing exposure to financial instability and the increased risk of “sudden stop” of capital inflows and capital flight crises. These crises have been characterized by exchange rate instability triggered by countries’ balance sheet exposure to external hard currency debt—exposures that have propagated banking instabilities and crises. Such events have frequently morphed into deep internal and external debt crises, ending with bailouts of systemic banks and powerful macro players. The resultant domestic debt overhang led to fiscal dominance and a reduction of the scope of monetary policy. With varying lags, these crises induced economic and political changes, in which a growing share of emerging markets and developing countries converged to “in-between” regimes in the trilemma middle range—that is, managed exchange rate flexibility, controlled financial integration, and limited but viable monetary autonomy. Emerging research has validated a modern version of the trilemma: that is, countries face a continuous trilemma trade-off in which a higher trilemma policy goal is “traded off” with a drop in the weighted average of the other two trilemma policy goals. 
The concerns associated with exposure to financial instability have been addressed by varying configurations of managing public buffers (international reserves, sovereign wealth funds), as well as growing application of macro-prudential measures aimed at inducing systemic players to internalize the impact of their balance sheet exposure on a country’s financial stability. Consequently, the original trilemma has morphed into a quadrilemma, wherein financial stability has been added to the trilemma’s original policy goals. Size does matter, and there is no way for smaller countries to insulate themselves fully from exposure to global cycles and shocks. Yet successful navigation of the open-economy quadrilemma helps in reducing the transmission of external shocks to the domestic economy, as well as the costs of domestic shocks. These observations explain the relative resilience of emerging markets—especially in countries with more mature institutions—as they have been buffered by deeper precautionary management of reserves, and greater fiscal and monetary space.
We close the discussion by noting that the global financial crisis, and the subsequent Eurozone crisis, have shown that no country is immune from exposure to financial instability and from the modern quadrilemma. However, countries with mature institutions, deeper fiscal capabilities, and more fiscal space may substitute reliance on costly precautionary buffers with bilateral swap lines coordinated among their central banks. While the benefits of such arrangements are clear, they may hinge on the presence and credibility of their fiscal backstop mechanisms, and on curbing the resultant moral hazard. Time will test this credibility, and the degree to which risk-pooling arrangements can be extended to cover the growing share of emerging markets and developing countries.
Charles Ka Yui Leung and Cho Yiu Joe Ng
This article summarizes research on the macroeconomic aspects of the housing market. In terms of macroeconomic stylized facts, this article demonstrates that, at the business cycle frequency, there was a general decrease in the association between macroeconomic variables (MV), such as real GDP and the inflation rate, and housing market variables (HMV), such as the housing price and the vacancy rate, following the global financial crisis (GFC). However, some macro-finance variables, such as different interest rate spreads, exhibited a strong association with the HMV following the GFC. At the medium-term business cycle frequency, some, but not all, of these patterns prevail. These “new stylized facts” suggest that a reconsideration and refinement of existing “macro-housing” theories would be appropriate. This article also provides a review of the corresponding academic literature, which may enhance our understanding of the evolving macro-housing–finance linkage.
Chao Gu, Han Han, and Randall Wright
This article provides an introduction to New Monetarist Economics. This branch of macro and monetary theory emphasizes imperfect commitment, information problems, and sometimes (endogenous) spatial separation as key frictions in the economy, from which institutions like monetary exchange or financial intermediation arise endogenously. We present three generations of models in the development of New Monetarism. The first model studies an environment in which agents meet bilaterally and lack commitment, which allows money to be valued endogenously as a means of payment. In this setup both goods and money are indivisible to keep things tractable. Second-generation models relax the assumption of indivisible goods and use bargaining theory (or related mechanisms) to endogenize prices. Variations of these models are applied to financial asset markets and intermediation. Assets and goods are both divisible in third-generation models, which makes them better suited to policy analysis and empirical work. This framework can also be used to help understand financial markets and liquidity.
Many nonlinear time series models have been around for a long time and originated outside of time series econometrics. The popular stochastic models, univariate, dynamic single-equation, and vector autoregressive, are presented and their properties considered. Deterministic nonlinear models are not reviewed. The use of nonlinear vector autoregressive models in macroeconometrics seems to be increasing, and because this may be viewed as a rather recent development, they receive somewhat more attention than their univariate counterparts. Vector threshold autoregressive, smooth transition autoregressive, Markov-switching, and random coefficient autoregressive models are covered, along with nonlinear generalizations of vector autoregressive models with cointegrated variables. Two nonlinear panel models, although not typically macroeconometric models, have also been frequently applied to macroeconomic data. The use of all these models in macroeconomics is highlighted with applications in which model selection, an often difficult issue in nonlinear models, has received due attention. Given the large number of nonlinear time series models, no unique best method of choosing between them seems to be available.
“Reform” in the economics literature refers to changes in government policies or institutional rules, because status-quo policies and institutions are not working well to achieve the goals of economic well-being and development. Further, reform refers to alternative policies and institutions that are available and would most likely perform better than the status quo. The main question examined in the “political economy of reform” literature has been why reforms are not undertaken when they are needed for the good of society. The succinct answer from the first generation of research is that conflicts of interest between organized socio-political groups allow some groups to stall reforms in order to extract greater private rents from status-quo policies. The next generation of research is tackling more fundamental and enduring questions: Why does conflict of interest persist? How are some interest groups able to exert influence against reforms if there are indeed large gains to be had for society? What institutions are needed to overcome the problem of credible commitment so that interest groups can be compensated or persuaded to support reforms?
Game theory—or the analysis of strategic interactions among individuals and groups—is being used more extensively, going beyond the first generation of research, which focused on the interaction between “winners” and “losers” from reforms. Widespread expectations, or norms, in society at large (not just within organized interest groups) about how others behave in the political sphere of making demands upon government, together with beliefs about the role of public policies, or preferences for public goods, shape these strategic interactions and hence reform outcomes. Examining where these norms and preferences for public goods come from, and how they evolve, is key to understanding why conflict of interest persists and how reformers can commit to finding common ground for socially beneficial reforms. Political markets and institutions, through which the leaders who wield power over public policy are selected and sanctioned, shape norms and preferences for public goods. Leaders who want to pursue reforms need to use the evidence in favor of reforms to build broad-based support in political markets. Contrary to the first-generation view of reform by stealth, the next generation of research suggests that public communication in political markets is needed to develop a shared understanding of policies for the public good.
Concomitantly, the areas of reform have circled from market liberalization, which dominated the 20th century, back to strengthening governments to address problems of market failure and public goods in the 21st century. Reforms involve anti-corruption and public sector management in developing countries; improving health, education, and social protection to address persistent inequality in developed countries; and regulation to preserve competition and to price externalities (such as pollution and environmental depletion) in markets around the world. Understanding the functioning of politics is more important than ever before in determining whether governments are able to pursue reforms for public goods or fall prey to corruption and populism.
Menzie D. Chinn
The idea that prices and exchange rates adjust so as to equalize the common-currency price of identical bundles of goods—purchasing power parity (PPP)—is a topic of central importance in international finance. If PPP holds continuously, then nominal exchange rate changes do not influence trade flows. If PPP does not hold in the short run, but does in the long run, then monetary factors can affect the real exchange rate only temporarily. Substantial evidence has accumulated—with the advent of new statistical tests, alternative data sets, and longer spans of data—that purchasing power parity does not typically hold in the short run. One reason why PPP does not hold in the short run might be sticky prices, in combination with other factors, such as trade barriers. The evidence is mixed for the longer run. Variations in the real exchange rate in the longer run can also be driven by shocks to demand, arising from changes in government spending, the terms of trade, as well as wealth and debt stocks. At time horizons of decades, trend movements in the real exchange rate—that is, systematically trending deviations from PPP—could be due to the presence of nontraded goods, combined with real factors such as differentials in productivity growth. The well-known positive association between the price level and income levels—also known as the “Penn Effect”—is consistent with this channel. Whether PPP holds then depends on the time period, the time horizon, and the currencies examined.
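In logs, the discussion can be organized around the real exchange rate (generic notation: \(s_t\) the log nominal exchange rate, \(p_t\) and \(p_t^{*}\) the home and foreign log price levels):

```latex
q_t = s_t + p_t^{*} - p_t
```

Absolute PPP corresponds to \(q_t = 0\) (relative PPP to \(q_t\) constant); short-run failure with long-run reversion corresponds to a stationary \(q_t\), while Penn-effect forces imply systematic trends in \(q_t\).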