1-20 of 28 Results for: Macroeconomics and Monetary Economics

Article

George W. Evans and Bruce McGough

While rational expectations (RE) remains the benchmark paradigm in macroeconomic modeling, bounded rationality, especially in the form of adaptive learning, has become a mainstream alternative. Under the adaptive learning (AL) approach, economic agents in dynamic, stochastic environments are modeled as adaptive learners forming expectations and making decisions based on forecasting rules that are updated in real time as new data become available. Their decisions are then coordinated each period via the economy’s markets and other relevant institutional architecture, resulting in a time path of economic aggregates. In this way, the AL approach introduces additional dynamics into the model—dynamics that can be used to address myriad macroeconomic issues and concerns, including, for example, empirical fit and the plausibility of specific rational expectations equilibria. AL can be implemented as reduced-form learning, that is, learning applied at the aggregate level, or alternatively as agent-level learning, which includes pre-aggregation analysis of boundedly rational decision making and is discussed in a companion contribution to this Encyclopedia by Evans and McGough. Typically, learning agents are assumed to use estimated linear forecast models, and a central formulation of AL is least-squares learning, in which agents recursively update their estimated model as new data become available. Key questions include whether AL will converge over time to a specified RE equilibrium (REE), in which case we say the REE is stable under AL; when it is, it is also of interest to examine what type of learning dynamics are observed en route. When multiple REE exist, stability under AL can act as a selection criterion, and global dynamics can involve switching between local basins of attraction. In models with indeterminacy, AL can be used to assess whether agents can learn to coordinate their expectations on sunspots.
The key analytical concepts and tools are the E-stability principle together with the E-stability differential equations, and the theory of stochastic recursive algorithms (SRA). While, in general, analysis of SRAs is quite technical, application of the E-stability principle is often straightforward. In addition to equilibrium analysis in macroeconomic models, AL has many applications. In particular, AL has strong implications for the conduct of monetary and fiscal policy, has been used to explain asset price dynamics, has been shown to improve the fit of estimated dynamic stochastic general equilibrium (DSGE) models, and has been proven useful in explaining experimental outcomes.
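The least-squares learning formulation can be illustrated with a minimal sketch (constructed for this note, not drawn from the article): agents forecast a variable with a constant that is updated by a recursive mean, in a simple self-referential model whose unique REE is E-stable when the expectational feedback parameter is below one.

```python
import numpy as np

# Minimal sketch of decreasing-gain (least-squares) learning in the
# self-referential model  y_t = mu + delta * E_{t-1} y_t + eps_t.
# Agents forecast y_t with a constant a_{t-1}; the unique REE is
# y = mu / (1 - delta), and it is E-stable (learnable) when delta < 1.
rng = np.random.default_rng(0)
mu, delta, sigma, T = 2.0, 0.5, 0.1, 5000
a = 0.0                                          # agents' current forecast
for t in range(1, T + 1):
    y = mu + delta * a + sigma * rng.normal()    # temporary equilibrium outcome
    a += (y - a) / t                             # recursive least-squares update

ree = mu / (1 - delta)                           # REE value: 4.0
```

With `delta < 1` the forecast `a` converges toward the REE; with `delta > 1` the same recursion would diverge, illustrating instability under AL.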

Article

Silvia Miranda-Agrippino and Giovanni Ricco

Bayesian vector autoregressions (BVARs) are standard multivariate autoregressive models routinely used in empirical macroeconomics and finance for structural analysis, forecasting, and scenario analysis in an ever-growing number of applications. A preeminent field of application of BVARs is forecasting. BVARs with informative priors have often proved to be superior tools compared to standard frequentist/flat-prior VARs. In fact, VARs are highly parametrized autoregressive models, whose number of parameters grows with the square of the number of variables times the number of lags included. Prior information, in the form of prior distributions on the model parameters, helps in forming sharper posterior distributions of parameters, conditional on an observed sample. Hence, BVARs can be effective in reducing parameter uncertainty and improving forecast accuracy compared to standard frequentist/flat-prior VARs. This feature in particular has favored the use of Bayesian techniques to address “big data” problems, in what is arguably one of the most active frontiers in the BVAR literature. Large-information BVARs have in fact proven to be valuable tools to handle empirical analysis in data-rich environments. BVARs are also routinely employed to produce conditional forecasts and scenario analysis. Of particular interest for policy institutions, these applications permit evaluating “counterfactual” time evolution of the variables of interest conditional on a pre-determined path for some other variables, such as the path of interest rates over a certain horizon. The “structural interpretation” of estimated VARs as the data generating process of the observed data requires the adoption of strict “identifying restrictions.” From a Bayesian perspective, such restrictions can be seen as dogmatic prior beliefs about some regions of the parameter space that determine the contemporaneous interactions among variables and for which the data are uninformative.
More generally, Bayesian techniques offer a framework for structural analysis through priors that incorporate uncertainty about the identifying assumptions themselves.

Article

Silvia Miranda-Agrippino and Giovanni Ricco

Vector autoregressions (VARs) are linear multivariate time-series models able to capture the joint dynamics of multiple time series. Bayesian inference treats the VAR parameters as random variables, and it provides a framework to estimate “posterior” probability distributions of the model parameters by combining information provided by a sample of observed data and prior information derived from a variety of sources, such as other macro or micro datasets, theoretical models, other macroeconomic phenomena, or introspection. In empirical work in economics and finance, informative prior probability distributions are often adopted. These are intended to summarize stylized representations of the data generating process. For example, “Minnesota” priors, one of the most commonly adopted macroeconomic priors for the VAR coefficients, express the belief that an independent random-walk model for each variable in the system is a reasonable “center” for the beliefs about their time-series behavior. Other commonly adopted priors, the “single-unit-root” and the “sum-of-coefficients” priors, are used to enforce beliefs about relations among the VAR coefficients, such as the existence of co-integrating relationships among variables, or of independent unit roots. Priors for macroeconomic variables are often adopted as “conjugate prior distributions”—that is, distributions that yield a posterior distribution in the same family as the prior p.d.f.—in the form of Normal-Inverse-Wishart distributions, which are the conjugate priors for the likelihood of a VAR with normally distributed disturbances. Conjugate priors allow direct sampling from the posterior distribution and fast estimation. When this is not possible, numerical techniques such as Gibbs and Metropolis-Hastings sampling algorithms are adopted.
Bayesian techniques allow for the estimation of an ever-expanding class of sophisticated autoregressive models: conventional fixed-parameter VAR models; large VARs incorporating hundreds of variables; panel VARs, which permit analyzing the joint dynamics of multiple time series of heterogeneous and interacting units; and VAR models that relax the assumption of fixed coefficients, such as time-varying-parameter, threshold, and Markov-switching VARs.
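The shrinkage logic of a Minnesota-type prior can be sketched as follows (a deliberately simplified version, assumed for illustration: a single common tightness parameter, omitting the usual lag-decay and residual-scale adjustments). Under a conjugate normal prior, the posterior mean is a ridge-style combination of the OLS estimate and the random-walk prior mean.

```python
import numpy as np

def minnesota_posterior_mean(Y, X, lam):
    """Posterior mean of VAR(1) coefficients under a simplified Minnesota-style
    prior: coefficients centered on independent random walks (identity on own
    first lags), with common prior tightness lam (smaller lam = tighter prior)."""
    k = X.shape[1]
    B0 = np.eye(k)                        # random-walk prior mean
    prior_prec = np.eye(k) / lam**2       # prior precision matrix
    # Normal-equations form: (X'X + P) B = X'Y + P B0
    return np.linalg.solve(X.T @ X + prior_prec, X.T @ Y + prior_prec @ B0)

# Simulated bivariate VAR(1) data, y_t = A y_{t-1} + e_t
rng = np.random.default_rng(0)
A = np.array([[0.5, 0.1], [0.0, 0.8]])
y = np.zeros((200, 2))
for t in range(1, 200):
    y[t] = y[t - 1] @ A.T + rng.normal(size=2)
X, Y = y[:-1], y[1:]

B_loose = minnesota_posterior_mean(Y, X, lam=10.0)   # close to the OLS estimate
B_tight = minnesota_posterior_mean(Y, X, lam=1e-4)   # close to the random-walk prior
```

As the tightness parameter shrinks, the posterior mean collapses to the random-walk prior center regardless of the sample, which is the mechanism behind the improved forecast accuracy in data-rich settings.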

Article

Helmut Herwartz and Alexander Lange

Unlike traditional first-order asymptotic approximations, the bootstrap is a simulation method for solving inferential problems in statistics and econometrics (e.g., constructing confidence intervals or generating critical values for test statistics) conditional on the available sample information. Even though econometric theory already provides sophisticated central limit theory covering various data characteristics, bootstrap approaches are of particular appeal when establishing asymptotic pivotalness of (econometric) diagnostics is infeasible or requires rather complex assessments of estimation uncertainty. Moreover, empirical macroeconomic analysis is typically constrained by short- to medium-sized time windows of sample information, and convergence of macroeconometric model estimates toward their asymptotic limits is often slow. Consistent bootstrap schemes have the potential to improve empirical significance levels in macroeconometric analysis and, moreover, can avoid explicit assessments of estimation uncertainty. In addition, as time-varying (co)variance structures and unmodeled serial correlation patterns are frequently diagnosed in macroeconometric analysis, more advanced bootstrap techniques (e.g., the wild bootstrap and the moving-block bootstrap) have been developed to account for nonpivotalness as a result of such data characteristics.
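The wild bootstrap mentioned above can be sketched in a few lines (an illustrative example constructed for this note, not taken from the article): residuals are resampled with random signs, so each observation keeps its own variance, which is what makes the scheme robust to heteroskedasticity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
e = rng.normal(size=n) * (0.5 + x**2)     # heteroskedastic errors
beta_true = 1.0
y = beta_true * x + e

X = np.column_stack([np.ones(n), x])
b_hat = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ b_hat

# Wild bootstrap: multiply each residual by an independent Rademacher weight,
# preserving the observation-specific variance pattern.
B = 999
boot = np.empty(B)
for i in range(B):
    w = rng.choice([-1.0, 1.0], size=n)
    y_star = X @ b_hat + resid * w
    boot[i] = np.linalg.lstsq(X, y_star, rcond=None)[0][1]

lo, hi = np.percentile(boot, [2.5, 97.5])  # percentile confidence interval
```

A moving-block bootstrap would instead resample contiguous blocks of observations to preserve serial correlation; the wild scheme shown here targets heteroskedasticity only.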

Article

Alessandro Rebucci and Chang Ma

This paper reviews selected post–Global Financial Crisis theoretical and empirical contributions on capital controls and identifies three theoretical motives for the use of capital controls: pecuniary externalities in models of financial crises, aggregate demand externalities in New Keynesian models of the business cycle, and terms of trade manipulation in open-economy models with pricing power. Pecuniary and demand externalities offer the most compelling case for the adoption of capital controls, but macroprudential policy can also address the same distortions. So capital controls generally are not the only instrument that can do the job. If evaluated through the lens of the new theories, the empirical evidence reviewed suggests that capital controls can have the intended effects, even though the extant literature is inconclusive as to whether the effects documented amount to a net gain or loss in welfare terms. Terms of trade manipulation also provides a clear-cut theoretical case for the use of capital controls, but this motive is less compelling because of the spillover and coordination issues inherent in the use of controls on capital flows for this purpose. Perhaps not surprisingly, only a handful of countries have used capital controls in a countercyclical manner, while many adopted macroprudential policies. This suggests that capital control policy might entail additional costs other than increased financing costs, such as signaling poor future policy quality, leakages, and spillovers.

Article

Katarina Juselius

The cointegrated VAR approach combines differences of variables with cointegration among them and by doing so allows the user to study both long-run and short-run effects in the same model. The CVAR describes an economic system where variables have been pushed away from long-run equilibria by exogenous shocks (the pushing forces) and where short-run adjustment forces pull them back toward long-run equilibria (the pulling forces). In this model framework, basic assumptions underlying a theory model can be translated into testable hypotheses on the order of integration and cointegration of key variables and their relationships. The set of hypotheses describes the empirical regularities we would expect to see in the data if the long-run properties of a theory model are empirically relevant.
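The pushing/pulling distinction can be made concrete with a minimal simulated example (hypothetical, for exposition only): a random-walk trend pushes two cointegrated series apart, while a regression of the differenced series on the lagged equilibrium error recovers a negative adjustment coefficient, the pulling force.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 2000
x = np.cumsum(rng.normal(size=T))   # I(1) common trend: the pushing force
y = x + rng.normal(size=T)          # cointegrated with x: y - x is stationary

ect = (y - x)[:-1]                  # lagged equilibrium error (error-correction term)
dy = np.diff(y)                     # short-run changes in y
alpha = np.polyfit(ect, dy, 1)[0]   # adjustment coefficient: the pulling force
# alpha is negative: y is pulled back toward the long-run relation y = x
```

In a full CVAR, the same logic appears in matrix form: the rank of the long-run impact matrix determines how many such cointegrating relations and adjustment coefficients the system contains.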

Article

Florian Exler and Michèle Tertilt

Consumer debt is an important means for consumption smoothing. In the United States, 70% of households own a credit card, and 40% borrow on it. When borrowers cannot (or do not want to) repay their debts, they can declare bankruptcy, which provides additional insurance in tough times. Since the 2000s, up to 1.5% of households declared bankruptcy per year. Clearly, the option to default affects borrowing interest rates in equilibrium. Consequently, when assessing (welfare) consequences of different bankruptcy regimes or providing policy recommendations, structural models with equilibrium default and endogenous interest rates are needed. At the same time, many questions are quantitative in nature: the benefits of a certain bankruptcy regime critically depend on the nature and amount of risk that households bear. Hence, models for normative or positive analysis should quantitatively match some important data moments. Four important empirical patterns are identified: First, since 1950, consumer debt has risen constantly, and it amounted to 25% of disposable income by 2016. Defaults have risen since the 1980s. Interestingly, interest rates remained roughly constant over the same time period. Second, borrowing and default clearly depend on age: both measures exhibit a distinct hump, peaking around 50 years of age. Third, ownership of credit cards and borrowing clearly depend on income: high-income households are more likely to own a credit card and to use it for borrowing. However, this pattern was stronger in the 1980s than in the 2010s. Finally, interest rates became more dispersed over time: the number of observed interest rates more than quadrupled between 1983 and 2016. These data have clear implications for theory: First, considering the importance of age, life cycle models seem most appropriate when modeling consumer debt and default. Second, bankruptcy must be costly to support any debt in equilibrium. 
While many types of costs are theoretically possible, only partial repayment requirements are able to quantitatively match the data on filings, debt levels, and interest rates simultaneously. Third, to account for the long-run trends in debts, defaults, and interest rates, several quantitative theory models identify a credit expansion along the intensive and extensive margins as the most likely source. This expansion is a consequence of technological advancements. Many of the quantitative macroeconomic models in this literature assess welfare effects of proposed reforms or of granting bankruptcy at all. These welfare consequences critically hinge on the types of risk that households face—because households incur unforeseen expenditures, not-too-stringent bankruptcy laws are typically found to be welfare superior both to banning bankruptcy (or making it extremely costly) and to extremely lax bankruptcy rules. There are very promising opportunities for future research related to consumer debt and default. Newly available data in the United States and internationally, more powerful computational resources allowing for more complex modeling of household balance sheets, and new loan products are just some of many promising avenues.

Article

The global financial crisis of 2007–2009 helped usher in a stronger consensus about the central role that housing plays in shaping economic activity, particularly during large boom and bust episodes. The latest research examines the causes, consequences, and policy implications of housing crises with a broad focus that includes empirical and structural analysis, insights from the 2000s experience in the United States, and perspectives from around the globe. Even with the significant degree of heterogeneity in legal environments, institutions, and economic fundamentals over time and across countries, several common themes emerge. Research indicates that fundamentals such as productivity, income, and demographics play an important role in generating sustained movements in house prices. While these forces can also contribute to boom-bust episodes, periods of large house price swings often reflect an evolving housing premium caused by financial innovation and shifts in expectations, which are in turn amplified by changes to the liquidity of homes. Regarding credit, the latest evidence indicates that expansions in lending to marginal borrowers via the subprime market may not be entirely to blame for the run-up in mortgage debt and prices that preceded the 2007–2009 financial crisis. Instead, the expansion in credit manifested by lower mortgage rates was broad-based and caused borrowers across a wide range of incomes and credit scores to dramatically increase their mortgage debt. To whatever extent changing beliefs about future housing appreciation may have contributed to higher realized house price growth in the 2000s, it appears that neither borrowers nor lenders anticipated the subsequent collapse in house prices. However, expectations about future credit conditions—including the prospect of rising interest rates—may have contributed to the downturn.
For macroeconomists and those otherwise interested in the broader economic implications of the housing market, a growing body of evidence combining micro data and structural modeling finds that large swings in house prices can produce large disruptions to consumption, the labor market, and output. Central to this transmission is the composition of household balance sheets—not just the amount of net worth, but also how that net worth is allocated between short-term liquid assets, illiquid housing wealth, and long-term defaultable mortgage debt. By shaping the incentive to default, foreclosure laws have a profound ex-ante effect on the supply of credit as well as on the ex-post economic response to large shocks that affect households’ degree of financial distress. On the policy front, research finds mixed results for some of the crisis-related interventions implemented in the U.S. while providing guidance for future measures should another housing bust of similar or greater magnitude recur. Lessons are also provided for the development of macroprudential policy aimed at preventing such a future crisis without unduly constraining economic performance in good times.

Article

Chao Gu, Han Han, and Randall Wright

The effects of news (i.e., information innovations) are studied in dynamic general equilibrium models where liquidity matters. As a leading example, news can be announcements about monetary policy directions. In three standard theoretical environments—an overlapping generations model of fiat currency, a new monetarist model accommodating multiple payment methods, and a model of unsecured credit—transition paths are constructed between an announcement and the date at which events are realized. Although the economics is different, in each case, news about monetary policy can induce volatility in financial and other markets, with transitions displaying booms, crashes, and cycles in prices, quantities, and welfare. This is not the same as volatility based on self-fulfilling prophecies (e.g., cyclic or sunspot equilibria) studied elsewhere. Instead, the focus is on the unique equilibrium that is stationary when parameters are constant but still delivers complicated dynamics in simple environments due to information and liquidity effects. This is true even for classically-neutral policy changes. The induced volatility can be bad or good for welfare, but using policy to exploit this in practice seems difficult because outcomes are very sensitive to timing and parameters. The approach can be extended to include news of real factors, as seen in examples.

Article

Knut Are Aastveit, James Mitchell, Francesco Ravazzolo, and Herman K. van Dijk

Increasingly, professional forecasters and academic researchers in economics present model-based and subjective or judgment-based forecasts that are accompanied by some measure of uncertainty. In its most complete form, this measure is a probability density function for future values of the variable or variables of interest. At the same time, combinations of forecast densities are being used to integrate information coming from multiple sources such as experts, models, and large micro-data sets. Given the increased relevance of forecast density combinations, this article explores their genesis and evolution both inside and outside economics. A fundamental density combination equation is specified, which shows that various frequentist as well as Bayesian approaches give different specific contents to this density. In its simplest case, it is a restricted finite mixture, giving fixed equal weights to the various individual densities. The specification of the fundamental density combination equation has been made more flexible in recent literature. It has evolved from using simple average weights to optimized weights to “richer” procedures that allow for time variation, learning features, and model incompleteness. The recent history and evolution of forecast density combination methods, together with their potential and benefits, are illustrated in the policymaking environment of central banks.
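The simplest case described above, a restricted finite mixture with fixed equal weights, can be sketched directly (the two Gaussian component densities are hypothetical numbers chosen for illustration):

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def combine_densities(x, components, weights):
    """Finite-mixture combination: p_c(x) = sum_i w_i * p_i(x), weights summing to 1."""
    return sum(w * normal_pdf(x, m, s) for w, (m, s) in zip(weights, components))

# Two forecasters' predictive densities for the same variable, fixed equal weights
grid = np.linspace(-10.0, 12.0, 22001)
pc = combine_densities(grid, [(0.0, 1.0), (2.0, 1.5)], [0.5, 0.5])
mass = pc.sum() * (grid[1] - grid[0])   # numerical integral; a valid density sums to ~1
```

The richer procedures mentioned in the abstract replace the fixed weights with time-varying or estimated ones, but the combination equation keeps this same mixture form.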

Article

Alfred Duncan and Charles Nolan

In recent decades, macroeconomic researchers have looked to incorporate financial intermediaries explicitly into business-cycle models. These modeling developments have helped us to understand the role of the financial sector in the transmission of policy and external shocks into macroeconomic dynamics. They also have helped us to understand better the consequences of financial instability for the macroeconomy. Large gaps remain in our knowledge of the interactions between the financial sector and macroeconomic outcomes. Specifically, the effects of financial stability and macroprudential policies are not well understood.

Article

The development of a simple framework with optimizing agents and nominal rigidities is the point of departure for the analysis of three questions about fiscal and monetary policies in an open economy. The first question concerns the optimal monetary policy targets in a world with trade and financial links. In the baseline model, the optimal cooperative monetary policy is fully inward-looking and seeks to stabilize a combination of domestic inflation and output gap. The equivalence with the closed economy case, however, ends if countries do not cooperate, if firms price goods in the currency of the market of destination, and if international financial markets are incomplete. In these cases, external variables that capture international misalignments relative to the first best become relevant policy targets. The second question is about the empirical evidence on the international transmission of government spending shocks. In response to a positive innovation, the real exchange rate depreciates and the trade balance deteriorates. Standard open economy models struggle to match this evidence. Non-standard consumption preferences and a detailed fiscal adjustment process constitute two ways to address the puzzle. The third question deals with the trade-offs associated with an active use of fiscal policy for stabilization purposes in a currency union. The optimal policy assignment mandates the monetary authority to stabilize union-wide aggregates and the national fiscal authorities to respond to country-specific shocks. Permanent changes of government debt allow for smoothing of the distortionary effects of volatile taxes. Clear and credible fiscal rules may be able to strike the appropriate balance between stabilization objectives and moral hazard issues.

Article

Esteban Rossi-Hansberg

The geography of economic activity refers to the distribution of population, production, and consumption of goods and services in geographic space. The geography of growth and development refers to the local growth and decline of economic activity and the overall distribution of these local changes within and across countries. The pattern of growth in space can vary substantially across regions, countries, and industries. Ultimately, these patterns can help explain the role that spatial frictions (like transport and migration costs) can play in the overall development of the world economy. The interaction of agglomeration and congestion forces determines the density of economic activity in particular locations. Agglomeration forces refer to forces that bring together agents and firms by conveying benefits from locating close to each other, or from locating in a particular area. Examples include local technology and institutions, natural resources and local amenities, infrastructure, as well as knowledge spillovers. Congestion forces refer to the disadvantages of locating close to each other. They include traffic, high land prices, as well as crime and other urban dis-amenities. The balance of these forces is mediated by the ability of individuals, firms, goods and services, as well as ideas and technology, to move across space: namely, migration, relocation, transport, commuting, and communication costs. These spatial frictions, together with the varying strength of congestion and agglomeration forces, determine the distribution of economic activity. Changes in these forces and frictions—some purposefully made by agents given the economic environment they face and some exogenous—determine the geography of growth and development.
The main changes in the forces that influence the geography of growth and development have been changes in transport technology, the diffusion of general-purpose technologies, and the structural transformation of economies from agriculture, to manufacturing, to service-oriented economies. There are many challenges in modeling and quantifying these forces and their effects. Nevertheless, doing so is essential to evaluate the impact of a variety of phenomena, from climate change to the effects of globalization and advances in information technology.

Article

Sushant Acharya and Paolo Pesenti

Global policy spillovers can be defined as the effect of policy changes in one country on economic outcomes in other countries. The literature has mainly focused on monetary policy interdependencies and has identified three channels through which policy spillovers can materialize. The first is the expenditure-shifting channel—a monetary expansion in one country depreciates its currency, making its goods cheaper relative to those in other countries and shifting global demand toward domestic tradable goods. The second is the expenditure-changing channel—expansionary monetary policy in one country raises both domestic and foreign expenditure. The third is the financial spillovers channel—expansionary monetary policy in one country eases financial conditions in other economies. The literature generally finds that the net transmission effect is positive but small. However, estimated spillovers vary widely across countries and over time. In the aftermath of the Great Recession, the policy debate has devoted special attention to the possibility that the magnitude and sign of international spillovers might have changed in an environment of low interest rates worldwide, as the expenditure-shifting channel becomes more relevant when the effective lower bound reduces the effectiveness of conventional monetary policies.

Article

David E. Bloom, Michael Kuhn, and Klaus Prettner

The strong observable correlation between health and economic growth is crucial for economic development and sustained well-being, but the underlying causality and mechanisms are difficult to conceptualize. Three issues are of central concern. First, assessing and disentangling causality between health and economic growth are empirically challenging. Second, the relation between health and economic growth changes over the process of economic development. In less developed countries, poor health often reduces labor force participation, particularly among women, and deters investments in education such that fertility stays high and the economy remains trapped in a stagnation equilibrium. By contrast, in more developed countries, health investments primarily lead to rising longevity, which may not significantly affect labor force participation and workforce productivity. Third, different dimensions of health (mortality vs. morbidity, children’s and women’s health, and health at older ages) relate to different economic effects. By changing the duration and riskiness of the life course, mortality affects individual investment choices, whereas morbidity relates more directly to work productivity and education. Children’s health affects their education and has long-lasting implications for labor force participation and productivity later in life. Women’s health is associated with substantial intergenerational spillover effects and influences women’s empowerment and fertility decisions. Finally, health at older ages has implications for retirement and care.

Article

Home bias in international macroeconomics refers to the fact that investors around the world tend to allocate the majority of their portfolios to domestic assets, despite the potential benefits to be had from international diversification. This pattern holds across countries, over time, and for both equity and bond portfolios. The bias toward domestic assets tends to be larger in developing countries than in developed economies, with Europe exhibiting the lowest equity home bias and Central and South America the highest. In addition, despite the secular decline in the level of equity home bias over time in all countries and regions, home bias still remains a robust feature of the data. Whether home bias is a puzzle depends on the portfolio allocation that one uses as a theoretical benchmark. For instance, home bias in equity portfolios is a puzzle when assessed through the lens of a simple international capital asset pricing model (CAPM) with homogeneous investors. This model predicts that investors should hold the world market portfolio, namely a portfolio with the share of domestic assets equal to the share of those assets in the world market portfolio. For instance, since the share of US equity in world market capitalization in 2016 was 56%, US investors should allocate 56% of their equity portfolio to local assets, while investing the remaining 44% in foreign equities. Instead, foreign equity comprised just 23% of the US equity portfolio in 2016, hence the equity home bias. An alternative portfolio benchmark comes from theories that emphasize costs of trading assets in international financial markets. These include transaction and information costs, differential tax treatments, and more broadly, differences in institutional environments. This research, however, has so far been unable to reach a consensus on the explanatory power of such costs.
Yet another theory argues that equity home bias can arise due to the hedging properties of local equity. In particular, local equity can provide insurance from real exchange rate risk and non-tradable income risk (such as labor income risk), and thus a preference toward home equities is not a puzzle, but rather an optimal response to such risks. These theories, and the main advances and results in the macroeconomic literature on home bias, are discussed in this article. It starts by presenting some empirical facts on the extent and dynamics of equity home bias in developed and developing countries. It is then shown how home bias can arise as an equilibrium outcome of the hedging demand in a model with real exchange rate and non-tradable labor income risk. Since solving models with portfolio choice is challenging, the recent advances in solving such models are also outlined in this article. Integrating the portfolio dynamics into models that can generate realistic asset price and exchange rate dynamics remains a fruitful avenue for future research. A discussion of additional open questions in this research agenda and suggestions for further readings are also provided.
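Using the US figures quoted above, a standard home-bias index (one minus the ratio of the actual foreign share to the CAPM benchmark foreign share) can be computed directly; the function name below is illustrative, not a reference to any library.

```python
def equity_home_bias(domestic_share, world_domestic_share):
    """Home bias index: 1 - (actual foreign share / benchmark foreign share).
    0 = the CAPM world-market portfolio; 1 = a fully domestic portfolio."""
    actual_foreign = 1.0 - domestic_share
    benchmark_foreign = 1.0 - world_domestic_share
    return 1.0 - actual_foreign / benchmark_foreign

# US, 2016: 77% of the equity portfolio held domestically (23% foreign),
# against a 56% US share of world market capitalization (44% foreign benchmark)
hb_us = equity_home_bias(domestic_share=0.77, world_domestic_share=0.56)
# hb_us is roughly 0.48, i.e., US investors hold about half the foreign
# exposure that the world-market benchmark would imply
```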

Article

Brant Abbott and Giovanni Gallipoli

This article focuses on the distribution of human capital and its implications for the accrual of economic resources to individuals and households. Human capital inequality can be thought of as measuring disparity in the ownership of labor factors of production, which are usually compensated in the form of wage income. Earnings inequality is tightly related to human capital inequality. However, it only measures disparity in payments to labor rather than dispersion in the market value of the underlying stocks of human capital. Hence, measures of earnings dispersion provide a partial and incomplete view of the underlying distribution of productive skills and of the income generated by way of them. Despite its shortcomings, a fairly common way to gauge the distributional implications of human capital inequality is to examine the distribution of labor income. While it is not always obvious what accounts for returns to human capital, an established approach in the empirical literature is to decompose measured earnings into permanent and transitory components. A second approach focuses on the lifetime present value of earnings. Lifetime earnings are, by definition, an ex post measure only observable at the end of an individual’s working lifetime. One limitation of this approach is that it assigns a value based on one of the many possible realizations of human capital returns. Arguably, this ignores the option value associated with alternative, but unobserved, potential earning paths that may be valuable ex ante. Hence, ex post lifetime earnings reflect both the genuine value of human capital and the impact of the particular realization of unpredictable shocks (luck). A different but related measure focuses on the ex ante value of expected lifetime earnings, which differs from ex post (realized) lifetime earnings insofar as it accounts for the value of yet-to-be-realized payoffs along different potential earning paths.
Ex ante expectations reflect how much an individual reasonably anticipates earning over the rest of their life based on their current stock of human capital, averaging over possible realizations of luck and other income shifters that may arise. The discounted value of different potential paths of future earnings can be computed using risk-less or state-dependent discount factors.
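The two discounting approaches mentioned above can be written compactly as standard present-value expressions (a schematic formulation; the notation is illustrative, not from the article):

```latex
% Risk-less discounting at a constant rate r, for individual i at age t
% with earnings y_{i,t+s} over a working life ending at T:
V_{i,t} \;=\; \mathbb{E}_t\!\left[\sum_{s=0}^{T-t} \frac{y_{i,t+s}}{(1+r)^{s}}\right]

% State-dependent discounting with a stochastic discount factor m_{t,t+s}:
V_{i,t} \;=\; \mathbb{E}_t\!\left[\sum_{s=0}^{T-t} m_{t,t+s}\, y_{i,t+s}\right]
```

The expectation averages over possible realizations of luck, which is what distinguishes the ex ante value from ex post lifetime earnings: the latter corresponds to a single realized path of the inner sum.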

Article

The indeterminacy school in macroeconomics exploits the fact that macroeconomic models often display multiple equilibria to understand real-world phenomena. There are two distinct phases in the evolution of its history. The first phase began as a research agenda at the University of Pennsylvania in the United States and at CEPREMAP in Paris in the early 1980s. This phase used models of dynamic indeterminacy to explain how shocks to beliefs can temporarily influence economic outcomes. The second phase was developed at the University of California Los Angeles in the 2000s. This phase used models of incomplete factor markets to explain how shocks to beliefs can permanently influence economic outcomes. The first phase of the indeterminacy school has been used to explain volatility in financial markets. The second phase has been used to explain periods of high and persistent unemployment. Together, the two phases provide a microeconomic foundation for Keynes’s general theory that does not rely on the assumption that prices and wages are sticky.

Article

The links among international reserves, exchange rates, and monetary policy can be understood through the lens of a modern incarnation of the “impossible trinity” (aka the “trilemma”), based on Mundell and Fleming’s hypothesis that a country may simultaneously choose any two, but not all, of the following three policy goals: monetary independence, exchange rate stability, and financial integration. The original economic trilemma was framed in the 1960s, during the Bretton Woods regime, as a binary choice of two out of the possible three policy goals. However, in the 1990s and 2000s, emerging markets and developing countries found that deeper financial integration comes with growing exposure to financial instability and the increased risk of “sudden stop” of capital inflows and capital flight crises. These crises have been characterized by exchange rate instability triggered by countries’ balance sheet exposure to external hard currency debt—exposures that have propagated banking instabilities and crises. Such events have frequently morphed into deep internal and external debt crises, ending with bailouts of systemic banks and powerful macro players. The resultant domestic debt overhang led to fiscal dominance and a reduction of the scope of monetary policy. With varying lags, these crises induced economic and political changes, in which a growing share of emerging markets and developing countries converged to “in-between” regimes in the trilemma middle range—that is, managed exchange rate flexibility, controlled financial integration, and limited but viable monetary autonomy. Emerging research has validated a modern version of the trilemma: countries face a continuous trilemma trade-off in which raising one trilemma policy goal is “traded off” against a drop in the weighted average of the other two.
The concerns associated with exposure to financial instability have been addressed by varying configurations of managing public buffers (international reserves, sovereign wealth funds), as well as growing application of macro-prudential measures aimed at inducing systemic players to internalize the impact of their balance sheet exposure on a country’s financial stability. Consequently, the original trilemma has morphed into a quadrilemma, wherein financial stability has been added to the trilemma’s original policy goals. Size does matter, and there is no way for smaller countries to insulate themselves fully from exposure to global cycles and shocks. Yet successful navigation of the open-economy quadrilemma helps in reducing the transmission of external shock to the domestic economy, as well as the costs of domestic shocks. These observations explain the relative resilience of emerging markets—especially in countries with more mature institutions—as they have been buffered by deeper precautionary management of reserves, and greater fiscal and monetary space. We close the discussion noting that the global financial crisis, and the subsequent Eurozone crisis, have shown that no country is immune from exposure to financial instability and from the modern quadrilemma. However, countries with mature institutions, deeper fiscal capabilities, and more fiscal space may substitute the reliance on costly precautionary buffers with bilateral swap lines coordinated among their central banks. While the benefits of such arrangements are clear, they may hinge on the presence and credibility of their fiscal backstop mechanisms, and on curbing the resultant moral hazard. Time will test this credibility, and the degree to which risk-pooling arrangements can be extended to cover the growing share of emerging markets and developing countries.
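The continuous trade-off described above has been tested in the empirical trilemma literature using a linear specification of roughly the following form (a stylized version with illustrative notation; index names are assumptions, not taken from the article):

```latex
% MI = monetary independence, ERS = exchange rate stability,
% KAO = capital account openness, each an index normalized to [0,1]
% for country i in year t:
1 \;=\; a\,\mathrm{MI}_{it} \;+\; b\,\mathrm{ERS}_{it} \;+\; c\,\mathrm{KAO}_{it} \;+\; \varepsilon_{it}
```

A high goodness of fit for this constraint indicates that countries operate on the trilemma frontier: increasing one policy index forces a reduction in the weighted average of the other two, consistent with the continuous trade-off interpretation.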

Article

Charles Ka Yui Leung and Cho Yiu Joe Ng

This article summarizes research on the macroeconomic aspects of the housing market. In terms of macroeconomic stylized facts, the article shows that, at the business cycle frequency, the association between macroeconomic variables (MV), such as real GDP and the inflation rate, and housing market variables (HMV), such as the housing price and the vacancy rate, generally weakened following the global financial crisis (GFC). However, some macro-finance variables, such as different interest rate spreads, exhibited a strong association with the HMV following the GFC. At the medium-term business cycle frequency, some, but not all, of these patterns prevail. These “new stylized facts” suggest that a reconsideration and refinement of existing “macro-housing” theories would be appropriate. The article also reviews the corresponding academic literature, which may enhance our understanding of the evolving macro-housing–finance linkage.