Article
Central Bank Monetary Policy and Consumer Credit Markets
Xudong An, Larry Cordell, Raluca A. Roman, and Calvin Zhang
Central banks around the world use monetary policy tools to promote economic growth and stability; for example, in the United States, the Federal Reserve (Fed) uses federal funds rate adjustments, quantitative easing (QE) or tightening, forward guidance, and other tools “to promote effectively the goals of maximum employment, stable prices, and moderate long-term interest rates.” Changes in monetary policy affect both businesses and consumers. For consumers, changes in monetary policy affect bank credit supply, refinancing activity, and home purchases, which in turn affect household consumption and thus economic growth and price stability. The Fed’s rate cuts and QE programs during the COVID-19 pandemic led to historically low interest rates, which spurred a huge wave of refinancings. However, the pass-through of rate savings in the mortgage market declined during the pandemic. The weaker pass-through can be linked to the extraordinary growth of shadow bank mortgage lenders during the pandemic: shadow bank mortgage lenders charged mortgage borrowers higher rates and fees, so a larger market share for these lenders implies a smaller overall pass-through of rate savings to mortgage borrowers. It is important to note that these shadow banks did provide convenience to consumers, and they originated loans faster than banks did. The convenience and speed could be valuable to borrowers and important in transmitting monetary policy in a timelier way, especially during a crisis.
Article
Consumer Debt and Default: A Macro Perspective
Florian Exler and Michèle Tertilt
Consumer debt is an important means for consumption smoothing. In the United States, 70% of households own a credit card, and 40% borrow on it. When borrowers cannot (or do not want to) repay their debts, they can declare bankruptcy, which provides additional insurance in tough times. Since the 2000s, up to 1.5% of households have declared bankruptcy per year. Clearly, the option to default affects borrowing interest rates in equilibrium. Consequently, when assessing the (welfare) consequences of different bankruptcy regimes or providing policy recommendations, structural models with equilibrium default and endogenous interest rates are needed. At the same time, many questions are quantitative in nature: the benefits of a certain bankruptcy regime critically depend on the nature and amount of risk that households bear. Hence, models for normative or positive analysis should quantitatively match some important data moments.
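To make concrete how the default option feeds back into borrowing rates, the following is a minimal sketch of the zero-profit loan-pricing condition used in many equilibrium-default models in this literature; the notation is illustrative rather than taken from the article.

```latex
% Stylized break-even pricing of unsecured debt with default risk:
% q is the price of a bond promising b' next period, d (0 or 1) is the
% household's bankruptcy decision, s its current state, s' next period's
% state, and r_f the lenders' funding rate.
q(b', s) \;=\; \frac{1 - \mathbb{E}\left[\, d(b', s') \mid s \,\right]}{1 + r_f}
```

A higher default probability lowers q, which is equivalent to a higher borrowing interest rate, so the option to default is priced into credit terms in equilibrium.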
Four important empirical patterns are identified: First, since 1950, consumer debt has risen steadily, and it amounted to 25% of disposable income by 2016. Defaults have risen since the 1980s. Interestingly, interest rates remained roughly constant over the same time period. Second, borrowing and default clearly depend on age: both measures exhibit a distinct hump, peaking around 50 years of age. Third, ownership of credit cards and borrowing clearly depend on income: high-income households are more likely to own a credit card and to use it for borrowing. However, this pattern was stronger in the 1980s than in the 2010s. Finally, interest rates became more dispersed over time: the number of observed interest rates more than quadrupled between 1983 and 2016.
These data have clear implications for theory: First, considering the importance of age, life cycle models seem most appropriate when modeling consumer debt and default. Second, bankruptcy must be costly to support any debt in equilibrium. While many types of costs are theoretically possible, only partial repayment requirements are able to quantitatively match the data on filings, debt levels, and interest rates simultaneously. Third, to account for the long-run trends in debts, defaults, and interest rates, several quantitative theory models identify a credit expansion along the intensive and extensive margin as the most likely source. This expansion is a consequence of technological advancements.
Many of the quantitative macroeconomic models in this literature assess the welfare effects of proposed reforms or of granting bankruptcy at all. These welfare consequences critically hinge on the types of risk that households face. Because households incur unforeseen expenditures, not-too-stringent bankruptcy laws are typically found to be welfare superior both to banning bankruptcy (or making it extremely costly) and to extremely lax bankruptcy rules.
There are very promising opportunities for future research related to consumer debt and default. Newly available data in the United States and internationally, more powerful computational resources allowing for more complex modeling of household balance sheets, and new loan products are just some of many promising avenues.
Article
Crises in the Housing Market: Causes, Consequences, and Policy Lessons
Carlos Garriga and Aaron Hedlund
The global financial crisis of 2007–2009 helped usher in a stronger consensus about the central role that housing plays in shaping economic activity, particularly during large boom and bust episodes. The latest research examines the causes, consequences, and policy implications of housing crises with a broad focus that includes empirical and structural analysis, insights from the 2000s experience in the United States, and perspectives from around the globe. Even with the significant degree of heterogeneity in legal environments, institutions, and economic fundamentals over time and across countries, several common themes emerge. Research indicates that fundamentals such as productivity, income, and demographics play an important role in generating sustained movements in house prices. While these forces can also contribute to boom-bust episodes, periods of large house price swings often reflect an evolving housing premium caused by financial innovation and shifts in expectations, which are in turn amplified by changes to the liquidity of homes.
Regarding credit, the latest evidence indicates that expansions in lending to marginal borrowers via the subprime market may not be entirely to blame for the run-up in mortgage debt and prices that preceded the 2007–2009 financial crisis. Instead, the expansion in credit manifested by lower mortgage rates was broad-based and caused borrowers across a wide range of incomes and credit scores to dramatically increase their mortgage debt. To whatever extent changing beliefs about future housing appreciation may have contributed to higher realized house price growth in the 2000s, it appears that neither borrowers nor lenders anticipated the subsequent collapse in house prices. However, expectations about future credit conditions—including the prospect of rising interest rates—may have contributed to the downturn.
For macroeconomists and those otherwise interested in the broader economic implications of the housing market, a growing body of evidence combining micro data and structural modeling finds that large swings in house prices can produce large disruptions to consumption, the labor market, and output. Central to this transmission is the composition of household balance sheets—not just the amount of net worth, but also how that net worth is allocated among short-term liquid assets, illiquid housing wealth, and long-term defaultable mortgage debt. By shaping the incentive to default, foreclosure laws have a profound ex-ante effect on the supply of credit as well as on the ex-post economic response to large shocks that affect households’ degree of financial distress. On the policy front, research finds mixed results for some of the crisis-related interventions implemented in the U.S. while providing guidance for future measures should another housing bust of similar or greater magnitude recur. Lessons are also provided for the development of macroprudential policy aimed at preventing such a future crisis without unduly constraining economic performance in good times.
Article
The Effects of Monetary Policy Announcements
Chao Gu, Han Han, and Randall Wright
The effects of news (i.e., information innovations) are studied in dynamic general equilibrium models where liquidity matters. As a leading example, news can be announcements about monetary policy directions. In three standard theoretical environments—an overlapping generations model of fiat currency, a new monetarist model accommodating multiple payment methods, and a model of unsecured credit—transition paths are constructed between an announcement and the date at which events are realized. Although the economics is different, in each case, news about monetary policy can induce volatility in financial and other markets, with transitions displaying booms, crashes, and cycles in prices, quantities, and welfare. This is not the same as volatility based on self-fulfilling prophecies (e.g., cyclic or sunspot equilibria) studied elsewhere. Instead, the focus is on the unique equilibrium that is stationary when parameters are constant but still delivers complicated dynamics in simple environments due to information and liquidity effects. This is true even for classically neutral policy changes. The induced volatility can be bad or good for welfare, but using policy to exploit this in practice seems difficult because outcomes are very sensitive to timing and parameters. The approach can be extended to include news of real factors, as illustrated with examples.
Article
The Evolution of Forecast Density Combinations in Economics
Knut Are Aastveit, James Mitchell, Francesco Ravazzolo, and Herman K. van Dijk
Increasingly, professional forecasters and academic researchers in economics present model-based and subjective or judgment-based forecasts that are accompanied by some measure of uncertainty. In its most complete form this measure is a probability density function for future values of the variable or variables of interest. At the same time, combinations of forecast densities are being used in order to integrate information coming from multiple sources such as experts, models, and large micro-data sets. Given the increased relevance of forecast density combinations, this article explores their genesis and evolution both inside and outside economics. A fundamental density combination equation is specified, which shows that various frequentist as well as Bayesian approaches give different specific contents to this density. In its simplest case, it is a restricted finite mixture, giving fixed equal weights to the various individual densities. The specification of the fundamental density combination equation has been made more flexible in recent literature. It has evolved from using simple average weights to optimized weights to “richer” procedures that allow for time variation, learning features, and model incompleteness. The recent history and evolution of forecast density combination methods, together with their potential and benefits, are illustrated in the policymaking environment of central banks.
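As a stylized illustration of the simplest case just described, the combined density can be written as an equal-weight finite mixture of the individual forecast densities; the notation below is illustrative rather than taken from the article.

```latex
% Equal-weight finite-mixture case of a forecast density combination:
% p_c is the combined density for the variable of interest y at horizon h,
% p_1, ..., p_N are the N individual forecast densities, and I_t is the
% information set at the forecast origin.
p_c\!\left(y_{t+h} \mid I_t\right) \;=\; \sum_{i=1}^{N} w_i \, p_i\!\left(y_{t+h} \mid I_t\right),
\qquad w_i = \tfrac{1}{N}, \qquad \sum_{i=1}^{N} w_i = 1
```

The richer procedures mentioned above replace the fixed equal weights with weights that vary over time, are learned from past density-forecast performance, or allow for model incompleteness.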
Article
Financial Bubbles in History
William Quinn and John Turner
Financial bubbles constitute some of history’s most significant economic events, but academic research into the phenomenon has often been narrow, with an excessive focus on whether bubble episodes invalidate or confirm the efficient markets hypothesis. The literature on the topic has also been somewhat siloed, with theoretical, experimental, qualitative, and quantitative methods used to develop relatively discrete bodies of research.
In order to overcome these deficiencies, future research needs to move beyond the rational/irrational dichotomy and holistically examine the causes and consequences of bubbles. Future research in financial bubbles should thus use a wider range of investigative tools to answer key questions or attempt to synthesize the findings of multiple research programs.
There are three areas in particular that future research should focus on: the role of information in a bubble, the aftermath of bubbles, and possible regulatory responses. While bubbles are sometimes seen as an inevitable part of capitalism, there have been long historical eras in which they were extremely rare, and these eras are likely to contain lessons for alleviating the negative effects of bubbles in the 21st century. Finally, the literature on bubbles has tended to neglect certain regions, and future research should hunt for undiscovered episodes outside of Europe and North America.
Article
The History of Central Banks
Eric Monnet
The historical evolution of the role of central banks has been shaped by two major characteristics of these institutions: they are banks and they are linked—in various legal, administrative, and political ways—to the state. The history of central banking is thus an analysis of how central banks have ensured or failed to ensure the stability of the value of money and the credit system while maintaining supportive or conflicting relationships with governments and private banks. Opening the black box of central banks is necessary for understanding the political economy issues that emerge from the implementation of monetary and credit policy and why, in addition to macroeconomic effects, these policies have major consequences for the structure of financial systems and the financing of public debt. It is also important to read the history of the evolution of central banks since the end of the 19th century as a game of countries wanting to adopt a dominant institutional model. Each historical period was characterized by a dominant model that other countries imitated, or pretended to imitate while retaining substantial national characteristics, with a view to greater international political and financial integration. Recent academic research has explored several issues that underline the importance of central banks for the development of the state, the financial system, and macroeconomic fluctuations: (a) the origin of central banks; (b) their role as a lender of last resort and banking supervisor; (c) the justifications and consequences of the domestic macroeconomic policy objectives of central banks, such as inflation and output (monetary policy); (d) the special loans of central banks and their role in the allocation of credit (credit policy); (e) the legal and political links between the central bank and the government (independence); (f) the role of central banks concerning exchange rates and the international monetary system; (g) the production of economic research and statistics.
Article
The Indeterminacy School in Macroeconomics
Roger E. A. Farmer
The indeterminacy school in macroeconomics exploits the fact that macroeconomic models often display multiple equilibria to understand real-world phenomena. There are two distinct phases in its history. The first phase began as a research agenda at the University of Pennsylvania in the United States and at CEPREMAP in Paris in the early 1980s. This phase used models of dynamic indeterminacy to explain how shocks to beliefs can temporarily influence economic outcomes. The second phase was developed at the University of California, Los Angeles in the 2000s. This phase used models of incomplete factor markets to explain how shocks to beliefs can permanently influence economic outcomes. The first phase of the indeterminacy school has been used to explain volatility in financial markets. The second phase of the indeterminacy school has been used to explain periods of persistently high unemployment. The two phases of the indeterminacy school provide a microeconomic foundation for Keynes’ general theory that does not rely on the assumption that prices and wages are sticky.
Article
International Reserves, Exchange Rates, and Monetary Policy: From the Trilemma to the Quadrilemma
Joshua Aizenman
The links among international reserves, exchange rates, and monetary policy can be understood through the lens of a modern incarnation of the “impossible trinity” (aka the “trilemma”), based on Mundell and Fleming’s hypothesis that a country may simultaneously choose any two, but not all, of the following three policy goals: monetary independence, exchange rate stability, and financial integration. The original economic trilemma was framed in the 1960s, during the Bretton Woods regime, as a binary choice of two out of the possible three policy goals. However, in the 1990s and 2000s, emerging markets and developing countries found that deeper financial integration comes with growing exposure to financial instability and the increased risk of “sudden stop” of capital inflows and capital flight crises. These crises have been characterized by exchange rate instability triggered by countries’ balance sheet exposure to external hard currency debt—exposures that have propagated banking instabilities and crises. Such events have frequently morphed into deep internal and external debt crises, ending with bailouts of systemic banks and powerful macro players. The resultant domestic debt overhang led to fiscal dominance and a reduction of the scope of monetary policy. With varying lags, these crises induced economic and political changes, in which a growing share of emerging markets and developing countries converged to “in-between” regimes in the trilemma middle range—that is, managed exchange rate flexibility, controlled financial integration, and limited but viable monetary autonomy. Emerging research has validated a modern version of the trilemma: that is, countries face a continuous trilemma trade-off in which a higher trilemma policy goal is “traded off” with a drop in the weighted average of the other two trilemma policy goals. The concerns associated with exposure to financial instability have been addressed by varying configurations of managing public buffers (international reserves, sovereign wealth funds), as well as growing application of macro-prudential measures aimed at inducing systemic players to internalize the impact of their balance sheet exposure on a country’s financial stability. Consequently, the original trilemma has morphed into a quadrilemma, wherein financial stability has been added to the trilemma’s original policy goals. Size does matter, and there is no way for smaller countries to insulate themselves fully from exposure to global cycles and shocks. Yet successful navigation of the open-economy quadrilemma helps in reducing the transmission of external shocks to the domestic economy, as well as the costs of domestic shocks. These observations explain the relative resilience of emerging markets—especially in countries with more mature institutions—as they have been buffered by deeper precautionary management of reserves, and greater fiscal and monetary space.
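The continuous trade-off described above is often operationalized in the empirical literature as a linear relation among indexes of the three policy goals. The sketch below follows that style; the index labels (MI for monetary independence, ERS for exchange rate stability, KAOPEN for financial openness) are assumptions borrowed from the related trilemma-index literature rather than notation from this article.

```latex
% Continuous trilemma trade-off: a weighted average of the three policy-goal
% indexes is approximately constant, so raising one index must be "paid for"
% by a drop in the weighted sum of the other two.
1 \;=\; a\,\mathit{MI}_t \;+\; b\,\mathit{ERS}_t \;+\; c\,\mathit{KAOPEN}_t \;+\; \varepsilon_t,
\qquad a,\, b,\, c > 0
```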
We close the discussion by noting that the global financial crisis and the subsequent Eurozone crisis have shown that no country is immune from exposure to financial instability and from the modern quadrilemma. However, countries with mature institutions, deeper fiscal capabilities, and more fiscal space may substitute reliance on costly precautionary buffers with bilateral swap lines coordinated among their central banks. While the benefits of such arrangements are clear, they may hinge on the presence and credibility of their fiscal backstop mechanisms, and on curbing the resultant moral hazard. Time will test this credibility, and the degree to which risk-pooling arrangements can be extended to cover the growing share of emerging markets and developing countries.
Article
Macroeconomic Announcement Premium
Hengjie Ai, Ravi Bansal, and Hongye Guo
The macroeconomic announcement premium refers to the fact that a large fraction of the equity market risk premium is realized on a small number of trading days with significant macroeconomic announcements. Examples include monetary policy announcements by the Federal Open Market Committee, unemployment/non-farm payroll reports, the Producer Price Index published by the U.S. Bureau of Labor Statistics, and the gross domestic product reported by the U.S. Bureau of Economic Analysis. During the period 1961–2023, roughly 44 days per year with macroeconomic announcements account for more than 71% of the aggregate equity market risk compensation.
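As a rough illustration of how such a decomposition can be computed, here is a hedged sketch that splits daily market excess returns into announcement and non-announcement days; the file name and column names are assumptions made for illustration, not data or code from the article.

```python
import pandas as pd

# Hedged sketch: gauge how much of the equity premium is earned on days with
# macroeconomic announcements (e.g., FOMC, employment, PPI, and GDP releases).
# The file name and column names below are illustrative assumptions.
df = pd.read_csv("daily_market_returns.csv", parse_dates=["date"])
# Assumed columns: 'date'; 'mkt_excess_ret' (daily market return minus the
# risk-free rate, in decimals); 'is_announcement' (1 on announcement days, 0 otherwise).
df["is_announcement"] = df["is_announcement"].astype(bool)

ann = df[df["is_announcement"]]
non = df[~df["is_announcement"]]

days_per_year = len(ann) / df["date"].dt.year.nunique()
avg_ann = ann["mkt_excess_ret"].mean()   # average excess return on announcement days
avg_non = non["mkt_excess_ret"].mean()   # average excess return on all other days
share = ann["mkt_excess_ret"].sum() / df["mkt_excess_ret"].sum()  # share of summed excess returns

print(f"announcement days per year: {days_per_year:.1f}")
print(f"average excess return, announcement vs. other days: {avg_ann:.4%} vs. {avg_non:.4%}")
print(f"share of summed excess returns earned on announcement days: {share:.1%}")
```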
The existence of the macroeconomic announcement premium has important implications for modeling risk preferences in economics and finance. It provides strong support for non-expected utility analysis. The study by Ai and Bansal demonstrates that the existence of the macroeconomic announcement premium implies that investors’ preferences cannot have an expected utility representation and must satisfy generalized risk sensitivity, a property shared by many non-expected utility models such as the maxmin expected utility of Gilboa and Schmeidler, the recursive utility of Epstein and Zin, and the robust control preference of Hansen and Sargent.
Because the amount of risk compensation is proportional to the magnitude of variations in marginal utility, the macroeconomic announcement premium highlights information as the most important driver of marginal utility. This observation has profound implications for many economic analyses that rely on modeling either time-series variation or cross-sectional heterogeneity in marginal utility across agents, such as consumption risk sharing, the trade-off between equality and efficiency, exchange rate variations, and so on. The link between macroeconomic policy announcements and financial market risk compensation is an important direction for future research.
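The proportionality between risk compensation and variation in marginal utility noted above can be made explicit with the standard stochastic discount factor relation; the notation is textbook-standard rather than specific to this article.

```latex
% With stochastic discount factor M (proportional to marginal utility growth),
% the expected excess return on asset i satisfies
\mathbb{E}\!\left[R_i - R_f\right] \;=\; -\,\frac{\operatorname{Cov}\!\left(M,\, R_i\right)}{\mathbb{E}[M]}
```

The larger the swings in marginal utility around announcements, the larger the risk compensation that can be concentrated on announcement days.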
Article
Methodology of Macroeconometrics
Aris Spanos
The current discontent with the dominant macroeconomic theory paradigm, known as Dynamic Stochastic General Equilibrium (DSGE) models, calls for an appraisal of the methods and strategies employed in studying and modeling macroeconomic phenomena using aggregate time series data. The appraisal pertains to the effectiveness of these methods and strategies in accomplishing the primary objective of empirical modeling: to learn from data about phenomena of interest. The co-occurring developments in macroeconomics and econometrics since the 1930s provide the backdrop for the appraisal, with the Keynes vs. Tinbergen controversy at center stage. The overall appraisal is that the DSGE paradigm gives rise to estimated structural models that are both statistically and substantively misspecified, yielding untrustworthy evidence that contributes very little, if anything, to real learning from data about macroeconomic phenomena. A primary contributor to the untrustworthiness of evidence is the traditional econometric perspective of viewing empirical modeling as curve-fitting (structural models), guided by impromptu error term assumptions, and evaluated on goodness-of-fit grounds. Regrettably, excellent fit is neither necessary nor sufficient for the reliability of inference and the trustworthiness of the ensuing evidence. Recommendations on how to improve the trustworthiness of empirical evidence revolve around a broader, model-based (non-curve-fitting) modeling framework that attributes cardinal roles to both theory and data without undermining the credibility of either source of information. Two crucial distinctions hold the key to securing the trustworthiness of evidence. The first distinguishes between modeling (specification, misspecification testing, and respecification) and inference, and the second between a substantive (structural) model and a statistical model (the probabilistic assumptions imposed on the particular data). This enables one to establish statistical adequacy (the validity of the probabilistic assumptions) before relating the statistical model to the structural model and posing questions of interest to the data. The greatest enemy of learning from data about macroeconomic phenomena is not the absence of an alternative and more coherent empirical modeling framework, but the illusion that foisting highly formal structural models on the data can give rise to such learning just because their construction and curve-fitting rely on seemingly sophisticated tools. Regrettably, applying sophisticated tools to a statistically and substantively misspecified DSGE model does nothing to restore the trustworthiness of the evidence stemming from it.
Article
New Monetarist Economics
Chao Gu, Han Han, and Randall Wright
This article provides an introduction to New Monetarist Economics. This branch of macro and monetary theory emphasizes imperfect commitment, information problems, and sometimes (endogenous) spatial separation as key frictions in the economy, from which institutions like monetary exchange or financial intermediation are derived endogenously. We present three generations of models in the development of New Monetarism. The first model studies an environment in which agents meet bilaterally and lack commitment, which allows money to be valued endogenously as a means of payment. In this setup, both goods and money are indivisible to keep the analysis tractable. Second-generation models relax the assumption of indivisible goods and use bargaining theory (or related mechanisms) to endogenize prices. Variations of these models are applied to financial asset markets and intermediation. Assets and goods are both divisible in third-generation models, which makes them better suited to policy analysis and empirical work. This framework can also be used to help understand financial markets and liquidity.
Article
Q-Factors and Investment CAPM
Lu Zhang
The Hou–Xue–Zhang q-factor model says that the expected return of an asset in excess of the risk-free rate is described by its sensitivities to the market factor, a size factor, an investment factor, and a return on equity (ROE) factor. Empirically, the q-factor model shows strong explanatory power and largely summarizes the cross-section of average stock returns. Most important, it fully subsumes the Fama–French 6-factor model in head-to-head spanning tests.
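In equation form, the description above corresponds to the following expected-return relation; the notation is standard factor-model notation, with factor labels following the article's verbal description.

```latex
% q-factor model: the expected excess return of asset i is described by its
% betas on the market (MKT), size (ME), investment (I/A), and return on
% equity (ROE) factors, multiplied by the corresponding factor premiums.
\mathbb{E}\!\left[R_i - R_f\right]
  \;=\; \beta^{i}_{\mathrm{MKT}}\,\mathbb{E}[\mathrm{MKT}]
  \;+\; \beta^{i}_{\mathrm{ME}}\,\mathbb{E}[\mathrm{ME}]
  \;+\; \beta^{i}_{\mathrm{I/A}}\,\mathbb{E}[\mathrm{I/A}]
  \;+\; \beta^{i}_{\mathrm{ROE}}\,\mathbb{E}[\mathrm{ROE}]
```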
The q-factor model is an empirical implementation of the investment-based capital asset pricing model (the Investment CAPM). The basic philosophy is to price risky assets from the perspective of their suppliers (firms), as opposed to their buyers (investors). Mathematically, the investment CAPM is a restatement of the net present value (NPV) rule in corporate finance. Intuitively, high investment relative to low expected profitability must imply low costs of capital, and low investment relative to high expected profitability must imply high costs of capital. In a multiperiod framework, if investment is high next period, the present value of cash flows from next period onward must be high. Consisting mostly of this next period present value, the benefits to investment this period must also be high. As such, high investment next period relative to current investment (high expected investment growth) must imply high costs of capital (to keep current investment low).
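The intuition in the preceding paragraph can be summarized with a stylized two-period version of the investment CAPM; the quadratic-adjustment-cost functional form below is an illustrative assumption rather than the exact specification in the article.

```latex
% Stylized two-period investment CAPM: the firm's expected stock return
% (its cost of capital) equals expected profitability divided by the marginal
% cost of investment, where a > 0 governs adjustment costs and I_t/A_t is
% investment relative to assets.
\mathbb{E}_t\!\left[r_{t+1}\right] \;=\; \frac{\mathbb{E}_t\!\left[\Pi_{t+1}\right]}{1 + a\,(I_t/A_t)}
```

Holding expected profitability fixed, higher current investment implies a lower cost of capital; holding investment fixed, higher expected profitability implies a higher cost of capital, which is exactly the pair of comparative statics stated above.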
As a disruptive innovation, the investment CAPM has broad-ranging implications for academic finance and asset management practice. First, the consumption CAPM, of which the classic Sharpe–Lintner CAPM is a special case, is conceptually incomplete. The crux is that it blindly focuses on the demand of risky assets, while abstracting from the supply altogether. Alas, anomalies are primarily relations between firm characteristics and expected returns. By focusing on the supply, the investment CAPM is the missing piece of equilibrium asset pricing. Second, the investment CAPM retains efficient markets, with cross-sectionally varying expected returns, depending on firms’ investment, profitability, and expected growth. As such, capital markets follow standard economic principles, in sharp contrast to the teachings of behavioral finance. Finally, the investment CAPM validates Graham and Dodd’s security analysis on equilibrium grounds, within efficient markets.
Article
Reduced Rank Regression Models in Economics and Finance
Gianluca Cubadda and Alain Hecq
Reduced rank regression (RRR) has been extensively employed for modelling economic and financial time series. The main goals of RRR are to specify and estimate models that are capable of reproducing the presence of common dynamics among variables, such as the serial correlation common feature and the multivariate autoregressive index models. Although cointegration analysis is likely the most prominent example of the use of RRR in econometrics, a large body of research is aimed at detecting and modelling co-movements in time series that are stationary or that have been stationarized after proper transformations. The motivations for the use of RRR in time series econometrics include dimension reductions, which simplify complex dynamics and thus make interpretations easier, as well as the pursuit of efficiency gains in both estimation and prediction. Via the final equation representation, RRR also establishes the nexus between multivariate time series and parsimonious marginal ARIMA (autoregressive integrated moving average) models. RRR’s drawback, which is common to all dimension reduction techniques, is that the underlying restrictions may or may not be present in the data.
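As a minimal illustration of the mechanics, the hedged sketch below computes a reduced-rank coefficient matrix by truncating the singular value decomposition of the ordinary least squares fit, which is one standard way to obtain the RRR estimator under an identity weight matrix; the simulated data and variable names are assumptions made for illustration only.

```python
import numpy as np

# Hedged sketch of reduced rank regression (RRR): estimate a coefficient
# matrix B of rank r in Y = X B + E by truncating the SVD of the OLS fit.
rng = np.random.default_rng(0)
T, m, n, r = 200, 6, 4, 1                        # sample size, regressors, outcomes, rank

A = rng.normal(size=(m, r))                      # true low-rank structure: B = A @ C
C = rng.normal(size=(r, n))
X = rng.normal(size=(T, m))
Y = X @ A @ C + 0.5 * rng.normal(size=(T, n))

B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)    # unrestricted OLS estimate
Y_fit = X @ B_ols

# Best rank-r fit within the column space of X: truncate the SVD of the fitted values.
U, s, Vt = np.linalg.svd(Y_fit, full_matrices=False)
V_r = Vt[:r].T
B_rrr = B_ols @ V_r @ V_r.T                      # reduced-rank coefficient estimate

print("rank of B_rrr:", np.linalg.matrix_rank(B_rrr))
print("extra residual norm vs. OLS:",
      np.linalg.norm(Y - X @ B_rrr) - np.linalg.norm(Y - Y_fit))
```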
Article
Sparse Grids for Dynamic Economic Models
Johannes Brumm, Christopher Krause, Andreas Schaab, and Simon Scheidegger
Solving dynamic economic models that capture salient real-world heterogeneity and nonlinearity requires the approximation of high-dimensional functions. As their dimensionality increases, compute time and storage requirements grow exponentially. Sparse grids alleviate this curse of dimensionality by substantially reducing the number of interpolation nodes, that is, the grid points needed to achieve a desired level of accuracy. The construction principle of sparse grids is to extend univariate interpolation formulae to the multivariate case by choosing linear combinations of tensor products in a way that reduces the number of grid points by orders of magnitude relative to a full tensor-product grid, without substantially increasing interpolation errors. The most popular versions of sparse grids used in economics are (dimension-adaptive) Smolyak sparse grids that use global polynomial basis functions, and (spatially adaptive) sparse grids with local basis functions. The former can economize on the number of interpolation nodes for sufficiently smooth functions, while the latter can also handle non-smooth functions with locally distinct behavior such as kinks. In economics, sparse grids are particularly useful for interpolating the policy and value functions of dynamic models with state spaces between two and several dozen dimensions, depending on the application. In discrete-time models, sparse grid interpolation can be embedded in standard time iteration or value function iteration algorithms. In continuous-time models, sparse grids can be embedded in finite-difference methods for solving partial differential equations like Hamilton-Jacobi-Bellman equations. In both cases, dimension adaptivity, as well as spatial adaptivity, can add a second layer of sparsity to the fundamental sparse-grid construction. Beyond these salient use cases in economics, sparse grids can also accelerate other computational tasks that arise in high-dimensional settings, including regression, classification, density estimation, quadrature, and uncertainty quantification.
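To make the construction principle concrete, the hedged sketch below builds the node set of a classical Smolyak sparse grid on [-1, 1]^d from nested Clenshaw-Curtis points and compares its size with the corresponding full tensor-product grid; it constructs grid points only (no basis functions or interpolation weights), and the details are illustrative.

```python
import numpy as np
from itertools import product

# Hedged sketch: node set of a classical Smolyak sparse grid versus the full
# tensor-product grid, using nested Clenshaw-Curtis points on [-1, 1].

def cc_nodes(level):
    """Nested Clenshaw-Curtis nodes: 1 point at level 1, 2**(level-1) + 1 afterwards."""
    if level == 1:
        return np.array([0.0])
    m = 2 ** (level - 1) + 1
    return np.cos(np.pi * np.arange(m) / (m - 1))

def smolyak_points(d, ell):
    """Union of tensor products of 1D node sets whose levels sum to at most d + ell."""
    pts = set()
    for levels in product(range(1, ell + 2), repeat=d):
        if sum(levels) <= d + ell:
            for p in product(*(cc_nodes(l) for l in levels)):
                pts.add(tuple(np.round(p, 12)))   # rounding merges coinciding nested nodes
    return np.array(sorted(pts))

d, ell = 2, 4
sparse = smolyak_points(d, ell)
full = len(cc_nodes(ell + 1)) ** d                # full tensor grid at the finest 1D level
print(f"sparse grid points: {len(sparse)}, full tensor-product grid points: {full}")
```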