Article
Adaptive Learning in Macroeconomics
George W. Evans and Bruce McGough
While rational expectations (RE) remains the benchmark paradigm in macroeconomic modeling, bounded rationality, especially in the form of adaptive learning, has become a mainstream alternative. Under the adaptive learning (AL) approach, economic agents in dynamic, stochastic environments are modeled as adaptive learners forming expectations and making decisions based on forecasting rules that are updated in real time as new data become available. Their decisions are then coordinated each period via the economy’s markets and other relevant institutional architecture, resulting in a time-path of economic aggregates. In this way, the AL approach introduces additional dynamics into the model—dynamics that can be used to address myriad macroeconomic issues and concerns, including, for example, empirical fit and the plausibility of specific rational expectations equilibria.
AL can be implemented as reduced-form learning, that is, learning applied at the aggregate level, or alternatively as agent-level learning, which includes pre-aggregation analysis of boundedly rational decision making and is discussed in the companion contribution to this Encyclopedia by Evans and McGough.
Typically learning agents are assumed to use estimated linear forecast models, and a central formulation of AL is least-squares learning, in which agents recursively update their estimated model as new data become available. Key questions include whether AL will converge over time to a specified RE equilibrium (REE), in which case we say the REE is stable under AL; in this case, it is also of interest to examine what type of learning dynamics are observed en route. When multiple REE exist, stability under AL can act as a selection criterion, and global dynamics can involve switching between local basins of attraction. In models with indeterminacy, AL can be used to assess whether agents can learn to coordinate their expectations on sunspots.
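To make the least-squares updating described above concrete, the following minimal sketch (not taken from the article; the model, parameter values, and variable names are illustrative assumptions) simulates least-squares learning in a cobweb-style model in which the temporary-equilibrium price depends on agents’ own forecast:

```python
import numpy as np

# Minimal sketch of least-squares (adaptive) learning in a cobweb-type model:
#   p_t = mu + alpha * E*_{t-1} p_t + delta * w_{t-1} + eta_t
# Agents estimate the perceived law of motion p_t = a + b * w_{t-1} by
# recursive least squares (RLS) and use it to form the forecast E*_{t-1} p_t.
# All parameter values are illustrative assumptions.

rng = np.random.default_rng(0)
mu, alpha, delta = 1.0, 0.5, 0.8          # alpha < 1: the REE is E-stable
T = 5000

phi = np.zeros(2)                          # current estimates (a, b)
R = np.eye(2)                              # second-moment matrix estimate
w_lag = 0.0                                # lagged exogenous observable

for t in range(1, T + 1):
    x = np.array([1.0, w_lag])             # regressors: constant, w_{t-1}
    p_exp = x @ phi                        # agents' forecast of p_t
    p = mu + alpha * p_exp + delta * w_lag + 0.1 * rng.standard_normal()

    gain = 1.0 / (t + 10)                  # decreasing gain (offset keeps R invertible)
    R += gain * (np.outer(x, x) - R)       # update moment matrix
    phi += gain * np.linalg.solve(R, x) * (p - x @ phi)   # RLS coefficient update

    w_lag = 0.9 * w_lag + rng.standard_normal()           # AR(1) observable

# Under least-squares learning the estimates approach the REE values
print("learned (a, b):", phi)
print("REE     (a, b):", (mu / (1 - alpha), delta / (1 - alpha)))
```

With the decreasing gain, the estimates converge to the REE values whenever the expectational feedback parameter satisfies the E-stability condition (here, alpha < 1).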
The key analytical concepts and tools are the E-stability principle together with the E-stability differential equations, and the theory of stochastic recursive algorithms (SRAs). While analysis of SRAs is in general quite technical, application of the E-stability principle is often straightforward.
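In standard notation (which may differ from the article’s), write the map from a perceived law of motion with parameter vector φ to the implied actual law of motion as T(φ); an REE corresponds to a fixed point of T, and E-stability is governed by the differential equation

```latex
\frac{d\varphi}{d\tau} \;=\; T(\varphi) - \varphi ,
```

with the REE E-stable if the fixed point is locally asymptotically stable under this equation, that is, if the eigenvalues of the Jacobian DT − I evaluated at the fixed point all have negative real parts. The E-stability principle links this condition to the convergence of the corresponding SRA.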
In addition to equilibrium analysis in macroeconomic models, AL has many applications. In particular, AL has strong implications for the conduct of monetary and fiscal policy, has been used to explain asset price dynamics, has been shown to improve the fit of estimated dynamic stochastic general equilibrium (DSGE) models, and has proven useful in explaining experimental outcomes.
Article
Agent-Level Adaptive Learning
George W. Evans and Bruce McGough
Adaptive learning is a boundedly rational alternative to rational expectations that is increasingly used in macroeconomics, monetary economics, and financial economics. The agent-level approach can be used to provide microfoundations for adaptive learning in macroeconomics.
Two central issues of bounded rationality are simultaneously addressed at the agent level: replacing fully rational expectations of key variables with econometric forecasts, and replacing fully optimal decisions with boundedly optimal decision-making based on those forecasts. The real business cycle (RBC) model provides a useful laboratory for exhibiting alternative implementations of the agent-level approach. Specific implementations include shadow-price learning (and its anticipated-utility counterpart, iterated shadow-price learning), Euler-equation learning, and long-horizon learning. For each implementation the path of the economy is obtained by aggregating the boundedly rational agent-level decisions.
A linearized RBC model can be used to illustrate the effects of fiscal policy. For example, simulations can be used to illustrate the impact of a permanent increase in government spending and highlight the similarities and differences among the various implementations of agent-level learning. These results also can be used to expose the differences among agent-level learning, reduced-form learning, and rational expectations.
The different implementations of agent-level adaptive learning have differing advantages. A major advantage of shadow-price learning is its ease of implementation within the nonlinear RBC model. Compared to reduced-form learning, which is widely used because of its ease of application, agent-level learning both provides microfoundations, which ensure robustness to the Lucas critique, and provides the natural framework for applications of adaptive learning in heterogeneous-agent models.
Article
Bayesian Vector Autoregressions: Applications
Silvia Miranda-Agrippino and Giovanni Ricco
Bayesian vector autoregressions (BVARs) are standard multivariate autoregressive models routinely used in empirical macroeconomics and finance for structural analysis, forecasting, and scenario analysis in an ever-growing number of applications.
A preeminent field of application of BVARs is forecasting. BVARs with informative priors have often proved to be superior tools compared to standard frequentist/flat-prior VARs. In fact, VARs are highly parametrized autoregressive models, whose number of parameters grows with the square of the number of variables times the number of lags included. Prior information, in the form of prior distributions on the model parameters, helps in forming sharper posterior distributions of parameters, conditional on an observed sample. Hence, BVARs can be effective in reducing parameter uncertainty and improving forecast accuracy.
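A back-of-the-envelope count (illustrative numbers, not taken from the article) makes the dimensionality problem explicit: with n variables and p lags, each equation carries np + 1 conditional-mean coefficients, so the system has

```latex
k = n(np + 1), \qquad \text{e.g. } n = 20,\; p = 12:\; k = 20 \times 241 = 4{,}820
```

mean parameters, plus n(n + 1)/2 = 210 free covariance terms, against the few hundred quarterly observations typically available in macroeconomic samples; informative priors shrink this effective dimensionality.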
This feature in particular has favored the use of Bayesian techniques to address “big data” problems, in what is arguably one of the most active frontiers in the BVAR literature. Large-information BVARs have in fact proven to be valuable tools to handle empirical analysis in data-rich environments.
BVARs are also routinely employed to produce conditional forecasts and scenario analysis. Of particular interest for policy institutions, these applications permit evaluating the “counterfactual” time evolution of the variables of interest conditional on a predetermined path for some other variables, such as the path of interest rates over a certain horizon.
The “structural interpretation” of estimated VARs as the data generating process of the observed data requires the adoption of strict “identifying restrictions.” From a Bayesian perspective, such restrictions can be seen as dogmatic prior beliefs about some regions of the parameter space that determine the contemporaneous interactions among variables and for which the data are uninformative. More generally, Bayesian techniques offer a framework for structural analysis through priors that incorporate uncertainty about the identifying assumptions themselves.
Article
Bayesian Vector Autoregressions: Estimation
Silvia Miranda-Agrippino and Giovanni Ricco
Vector autoregressions (VARs) are linear multivariate time-series models able to capture the joint dynamics of multiple time series. Bayesian inference treats the VAR parameters as random variables, and it provides a framework for estimating the “posterior” probability distribution of the model parameters by combining the information provided by a sample of observed data with prior information derived from a variety of sources, such as other macro or micro datasets, theoretical models, other macroeconomic phenomena, or introspection.
In empirical work in economics and finance, informative prior probability distributions are often adopted. These are intended to summarize stylized representations of the data generating process. For example, “Minnesota” priors, one of the most commonly adopted macroeconomic priors for the VAR coefficients, express the belief that an independent random-walk model for each variable in the system is a reasonable “center” for beliefs about their time-series behavior. Other commonly adopted priors, the “single-unit-root” and “sum-of-coefficients” priors, are used to enforce beliefs about relations among the VAR coefficients, such as the existence of co-integrating relationships among variables, or of independent unit roots.
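For concreteness, in one common parameterization (standard in the literature, though details vary across implementations), the Minnesota prior centers each equation on an independent random walk and shrinks lag coefficients toward zero at a rate that increases with the lag length:

```latex
\mathbb{E}\big[(A_\ell)_{ij}\big] = \begin{cases} 1 & i=j,\ \ell=1 \\ 0 & \text{otherwise,} \end{cases}
\qquad
\operatorname{Var}\big[(A_\ell)_{ij}\big] = \begin{cases} \lambda^2/\ell^2 & i=j \\ \vartheta\,\lambda^2\,\sigma_i^2/\big(\ell^2 \sigma_j^2\big) & i\neq j, \end{cases}
```

where A_ℓ is the matrix of coefficients on lag ℓ, λ controls the overall tightness of the prior, ϑ governs the additional shrinkage on cross-variable lags, and the ratio of scale parameters σ_i/σ_j adjusts for differences in the units of the variables.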
Priors for macroeconomic variables are often adopted as “conjugate prior distributions”—that is, distributions that yield a posterior distribution in the same family as the prior p.d.f.—in the form of Normal-Inverse-Wishart distributions, which are conjugate priors for the likelihood of a VAR with normally distributed disturbances. Conjugate priors allow direct sampling from the posterior distribution and fast estimation. When this is not possible, numerical techniques such as Gibbs and Metropolis-Hastings sampling algorithms are adopted.
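In standard notation (not necessarily the article’s), the natural conjugate prior for a Gaussian VAR with coefficient matrix B and error covariance Σ takes the Normal-Inverse-Wishart form

```latex
\Sigma \sim \mathcal{IW}(\Psi_0, d_0), \qquad \operatorname{vec}(B)\,\big|\,\Sigma \;\sim\; \mathcal{N}\big(\operatorname{vec}(B_0),\, \Sigma \otimes \Omega_0\big),
```

which combines with the Gaussian likelihood to deliver a posterior in the same family, so posterior draws can be obtained by direct Monte Carlo sampling rather than by MCMC.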
Bayesian techniques allow for the estimation of an ever-expanding class of sophisticated autoregressive models, including conventional fixed-parameter VAR models; large VARs incorporating hundreds of variables; panel VARs, which permit analyzing the joint dynamics of multiple time series of heterogeneous and interacting units; and VAR models that relax the assumption of fixed coefficients, such as time-varying-parameter, threshold, and Markov-switching VARs.
Article
Bootstrapping in Macroeconometrics
Helmut Herwartz and Alexander Lange
Unlike traditional first-order asymptotic approximations, the bootstrap is a simulation method that addresses inferential issues in statistics and econometrics conditional on the available sample information (e.g., constructing confidence intervals or generating critical values for test statistics). Even though econometric theory by now provides sophisticated central limit theory covering various data characteristics, bootstrap approaches are of particular appeal when establishing asymptotic pivotalness of (econometric) diagnostics is infeasible or requires rather complex assessments of estimation uncertainty. Moreover, empirical macroeconomic analysis is typically constrained by short- to medium-sized time windows of sample information, and convergence of macroeconometric model estimates toward their asymptotic limits is often slow. Consistent bootstrap schemes have the potential to improve empirical significance levels in macroeconometric analysis and, moreover, can avoid explicit assessments of estimation uncertainty. In addition, as time-varying (co)variance structures and unmodeled serial correlation patterns are frequently diagnosed in macroeconometric analysis, more advanced bootstrap techniques (e.g., wild bootstrap, moving-block bootstrap) have been developed to account for nonpivotalness as a result of such data characteristics.
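As an illustration of one such scheme, the sketch below (illustrative code, not from the article; the regression design and the Rademacher weights are assumptions) implements a basic wild bootstrap for OLS standard errors under heteroskedasticity:

```python
import numpy as np

# Minimal wild bootstrap sketch for OLS with heteroskedastic errors.
# Each bootstrap sample keeps the fitted values and multiplies the residuals
# by i.i.d. Rademacher weights, preserving conditional heteroskedasticity.

def wild_bootstrap_se(y, X, n_boot=999, seed=0):
    rng = np.random.default_rng(seed)
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat
    fitted = X @ beta_hat

    draws = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        eta = rng.choice([-1.0, 1.0], size=len(y))   # Rademacher weights
        y_star = fitted + resid * eta                # wild bootstrap sample
        draws[b], *_ = np.linalg.lstsq(X, y_star, rcond=None)
    return draws.std(axis=0)                         # bootstrap standard errors

# Illustrative use with simulated heteroskedastic data
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.standard_normal(200)])
y = X @ np.array([0.5, 1.0]) + np.abs(X[:, 1]) * rng.standard_normal(200)
print(wild_bootstrap_se(y, X))
```

A moving-block bootstrap would instead resample contiguous blocks of observations so as to preserve serial dependence.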
Article
Capital Controls: A Survey of the New Literature
Alessandro Rebucci and Chang Ma
This paper reviews selected post–Global Financial Crisis theoretical and empirical contributions on capital controls and identifies three theoretical motives for the use of capital controls: pecuniary externalities in models of financial crises, aggregate demand externalities in New Keynesian models of the business cycle, and terms-of-trade manipulation in open-economy models with pricing power. Pecuniary and demand externalities offer the most compelling case for the adoption of capital controls, but macroprudential policy can also address the same distortions, so capital controls generally are not the only instrument that can do the job. If evaluated through the lens of the new theories, the empirical evidence reviewed suggests that capital controls can have the intended effects, even though the extant literature is inconclusive as to whether the effects documented amount to a net gain or loss in welfare terms. Terms-of-trade manipulation also provides a clear-cut theoretical case for the use of capital controls, but this motive is less compelling because of the spillover and coordination issues inherent in the use of controls on capital flows for this purpose. Perhaps not surprisingly, only a handful of countries have used capital controls in a countercyclical manner, while many have adopted macroprudential policies. This suggests that capital control policy might entail additional costs beyond increased financing costs, such as signaling the poor quality of future policies, leakages, and spillovers.
Article
Central Bank Monetary Policy and Consumer Credit Markets
Xudong An, Larry Cordell, Raluca A. Roman, and Calvin Zhang
Central banks around the world use monetary policy tools to promote economic growth and stability; for example, in the United States, the Federal Reserve (Fed) uses federal funds rate adjustments, quantitative easing (QE) or tightening, forward guidance, and other tools “to promote effectively the goals of maximum employment, stable prices, and moderate long-term interest rates.” Changes in monetary policy affect both businesses and consumers. For consumers, changes in monetary policy affect bank credit supply, refinancing activity, and home purchases, which in turn affect household consumption and thus economic growth and price stability. The Fed’s rate cuts and QE programs during COVID-19 led to historically low interest rates, which spurred a huge wave of refinancings. However, the pass-through of rate savings in the mortgage market declined during the pandemic. The weaker pass-through can be linked to the extraordinary growth of shadow bank mortgage lenders during the COVID-19 pandemic: shadow bank mortgage lenders charged mortgage borrowers higher rates and fees, so a higher shadow bank market share means a smaller overall pass-through of rate savings to mortgage borrowers. It is important to note that these shadow banks did provide convenience to consumers, and they originated loans faster than banks. The convenience and speed could be valuable to borrowers and important in transmitting monetary policy in a timelier way, especially during a crisis.
Article
China’s Housing Policy and Housing Boom and Their Macroeconomic Impacts
Kaiji Chen
The house price boom that has been present in most Chinese cities since the early 2000s has triggered substantial interest in the role that China’s housing policy plays in its housing market and macroeconomy, with an extensive literature employing both empirical and theoretical perspectives developed over the past decade. This research finds that the privatization of China’s housing market, which encouraged households living in state-owned housing to purchase their homes at prices far below their market value, contributed to a rapid increase in homeownership beginning in the mid-1990s. Housing market privatization also has led to a significant increase in both housing and nonhousing consumption, but these benefits are unevenly distributed across households. With the policy goal of making homeownership affordable for the average household, the Housing Provident Fund contributes positively to homeownership rates. By contrast, the effectiveness of housing policies to make housing affordable for low-income households has been weaker in recent years. Moreover, a large body of empirical research shows that the unintended consequences of housing market privatization have included a persistent increase in housing prices since the early 2000s, which has been accompanied by soaring land prices, high vacancy rates, and high price-to-income and price-to-rent ratios. The literature has differing views regarding the sustainability of China’s housing boom. On the theoretical front, economists find that rising housing demand, due to both consumption and investment purposes, is important to understanding China’s prolonged housing boom, and that land-use policy, which influences the supply side of the housing market, lies at the center of China’s housing boom. However, regulatory policies, such as housing purchase restrictions and property taxes, have had mixed effects on the housing market in different cities. In addition to China’s housing policy and its direct effects on the nation’s housing market, research finds that China’s housing policy impacts its macroeconomy via the transmission of house price dynamics into the household and corporate sectors. High housing prices have a heterogeneous impact on the consumption and savings of different types of households but tend to discourage household labor supply. Meanwhile, rising house prices encourage housing investment by non–real-estate firms, which crowds out nonhousing investment, lowers the availability of noncollateralized business loans, and reduces productive efficiency via the misallocation of capital and managerial talent.
Article
The Cointegrated VAR Methodology
Katarina Juselius
The cointegrated VAR (CVAR) approach combines differences of variables with cointegration among them and by doing so allows the user to study both long-run and short-run effects in the same model. The CVAR describes an economic system where variables have been pushed away from long-run equilibria by exogenous shocks (the pushing forces) and where short-run adjustment forces pull them back toward long-run equilibria (the pulling forces). In this model framework, basic assumptions underlying a theory model can be translated into testable hypotheses on the order of integration and cointegration of key variables and their relationships. The set of hypotheses describes the empirical regularities we would expect to see in the data if the long-run properties of a theory model are empirically relevant.
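In standard notation (which may differ from the article’s), the CVAR is written in vector error-correction form,

```latex
\Delta x_t \;=\; \alpha \beta' x_{t-1} \;+\; \sum_{i=1}^{k-1} \Gamma_i\, \Delta x_{t-i} \;+\; \mu \;+\; \varepsilon_t ,
```

where the cointegrating relations β′x_{t−1} measure deviations from the long-run equilibria, α contains the adjustment coefficients (the pulling forces), the Γ_i capture short-run dynamics, and the cumulated shocks entering through α_⊥ act as the common stochastic trends (the pushing forces).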
Article
Consumer Debt and Default: A Macro Perspective
Florian Exler and Michèle Tertilt
Consumer debt is an important means for consumption smoothing. In the United States, 70% of households own a credit card, and 40% borrow on it. When borrowers cannot (or do not want to) repay their debts, they can declare bankruptcy, which provides additional insurance in tough times. Since the 2000s, up to 1.5% of households declared bankruptcy per year. Clearly, the option to default affects borrowing interest rates in equilibrium. Consequently, when assessing (welfare) consequences of different bankruptcy regimes or providing policy recommendations, structural models with equilibrium default and endogenous interest rates are needed. At the same time, many questions are quantitative in nature: the benefits of a certain bankruptcy regime critically depend on the nature and amount of risk that households bear. Hence, models for normative or positive analysis should quantitatively match some important data moments.
Four important empirical patterns are identified: First, since 1950, consumer debt has risen constantly, and it amounted to 25% of disposable income by 2016. Defaults have risen since the 1980s. Interestingly, interest rates remained roughly constant over the same time period. Second, borrowing and default clearly depend on age: both measures exhibit a distinct hump, peaking around 50 years of age. Third, ownership of credit cards and borrowing clearly depend on income: high-income households are more likely to own a credit card and to use it for borrowing. However, this pattern was stronger in the 1980s than in the 2010s. Finally, interest rates became more dispersed over time: the number of observed interest rates more than quadrupled between 1983 and 2016.
These data have clear implications for theory: First, considering the importance of age, life cycle models seem most appropriate when modeling consumer debt and default. Second, bankruptcy must be costly to support any debt in equilibrium. While many types of costs are theoretically possible, only partial repayment requirements are able to quantitatively match the data on filings, debt levels, and interest rates simultaneously. Third, to account for the long-run trends in debts, defaults, and interest rates, several quantitative theory models identify a credit expansion along the intensive and extensive margin as the most likely source. This expansion is a consequence of technological advancements.
Many of the quantitative macroeconomic models in this literature assess the welfare effects of proposed reforms or of granting bankruptcy at all. These welfare consequences critically hinge on the types of risk that households face. Because households incur unforeseen expenditures, not-too-stringent bankruptcy laws are typically found to be welfare-superior both to banning bankruptcy (or making it extremely costly) and to extremely lax bankruptcy rules.
There are very promising opportunities for future research related to consumer debt and default. Newly available data in the United States and internationally, more powerful computational resources allowing for more complex modeling of household balance sheets, and new loan products are just some of many promising avenues.
Article
Crises in the Housing Market: Causes, Consequences, and Policy Lessons
Carlos Garriga and Aaron Hedlund
The global financial crisis of 2007–2009 helped usher in a stronger consensus about the central role that housing plays in shaping economic activity, particularly during large boom and bust episodes. The latest research regards the causes, consequences, and policy implications of housing crises with a broad focus that includes empirical and structural analysis, insights from the 2000s experience in the United States, and perspectives from around the globe. Even with the significant degree of heterogeneity in legal environments, institutions, and economic fundamentals over time and across countries, several common themes emerge. Research indicates that fundamentals such as productivity, income, and demographics play an important role in generating sustained movements in house prices. While these forces can also contribute to boom-bust episodes, periods of large house price swings often reflect an evolving housing premium caused by financial innovation and shifts in expectations, which are in turn amplified by changes to the liquidity of homes. Regarding credit, the latest evidence indicates that expansions in lending to marginal borrowers via the subprime market may not be entirely to blame for the run-up in mortgage debt and prices that preceded the 2007–2009 financial crisis. Instead, the expansion in credit manifested by lower mortgage rates was broad-based and caused borrowers across a wide range of incomes and credit scores to dramatically increase their mortgage debt. To whatever extent changing beliefs about future housing appreciation may have contributed to higher realized house price growth in the 2000s, it appears that neither borrowers nor lenders anticipated the subsequent collapse in house prices. However, expectations about future credit conditions—including the prospect of rising interest rates—may have contributed to the downturn. For macroeconomists and those otherwise interested in the broader economic implications of the housing market, a growing body of evidence combining micro data and structural modeling finds that large swings in house prices can produce large disruptions to consumption, the labor market, and output. Central to this transmission is the composition of household balance sheets—not just the amount of net worth, but also how that net worth is allocated between short-term liquid assets, illiquid housing wealth, and long-term defaultable mortgage debt. By shaping the incentive to default, foreclosure laws have a profound ex-ante effect on the supply of credit as well as on the ex-post economic response to large shocks that affect households’ degree of financial distress. On the policy front, research finds mixed results for some of the crisis-related interventions implemented in the U.S. while providing guidance for future measures should another housing bust of similar or greater magnitude recur. Lessons are also provided for the development of macroprudential policy aimed at preventing such a future crisis without unduly constraining economic performance in good times.
Article
Econometric Methods for Business Cycle Dating
Máximo Camacho Alonso and Lola Gadea
Over time, the reference cycle of an economy is determined by a sequence of unobservable business cycle turning points that partition the calendar into non-overlapping episodes of expansion and recession. Dating these turning points helps develop economic analysis and is useful for economic agents, whether policymakers, investors, or academics.
Aiming to be transparent and reproducible, statistical frameworks that automatically date turning points from a set of coincident economic indicators have been the source of remarkable advances in determining the reference cycle. These methods can be classified along several broad dimensions. Depending on the assumptions made about the data-generating process, dating methods are either parametric or non-parametric. There are two main approaches to dealing with multivariate data sets: average then date, and date then average. The former approach focuses on computing a reference series for the aggregate economy, usually by averaging the indicators across the cross-sectional dimension; the global turning points are then dated on the aggregate indicator using one of the business cycle dating models available in the literature. The latter approach consists of dating the peaks and troughs in a set of coincident business cycle indicators separately and assessing the reference cycle in those periods where the individual turning points cohere.
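As a stylized illustration of the date-then-average logic (a sketch under simplifying assumptions, not the article’s procedure), a naive non-parametric rule marks a turning point whenever an observation is the extremum within a symmetric window; operational algorithms such as Bry–Boschan add censoring rules on minimum phase and cycle lengths:

```python
import numpy as np

# Naive non-parametric turning-point rule: y_t is a peak if it is the maximum
# within a +/- k window and a trough if it is the minimum. Real dating rules
# (e.g., Bry-Boschan) add censoring rules on phase and cycle lengths.

def turning_points(y, k=5):
    peaks, troughs = [], []
    for t in range(k, len(y) - k):
        window = y[t - k:t + k + 1]
        if y[t] == window.max():
            peaks.append(t)
        elif y[t] == window.min():
            troughs.append(t)
    return peaks, troughs

# Illustrative use on a simulated coincident indicator
rng = np.random.default_rng(2)
y = np.cumsum(0.2 + rng.standard_normal(300)) + 10 * np.sin(np.arange(300) / 15)
print(turning_points(y, k=8))
```

Under the date-then-average approach, such individual turning points would be computed for each coincident indicator and the reference cycle dated where they cohere.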
In the early 21st century, the literature has shown that future work on dating the reference cycle will require dealing with a set of challenges. First, new tools have become available which, being increasingly sophisticated, may enlarge the existing academic–practitioner gap. Compiling the codes that implement the dating methods and facilitating their practical implementation may reduce this gap. Second, the pandemic shock that hit worldwide economies led most industrialized countries to record in 2020 their most significant fall and largest rebound in national economic indicators since records began. In the presence of such influential observations, the outcomes of dating methods could misrepresent the actual reference cycle, especially in the case of parametric approaches. Exploring non-parametric approaches, big data sources, and the classification ability offered by machine learning methods could help improve the performance of dating analyses.
Article
Education and Economic Growth
Eric A. Hanushek and Ludger Woessmann
Economic growth determines the future well-being of society, but finding ways to influence it has eluded many nations. Empirical analysis of differences in growth rates reaches a simple conclusion: long-run growth in gross domestic product (GDP) is largely determined by the skills of a nation’s population. Moreover, the relevant skills can be readily gauged by standardized tests of cognitive achievement. Over the period 1960–2000, three-quarters of the variation in growth of GDP per capita across countries can be accounted for by international measures of math and science skills. The relationship between aggregate cognitive skills, called the knowledge capital of a nation, and the long-run growth rate is extraordinarily strong.
There are natural questions about whether the knowledge capital–growth relationship is causal. While it is impossible to provide conclusive proof of causality, the existing evidence makes a strong prima facie case that changing the skills of the population will lead to higher growth rates.
If future GDP is projected based on the historical growth relationship, the results indicate that modest efforts to bring all students to minimal levels will produce huge economic gains. Improvements in the quality of schools have strong long-term benefits.
The best way to improve the quality of schools is unclear from existing research. On the other hand, a number of developed and developing countries have shown that improvement is possible.
Article
The Effects of Monetary Policy Announcements
Chao Gu, Han Han, and Randall Wright
The effects of news (i.e., information innovations) are studied in dynamic general equilibrium models where liquidity matters. As a leading example, news can be announcements about monetary policy directions. In three standard theoretical environments—an overlapping generations model of fiat currency, a new monetarist model accommodating multiple payment methods, and a model of unsecured credit—transition paths are constructed between an announcement and the date at which events are realized. Although the economics is different, in each case, news about monetary policy can induce volatility in financial and other markets, with transitions displaying booms, crashes, and cycles in prices, quantities, and welfare. This is not the same as volatility based on self-fulfilling prophecies (e.g., cyclic or sunspot equilibria) studied elsewhere. Instead, the focus is on the unique equilibrium that is stationary when parameters are constant but still delivers complicated dynamics in simple environments due to information and liquidity effects. This is true even for classically neutral policy changes. The induced volatility can be bad or good for welfare, but using policy to exploit this in practice seems difficult because outcomes are very sensitive to timing and parameters. The approach can be extended to include news of real factors, as seen in examples.
Article
The Evolution of Forecast Density Combinations in Economics
Knut Are Aastveit, James Mitchell, Francesco Ravazzolo, and Herman K. van Dijk
Increasingly, professional forecasters and academic researchers in economics present model-based and subjective or judgment-based forecasts that are accompanied by some measure of uncertainty. In its most complete form this measure is a probability density function for future values of the variable or variables of interest. At the same time, combinations of forecast densities are being used in order to integrate information coming from multiple sources such as experts, models, and large micro-data sets. Given the increased relevance of forecast density combinations, this article explores their genesis and evolution both inside and outside economics. A fundamental density combination equation is specified, which shows that various frequentist as well as Bayesian approaches give different specific contents to this density. In its simplest case, it is a restricted finite mixture, giving fixed equal weights to the various individual densities. The specification of the fundamental density combination equation has been made more flexible in recent literature. It has evolved from using simple average weights to optimized weights to “richer” procedures that allow for time variation, learning features, and model incompleteness. The recent history and evolution of forecast density combination methods, together with their potential and benefits, are illustrated in the policymaking environment of central banks.
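In its simplest linear-pool form (standard notation, not necessarily the article’s), the combined density for the variable of interest y at horizon h is a finite mixture of the N individual forecast densities,

```latex
p_c\big(y_{t+h} \mid I_t\big) \;=\; \sum_{i=1}^{N} w_{i,t}\, p_i\big(y_{t+h} \mid I_t\big), \qquad w_{i,t} \ge 0, \quad \sum_{i=1}^{N} w_{i,t} = 1,
```

with the equal-weight case w_{i,t} = 1/N as the simplest benchmark, and optimized, time-varying, or learned weights as the richer alternatives described above.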
Article
Financial Bubbles in History
William Quinn and John Turner
Financial bubbles constitute some of history’s most significant economic events, but academic research into the phenomenon has often been narrow, with an excessive focus on whether bubble episodes invalidate or confirm the efficient markets hypothesis. The literature on the topic has also been somewhat siloed, with theoretical, experimental, qualitative, and quantitative methods used to develop relatively discrete bodies of research.
In order to overcome these deficiencies, future research needs to move beyond the rational/irrational dichotomy and holistically examine the causes and consequences of bubbles. Future research in financial bubbles should thus use a wider range of investigative tools to answer key questions or attempt to synthesize the findings of multiple research programs.
There are three areas in particular that future research should focus on: the role of information in a bubble, the aftermath of bubbles, and possible regulatory responses. While bubbles are sometimes seen as an inevitable part of capitalism, there have been long historical eras in which they were extremely rare, and these eras are likely to contain lessons for alleviating the negative effects of bubbles in the 21st century. Finally, the literature on bubbles has tended to neglect certain regions, and future research should hunt for undiscovered episodes outside of Europe and North America.
Article
Financial Frictions in Macroeconomic Models
Alfred Duncan and Charles Nolan
In recent decades, macroeconomic researchers have looked to incorporate financial intermediaries explicitly into business-cycle models. These modeling developments have helped us to understand the role of the financial sector in the transmission of policy and external shocks into macroeconomic dynamics. They also have helped us to understand better the consequences of financial instability for the macroeconomy. Large gaps remain in our knowledge of the interactions between the financial sector and macroeconomic outcomes. Specifically, the effects of financial stability and macroprudential policies are not well understood.
Article
Fiscal and Monetary Policy in Open Economy
Andrea Ferrero
The development of a simple framework with optimizing agents and nominal rigidities is the point of departure for the analysis of three questions about fiscal and monetary policies in an open economy.
The first question concerns the optimal monetary policy targets in a world with trade and financial links. In the baseline model, the optimal cooperative monetary policy is fully inward-looking and seeks to stabilize a combination of domestic inflation and output gap. The equivalence with the closed economy case, however, ends if countries do not cooperate, if firms price goods in the currency of the market of destination, and if international financial markets are incomplete. In these cases, external variables that capture international misalignments relative to the first best become relevant policy targets.
The second question is about the empirical evidence on the international transmission of government spending shocks. In response to a positive innovation, the real exchange rate depreciates and the trade balance deteriorates. Standard open economy models struggle to match this evidence. Non-standard consumption preferences and a detailed fiscal adjustment process constitute two ways to address the puzzle.
The third question deals with the trade-offs associated with an active use of fiscal policy for stabilization purposes in a currency union. The optimal policy assignment mandates the monetary authority to stabilize union-wide aggregates and the national fiscal authorities to respond to country-specific shocks. Permanent changes in government debt make it possible to smooth the distortionary effects of volatile taxes. Clear and credible fiscal rules may be able to strike the appropriate balance between stabilization objectives and moral hazard issues.
Article
Foreign Exchange Intervention
Helen Popper
The practice of central bank foreign exchange intervention for a time ran ahead of either compelling theoretical explanations of its use or persuasive empirical evidence of its effectiveness. Research accelerated when the emerging economy crises of the 1990s and the early 2000s brought fresh data in the form of urgent experimentation with foreign exchange intervention and related policies, and the financial crisis of 2008 propelled serious treatment of financial frictions into models of intervention.
Current foreign exchange intervention models combine financial frictions with relevant externalities: with the aggregate demand and pecuniary externalities that inform macroeconomic models more broadly, and with the trade-related learning externalities that are particularly relevant for developing and emerging economies. These models characteristically allow for normative evaluation of the use of foreign exchange intervention, although most (but not all) do so from a single economy perspective.
Empirical advances reflect the advantages of more variation in the use of foreign exchange intervention, better data, and novel econometric approaches to addressing endogeneity. Foreign exchange intervention is now widely viewed as influencing exchange rates at least to some extent, and sustained one-sided intervention and its corresponding reserve accumulation appear to play a role in moderating exchange rate fluctuations and in reducing the likelihood of damaging consequences of financial crises.
Key avenues for future research include sorting out which frictions and externalities matter most, and where foreign exchange intervention—and perhaps international cooperation—properly fits (if at all) into the blend of policies that might appropriately address the externalities.
Article
Geography of Growth and Development
Esteban Rossi-Hansberg
The geography of economic activity refers to the distribution of population, production, and consumption of goods and services in geographic space. The geography of growth and development refers to the local growth and decline of economic activity and the overall distribution of these local changes within and across countries. The pattern of growth in space can vary substantially across regions, countries, and industries. Ultimately, these patterns can help explain the role that spatial frictions (like transport and migration costs) can play in the overall development of the world economy.
The interaction of agglomeration and congestion forces determines the density of economic activity in particular locations. Agglomeration forces refer to forces that bring agents and firms together by conveying benefits from locating close to each other or from locating in a particular area. Examples include local technology and institutions, natural resources and local amenities, infrastructure, as well as knowledge spillovers. Congestion forces refer to the disadvantages of locating close to each other. They include traffic, high land prices, as well as crime and other urban disamenities. The balance of these forces is mediated by the ability of individuals, firms, goods and services, as well as ideas and technology, to move across space: namely, migration, relocation, transport, commuting, and communication costs. These spatial frictions, together with the varying strength of congestion and agglomeration forces, determine the distribution of economic activity. Changes in these forces and frictions—some purposefully made by agents given the economic environment they face and some exogenous—determine the geography of growth and development.
The main developments in the forces that influence the geography of growth and development have been changes in transport technology, the diffusion of general-purpose technologies, and the structural transformation of economies from agriculture to manufacturing to service-oriented activities. There are many challenges in modeling and quantifying these forces and their effects. Nevertheless, doing so is essential to evaluate the impact of a variety of phenomena, from climate change to the effects of globalization and advances in information technology.