For hundreds of years, policymakers and academics have puzzled over how to add up the effects of trade and trade barriers on economic activity. The literature is vast. Trade theory generally asks whether trade or trade barriers, such as tariffs, make people and firms better off, using models of a real economy operating at full employment with balanced trade. These models yield powerful fundamental intuition but are not well equipped to address issues that permeate policy debates over tariffs, such as capital accumulation, the role of exchange rate depreciation, monetary policy, intertemporal optimization by consumers, and current account deficits. The literature on open-economy macroeconomics provides additional tools to address some of these issues, but neither literature has yet definitively answered what impact tariffs have on infant industries, current account deficits, unemployment, or inequality; these remain open empirical questions. Trade economists have only begun to understand how multiproduct retailers affect who ultimately pays tariffs, and they still struggle to model unemployment in a tractable way suited to fast or uniform application in policy analysis, while macro approaches overlook sectoral complexity. The field's understanding of the importance of endogenous capital investment is growing, but it has not internalized the importance of the same intertemporal trade-offs between savings and consumption for assessing the distributional impacts of trade on households. Dispersion across assessments of the impacts of the U.S.–China trade war illustrates the frontiers economists face in assessing the macroeconomic impacts of tariffs.
Article
Luca Gambetti
Structural vector autoregressions (SVARs) are a prominent class of time series models used for macroeconomic analysis. The model consists of a set of multivariate linear autoregressive equations characterizing the joint dynamics of economic variables. The residuals of these equations are combinations of the underlying structural economic shocks, which are assumed to be orthogonal to each other. Using a minimal set of restrictions, these relations can be estimated (so-called shock identification), and the variables can be expressed as linear functions of current and past structural shocks. The coefficients of these equations, called impulse response functions, represent the dynamic responses of the model's variables to the shocks. Several ways of identifying structural shocks have been proposed in the literature: short-run restrictions, long-run restrictions, and sign restrictions, to mention a few.
SVAR models have been extensively employed to study the transmission mechanisms of macroeconomic shocks and test economic theories. Special attention has been paid to monetary and fiscal policy shocks as well as other nonpolicy shocks like technology and financial shocks.
In recent years, many advances have been made in both theory and empirical strategy. Several works have extended the standard model to incorporate new features such as large information sets, nonlinearities, and time-varying coefficients. New strategies for identifying structural shocks have been designed, and new methods for inference have been introduced.
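The estimation-and-identification sequence described above can be sketched compactly. The following Python fragment is a minimal, self-contained illustration (the bivariate setup and all parameter values are invented for the example, not taken from any particular study): it simulates data from a known two-variable SVAR, estimates the reduced-form VAR by OLS, identifies the structural shocks with short-run (Cholesky) restrictions, and computes impulse response functions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate data from a known bivariate SVAR(1) so the sketch is self-contained.
# B0 maps the orthogonal structural shocks into reduced-form residuals.
T, n, p = 500, 2, 1
A1 = np.array([[0.5, 0.1],
               [0.2, 0.4]])          # autoregressive coefficients (illustrative)
B0 = np.array([[1.0, 0.0],
               [0.5, 1.0]])          # lower-triangular impact matrix (illustrative)
y = np.zeros((T, n))
for t in range(1, T):
    eps = rng.standard_normal(n)     # orthogonal structural shocks
    y[t] = A1 @ y[t - 1] + B0 @ eps

# Step 1: estimate the reduced-form VAR(1) by OLS, equation by equation.
X, Y = y[:-1], y[1:]                 # regressors are the lagged values
C, *_ = np.linalg.lstsq(X, Y, rcond=None)
A_hat = C.T                          # rows = equations
U = Y - X @ A_hat.T                  # reduced-form residuals

# Step 2: identify shocks via short-run (Cholesky) restrictions: the impact
# matrix is assumed lower triangular, i.e. the second variable does not
# affect the first contemporaneously.
Sigma_u = U.T @ U / (len(U) - n * p)
B_hat = np.linalg.cholesky(Sigma_u)

# Step 3: impulse response functions. The response of y at horizon h to a
# one-unit structural shock is A^h @ B.
H = 10
irf = np.zeros((H + 1, n, n))
Ah = np.eye(n)
for h in range(H + 1):
    irf[h] = Ah @ B_hat
    Ah = Ah @ A_hat

print("Estimated impact matrix:\n", np.round(B_hat, 2))
print("IRF at horizon 4:\n", np.round(irf[4], 2))
```

With longer samples the estimated impact matrix and impulse responses approach their true counterparts; alternative identification schemes, such as long-run or sign restrictions, would replace only Step 2.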
Article
Paul Bergin
While it is a long-standing idea in international macroeconomic theory that flexible nominal exchange rates have the potential to facilitate adjustment in international relative prices, a monetary union necessarily forgoes this mechanism for facilitating macroeconomic adjustment among its regions. Twenty years of experience in the eurozone monetary union, including the eurozone crisis, have spurred new macroeconomic research on the costs of giving up nominal exchange rates as a tool of adjustment and on the possibility of alternative policies to promote macroeconomic adjustment. Empirical evidence paints a mixed picture regarding the usefulness of nominal exchange rate flexibility: In many historical settings, flexible nominal exchange rates have tended to create more relative price distortions than they have helped resolve; yet, in some contexts, exchange rate devaluations can serve as a useful correction to severe relative price misalignments.
Theoretical advances in studying open economy models either support the usefulness of exchange rate movements or find them irrelevant, depending on the specific characteristics of the model economy, including the particular specification of nominal rigidities, international openness in goods markets, and international financial integration. Yet in models that embody certain key aspects of the countries suffering the brunt of the eurozone crisis, such as over-borrowing and persistently high wages, nominal devaluation is found to be useful for preventing the type of excessive rise in unemployment that was observed.
This theoretical research also raises alternative policies and mechanisms that can substitute for nominal exchange rate adjustment. These policies include the standard fiscal tools of optimal currency area theory but extend to a broader set of tools, including import tariffs, export subsidies, and prudential taxes on capital flows. Certain combinations of these policies, labeled a “fiscal devaluation,” have been shown in theory to replicate the effects of a currency devaluation in the context of a monetary union such as the eurozone. These theoretical developments are helpful for understanding the history of experiences in the eurozone, such as the eurozone crisis. They are also helpful for thinking about options for preventing such crises in the future.
Article
Alfred Duncan and Charles Nolan
In recent decades, macroeconomic researchers have looked to incorporate financial intermediaries explicitly into business-cycle models. These modeling developments have helped us to understand the role of the financial sector in the transmission of policy and external shocks into macroeconomic dynamics. They also have helped us to understand better the consequences of financial instability for the macroeconomy. Large gaps remain in our knowledge of the interactions between the financial sector and macroeconomic outcomes. Specifically, the effects of financial stability and macroprudential policies are not well understood.
Article
Stephanie Seguino
Stratification economics, which has emerged as a new subfield of research on inequality, is distinguished by a system-level analysis. It explores the role of power in influencing the processes and institutions that produce hierarchical economic and social orderings based on ascriptive characteristics. Macroeconomic factors play a role in buttressing stratification, especially by race and gender. Among the macroeconomic policy levers that produce and perpetuate intergroup inequality are monetary policy, fiscal expenditures, exchange rate policy, industrial policy, and trade, investment, and financial policies. These policies interact with a stratification “infrastructure,” comprised of racial and gender ideologies, norms, and stereotypes that are internalized at the individual level and act as a “stealth” factor in reproducing hierarchies. In stratified societies, racial and gender norms and stereotypes act to justify various forms of exclusion from prized economic assets such as good jobs. For example, gendered and racial stereotypes contribute to job segregation, with subordinated groups largely sequestered in the secondary labor market where wages are low and jobs are insecure. The net effect is that subordinated groups serve as shock absorbers that insulate members of the dominant group from the impact of negative macroeconomic phenomena such as unemployment and economic volatility. Further, racial and gender inequality have economy-wide effects, and play a role in determining the rate of economic growth and overall performance of an economy. The impact of intergroup inequality on macro-level outcomes depends on a country’s economic structure. While under some conditions, intergroup inequality acts as a stimulus to economic growth, under other conditions, it undermines societal well-being. Countries are not locked into a path whereby inequality has a positive or negative effect on growth. 
Rather, through their policy decisions, countries can choose the low road (stratification) or the high road (intergroup equality). Thus, even if intergroup inequality has been a stimulus to growth in the past, it is possible to choose an equity-led growth path.
Article
Roger E. A. Farmer
The indeterminacy school in macroeconomics exploits the fact that macroeconomic models often display multiple equilibria to understand real-world phenomena. Its history has two distinct phases. The first phase began as a research agenda at the University of Pennsylvania in the United States and at CEPREMAP in Paris in the early 1980s. This phase used models of dynamic indeterminacy to explain how shocks to beliefs can temporarily influence economic outcomes. The second phase was developed at the University of California, Los Angeles in the 2000s. This phase used models of incomplete factor markets to explain how shocks to beliefs can permanently influence economic outcomes. The first phase of the indeterminacy school has been used to explain volatility in financial markets. The second phase has been used to explain periods of high, persistent unemployment. Together, the two phases provide a microeconomic foundation for Keynes's general theory that does not rely on the assumption that prices and wages are sticky.
Article
George W. Evans and Bruce McGough
While rational expectations (RE) remains the benchmark paradigm in macroeconomic modeling, bounded rationality, especially in the form of adaptive learning, has become a mainstream alternative. Under the adaptive learning (AL) approach, economic agents in dynamic, stochastic environments are modeled as adaptive learners forming expectations and making decisions based on forecasting rules that are updated in real time as new data become available. Their decisions are then coordinated each period via the economy's markets and other relevant institutional architecture, resulting in a time-path of economic aggregates. In this way, the AL approach introduces additional dynamics into the model—dynamics that can be used to address myriad macroeconomic issues and concerns, including, for example, empirical fit and the plausibility of specific rational expectations equilibria.
AL can be implemented as reduced-form learning, that is, learning applied at the aggregate level, or as agent-level learning, which includes pre-aggregation analysis of boundedly rational decision making and is discussed in a companion contribution to this Encyclopedia by Evans and McGough.
Typically, learning agents are assumed to use estimated linear forecast models, and a central formulation of AL is least-squares learning, in which agents recursively update their estimated model as new data become available. Key questions include whether AL will converge over time to a specified RE equilibrium (REE), in which case the REE is said to be stable under AL; when it does, it is also of interest to examine what type of learning dynamics are observed en route. When multiple REE exist, stability under AL can act as a selection criterion, and global dynamics can involve switching between local basins of attraction. In models with indeterminacy, AL can be used to assess whether agents can learn to coordinate their expectations on sunspots.
The key analytical concepts and tools are the E-stability principle together with the E-stability differential equations, and the theory of stochastic recursive algorithms (SRA). While, in general, analysis of SRAs is quite technical, application of the E-stability principle is often straightforward.
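In the standard linear setup these concepts take a compact form; the following is a sketch of the textbook formulation, with a generic parameter vector φ standing in for whatever coefficients agents estimate.

```latex
% Agents' perceived law of motion (PLM) is indexed by a parameter vector \phi.
% Inserting the implied forecasts into the model yields the actual law of
% motion (ALM), with parameter T(\phi). The E-stability differential equation is
\[
  \frac{d\phi}{d\tau} = T(\phi) - \phi ,
\]
% where \tau denotes notional time. An REE \bar{\phi}, i.e., a fixed point of
% T, is E-stable if it is a locally asymptotically stable rest point of this
% equation; in the scalar case this reduces to the condition T'(\bar{\phi}) < 1.
```

The E-stability principle then states that, under broad conditions, an REE is locally stable under least-squares learning exactly when it is E-stable.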
In addition to equilibrium analysis in macroeconomic models, AL has many applications. In particular, AL has strong implications for the conduct of monetary and fiscal policy, has been used to explain asset price dynamics, has been shown to improve the fit of estimated dynamic stochastic general equilibrium (DSGE) models, and has proven useful in explaining experimental outcomes.
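These ideas can be made concrete in the simplest possible setting. The sketch below uses a hypothetical scalar cobweb-style model (all numbers are invented for illustration): agents update a forecast by recursive least squares with a decreasing gain, and because the expectational feedback parameter is below one, the REE is E-stable and the learned forecast converges to the REE value.

```python
import numpy as np

rng = np.random.default_rng(1)

# A minimal sketch of least-squares (here: recursive mean) learning in the
# scalar model  p_t = mu + alpha * E_{t-1} p_t + eta_t.
# The unique REE has mean  mu / (1 - alpha); under the E-stability principle,
# learning converges to it when alpha < 1 (here T(a) = mu + alpha * a).
mu, alpha, sigma = 2.0, 0.6, 0.5     # illustrative parameter values
T = 20_000

a = 0.0                              # agents' current forecast (estimated mean)
for t in range(1, T + 1):
    expectation = a                  # E_{t-1} p_t under the perceived law of motion
    p = mu + alpha * expectation + sigma * rng.standard_normal()
    a += (p - a) / t                 # recursive least-squares update, gain 1/t

ree = mu / (1 - alpha)               # rational expectations equilibrium mean
print(f"learned forecast: {a:.3f}, REE value: {ree:.3f}")
```

Setting alpha above one instead would make the REE E-unstable, and the same recursion would diverge rather than coordinate on the equilibrium.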