1-10 of 31 Results for: Macroeconomics and Monetary Economics

Article

George W. Evans and Bruce McGough

Adaptive learning is a boundedly rational alternative to rational expectations that is increasingly used in macroeconomics, monetary economics, and financial economics. The agent-level approach can be used to provide microfoundations for adaptive learning in macroeconomics. Two central issues of bounded rationality are addressed simultaneously at the agent level: replacing fully rational expectations of key variables with econometric forecasts, and boundedly optimal decision-making based on those forecasts. The real business cycle (RBC) model provides a useful laboratory for exhibiting alternative implementations of the agent-level approach. Specific implementations include shadow-price learning (and its anticipated-utility counterpart, iterated shadow-price learning), Euler-equation learning, and long-horizon learning. For each implementation the path of the economy is obtained by aggregating the boundedly rational agent-level decisions. A linearized RBC model can be used to illustrate the effects of fiscal policy. For example, simulations can illustrate the impact of a permanent increase in government spending and highlight the similarities and differences among the various implementations of agent-level learning. These results can also be used to expose the differences among agent-level learning, reduced-form learning, and rational expectations. The different implementations of agent-level adaptive learning have differing advantages. A major advantage of shadow-price learning is its ease of implementation within the nonlinear RBC model. Compared to reduced-form learning, which is widely used because of its ease of application, agent-level learning both provides microfoundations, which ensure robustness to the Lucas critique, and provides the natural framework for applications of adaptive learning in heterogeneous-agent models.

Article

The current discontent with the dominant macroeconomic theory paradigm, known as Dynamic Stochastic General Equilibrium (DSGE) models, calls for an appraisal of the methods and strategies employed in studying and modeling macroeconomic phenomena using aggregate time series data. The appraisal pertains to the effectiveness of these methods and strategies in accomplishing the primary objective of empirical modeling: to learn from data about phenomena of interest. The co-occurring developments in macroeconomics and econometrics since the 1930s provide the backdrop for the appraisal, with the Keynes vs. Tinbergen controversy at center stage. The overall appraisal is that the DSGE paradigm gives rise to estimated structural models that are both statistically and substantively misspecified, yielding untrustworthy evidence that contributes very little, if anything, to real learning from data about macroeconomic phenomena. A primary contributor to the untrustworthiness of evidence is the traditional econometric perspective of viewing empirical modeling as curve-fitting (structural models), guided by impromptu error term assumptions, and evaluated on goodness-of-fit grounds. Regrettably, excellent fit is neither necessary nor sufficient for the reliability of inference and the trustworthiness of the ensuing evidence. Recommendations on how to improve the trustworthiness of empirical evidence revolve around a broader model-based (non-curve-fitting) modeling framework that attributes cardinal roles to both theory and data without undermining the credibility of either source of information. Two crucial distinctions hold the key to securing the trustworthiness of evidence. The first is between modeling (specification, misspecification testing, and respecification) and inference; the second is between a substantive (structural) model and a statistical model (the probabilistic assumptions imposed on the particular data). This enables one to establish statistical adequacy (the validity of these assumptions) before relating it to the structural model and posing questions of interest to the data. The greatest enemy of learning from data about macroeconomic phenomena is not the absence of an alternative and more coherent empirical modeling framework, but the illusion that foisting highly formal structural models on the data can give rise to such learning just because their construction and curve-fitting rely on seemingly sophisticated tools. Regrettably, applying sophisticated tools to a statistically and substantively misspecified DSGE model does nothing to restore the trustworthiness of the evidence stemming from it.
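The distinction between a structural model and the probabilistic assumptions of the statistical model can be made concrete with a basic misspecification test. The sketch below is ours, not from the article: it implements the Ljung-Box test for residual autocorrelation, one routine check of statistical adequacy; the simulated series, lag length, and parameter values are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import chi2

def ljung_box(resid, lags):
    """Ljung-Box Q statistic and p-value: a basic misspecification test
    of the 'no residual autocorrelation' probabilistic assumption."""
    n = len(resid)
    r = resid - resid.mean()
    denom = np.sum(r**2)
    rho = np.array([np.sum(r[k:] * r[:-k]) / denom for k in range(1, lags + 1)])
    q = n * (n + 2) * np.sum(rho**2 / (n - np.arange(1, lags + 1)))
    return q, chi2.sf(q, lags)

rng = np.random.default_rng(3)

# Adequate specification: iid errors, the assumption should not be rejected.
white = rng.standard_normal(500)
q1, p1 = ljung_box(white, 10)

# Misspecified case: leftover AR(1) dynamics in the "residuals".
ar = np.zeros(500)
for t in range(1, 500):
    ar[t] = 0.6 * ar[t - 1] + rng.standard_normal()
q2, p2 = ljung_box(ar, 10)

print(p1, p2)  # p2 is essentially zero, flagging the unmodeled dynamics
```

A battery of such tests (autocorrelation, heteroskedasticity, normality, parameter stability) is what establishing statistical adequacy amounts to in practice before the structural questions are posed.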

Article

The house price boom that has been present in most Chinese cities since the early 2000s has triggered substantial interest in the role that China’s housing policy plays in its housing market and macroeconomy, and an extensive empirical and theoretical literature has developed over the past decade. This research finds that the privatization of China’s housing market, which encouraged households living in state-owned housing to purchase their homes at prices far below their market value, contributed to a rapid increase in homeownership beginning in the mid-1990s. Housing market privatization also has led to a significant increase in both housing and nonhousing consumption, but these benefits are unevenly distributed across households. With the policy goal of making homeownership affordable for the average household, the Housing Provident Fund contributes positively to homeownership rates. By contrast, the effectiveness of housing policies to make housing affordable for low-income households has been weaker in recent years. Moreover, a large body of empirical research shows that the unintended consequence of housing market privatization has been a persistent increase in housing prices since the early 2000s, which has been accompanied by soaring land prices, high vacancy rates, and high price-to-income and price-to-rent ratios. The literature has differing views regarding the sustainability of China’s housing boom. On a theoretical front, economists find that rising housing demand, due to both consumption and investment purposes, is important to understanding China’s prolonged housing boom, and that land-use policy, which influences the supply side of the housing market, lies at the center of China’s housing boom. However, regulatory policies, such as housing purchase restrictions and property taxes, have had mixed effects on the housing market in different cities. In addition to China’s housing policy and its direct effects on the nation’s housing market, research finds that China’s housing policy impacts its macroeconomy via the transmission of house price dynamics into the household and corporate sectors. High housing prices have a heterogeneous impact on the consumption and savings of different types of households but tend to discourage household labor supply. Meanwhile, rising house prices encourage housing investment by non–real-estate firms, which crowds out nonhousing investment, lowers the availability of noncollateralized business loans, and reduces productive efficiency via the misallocation of capital and managerial talent.

Article

George W. Evans and Bruce McGough

While rational expectations (RE) remains the benchmark paradigm in macroeconomic modeling, bounded rationality, especially in the form of adaptive learning, has become a mainstream alternative. Under the adaptive learning (AL) approach, economic agents in dynamic, stochastic environments are modeled as adaptive learners forming expectations and making decisions based on forecasting rules that are updated in real time as new data become available. Their decisions are then coordinated each period via the economy’s markets and other relevant institutional architecture, resulting in a time-path of economic aggregates. In this way, the AL approach introduces additional dynamics into the model—dynamics that can be used to address myriad macroeconomic issues and concerns, including, for example, empirical fit and the plausibility of specific rational expectations equilibria. AL can be implemented as reduced-form learning, that is, the implementation of learning at the aggregate level, or alternatively, as discussed in a companion contribution to this encyclopedia by Evans and McGough, as agent-level learning, which includes pre-aggregation analysis of boundedly rational decision making. Typically, learning agents are assumed to use estimated linear forecast models, and a central formulation of AL is least-squares learning, in which agents recursively update their estimated model as new data become available. Key questions include whether AL will converge over time to a specified RE equilibrium (REE), in which case we say the REE is stable under AL; in this case, it is also of interest to examine what type of learning dynamics are observed en route. When multiple REE exist, stability under AL can act as a selection criterion, and global dynamics can involve switching between local basins of attraction. In models with indeterminacy, AL can be used to assess whether agents can learn to coordinate their expectations on sunspots. The key analytical concepts and tools are the E-stability principle together with the E-stability differential equations, and the theory of stochastic recursive algorithms (SRAs). While, in general, analysis of SRAs is quite technical, application of the E-stability principle is often straightforward. In addition to equilibrium analysis in macroeconomic models, AL has many applications. In particular, AL has strong implications for the conduct of monetary and fiscal policy, has been used to explain asset price dynamics, has been shown to improve the fit of estimated dynamic stochastic general equilibrium (DSGE) models, and has proven useful in explaining experimental outcomes.
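The least-squares learning formulation can be sketched in a few lines. The following is a minimal illustration, not from the article: agents estimate the constant of their perceived law of motion by recursive least squares with a decreasing gain, while actual outcomes depend on their forecast (expectations feedback). The model y_t = μ + α·E[y_t] + ε_t and the parameter values μ = 2, α = 0.5 are illustrative assumptions; with α < 1 the REE is E-stable and the estimate converges to the fixed point μ/(1 − α) = 4.

```python
import numpy as np

def rls_update(phi, R, x, y, gain):
    """One recursive least-squares step: update the moment matrix R and
    the coefficient vector phi given regressors x and observation y."""
    R = R + gain * (np.outer(x, x) - R)
    phi = phi + gain * np.linalg.solve(R, x) * (y - phi @ x)
    return phi, R

rng = np.random.default_rng(0)
mu, alpha = 2.0, 0.5       # assumed structural parameters (illustrative)
phi = np.array([0.0])      # agents' estimated constant (perceived law of motion)
R = np.eye(1)

for t in range(1, 5001):
    forecast = phi[0]                    # forecast from the perceived law
    # Actual law of motion: outcomes depend on the agents' own forecast.
    y = mu + alpha * forecast + 0.1 * rng.standard_normal()
    phi, R = rls_update(phi, R, np.array([1.0]), y, 1.0 / t)

print(phi[0])  # near the REE value mu / (1 - alpha) = 4
```

The decreasing gain 1/t is what makes this a least-squares (rather than constant-gain) learning rule; replacing it with a small constant would instead produce perpetual learning dynamics around the REE.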

Article

Florian Exler and Michèle Tertilt

Consumer debt is an important means for consumption smoothing. In the United States, 70% of households own a credit card, and 40% borrow on it. When borrowers cannot (or do not want to) repay their debts, they can declare bankruptcy, which provides additional insurance in tough times. Since the 2000s, up to 1.5% of households have declared bankruptcy per year. Clearly, the option to default affects borrowing interest rates in equilibrium. Consequently, when assessing (welfare) consequences of different bankruptcy regimes or providing policy recommendations, structural models with equilibrium default and endogenous interest rates are needed. At the same time, many questions are quantitative in nature: the benefits of a certain bankruptcy regime critically depend on the nature and amount of risk that households bear. Hence, models for normative or positive analysis should quantitatively match some important data moments. Four important empirical patterns are identified: First, since 1950, consumer debt has risen constantly, and it amounted to 25% of disposable income by 2016. Defaults have risen since the 1980s. Interestingly, interest rates remained roughly constant over the same time period. Second, borrowing and default clearly depend on age: both measures exhibit a distinct hump, peaking around 50 years of age. Third, ownership of credit cards and borrowing clearly depend on income: high-income households are more likely to own a credit card and to use it for borrowing. However, this pattern was stronger in the 1980s than in the 2010s. Finally, interest rates became more dispersed over time: the number of observed interest rates more than quadrupled between 1983 and 2016. These data have clear implications for theory: First, considering the importance of age, life cycle models seem most appropriate when modeling consumer debt and default. Second, bankruptcy must be costly to support any debt in equilibrium. While many types of costs are theoretically possible, only partial repayment requirements are able to quantitatively match the data on filings, debt levels, and interest rates simultaneously. Third, to account for the long-run trends in debts, defaults, and interest rates, several quantitative theory models identify a credit expansion along the intensive and extensive margin as the most likely source. This expansion is a consequence of technological advancements. Many of the quantitative macroeconomic models in this literature assess welfare effects of proposed reforms or of granting bankruptcy at all. These welfare consequences critically hinge on the types of risk that households face—because households incur unforeseen expenditures, not-too-stringent bankruptcy laws are typically found to be welfare superior both to banning bankruptcy (or making it extremely costly) and to extremely lax bankruptcy rules. There are very promising opportunities for future research related to consumer debt and default. Newly available data in the United States and internationally, more powerful computational resources allowing for more complex modeling of household balance sheets, and new loan products are just some of many promising avenues.

Article

Stock-flow matching is a simple and elegant framework of dynamic trade in differentiated goods. Flows of entering traders match and exchange with the stocks of previously unsuccessful traders on the other side of the market. A buyer or seller who enters a market for a single, indivisible good such as a job or a home does not experience impediments to trade. All traders are fully informed about the available trading options; however, each of the available options in the stock on the other side of the market may or may not be suitable. If fortunate, this entering trader immediately finds a viable option in the stock of available opportunities and trade occurs straightaway. If unfortunate, none of the available opportunities suit the entrant. This buyer or seller now joins the stocks of unfulfilled traders who must wait for a new, suitable partner to enter. Three striking empirical regularities emerge from this microstructure. First, as the stock of buyers does not match with the stock of sellers, but with the flow of new sellers, the flow of new entrants becomes an important explanatory variable for aggregate trading rates. Second, the traders’ exit rates from the market are initially high, but if they fail to match quickly the exit rates become substantially slower. Third, these exit rates depend on different variables at different phases of an agent’s stay in the market. The probability that a new buyer will trade successfully depends only on the stock of sellers in the market. In contrast, the exit rate of an old buyer depends positively on the flow of new sellers, negatively on the stock of old buyers, and is independent of the stock of sellers. These three empirical relationships not only differ from those found in the familiar search literature but also conform to empirical evidence observed from unemployment outflows. Moreover, adopting the stock-flow approach enriches our understanding of output dynamics, employment flows, and aggregate economic performance. 
These trading mechanics generate endogenous price dispersion and price dynamics—prices depend on whether the buyer or the seller is the recent entrant, and on how many viable traders were waiting for the entrant, which varies over time. The stock-flow structure has provided insights about housing, temporary employment, and taxicab markets.
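The stock-flow microstructure described above can be illustrated with a small simulation (ours, not from the article). Each period a flow of new buyers and sellers enters; any given buyer-seller pair is suitable with probability pi; entrants scan the whole opposite-side stock and, if nothing suits them, join their side's stock to wait for suitable future entrants. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
pi = 0.05                 # assumed probability that a given pair is suitable
flow_b, flow_s = 10, 10   # entrants per side per period (illustrative)

stock_b, stock_s = 0, 0   # stocks of previously unsuccessful traders
matched_on_entry = 0

for t in range(5000):
    # New buyers scan the stock of sellers: with stock S, at least one
    # stocked option is suitable with probability 1 - (1 - pi)**S.
    for _ in range(flow_b):
        if rng.random() < 1 - (1 - pi) ** stock_s:
            stock_s -= 1            # a stocked seller exits with the entrant
            matched_on_entry += 1
        else:
            stock_b += 1            # unlucky entrant joins the buyer stock
    # Symmetrically, new sellers scan the stock of buyers.
    for _ in range(flow_s):
        if rng.random() < 1 - (1 - pi) ** stock_b:
            stock_b -= 1
            matched_on_entry += 1
        else:
            stock_s += 1

print(stock_b, stock_s, matched_on_entry)
```

The mechanics reproduce the regularities in the abstract: an entrant's chance of immediate trade depends only on the opposite-side stock, while an old, stocked trader exits only when a suitable new entrant arrives, so old traders' exit rates depend on the entry flow rather than the stock of potential partners.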

Article

The development of a simple framework with optimizing agents and nominal rigidities is the point of departure for the analysis of three questions about fiscal and monetary policies in an open economy. The first question concerns the optimal monetary policy targets in a world with trade and financial links. In the baseline model, the optimal cooperative monetary policy is fully inward-looking and seeks to stabilize a combination of domestic inflation and the output gap. The equivalence with the closed-economy case, however, ends if countries do not cooperate, if firms price goods in the currency of the market of destination, or if international financial markets are incomplete. In these cases, external variables that capture international misalignments relative to the first best become relevant policy targets. The second question is about the empirical evidence on the international transmission of government spending shocks. In response to a positive innovation, the real exchange rate depreciates and the trade balance deteriorates. Standard open economy models struggle to match this evidence. Non-standard consumption preferences and a detailed fiscal adjustment process constitute two ways to address the puzzle. The third question deals with the trade-offs associated with an active use of fiscal policy for stabilization purposes in a currency union. The optimal policy assignment mandates the monetary authority to stabilize union-wide aggregates and the national fiscal authorities to respond to country-specific shocks. Permanent changes in government debt make it possible to smooth the distortionary effects of volatile taxes over time. Clear and credible fiscal rules may be able to strike the appropriate balance between stabilization objectives and moral hazard issues.

Article

Structural vector autoregressions (SVARs) represent a prominent class of time series models used for macroeconomic analysis. The model consists of a set of multivariate linear autoregressive equations characterizing the joint dynamics of economic variables. The residuals of these equations are combinations of the underlying structural economic shocks, assumed to be orthogonal to each other. Using a minimal set of restrictions, these relations can be estimated—the so-called shock identification—and the variables can be expressed as linear functions of current and past structural shocks. The coefficients of these equations, called impulse response functions, represent the dynamic response of model variables to shocks. Several ways of identifying structural shocks have been proposed in the literature: short-run restrictions, long-run restrictions, and sign restrictions, to mention a few. SVAR models have been extensively employed to study the transmission mechanisms of macroeconomic shocks and test economic theories. Special attention has been paid to monetary and fiscal policy shocks as well as other nonpolicy shocks like technology and financial shocks. In recent years, many advances have been made both in terms of theory and empirical strategies. Several works have extended the standard model to incorporate new features like large information sets, nonlinearities, and time-varying coefficients. New strategies to identify structural shocks have been designed, and new inference methods have been introduced.
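The estimation-identification-impulse-response pipeline can be sketched in a few lines. The example below is a minimal illustration, not from the article: a bivariate VAR(1) is estimated by OLS on simulated data, shocks are identified with short-run (Cholesky) restrictions, and impulse responses are traced out. The true coefficient matrix A and impact matrix B are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0.5, 0.1], [0.2, 0.4]])   # assumed true VAR(1) coefficients
B = np.array([[1.0, 0.0], [0.5, 1.0]])   # assumed true impact matrix (lower triangular)

# Simulate the data-generating process y_t = A y_{t-1} + B e_t.
T = 2000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + B @ rng.standard_normal(2)

# OLS estimation of the reduced-form VAR: y_t = A y_{t-1} + u_t.
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
U = Y - X @ A_hat.T
Sigma = U.T @ U / (T - 1)                # reduced-form residual covariance

# Short-run identification: take B_hat as the Cholesky factor of Sigma,
# so the second structural shock has no contemporaneous effect on the
# first variable (a recursive ordering restriction).
B_hat = np.linalg.cholesky(Sigma)

# Impulse response functions: IRF(h) = A_hat**h @ B_hat, h = 0, 1, ...
irf = [np.linalg.matrix_power(A_hat, h) @ B_hat for h in range(12)]
print(irf[0])  # impact responses equal B_hat by construction
```

Because the data are generated with a lower-triangular impact matrix, the Cholesky ordering here recovers the true structural shocks; with real data the ordering itself is an identifying assumption that must be defended on economic grounds.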

Article

While it is a long-standing idea in international macroeconomic theory that flexible nominal exchange rates have the potential to facilitate adjustment in international relative prices, a monetary union necessarily forgoes this mechanism for facilitating macroeconomic adjustment among its regions. Twenty years of experience in the eurozone monetary union, including the eurozone crisis, have spurred new macroeconomic research on the costs of giving up nominal exchange rates as a tool of adjustment, and the possibility of alternative policies to promote macroeconomic adjustment. Empirical evidence paints a mixed picture regarding the usefulness of nominal exchange rate flexibility: In many historical settings, flexible nominal exchange rates have tended to create more relative price distortions than they have resolved; yet, in some contexts exchange rate devaluations can serve as a useful correction to severe relative price misalignments. Theoretical advances in studying open economy models either support the usefulness of exchange rate movements or find them irrelevant, depending on the specific characteristics of the model economy, including the particular specification of nominal rigidities, international openness in goods markets, and international financial integration. Yet in models that embody certain key aspects of the countries suffering the brunt of the eurozone crisis, such as over-borrowing and persistently high wages, it is found that nominal devaluation can be useful to prevent the type of excessive rise in unemployment observed. This theoretical research also raises alternative policies and mechanisms to substitute for nominal exchange rate adjustment. These policies include the standard fiscal tools of optimal currency area theory but also extend to a broader set of tools including import tariffs, export subsidies, and prudential taxes on capital flows.
Certain combinations of these policies, labeled a “fiscal devaluation,” have been found in theory to replicate the effects of a currency devaluation in the context of a monetary union such as the eurozone. These theoretical developments are helpful for understanding the history of experiences in the eurozone, such as the eurozone crisis. They are also helpful for thinking about options for preventing such crises in the future.

Article

The Hou–Xue–Zhang q-factor model says that the expected return of an asset in excess of the risk-free rate is described by its sensitivities to the market factor, a size factor, an investment factor, and a return on equity (ROE) factor. Empirically, the q-factor model shows strong explanatory power and largely summarizes the cross-section of average stock returns. Most important, it fully subsumes the Fama–French 6-factor model in head-to-head spanning tests. The q-factor model is an empirical implementation of the investment-based capital asset pricing model (the investment CAPM). The basic philosophy is to price risky assets from the perspective of their suppliers (firms), as opposed to their buyers (investors). Mathematically, the investment CAPM is a restatement of the net present value (NPV) rule in corporate finance. Intuitively, high investment relative to low expected profitability must imply low costs of capital, and low investment relative to high expected profitability must imply high costs of capital. In a multiperiod framework, if investment is high next period, the present value of cash flows from next period onward must be high. Consisting mostly of this next-period present value, the benefits to investment this period must also be high. As such, high investment next period relative to current investment (high expected investment growth) must imply high costs of capital (to keep current investment low). As a disruptive innovation, the investment CAPM has broad-ranging implications for academic finance and asset management practice. First, the consumption CAPM, of which the classic Sharpe–Lintner CAPM is a special case, is conceptually incomplete. The crux is that it blindly focuses on the demand for risky assets, while abstracting from the supply altogether. Alas, anomalies are primarily relations between firm characteristics and expected returns. By focusing on the supply, the investment CAPM is the missing piece of equilibrium asset pricing.
Second, the investment CAPM retains efficient markets, with cross-sectionally varying expected returns, depending on firms’ investment, profitability, and expected growth. As such, capital markets follow standard economic principles, in sharp contrast to the teachings of behavioral finance. Finally, the investment CAPM validates Graham and Dodd’s security analysis on equilibrium grounds, within efficient markets.
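The supply-side logic can be summarized by the two-period form of the investment CAPM with quadratic adjustment costs (a standard textbook statement; the notation here is ours, not the article's):

```latex
% Two-period investment CAPM: the marginal cost of investing today,
% 1 + a (I_t / K_t), equals the expected marginal benefit discounted
% at the firm's cost of capital, so the expected return satisfies
E_t\!\left[ r_{t+1} \right]
  \;=\;
  \frac{E_t\!\left[ \Pi_{t+1} \right]}{1 + a\,(I_t/K_t)},
% where Pi_{t+1} is the marginal product of capital (profitability),
% I_t / K_t is the investment rate, and a > 0 scales adjustment costs.
```

Holding expected profitability fixed, higher investment today raises the marginal cost of investment and therefore lowers the expected return, which is exactly the "high investment, low cost of capital" intuition stated in the abstract.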