

Consumer Debt and Default: A Macro Perspective  

Florian Exler and Michèle Tertilt

Consumer debt is an important means for consumption smoothing. In the United States, 70% of households own a credit card, and 40% borrow on it. When borrowers cannot (or do not want to) repay their debts, they can declare bankruptcy, which provides additional insurance in tough times. Since the 2000s, up to 1.5% of households declared bankruptcy per year. Clearly, the option to default affects borrowing interest rates in equilibrium. Consequently, when assessing (welfare) consequences of different bankruptcy regimes or providing policy recommendations, structural models with equilibrium default and endogenous interest rates are needed. At the same time, many questions are quantitative in nature: the benefits of a certain bankruptcy regime critically depend on the nature and amount of risk that households bear. Hence, models for normative or positive analysis should quantitatively match some important data moments. Four important empirical patterns are identified: First, since 1950, consumer debt has risen constantly, and it amounted to 25% of disposable income by 2016. Defaults have risen since the 1980s. Interestingly, interest rates remained roughly constant over the same time period. Second, borrowing and default clearly depend on age: both measures exhibit a distinct hump, peaking around 50 years of age. Third, ownership of credit cards and borrowing clearly depend on income: high-income households are more likely to own a credit card and to use it for borrowing. However, this pattern was stronger in the 1980s than in the 2010s. Finally, interest rates became more dispersed over time: the number of observed interest rates more than quadrupled between 1983 and 2016. These data have clear implications for theory: First, considering the importance of age, life cycle models seem most appropriate when modeling consumer debt and default. Second, bankruptcy must be costly to support any debt in equilibrium. 
While many types of costs are theoretically possible, only partial repayment requirements are able to quantitatively match the data on filings, debt levels, and interest rates simultaneously. Third, to account for the long-run trends in debts, defaults, and interest rates, several quantitative theory models identify a credit expansion along the intensive and extensive margins as the most likely source. This expansion is a consequence of technological advancements. Many of the quantitative macroeconomic models in this literature assess welfare effects of proposed reforms or of granting bankruptcy at all. These welfare consequences critically hinge on the types of risk that households face: because households incur unforeseen expenditures, not-too-stringent bankruptcy laws are typically found to be welfare superior both to banning bankruptcy (or making it extremely costly) and to extremely lax bankruptcy rules. There are many promising opportunities for future research related to consumer debt and default: newly available data in the United States and internationally, more powerful computational resources allowing for more complex modeling of household balance sheets, and new loan products are just some of the avenues worth pursuing.


The Effects of Monetary Policy Announcements  

Chao Gu, Han Han, and Randall Wright

The effects of news (i.e., information innovations) are studied in dynamic general equilibrium models where liquidity matters. As a leading example, news can be announcements about monetary policy directions. In three standard theoretical environments—an overlapping generations model of fiat currency, a new monetarist model accommodating multiple payment methods, and a model of unsecured credit—transition paths are constructed between an announcement and the date at which events are realized. Although the economics is different, in each case, news about monetary policy can induce volatility in financial and other markets, with transitions displaying booms, crashes, and cycles in prices, quantities, and welfare. This is not the same as volatility based on self-fulfilling prophecies (e.g., cyclic or sunspot equilibria) studied elsewhere. Instead, the focus is on the unique equilibrium that is stationary when parameters are constant but still delivers complicated dynamics in simple environments due to information and liquidity effects. This is true even for classically neutral policy changes. The induced volatility can be bad or good for welfare, but using policy to exploit this in practice seems difficult because outcomes are very sensitive to timing and parameters. The approach can be extended to include news about real factors, as illustrated by examples.


The Indeterminacy School in Macroeconomics  

Roger E. A. Farmer

The indeterminacy school in macroeconomics exploits the fact that macroeconomic models often display multiple equilibria to understand real-world phenomena. Its history falls into two distinct phases. The first phase began as a research agenda at the University of Pennsylvania in the United States and at CEPREMAP in Paris in the early 1980s. This phase used models of dynamic indeterminacy to explain how shocks to beliefs can temporarily influence economic outcomes. The second phase was developed at the University of California, Los Angeles in the 2000s. This phase used models of incomplete factor markets to explain how shocks to beliefs can permanently influence economic outcomes. The first phase of the indeterminacy school has been used to explain volatility in financial markets; the second, periods of high and persistent unemployment. Together, the two phases provide a microeconomic foundation for Keynes’ general theory that does not rely on the assumption that prices and wages are sticky.


Sparse Grids for Dynamic Economic Models  

Johannes Brumm, Christopher Krause, Andreas Schaab, and Simon Scheidegger

Solving dynamic economic models that capture salient real-world heterogeneity and nonlinearity requires the approximation of high-dimensional functions. As their dimensionality increases, compute time and storage requirements grow exponentially. Sparse grids alleviate this curse of dimensionality by substantially reducing the number of interpolation nodes, that is, the grid points needed to achieve a desired level of accuracy. The construction principle of sparse grids is to extend univariate interpolation formulae to the multivariate case by choosing linear combinations of tensor products in a way that reduces the number of grid points by orders of magnitude relative to a full tensor-product grid, without substantially increasing interpolation errors. The most popular versions of sparse grids used in economics are (dimension-adaptive) Smolyak sparse grids that use global polynomial basis functions, and (spatially adaptive) sparse grids with local basis functions. The former can economize on the number of interpolation nodes for sufficiently smooth functions, while the latter can also handle non-smooth functions with locally distinct behavior such as kinks. In economics, sparse grids are particularly useful for interpolating the policy and value functions of dynamic models with state spaces between two and several dozen dimensions, depending on the application. In discrete-time models, sparse grid interpolation can be embedded in standard time iteration or value function iteration algorithms. In continuous-time models, sparse grids can be embedded in finite-difference methods for solving partial differential equations like Hamilton-Jacobi-Bellman equations. In both cases, dimension adaptivity, as well as spatial adaptivity, can add a second layer of sparsity to the fundamental sparse-grid construction.
Beyond these salient use-cases in economics, sparse grids can also accelerate other computational tasks that arise in high-dimensional settings, including regression, classification, density estimation, quadrature, and uncertainty quantification.
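The savings from the Smolyak construction can be made concrete with a small counting exercise. The sketch below is a minimal illustration, not code from this literature: the helper names (`m`, `n_new`, `sparse_grid_size`) are ours, and it assumes the standard nested Clenshaw-Curtis node counts m(1) = 1 and m(l) = 2^(l-1) + 1.

```python
from itertools import product

def m(l):
    """Number of nested Clenshaw-Curtis nodes at 1-D level l."""
    return 1 if l == 1 else 2 ** (l - 1) + 1

def n_new(l):
    """Nodes newly added when refining from 1-D level l-1 to l."""
    return 1 if l == 1 else m(l) - m(l - 1)

def sparse_grid_size(d, L):
    """Node count of a level-L Smolyak sparse grid in d dimensions.

    Sums, over all multi-indices l with |l|_1 <= d + L - 1, the number
    of nodes each tensor term adds; nestedness of the 1-D rules means
    every node is counted exactly once.
    """
    total = 0
    for levels in product(range(1, L + 1), repeat=d):
        if sum(levels) <= d + L - 1:
            term = 1
            for l in levels:
                term *= n_new(l)
            total += term
    return total

# Sparse versus full tensor-product grid at level 3:
for d in (2, 4, 10):
    print(d, sparse_grid_size(d, 3), m(3) ** d)
# prints: 2 13 25 / 4 41 625 / 10 221 9765625
```

At level 3 in 10 dimensions this counts 221 sparse-grid nodes against 9,765,625 full-grid nodes, which is the orders-of-magnitude reduction the construction principle refers to.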


Stock-Flow Models of Market Frictions and Search  

Eric Smith

Stock-flow matching is a simple and elegant framework of dynamic trade in differentiated goods. Flows of entering traders match and exchange with the stocks of previously unsuccessful traders on the other side of the market. A buyer or seller who enters a market for a single, indivisible good such as a job or a home does not experience impediments to trade. All traders are fully informed about the available trading options; however, each of the available options in the stock on the other side of the market may or may not be suitable. If fortunate, this entering trader immediately finds a viable option in the stock of available opportunities and trade occurs straightaway. If unfortunate, none of the available opportunities suit the entrant. This buyer or seller now joins the stock of unfulfilled traders who must wait for a new, suitable partner to enter. Three striking empirical regularities emerge from this microstructure. First, as the stock of buyers matches not with the stock of sellers but with the flow of new sellers, the flow of new entrants becomes an important explanatory variable for aggregate trading rates. Second, traders’ exit rates from the market are initially high, but if they fail to match quickly, their exit rates fall substantially. Third, these exit rates depend on different variables at different phases of an agent’s stay in the market. The probability that a new buyer will trade successfully depends only on the stock of sellers in the market. In contrast, the exit rate of an old buyer depends positively on the flow of new sellers, negatively on the stock of old buyers, and is independent of the stock of sellers. These three empirical relationships not only differ from those found in the familiar search literature but also conform to empirical evidence on unemployment outflows. Moreover, adopting the stock-flow approach enriches our understanding of output dynamics, employment flows, and aggregate economic performance.
These trading mechanics generate endogenous price dispersion and price dynamics—prices depend on whether the buyer or the seller is the recent entrant, and on how many viable traders were waiting for the entrant, which varies over time. The stock-flow structure has provided insights about housing, temporary employment, and taxicab markets.


Time Consistent Policies and Quasi-Hyperbolic Discounting  

Łukasz Balbus, Kevin Reffett, and Łukasz Woźny

In dynamic choice models, dynamic inconsistency of preferences is a situation in which a decision-maker’s preferences change over time. Optimal plans under such preferences are time inconsistent if the decision-maker has no incentive to follow the previously chosen optimal plan in the future. A typical example of dynamic inconsistency is present-bias preferences, under which the decision-maker repeatedly prefers smaller present rewards to larger future rewards. The study of dynamic choice by decision-makers who possess dynamically inconsistent preferences has long been a focal point of work in behavioral economics. Experimental and empirical literatures both point to the importance of various forms of present-bias. The canonical model of dynamically inconsistent preferences exhibiting present-bias is the model of quasi-hyperbolic discounting: a dynamic choice model in which standard exponential discounting is modified by an impatience parameter that places additional discounting between the present and the immediately succeeding period. A central problem in the analytical study of decision-makers with dynamically inconsistent preferences is how to model their choices in sequential decision problems. One general answer is to characterize and compute (if they exist) constrained optimal plans that are optimal among the set of time consistent sequential plans. Time consistent plans, or policies (TCPs), are those among the feasible plans that will actually be followed, and not reoptimized, by agents whose preferences change over time.
Results on the existence, uniqueness, and characterization of stationary, or time-invariant, TCPs are presented for a class of consumption-savings problems with quasi-hyperbolic discounting, along with some discussion of how to compute TCPs in extensions of the model; the generalized Bellman equation operator approach plays a central role. This approach provides sufficient conditions for the existence of time consistent solutions and facilitates their computation. Importantly, the generalized Bellman approach can also be related to a common first-order approach in the literature known as the generalized Euler equation approach. By constructing sufficient conditions for continuously differentiable TCPs on the primitives of the model, sufficient conditions under which a generalized Euler equation approach is valid can be provided. Other important facets of TCPs include sufficient conditions for monotone comparative statics in interesting parameters of the decision environment, as well as extensions of the generalized Bellman approach to unbounded returns and general certainty equivalents. Finally, the case of a multidimensional state space, as well as a general self-generation method for characterizing nonstationary TCPs, must also be considered.
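The beta-delta structure and the present bias it generates can be seen in a few lines. The sketch below is a minimal illustration; the parameter values 0.7 and 0.95 are ours, chosen only to make the reversal visible.

```python
def qh_weight(t, beta=0.7, delta=0.95):
    """Quasi-hyperbolic (beta-delta) weight on a reward t periods ahead.

    Weights are 1, beta*delta, beta*delta**2, ...: relative to
    exponential discounting, an extra wedge beta separates the present
    from every future period.
    """
    return 1.0 if t == 0 else beta * delta ** t

# Evaluated today, 100 now beats 110 tomorrow (present bias)...
assert 100 * qh_weight(0) > 110 * qh_weight(1)      # 100 > ~73.15

# ...yet from today's viewpoint, 110 at t=11 beats 100 at t=10, so the
# plan chosen today will be abandoned once t=10 arrives: it is time
# inconsistent.
assert 110 * qh_weight(11) > 100 * qh_weight(10)

# An exponential discounter (beta = 1) ranks both pairs the same way
# at every horizon, so its optimal plans are time consistent.
assert 110 * qh_weight(1, beta=1.0) > 100 * qh_weight(0, beta=1.0)
```

This reversal of rankings as the near reward draws close is exactly what makes ex ante optimal plans fail to be followed, motivating the restriction to time consistent plans discussed above.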