
The Indeterminacy School in Macroeconomics

  • Roger E. A. Farmer, Department of Economics, University of Warwick

Summary

The indeterminacy school in macroeconomics exploits the fact that macroeconomic models often display multiple equilibria to understand real-world phenomena. Its history falls into two distinct phases. The first phase began as a research agenda at the University of Pennsylvania in the United States and at CEPREMAP in Paris in the early 1980s. This phase used models of dynamic indeterminacy to explain how shocks to beliefs can temporarily influence economic outcomes. The second phase was developed at the University of California Los Angeles in the 2000s. This phase used models of incomplete factor markets to explain how shocks to beliefs can permanently influence economic outcomes. The first phase of the indeterminacy school has been used to explain volatility in financial markets; the second has been used to explain periods of high, persistent unemployment. Together, the two phases provide a microeconomic foundation for Keynes’ general theory that does not rely on the assumption that prices and wages are sticky.

Subjects

  • Economic Theory and Mathematical Models
  • Financial Economics
  • Macroeconomics and Monetary Economics

Indeterminacy and Multiple Equilibria in Economics

The indeterminacy school in macroeconomics exploits the fact that macroeconomic models often display multiple equilibria to understand real-world phenomena. Economists have long argued that business cycles are driven by shocks to the productivity of labor and capital. According to the indeterminacy school, the self-fulfilling beliefs of financial market participants are additional independent fundamental factors that drive periods of prosperity and depression.1

Monetary General Equilibrium Theory

The history of macroeconomics as a theory distinct from microeconomics began with the publication of The General Theory of Employment, Interest and Money, a book by the English economist John Maynard Keynes (Keynes, 1936). Keynes revolutionized the way economists think about the economy and he revolutionized the way politicians think about the role of economic policy. For the first time, with the publication of The General Theory, policymakers accepted that government has an obligation to maintain full employment. The publication of Keynes’ masterwork led to two decades of research that attempted to integrate the ideas of the general theory with general equilibrium (GE) theory, the branch of microeconomics that deals with the working of the economy as a whole. That research led to the publication of Patinkin’s book Money, Interest and Prices (Patinkin, 1956), which laid the foundation for much of the research that followed.

In Money, Interest and Prices, Patinkin integrated GE theory, which explains the determination of the relative price of one good to another, with the quantity theory of money, which explains the average price of all goods measured in units of money. Patinkin’s synthesis led to the development of monetary general equilibrium (MGE) theory, an approach that forms the basis for much of modern macroeconomics.

In MGE theory, as with nonmonetary versions of GE theory, market outcomes result from the interactions of hundreds of millions of market participants, each of whom assumes that he has no influence over market prices. An equilibrium is a set of equilibrium trades and an equilibrium price vector such that, when confronted with equilibrium prices, no market participant would choose to make additional trades. Although it has been known for decades that there may be more than one equilibrium price vector, much of the literature in macroeconomics that developed from Patinkin’s synthesis has made theoretical assumptions that render equilibrium unique.

Multiplicity and Determinacy of Equilibria

GE theory can be used to develop static GE models that explain the determination of prices and quantities traded at a single date, or to develop dynamic GE models that explain the determination of prices and quantities traded at a sequence of dates. Static and dynamic GE models may each have multiple equilibria. When the number of commodities and the number of people are finite, each equilibrium is locally isolated from every other equilibrium. In this case, each equilibrium is said to be determinate. When the number of commodities and the number of people are infinite, there may be a contiguous set of equilibria. In this case, each member of the set is said to be indeterminate.

Determinacy of equilibrium is an important property if one is interested in comparing how the equilibrium price vector changes in response to a change in economic fundamentals. For example, if the demand curve shifts to the right, what will happen to the equilibrium price of wheat? For this question to have a meaningful answer, the equilibrium price of wheat must be determinate.

Phase 1: Models of Dynamic Indeterminacy

The indeterminacy school in macroeconomics has gone through two phases. The first phase, developed at the University of Pennsylvania in the United States and at CEPREMAP in France, consisted of dynamic models driven by the self-fulfilling beliefs of economic actors. Initially, these models were populated by overlapping generations of finitely lived people. The focus soon shifted to infinite-horizon models inhabited by an infinitely lived representative agent in which the technology exhibits increasing returns to scale. This literature, surveyed in Benhabib and Farmer (1999), generates equilibria that display dynamic indeterminacy, and its promise was that it would provide a micro-foundation for Keynesian economics.2

The literature on dynamic indeterminacy made an important contribution to Keynesian economics by demonstrating that “animal spirits” may be fully consistent with market clearing and rational expectations. But it did not fulfill the promise of providing a micro-foundation for Keynes’ general theory. (See Farmer, 2014, for an elaboration of this point.) Like the real business cycle (RBC) model (King, Plosser, & Rebelo, 1988; Kydland & Prescott, 1982; Long & Plosser, 1983), models of dynamic indeterminacy represent business cycles as stationary stochastic fluctuations around a nonstochastic steady state. In RBC models, the driving source of fluctuations is technology shocks. In models that display dynamic indeterminacy, the driving source of fluctuations is the self-fulfilling beliefs of agents. In both types of models, equilibria are almost Pareto efficient. These models cannot explain large persistent unemployment rates of the kind that occurred during the Great Depression or the recent financial crisis.3

Phase 2: Models of Steady-State Indeterminacy

The second phase of the literature on indeterminacy departed from the assumption that the demand and supply of labor are always equal and assumed instead that labor is traded in a search market. This phase was able to generate large welfare losses and high persistent involuntary unemployment of the kind that Keynes discusses in The General Theory. The models developed in this literature possess equilibria that display steady-state indeterminacy.4 These equilibria are characterized as nonstationary probability measures, driven by shocks to self-fulfilling beliefs, and they have very different empirical implications from either RBC models or models of dynamic indeterminacy. They imply that the unemployment rate is nonstationary and that it can wander a very long way from the social optimum unemployment rate. As a consequence, the welfare losses generated by self-fulfilling fluctuations in these models can be very large.

GE Theory and Macroeconomics

The initial formulation of GE theory assumed that there are a finite number of goods and a finite number of people. There have been two extensions to deal with issues that arise naturally in macroeconomics from the passage of time and the fact that the future is uncertain. The first extension, by Debreu (Debreu, 1959), redefined a commodity to be specific to the date, location, and state of nature in which it is consumed. The second extension, by Hicks (1939), explicitly recognized the sequential nature of markets.5

Finite GE Theory

GE theory deals with market exchange of l commodities by m people and, as formulated by Arrow and Debreu (1954), l and m are finite numbers. By making assumptions about the structure of the economy, one arrives at an excess demand function, f(p): \mathbb{R}_+^l \to \mathbb{R}^l, which is a list of the differences between the aggregate quantities demanded and supplied of each of the l commodities when the prices of the l goods are represented by the l-element vector p. A vector p^* that satisfies the equation f(p^*) = 0 is called an equilibrium price vector.

In his initial formulation of GE theory, Walras (1899) assumed the existence of a fictitious character, the auctioneer, who stands on a platform in the center of the marketplace and calls out prices at which trades will take place. Given a vector of prices, each participant decides how much he would like to trade. The auctioneer adds up the desired trades of every person and if the aggregate quantity of every commodity demanded is equal to the aggregate quantity of every commodity supplied, the auctioneer declares success and trades are executed. Alternatively, if there is an excess demand or supply for one or more commodities, the auctioneer adjusts the vector of proposed prices and tries again. The adjustment process by which the auctioneer homes in on an equilibrium price vector is called tâtonnement, a French word that means “groping.”
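
As a purely illustrative sketch (not part of the original article), the following Python fragment mimics the auctioneer's groping on a hypothetical excess demand function for a single relative price; the functional form, starting price, and adjustment speed are assumptions chosen only so that the process converges.

    # Tatonnement sketch with a hypothetical excess demand function.
    # The auctioneer raises the price when the good is in excess demand
    # and lowers it when the good is in excess supply.
    def excess_demand(p):
        return 0.5 - p            # stylized; the unique equilibrium is p = 0.5

    p, step = 0.9, 0.2            # arbitrary starting price and adjustment speed
    for _ in range(100):
        p = p + step * excess_demand(p)
    print(round(p, 4))            # the groping process settles at 0.5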

Debreu Chapter 7 as a Paradigm for Macroeconomics

In Debreu’s (1959) extension of GE theory to an infinite dimensional space, a commodity is not just an apple, a banana, or a loaf of bread; it is an apple, a banana, or a loaf of bread traded on March 9, 2024, in Mexico City if and only if it is raining in Caracas.

Infinite horizon GE models may be populated by two kinds of agents. If trades are made by a finite number of infinitely lived families, the model is said to be a representative agent (RA) model. If trades are made by an infinite number of finitely lived people, it is said to be an overlapping generations (OLG) model. These two kinds of models have very different properties.6 In the RA model, there is a finite odd number of equilibria and, as in finite GE theory, every equilibrium is Pareto optimal.7 In the OLG model, as a consequence of the double infinity of goods and people, there may be equilibria that are not Pareto optimal.8 The fact that market outcomes may be Pareto inefficient is important, because, when the competitive equilibrium is inefficient, there may be a government policy that would improve the welfare of everyone.

Temporary Equilibrium Theory as a Paradigm for Macroeconomics

Debreu’s extension of GE theory to infinite horizons does not explicitly require the passage of time. The date at which a commodity is consumed is simply one of many labels that index the good. Bread today is distinct from bread tomorrow in the same way that an apple is distinct from a banana. The tâtonnement process whereby the market achieves equilibrium takes place at the beginning of time, and once an equilibrium has been arrived at, the world begins, and trades are executed. This is a rather unrealistic and unsatisfactory description of the world we inhabit.

A more promising alternative, temporary equilibrium (TE) theory, envisages the passage of time as a sequence of weeks.9 Each week, market participants come to a marketplace to trade commodities with each other. Each person brings a bundle of commodities, his endowment, and he leaves with a different bundle of commodities, his allocation. Participants arrive at the marketplace with financial assets and liabilities contracted in previous weeks and they form beliefs about the prices of commodities they think will prevail in future weeks. TE theory allows for the beliefs of market participants about future prices to be different from prices that actually occur.

Asset Markets, Risk, and Uncertainty

This section discusses the connection between TE theory and Debreu Chapter 7 and demonstrates that, under some circumstances, the equilibria that occur in Debreu’s formulation of an equilibrium are the same as the equilibria that occur in a TE model. To understand the connection of Debreu Chapter 7 with TE theory, we turn first to an explanation of the way that GE theory accounts for the fact that the future is unknown.

The economist Frank Knight (Knight, 1921) distinguished risk from uncertainty. Risk refers to events that are quantifiable by a known probability distribution. Uncertainty refers to events that are unknown and unknowable. Almost all quantitative work in macroeconomics has been conducted in models where unknown future events fall into Knight’s first category; they can be quantified by a known probability distribution. That approach is also followed here.

Consider an environment where markets open each week but there is more than one possible future, characterized by a known set of N possible events. For example, if N = 2, nature flips a coin that comes up heads with probability \chi_H and tails with probability \chi_T = 1 - \chi_H. To model this scenario, Arrow (1964) suggested that people trade basic securities, called Arrow securities, that pay out a fixed dollar amount if and only if an event occurs. When there are as many Arrow securities as events, the markets are said to be complete.

Complete markets in the case of a binary event like a coin toss would require two securities. The H security is a promise to pay one dollar next week if and only if the outcome is heads. The T security is a promise to pay one dollar next week if and only if the outcome is tails. In week 1, person i faces the budget constraint:

p_1 \cdot (x_1^i - w_1^i) + Q_H a_H^i + Q_T a_T^i \le 0. \qquad (1)

Here, p_1 is an l \times 1 vector of dollar prices at date 1, w_1^i is an l \times 1 vector that represents person i's endowment, and x_1^i is an l \times 1 vector that represents her allocation. Q_H and Q_T are the dollar prices of the two Arrow securities, and a_H^i and a_T^i, which may be positive or negative, are the positions taken by the ith person in the two securities.10

In week 2, one of two events may occur. If the outcome is heads, person i faces the constraint

p_{2H} \cdot (x_{2H}^i - w_{2H}^i) - a_H^i \le 0. \qquad (2)

If the outcome is tails, she faces the constraint

p_{2T} \cdot (x_{2T}^i - w_{2T}^i) - a_T^i \le 0. \qquad (3)

By substituting the expressions for a_H^i and a_T^i from the period 2 budget constraints, Equations 2 and 3, into the period 1 budget constraint, Equation 1, one arrives at the following consolidated budget constraint:

p_1 \cdot (x_1^i - w_1^i) + Q_H p_{2H} \cdot (x_{2H}^i - w_{2H}^i) + Q_T p_{2T} \cdot (x_{2T}^i - w_{2T}^i) \le 0. \qquad (4)
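
To spell out the substitution (a step added here for clarity; it is not in the original text), note that with monotone preferences a person has no reason to buy more of either security than is needed to finance her state-contingent consumption, so Inequalities 2 and 3 bind at an optimum:

a_H^i = p_{2H} \cdot (x_{2H}^i - w_{2H}^i), \qquad a_T^i = p_{2T} \cdot (x_{2T}^i - w_{2T}^i).

Replacing a_H^i and a_T^i in Inequality 1 by these expressions gives Inequality 4.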

In the GE interpretation of uncertainty, people maximize utility in period 1,

\max_{\{x_1^i,\, x_{2H}^i,\, x_{2T}^i\}} U^i(x_1^i, x_{2H}^i, x_{2T}^i; \chi_H, \chi_T), \qquad (5)

subject to the constraint defined by Inequality 4. The dependence of utility on the probability of alternative outcomes is represented here by the appearance of the probabilities of heads or tails, \chi_H and \chi_T, in the utility function. In the TE interpretation of uncertainty, people solve two consecutive utility maximization problems. In period 1 they choose a vector of current consumptions x_1^i and a pair of asset positions a_H^i and a_T^i, given their beliefs about the prices that will occur in the future.

For the GE solution and the rational expectations TE solution to be the same, two conditions must hold. First, there must be as many Arrow securities as states of nature. This condition guarantees that the sequence of budget constraints can be reduced to a single budget constraint. Second, utility must be time consistent. This condition means that the way that people rank choices over the l elements of x_2^i must be independent of the choices they made in period 1, and it is one reason that economists often assume that the objective in Equation 5 is linear in probabilities, that is:

U^i(x_1^i, x_{2H}^i, x_{2T}^i; \chi_H, \chi_T) \equiv \chi_H v^i(x_1^i, x_{2H}^i) + \chi_T v^i(x_1^i, x_{2T}^i). \qquad (6)

The function v^i(x_1^i, x_{2s}^i) for s \in \{H, T\} is called a von Neumann–Morgenstern utility function, and when people maximize Equation 6, they are said to be expected utility maximizers. Von Neumann–Morgenstern expected utility maximizers are time consistent.

The assumption of complete markets allows for a relatively straightforward extension of the perfect-foresight assumption to a world with uncertainty. If there is one future state and people know all future prices, the agents in the model are said to possess perfect foresight. If there is more than one possible future state, and people know all future state-contingent prices, the agents in the model are said to possess rational expectations.

The translation of Debreu’s version of GE theory into the language of TE theory exposes a problem with the assumption that people take prices as given. GE theory does not guarantee that equilibrium is unique. If there are multiple equilibria, how do participants in this week’s market know which of the equilibrium price vectors will be attained in future markets? The following section turns to a description of the problems raised in GE models by the existence of multiple equilibria and offers a solution to the problem of indeterminacy: beliefs should be introduced as a separate fundamental, in addition to preferences, endowments, and technology.

Multiplicity and Determinacy of Equilibrium

This section compares finite- and infinite-horizon GE models and illustrates, by means of three simple figures, the meaning of determinacy of equilibrium. The section begins by demonstrating that there is a finite odd number of equilibria in a finite GE model, and it proceeds to explain how the equilibrium concept can be extended to deal with the passage of time.

Finite GE Theory: Why Equilibria Are Determinate

That equilibria are determinate is most easily understood in the case of a two-good model and is illustrated in Figure 1. Here, f(p) is the aggregate excess demand for good 1 and p = p_1/(p_1 + p_2) is the money price of good 1, normalized by the sum of the two money prices. One can show that f(0) > 0, f(1) < 0, and that f(p) is continuous. It follows that f is a continuous function on [0, 1] whose graph starts above the p-axis and ends below it. Hence the graph must cross the p-axis at least once and, generically, the number of crossings is odd. Figure 1 illustrates the case of three equilibrium prices.

Figure 1. Three equilibria in a two-good model.

Source: Author.

This figure also shows that equilibrium cannot, generically, be indeterminate. Indeterminacy, in the finite case, would require the excess demand function to be coincident with the p-axis for an interval of p-values. That would be a very special case, as would a tangency of the excess demand function with the p-axis. Genericity means that, in the space of all parameterized two-good GE models, models with indeterminate equilibria or models with an even number of equilibria occur vanishingly often. Although such models could be constructed, a small perturbation of the parameters of the model would generate a different model where the indeterminacy or the tangency disappears.
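
The following Python sketch (an illustration added here, with a made-up cubic standing in for the aggregate excess demand function) exhibits the odd number of crossings: it scans the normalized price p over (0, 1) for sign changes of f and refines each one by bisection.

    import numpy as np

    # Hypothetical excess demand for good 1 with f(0) > 0 and f(1) < 0,
    # chosen only to display three equilibria; a genuine f would be derived
    # from preferences and endowments.
    def f(p):
        return -(p - 0.2) * (p - 0.5) * (p - 0.8)

    def bisect(lo, hi, tol=1e-12):
        # Refine a bracketed sign change of f by bisection.
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            lo, hi = (lo, mid) if f(lo) * f(mid) <= 0 else (mid, hi)
        return 0.5 * (lo + hi)

    grid = np.linspace(0.001, 0.999, 1000)
    sign = np.sign(f(grid))
    equilibria = [bisect(grid[k], grid[k + 1])
                  for k in range(len(grid) - 1) if sign[k] != sign[k + 1]]
    print(np.round(equilibria, 3))    # three equilibrium prices: 0.2, 0.5, 0.8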

Infinite-Horizon Models With RAs

When the number of commodities is infinite, an equilibrium price vector is an element of B, the space of nonnegative bounded sequences.11 If the number of people is finite, as in the RA model, there is an odd finite number of equilibria just as there is in the finite Arrow-Debreu model. These equilibria need not be stationary, but they cannot be indeterminate. If the number of people is infinite, as in the OLG model, there are always at least two stationary equilibrium price sequences and at least one of these stationary equilibrium price sequences is indeterminate.

Figure 2 illustrates the situation that typically occurs in RA models. The figure plots three sequences, p^*, p^{1*}, and p^{2*}, as functions of time. The elements of each sequence are indexed by subscripts that refer to weeks, and the sequence p^* is, by assumption, a stationary perfect-foresight equilibrium price sequence. The statement that p^* is an equilibrium price sequence means that the quantities of all goods demanded and supplied are equal in every week. The statement that p^* is a stationary sequence means that p_t^* is constant over time. And the qualifier “perfect foresight” means that when people form their demands and supplies at date t, they are fully aware of what the prices will be in all future periods.

Figure 2. A unique determinate equilibrium in an infinite-horizon model.

Source: Author.

In general, p_t^* could be an element of \mathbb{R}_+^l; that is, there may be multiple goods traded in each period. For the purposes of exposition, it is assumed here that l = 1 and that p_t^* is the dollar price of the unique date-t commodity, which will be referred to as “wheat.” Using this convention, the notation p_3^*, for example, refers to the dollar price of wheat in week 3. In contrast to the stationary equilibrium price sequence p^*, the sequences p^{1*} and p^{2*} are nonstationary. A sequence that begins at p_1^{1*} or p_1^{2*} diverges, heading off either to plus infinity, in the case of p^{1*}, or to minus infinity, in the case of p^{2*}. The sequences p^{1*} and p^{2*} start close to the stationary equilibrium sequence p^*, but they eventually diverge from it.

To measure the distance between two elements of B, economists use the sup norm, which records the distance between two sequences as the supremum, over dates, of the distance between their corresponding elements. If p^* is a stationary equilibrium price sequence of an RA model, it will often be possible to find perfect-foresight price sequences, p^{1*} and p^{2*}, that obey the market clearing conditions for some finite number of periods. But, however close these sequences are initially to the steady-state equilibrium price sequence p^*, they will eventually move away from it. In the example depicted in Figure 2, the sequences p^{1*} and p^{2*} diverge to plus or minus infinity.

Using a result first proved by Negishi (1960), one can show that every equilibrium price sequence of an RA model is bounded away from every other equilibrium price sequence by a positive number. An implication of Negishi’s theorem is that stationary equilibrium price sequences must always display the local instability property depicted in Figure 2.

Infinite-Horizon Models With OLG

OLG models are different from RA models. In OLG models, it is no longer true that equilibrium price sequences must be isolated from each other; instead, sets of indeterminate equilibria are common. Figure 3 illustrates a situation that occurs generically in OLG models.

Figure 3. A set of indeterminate equilibria in an infinite horizon model.

Source: Author.

This figure plots three price sequences, p^*, p^{1*}, and p^{2*}, as functions of time. p^* is a stationary equilibrium price sequence, and p^{1*} and p^{2*} are nonstationary equilibrium price sequences.

Unlike the example in Figure 2, a sequence that begins at p_1^{1*} or p_1^{2*} converges to the steady-state equilibrium price sequence p^*. It is easy to find examples of OLG models where there are nonstationary equilibrium price sequences, like p^{1*} and p^{2*}, that obey the market clearing conditions and remain bounded as t \to \infty. All of these sequences are perfect-foresight equilibrium price sequences and all of them are indeterminate. For any one of these equilibrium price sequences, there is another one that is arbitrarily close, where closeness of one sequence to another is measured by the sup norm.

Indeterminacy and Rational Expectations

The existence of indeterminate equilibrium price sequences in OLG models is curious, but it might be thought uninteresting. In the examples depicted in Figure 3, almost all of the equilibria are nonstationary, and all of these nonstationary equilibria converge to a stationary perfect-foresight equilibrium. They appear to explain transitory phenomena that would never be observed in practice. They are, however, of considerable practical importance once one moves beyond the assumption of perfect foresight.

In rational expectations models, even with a complete set of Arrow securities, there exist multiple stationary rational expectations equilibria (Farmer & Woodford, 1984). In these equilibria, prices and allocations fluctuate from one week to the next purely because people believe that they will. They are examples of equilibria driven by self-fulfilling prophecies, a phenomenon that can occur in RA models where money is used as a medium of exchange as well as in OLG models with or without money.

Indeterminacy in Rational Expectations Models With RAs

The RA research school in macroeconomics that developed in the 1980s was initially restricted to purely real models. This RBC school added production to the pure exchange model but preserved the RA assumption. In their 1987 paper, Lucas and Stokey introduced money to the RBC model by assuming that cash must be held to purchase goods, and, following their lead, a large part of the profession adopted monetary versions of the RBC model to understand the real effects of monetary shocks. Almost all of the papers in this literature retained assumptions to guarantee that equilibrium is locally determinate.12

In monetary RA models with determinate equilibria, money is an appendage that plays no independent role in driving business cycles. A shock to the money supply or a shock to the money interest rate is predicted to feed immediately into the price level, but to have no effect on economic activity. This property was hard to square with the work by Sims (1980, 1989), who found that shocks to the nominal interest rate appear to have big causal effects on real GDP. The economics profession responded to this fact in two ways.

The indeterminacy school developed models where the real effects of monetary shocks are seen as identifying assumptions that select one of many possible equilibria in a model with multiple indeterminate dynamic equilibria. In these models, there often exists an equilibrium in which a shock to the nominal quantity of money influences real GDP in the short run but leads, in the long run, to higher prices and no long-run impact on real magnitudes. Models with this property were developed by Farmer and Woodford (1984), Farmer (1991), Matheny (1998), and Benhabib and Farmer (2000) and are surveyed in Benhabib and Farmer (1999).

The response from economists working with RA models was different. They continued to rule out indeterminacy by restricting the parameters of their models to regions of the parameter space where equilibria are locally determinate. To understand the facts uncovered by Sims, they added the assumption that money prices are “sticky” as a consequence of small costs of price adjustment.13 This approach, dubbed “new Keynesian economics” by Mankiw and Romer (1998), became the mainstream model adopted by central banks to explain the real effects of monetary policy. The following section summarizes the new Keynesian (NK) model.

The Flagship NK Model

The flagship NK model consists of the following three equations:

y_t = E_t[y_{t+1}] - a(i_t - E_t[p_{t+1} - p_t]) + \rho + u_t^D, \qquad (7)
i_t = \eta_\pi (p_t - p_{t-1}) + u_t^P, \qquad (8)
(p_t - p_{t-1}) = \beta E_t[p_{t+1} - p_t] + \kappa (y_t - \bar{y}_t) + u_t^S. \qquad (9)

Here, y_t is the log of GDP, p_t is the log of the price level, \bar{y}_t is the log of potential GDP, i_t is the short-term money interest rate, and p_t - p_{t-1} is the log difference of the price level.14 The log difference of the price level is also, by definition, the date-t inflation rate. E_t[\cdot] is the conditional expectations operator, and the symbols a, \rho, \eta_\pi, \beta, and \kappa are parameters derived from assumptions about private-sector and government behavior. Equation 7, called an optimizing IS curve, is derived from the first-order intertemporal condition of an optimizing infinitely lived representative consumer; Equation 8 is a central-bank reaction function, also referred to as a Taylor rule (Taylor, 1999); and Equation 9 is a NK Phillips curve. The terms u_t^D, u_t^P, and u_t^S are, respectively, a demand shock, a policy shock, and a supply shock.

The NK model can be amended to include the following equation:

\frac{B_t}{P_t} = \sum_{\tau = t}^{\infty} Q_t^\tau s_\tau, \qquad (10)

where B_t is the dollar value of government debt, P_t = \exp(p_t) is the price level, Q_t^\tau is the present value at date t, measured in units of date-t goods, of a claim to goods at date \tau, and s_\tau is the budget surplus, equal to the real value at date \tau of government tax revenues net of expenditure. The present value price, Q_t^\tau, is determined by the interest rates and inflation rates that hold between periods t and \tau.

Active and Passive Fiscal and Monetary Policies in the Flagship NK Model

Leeper (1991) suggested the following classification of policies in the NK model. If the coefficient ηπ in the Taylor rule is greater than 1, monetary policy is said to be active. If it is less than 1, monetary policy is passive. This classification is useful because one can show that if monetary policy is active, the equilibrium of the NK model is locally determinate. The classification of fiscal policies as active or passive requires a little more explanation.

If the government were to be treated in the same way as any other actor in a GE model, the Treasury would need to ensure that Equation 10 holds for every possible price level P_t and every sequence of present value prices \{Q_t^\tau\}_{\tau=t}^{\infty}. Under this interpretation of the constraints on feasible fiscal policies, Equation 10 is a government budget constraint. Taking prices and interest rates as given, the government would need to ensure that it raises enough revenue to eventually pay off its outstanding debt. If the Treasury does indeed adjust taxes or expenditure plans, or both, to ensure that it remains solvent for all possible prices and interest rates, fiscal policy is said to be passive. If, instead, the government sets the sequence of surpluses \{s_\tau\}_{\tau=t}^{\infty} independently of P_t, B_t, or \{Q_t^\tau\}_{\tau=t}^{\infty}, fiscal policy is said to be active.

Equilibrium Determinacy When Monetary Policy Is Active and Fiscal Policy Is Passive

When the NK model was first developed in the 1980s, little attention was paid to fiscal constraints, and it was assumed that fiscal policy is always passive. Like any other actor in a GE model, the government was assumed to be a price taker that cannot spend more than it receives in income. Attention during this period was focused on the possibility that active monetary policy can select a locally determinate equilibrium by influencing the stability properties of the steady-state equilibrium of a log-linear NK model.15

If one defines X_t \equiv [i_t, p_t - p_{t-1}, y_t]', the NK model is an example of a linear rational expectations model of the form

X_t = A E_t[X_{t+1}] + C + U_t, \qquad (11)

where C is a 3 \times 1 vector of constants, A is a 3 \times 3 matrix of coefficients, and U_t = [u_t^D, u_t^P, u_t^S]' is a vector of random variables that have zero expected value and are independently and identically distributed.16 As long as monetary policy is active, all of the eigenvalues of the matrix A are inside the unit circle and, in this case, the NK model has the following reduced form:

X_t = (I - A)^{-1} C + U_t. \qquad (12)

The inflation rate, GDP, and the interest rate are all functions only of the shocks, Ut, and the price level is pinned down by the definition of inflation in the initial period.
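
As a concrete illustration of this determinacy condition (a sketch added here, with illustrative parameter values that are not taken from the article), the following Python fragment stacks Equations 7–9 in the form B_0 X_t = B_1 E_t[X_{t+1}] + c + shocks, where B_0, B_1, and c are notation introduced only for this sketch; it then computes A = B_0^{-1} B_1 and C = B_0^{-1} c and checks whether the eigenvalues of A lie inside the unit circle when \eta_\pi > 1. In this reduced form the shock vector is the corresponding linear transformation of (u_t^D, u_t^P, u_t^S).

    import numpy as np

    # Illustrative parameters (assumed, not from the article); potential GDP is
    # treated as a constant so that it can be folded into the constant vector C.
    a, beta, kappa, rho, ybar = 1.0, 0.99, 0.10, 0.01, 0.0

    def reduced_form(eta_pi):
        # Rows: Taylor rule (8), NK Phillips curve (9), optimizing IS curve (7),
        # with X_t = [i_t, p_t - p_{t-1}, y_t]'.
        B0 = np.array([[1.0, -eta_pi, 0.0],
                       [0.0, 1.0, -kappa],
                       [a, 0.0, 1.0]])
        B1 = np.array([[0.0, 0.0, 0.0],
                       [0.0, beta, 0.0],
                       [0.0, a, 1.0]])
        c = np.array([0.0, -kappa * ybar, rho])
        A = np.linalg.solve(B0, B1)        # A = B0^{-1} B1, as in Equation 11
        C = np.linalg.solve(B0, c)
        return A, C

    for eta_pi in (1.5, 0.8):              # active versus passive monetary policy
        A, C = reduced_form(eta_pi)
        determinate = np.all(np.abs(np.linalg.eigvals(A)) < 1.0)
        print(f"eta_pi = {eta_pi}: all eigenvalues of A inside unit circle? {determinate}")
        if determinate:
            # Equation 12: the non-stochastic part of the unique bounded solution.
            print("  steady state [i, inflation, y]:",
                  np.round(np.linalg.solve(np.eye(3) - A, C), 3))

With these illustrative numbers, the active rule (\eta_\pi = 1.5) yields eigenvalues inside the unit circle and a unique bounded equilibrium, while the passive rule (\eta_\pi = 0.8) does not.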

Equilibrium Determinacy When Monetary Policy Is Passive and Fiscal Policy Is Active

When monetary policy is passive, as it was in the United States prior to 1979, and again from 2009 to 2017, the price level is no longer determined by the equations of the NK model.17 Instead, the price level may fluctuate randomly, driven by the self-fulfilling beliefs of market participants. To handle this apparent “problem” with NK economics, some economists (Leeper, 1991; Sims, 1994; Woodford, 1995) have suggested that, when monetary policy is passive, government debt will remain bounded even if fiscal policy is active. This idea is called the fiscal theory of the price level (FTPL).

When fiscal policy is active, the Treasury no longer adjusts its tax and spending plans to ensure budget balance; instead, Equation 10 determines the price level as a function of the expected present value of all future surpluses. Equation 10, in this interpretation, is not a budget constraint; it is a debt valuation equation. According to Leeper’s classification, the price level is locally determinate if monetary policy is active and fiscal policy is passive, or if monetary policy is passive and fiscal policy is active.18
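
A back-of-the-envelope illustration (with made-up numbers, added here) shows how the valuation reading of Equation 10 works: given the outstanding nominal debt and an exogenously fixed path of real surpluses, the price level is whatever value equates the real value of the debt to the present value of the surpluses.

    # FTPL arithmetic with hypothetical numbers: the price level adjusts so that
    # B_t / P_t equals the present value of future real surpluses.
    B = 100.0                               # nominal government debt (dollars)
    r = 0.02                                # constant real interest rate (assumed)
    surpluses = [3.0] * 200                 # real surpluses s_tau, set exogenously
    pv = sum(s / (1 + r) ** k for k, s in enumerate(surpluses))
    P = B / pv                              # price level implied by Equation 10
    print(round(pv, 2), round(P, 3))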

The indeterminacy of the price level in the RA model is a problem, but there is at least a potential resolution. One can restrict attention to the central bank’s preferred steady state and assume that the policy mix is always a combination of one active and one passive policy, although, as pointed out by Benhabib, Schmitt-Grohé, and Uribe (2001), this resolution is restricted to linear models. The following section demonstrates that no such resolution is possible in the OLG model, where indeterminacy of equilibrium prices and quantities is more pervasive. In an OLG model calibrated to US data, monetary and fiscal policy can both be active at the same time, and yet economic fundamentals are insufficient to completely determine either absolute or relative prices.

Indeterminacy in Rational Expectations Models With OLG

Kehoe and Levine (1985) compared the differences in the determinacy properties of RA models and OLG models. They demonstrated that, in the OLG model, equilibria may be generically indeterminate of arbitrary degree.19 It was widely believed in the 1980s that the Kehoe-Levine result that equilibria in the OLG model are generically indeterminate had little relevance to practical models of the macroeconomy. As shown in recent research by Farmer and Zabczyk (2019), that belief is incorrect.

A Calibrated Example of an OLG Model With Indeterminacy

Farmer and Zabczyk (2019) provided an example of an OLG model in which people live for 62 periods and where the income profile of the people in the model is calibrated to US data using estimates by Guvenen et al. (2015). The model they presented has an indeterminate steady-state equilibrium where money has value. This property is important because it implies that a very standard macroeconomic model, when calibrated to actual data, is incapable of uniquely determining prices and quantities.

Farmer and Zabczyk assumed initially that monetary policy is passive and fiscal policy is active. In the NK model, that combination of policies would result in local determinacy of the monetary steady-state equilibrium. Instead, they found that their calibrated model displays not just one, but two, degrees of indeterminacy. One degree of indeterminacy would be sufficient to invalidate the FTPL. The fact that they found two degrees of indeterminacy implies that the price level is indeterminate even if monetary and fiscal policy are both active. These results are extremely damaging to a research agenda that attempts to use DSGE (dynamic stochastic GE) models to determine the price level as a function of economic fundamentals alone.

The rational expectations research school, as envisioned by Lucas and Sargent, was an attempt to explain expectations in terms of a narrowly defined set of fundamentals.20 The results of Kehoe and Levine (1985) and the Farmer-Zabczyk (2019) example throw doubt on the ability of this program to explain economic facts. The assumption that equilibrium is unique requires a very strong set of restrictions that are unlikely to be satisfied in the real world. It follows that preferences, endowments, and technologies are insufficient to explain how prices and quantities are determined in a market economy. We must also ask how beliefs are formed and how those beliefs independently influence economic outcomes.21

The Implications of Indeterminacy for Theories of Efficient Asset Markets

A large literature, beginning with works by Shiller (1981) and by LeRoy and Porter (1981), has found that asset prices fluctuate too much to be explained by subsequent fluctuations in dividend payments. There must, instead, be a substantial movement in the price of risk.22 Fluctuations in the price of risk are measured, in a rational expectations model, by variations in the Arrow security prices Q_H and Q_T in Equation 1. In the RA model, fluctuations in these prices are sometimes attributed to shocks to constraints on how much agents are allowed to borrow. In the OLG model, they can be fully explained as self-fulfilling prophecies, even in a model where there is a complete set of Arrow securities (Farmer, 2018).

Fama (1970) coined the term efficient markets hypothesis (EMH) to refer to the idea that the capital markets reflect all available information, and it is not possible, according to the EMH, to make money by buying and selling securities unless one has insider information. A large body of empirical evidence suggests that this hypothesis is at least approximately true, but it has nothing to do with the claim of GE theorists that markets are Pareto efficient.

The EMH refers to informational efficiency: the statement that there are no riskless arbitrage opportunities. Informational efficiency is a property of any economic model with a complete set of Arrow securities. The assumption that there are complete financial markets may or may not be a close approximation to the real world.23 Whatever position one takes in the debate over complete versus incomplete markets, the existence of second-degree indeterminacy in a calibrated model has serious implications for the proposition that the financial markets efficiently allocate capital to competing ends. The existence of indeterminate relative prices implies that, even when the capital markets are complete, they do not efficiently allocate capital to competing ends. Why might that be?

In the Farmer and Zabczyk (2019) example, the relative price of goods today for goods tomorrow is indeterminate, even when the central bank and the treasury each follow active policies. The individuals who inhabit this model are able to trade with each other and to write insurance contracts against every event that may occur in subsequent weeks. But each week, a new set of people arrive in the market. These people are unable to participate in insurance markets that open before they were born and their actions may differ across states of nature.

The important point here is that, even when there is a complete set of Arrow securities, that fact does not guarantee complete participation in the financial markets because people have finite lives. This was the main point of the seminal paper by Cass and Shell (1983) on sunspots. In research that builds on that idea, Farmer (2018) has shown that simple MGE models generate equilibria where there are substantial asset price fluctuations, and a large risk premium, even when there is no underlying uncertainty of any kind. Because people are assumed to be risk averse and these fluctuations are avoidable, they are necessarily Pareto inefficient.

Models of Steady-State Indeterminacy

The models of dynamic indeterminacy that arise from the OLG structure suffer from the same deficiencies as the models of dynamic indeterminacy that arise from models of increasing returns-to-scale. If the labor market is Walrasian, these models will generate business cycles as small fluctuations around a socially efficient steady state. That insight suggests that, if one seeks a micro-foundation for the Keynesian concept of involuntary unemployment, one must consider models where the quantity of labor traded each period is determined by some mechanism other than Walrasian market clearing. The following section elaborates on this theme.

Classical Search Theory as an Alternative Equilibrium Concept

The Walrasian auctioneer has been widely criticized, not just by non-economists and economists working outside the field, but also by Arrow, a leading GE theorist, who, along with Debreu and McKenzie, was the first to provide a rigorous mathematical proof of the existence of equilibrium (Arrow & Debreu, 1954; McKenzie, 1954). Criticisms of Walrasian equilibrium as a theory of market prices and quantities led to the development of a variety of alternative equilibrium concepts, including quantity-constrained equilibrium, asymmetric information equilibrium, contract theory, and search equilibrium. This article is limited by space considerations to a discussion of just one of these alternative concepts, search equilibrium.

Phelps co-edited an influential volume of articles, The Microeconomic Foundations of Employment and Inflation Theory (Phelps, 1972), which contained two of the first articles on the economics of search unemployment (Alchian, 1969; Mortensen, 1972). These papers were precursors to a large literature that Farmer (2016, p. 77) has referred to as classical search theory. Classical search theory was further developed by Diamond (1982) and Pissarides (1979), and in 2010 Diamond, Mortensen, and Pissarides were awarded the Nobel Prize in Economics “for their analysis of markets with search frictions.”

Classical search theory is an alternative equilibrium concept, distinct from Walrasian equilibrium. It sees the labor market as a dynamic, changing environment where people are constantly moving between the states of employment and unemployment. One could envisage a Walrasian model where unemployed individuals must allocate time between searching for a job and enjoying leisure and where firms must allocate the time of their workers between filling vacancies and producing goods. In a Walrasian equilibrium, the auctioneer would steer the economy toward an outcome in which unemployed and employed members of the labor force were each optimally allocating their time between these alternative activities.

Diamond, Mortensen, and Pissarides envisaged a different mechanism from Walrasian equilibrium to decide what happens each week. Instead of an auctioneer who sets prices, unemployed workers bump randomly into firms with vacant jobs. In a Walrasian market, search by an unemployed worker for a job, and search by a corporate recruiter for a worker, are distinct activities that would be associated with different prices. Instead, in search theory, there are not enough relative prices and, as a consequence, the labor market is incomplete, and the equilibrium unemployment rate is indeterminate.

If an unemployed worker meets a firm with a vacancy, the firm is willing to pay any wage less than or equal to the worker’s marginal product. The worker is willing to accept any wage greater than or equal to her reservation wage. To resolve this indeterminacy, classical search theory introduces a new parameter, the bargaining weight, which picks a wage somewhere between the worker’s reservation wage and her marginal product. Unless the bargaining weight takes a very specific value, the classical search theory equilibrium will not coincide with a social optimum and there may be too much or too little unemployment (Hosios, 1990).
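
A one-line numerical illustration (hypothetical numbers, added here): in the simplest static version of the bargain, a weight \theta splits the match surplus, and any \theta in (0, 1) delivers a different wage inside the bargaining set; a full search model would bargain over continuation values as well.

    # Static surplus-splitting sketch: the bargaining weight theta picks a wage
    # between the worker's reservation wage and her marginal product.
    marginal_product, reservation_wage = 10.0, 6.0
    for theta in (0.3, 0.5, 0.7):
        wage = theta * marginal_product + (1.0 - theta) * reservation_wage
        print(theta, wage)                  # 7.2, 8.0, 8.8 -- all inside [6, 10]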

Keynesian Search Theory as an Alternative Equilibrium Concept

The indeterminacy school uses a different approach to resolve the indeterminacy of search equilibrium. Farmer (2016, p. 77) called this alternative Keynesian search theory.24 Instead of assuming that firms bargain with workers, Keynesian search theory assumes that firms employ enough workers to produce the quantity of goods demanded by consumers. This quantity depends on consumers’ wealth, which is itself determined by the value of their assets. Farmer posited that asset market participants form beliefs about the price of shares in the stock market, and that, in equilibrium, these beliefs are validated by the actions of future asset market participants.

In the data, there is a strong correlation between unemployment and the stock market (Farmer, 2012b, 2015). If classical search theory is correct, movements in the stock market are caused by the rational expectations of market participants that there will be future movements in fundamentals. For example, a future court decision might give unions more power and increase the bargaining power of workers. If Keynesian search theory is correct, the direction of causation is reversed. It is not movements in the bargaining weight that cause movements in the unemployment rate and consequent movements in asset prices. It is movements in self-fulfilling beliefs about asset prices that cause variations in demand and subsequent variations in employment. The bargaining weight, in this interpretation of the facts, is endogenous.

Keynesian Search Theory and Indeterminacy in Macroeconomics

Models with dynamic indeterminacy have similar implications for fiscal and monetary policy to those of classical and NK DSGE models. Markets, left to themselves, may misallocate resources, but these misallocations are not of the orders of magnitude that characterize real-world financial crises. In contrast, the shift from Walrasian equilibrium to Keynesian search equilibrium is a far more plausible candidate for a micro-founded theory of major recessions.

In Keynesian search theory, if asset market participants stubbornly persist in believing that the stock market has low value, those beliefs will be associated with a permanently elevated unemployment rate. The indeterminacy of equilibrium is not confined to many dynamic paths converging on an approximately efficient steady state. Instead, in Keynesian search theory, pessimistic expectations can steer the economy into one of many low-level equilibrium traps in which there is a permanently elevated unemployment rate.

The theoretical difference between models with an isolated determinate steady-state equilibrium and models with a continuum of contiguous steady-state equilibria has important implications for the time series properties of data. If beliefs about the value of the stock market can wander randomly, as they do in real-world data, so can the unemployment rate. Keynesian search theory predicts that persistent low-frequency movements in the stock market cause persistent low-frequency movements in the unemployment rate.

The Stock Market and the Unemployment Rate

The theoretical possibilities thrown up by Keynesian search theory are consistent with the behavior of unemployment and asset prices that we see in data. Figure 4 shows that there was a close connection between the stock market and the unemployment rate during the Great Depression and in the years up to the start of World War II. If the labor market is Walrasian, the mass unemployment associated with the Great Depression must be explained by a shift in the preference for leisure. Or as Modigliani quipped, the Great Depression was a “sudden attack of contagious laziness” (Modigliani, 1977, p. 6). This seems implausible.

Figure 4. Unemployment and the stock market in the United States, 1928–1942.

Source: R. E. Farmer, Prosperity for All, Figure 7.2, p. 99. New York, NY: Oxford University Press. Used by Permission.

The connection of the unemployment rate to the stock market documented in Figure 4 is not an experience confined to the Great Depression in the United States. It is a universal connection that holds in post-WWII US data (Farmer, 2012b, 2015), German data (Fritsche & Pierdzioch, 2016), and in a panel of industrialized and nonindustrialized countries (Pan, 2018). In all of the studies cited above, researchers have provided support for the finding that the unemployment rate and the stock market are co-integrated random walks.25 Unemployment and the real value of assets each wander randomly but they do not wander too far from each other. The Keynesian version of search theory explains these facts as movements among a continuum of possible steady-state equilibria, caused by self-fulfilling shifts in beliefs.
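
To see what co-integration means in this context, the following simulation (purely illustrative, not estimated from any of the data sets cited above) generates two series that share a single random-walk “belief” trend: each series is nonstationary, yet a linear combination of the two is stationary.

    import numpy as np

    rng = np.random.default_rng(0)
    T = 600
    trend = np.cumsum(rng.normal(size=T))         # common stochastic trend
    stock = trend + rng.normal(0.0, 0.5, size=T)  # stands in for the log real stock market
    unemp = -trend + rng.normal(0.0, 0.5, size=T) # stands in for the unemployment rate

    gap = stock + unemp                           # a co-integrating combination
    print("dispersion of each series:", round(stock.std(), 2), round(unemp.std(), 2))
    print("dispersion of the combination:", round(gap.std(), 2))   # much smaller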

Keynesian Economics Without the Phillips Curve

This article reviews models that display dynamic and steady-state indeterminacy, but as macroeconomists are fond of saying, “it takes a model to beat a model” (Sargent, 2011, p. 198). The three-equation NK model has been used as a vehicle to understand how the interest rate, the inflation rate, and real GDP are related to each other. How might that model be amended if one accepts the indeterminacy school in macroeconomics? In two recent papers, Farmer and Nicolò (2018, 2019) answer that question. They run a horse race of the three-equation NK model, closed with the Phillips curve, against an alternative Farmer monetary model (FM model) that replaces the new Keynesian Phillips curve with a parameterized belief function. This alternative model retains Equations 7 and 8 but replaces Equation 9 with

E_t[x_{t+1}] = x_t + u_t^S, \qquad (9a)

where x_t = (y_t + p_t) - (y_{t-1} + p_{t-1}) is the growth rate of nominal GDP. This equation is an example of beliefs as a new fundamental. The belief function modeled in Equation 9a captures the idea that people expect nominal income growth next year to equal nominal income growth this year. The model, closed in this way, allows beliefs to wander randomly, and its reduced-form representation is a system of random walks in which inflation, the money interest rate, and the deviation of GDP from potential exhibit nonstationary but co-integrated behavior. The FM model displays dynamic indeterminacy. This feature allows it to capture the fact that prices are sticky in the data. And it displays steady-state indeterminacy. This feature allows it to capture the fact that the unemployment rate is highly persistent and co-integrated with nominal GDP growth, a proxy for movements in real wealth.

The NK model assumes that the steady-state equilibrium is determinate. The FM model allows the steady-state equilibrium to be indeterminate. To understand which model better explains the data, Farmer and Nicolò used Bayesian statistics to compare the posterior odds ratios of the two models. The reduced form of the NK model is a stationary vector autoregression. The reduced form of the FM model is a nonstationary vector error correction model. The NK model restricts the data to be stationary. The FM model allows the data to be nonstationary but co-integrated. Farmer and Nicolò showed that in US, UK, and Canadian data, the posterior odds ratio favors the FM model by a large margin. It takes a model to beat a model. The indeterminacy school wins the day by a decisive margin.

Acknowledgments

The author thanks Kenneth Kuttner for suggesting an article on the indeterminacy school in macroeconomics. The author also thanks Kazuo Nishimura and Makoto Yano, editors of the International Journal of Economic Theory, and Costas Azariadis, Jess Benhabib, and all those who contributed to a newly published Festschrift in honor of his contributions to economics, including his body of work on indeterminacy in macroeconomics (International Journal of Economic Theory, 15(1), 2019). Thanks also to two reviewers and to Jean Philippe Bouchaud for their comments on a first draft of this article, and especially to C. Roxanne Farmer for helpful suggestions.

Further Reading

  • Farmer, R. E. (1999). The macroeconomics of self-fulfilling prophecies (2nd ed.). Cambridge, MA: MIT Press.
  • Pearce, K. A., & Hoover, K. D. (1995). After the revolution: Paul Samuelson and the textbook Keynesian model. History of Political Economy, 27(5), 183–216.
  • Shiller, R. J. (2019). Narrative economics: How stories go viral and drive major economic events. Princeton, NJ: Princeton University Press.

References

  • Alchian, A. A. (1969). Information costs, pricing and resource unemployment. Economic Inquiry, 7(2), 109–128.
  • Arrow, K. J. (1964). The role of securities in the optimal allocation of risk bearing. Review of Economic Studies, 31, 91–96.
  • Arrow, K. J., & Debreu, G. (1954). Existence of an equilibrium for a competitive economy. Econometrica, 22(3), 265–290.
  • Azariadis, C. (1981). Self-fulfilling prophecies. Journal of Economic Theory, 25(3), 380–396.
  • Azariadis, C., & Guesnerie, R. (1986). Sunspots and cycles. Review of Economic Studies, 53(5), 725–738.
  • Benhabib, J., & Farmer, R. E. (1994). Indeterminacy and increasing returns. Journal of Economic Theory, 63, 19–46.
  • Benhabib, J., & Farmer, R. E. (1999). Indeterminacy and sunspots in macroeconomics. In J. B. Taylor & M. Woodford (Eds.), Handbook of macroeconomics (pp. 387–448). New York, NY: North Holland.
  • Benhabib, J., & Farmer, R. E. (2000). The monetary transmission mechanism. Review of Economic Dynamics, 3, 523–550.
  • Benhabib, J., Schmitt-Grohé, S., & Uribe, M. (2001). The perils of Taylor rules. Journal of Economic Theory, 96(1–2), 40–69.
  • Bianchi, F., & Nicolò, G. (2017). A generalized approach to indeterminacy in linear rational expectations models. Working Paper No. 23521. Cambridge, MA: National Bureau of Economic Research.
  • Brock, W. A. (1974). Money and growth: The case of long run perfect foresight. International Economic Review, 15(3), 750–777.
  • Cass, D., & Shell, K. (1983). Do sunspots matter? Journal of Political Economy, 91, 193–227.
  • Cherrier, B., & Saïdi, A. (2018). The indeterminate fate of sunspot theory in economics (1972–2014). History of Political Economy, 50(3), 425–481.
  • Cochrane, J. (2011). Presidential address: Discount rates. The Journal of Finance, 66(4), 1047–1108.
  • De Vroey, M. (2016). A history of macroeconomics. Cambridge, UK: Cambridge University Press.
  • Debreu, G. (1959). Theory of value: An axiomatic analysis of economic equilibrium (Cowles Foundation Monograph 17). New Haven, CT: Yale University Press.
  • Diamond, P. (1982). Aggregate demand management in search equilibrium. Journal of Political Economy, 90(5), 881–894.
  • Fama, E. F. (1970). Efficient capital markets: A review of theory and empirical work. Journal of Finance, 25, 383–417.
  • Farmer, R. E. (1991). Sticky prices. Economic Journal, 101(409), 1369–1379.
  • Farmer, R. E. (1993). The macroeconomics of self-fulfilling prophecies (1st ed.). Cambridge, MA: MIT Press.
  • Farmer, R. E. (2008). Aggregate demand and supply. International Journal of Economic Theory, 4, 77–93.
  • Farmer, R. E. (2010a). Expectations, employment and prices. New York, NY: Oxford University Press.
  • Farmer, R. E. (2010b). How the economy works: Confidence, crashes and self-fulfilling prophecies. New York, NY: Oxford University Press.
  • Farmer, R. E. (2010c). How to reduce unemployment: A new policy proposal. Journal of Monetary Economics: Carnegie Rochester Conference Series on Public Policy, 57(5), 557–572.
  • Farmer, R. E. (2012a). Confidence crashes and animal spirits. Economic Journal, 122, 155–172.
  • Farmer, R. E. (2012b). The stock market crash of 2008 caused the Great Recession: Theory and evidence. Journal of Economic Dynamics and Control, 36, 696–707.
  • Farmer, R. E. (2013). Animal spirits, financial crises and persistent unemployment. Economic Journal, 123(568), 317–340.
  • Farmer, R. E. (2014). The evolution of endogenous business cycles. Macroeconomic Dynamics, 20(2), 554–557.
  • Farmer, R. E. (2015). The stock market crash really did cause the Great Recession. Oxford Bulletin of Economics and Statistics, 77(5), 617–633.
  • Farmer, R. E. (2016). Prosperity for all: How to prevent financial crises. New York, NY: Oxford University Press.
  • Farmer, R. E. (2018). Pricing assets in a perpetual youth model. Review of Economic Dynamics, 30, 106–124.
  • Farmer, R. E., & Guo, J. T. (1994). Real business cycles and the animal spirits hypothesis. Journal of Economic Theory, 63, 42–73.
  • Farmer, R. E., Khramov, V., & Nicolò, G. (2015). Solving and estimating indeterminate DSGE models. Journal of Economic Dynamics and Control, 54, 17–36.
  • Farmer, R. E., & Nicolò, G. (2018). Keynesian economics without the Phillips curve. Journal of Economic Dynamics and Control, 89, 137–150.
  • Farmer, R. E., & Nicolò, G. (2019). Some international evidence for Keynesian economics without the Phillips curve. Discussion Paper No. 13655. London, UK: Center for Economic Policy Research.
  • Farmer, R. E., & Platonov, K. (2019). Animal spirits in a monetary economy. European Economic Review, 115, 60–77.
  • Farmer, R. E., & Woodford, M. (1984). Self-fulfilling prophecies and the business cycle. Caress Working Paper 84–12. Philadelphia, PA: University of Pennsylvania.
  • Farmer, R. E., & Zabczyk, P. (2019). A requiem for the fiscal theory of the price level. Discussion Paper No. 13432. London, UK: Center for Economic Policy Research.
  • Fritsche, U., & Pierdzioch, C. (2016). Animal spirits, the stock market and the unemployment rate: Some evidence for German data. DEP (Socioeconomics) Discussion Papers, Macroeconomics and Finance Series. Hamburg, Germany: University of Hamburg.
  • Gelain, P., & Guerrazzi, M. (2010). A DSGE model from the old Keynesian economics: An empirical investigation. Centre for Dynamic Macroeconomic Analysis Working Paper Series No. 14. St. Andrews, Scotland: University of St. Andrews.
  • Grandmont, J. (1985). On endogenous competitive business cycles. Econometrica, 53(5), 995–1045.
  • Grandmont, J. M. (1977). Temporary general equilibrium theory. Econometrica, 45(3), 535–572.
  • Guerrazzi, M. (2011). Search and stochastic dynamics in the old Keynesian economics: A rationale for the Shimer puzzle. Metroeconomica, 62(4), 561–586.
  • Guerrazzi, M. (2012). The “Farmerian” approach to ending finance-induced recession: Notes on stability and dynamics. Economic Notes, 41, 81–89.
  • Guvenen, F., Karahan, F., Ozkan, S., & Song, J. (2015). What do data on millions of U.S. workers reveal about life-cycle earnings risk? Working Paper No. 20913. Cambridge, MA: National Bureau of Economic Research.
  • Hicks, J. R. (1939). Value and capital (2nd ed.). Oxford, UK: The Clarendon Press.
  • Hosios, A. J. (1990). On the efficiency of matching and related models of search and unemployment. Review of Economic Studies, 57(2), 279–298.
  • Kehoe, T. J., & Levine, D. K. (1985). Comparative statics and perfect foresight in infinite horizon economies. Econometrica, 53, 433–453.
  • Keynes, J. M. (1936). The general theory of employment, interest and money. London, UK: MacMillan.
  • King, R. G., Plosser, C. I., & Rebelo, S. T. (1988). Production, growth and business cycles: I. The basic neoclassical model. Journal of Monetary Economics, 21(2–3), 195–232.
  • Knight, F. H. (1921). Risk, uncertainty and profit. Boston, MA: Houghton Mifflin (Hart, Schaffner and Marx Prize Essays).
  • Kocherlakota, N. (2012). Incomplete labor markets (Mimeo). Federal Reserve Bank of Minneapolis.
  • Kydland, F. E., & Prescott, E. C. (1982). Time to build and aggregate fluctuations. Econometrica, 50, 1345–1370.
  • Leeper, E. (1991). Equilibria under “active” and “passive” monetary and fiscal policies. Journal of Monetary Economics, 27, 129–147.
  • LeRoy, S. F., & Porter, R. D. (1981). The present value relation: Tests based on implied variance bounds. Econometrica, 49, 555–574.
  • Long, J. B., & Plosser, C. I. (1983). Real business cycles. Journal of Political Economy, 91(1), 39–69.
  • Lubik, T. A., & Schorfheide, F. (2004). Testing for indeterminacy: An application to U.S. monetary policy. American Economic Review, 94(1), 190–217.
  • Lucas, R. E., Jr. (1987). Models of business cycles. Oxford, UK: Basil Blackwell.
  • Lucas, R. E., Jr. (2003). Macroeconomic priorities. American Economic Review, 93(1), 1–14.
  • Lucas, R. E., Jr., & Sargent, T. J. (1981). After Keynesian macroeconomics. In R. E. Lucas, Jr., & T. J. Sargent (Eds.), Rational expectations and econometric practice (Vol. 1, pp. 295–320). Minneapolis: University of Minnesota Press.
  • Lucas, R. E., Jr., & Stokey, N. L. (1987). Money and interest in a cash-in-advance economy. Econometrica, 55(3), 491–513.
  • Mankiw, N. G., & Romer, D. (1998). New Keynesian economics (Vol. 1). Cambridge, MA: MIT Press.
  • Matheny, K. (1998). Non-neutral responses to money supply shocks when consumption and leisure are Pareto substitutes. Economic Theory, 11, 379–402.
  • McCallum, B. (1981). Price level determinacy with an interest rate policy rule and rational expectations. Journal of Monetary Economics, 8(3), 319–329.
  • McKenzie, L. W. (1954). On equilibrium in Graham’s model of world trade and other competitive systems. Econometrica, 22(2), 147–161.
  • Modigliani, F. (1977). The monetarist controversy or, should we forsake stabilization policies? American Economic Review, 67, 1–19.
  • Mortensen, D. (1972). A theory of wage and employment dynamics. In E. Phelps (Ed.), Microeconomic foundations of employment and inflation theory (pp. 176–211). New York, NY: W. W. Norton.
  • Negishi, T. (1960). Welfare economics and existence of an equilibrium for a competitive economy. Metroeconomica, 12(2–3), 92–97.
  • Pan, W.-F. (2018). Does the stock market really cause unemployment? A cross-country analysis. The North American Journal of Economics and Finance, 44, 34–43.
  • Pareto, V. (1906). Manuel d’économie politique. Paris, France: Marcel Girard.
  • Patinkin, D. (1956). Money, interest and prices: An integration of monetary and value theory. Evanston, IL: Row, Peterson.
  • Phelps, E. S. (1972). Microeconomic foundations of employment and inflation theory. New York, NY: W. W. Norton.
  • Pissarides, C. A. (1979). Job matchings with state employment agencies and random search. Economic Journal, 89(356), 818–833.
  • Plotnikov, D. (2019). Hysteresis in unemployment: A confidence channel. International Journal of Economic Theory, 15(1), 109–127.
  • Radner, R. (1972). Existence of equilibrium of plans, prices and price expectations in a sequence of markets. Econometrica, 40(2), 289–303.
  • Samuelson, P. A. (1958). An exact consumption-loan model of interest with or without the social contrivance of money. Journal of Political Economy, 66(6), 467–482.
  • Sargent, T. J. (2011). Where to draw lines: Stability versus efficiency. Economica, 78, 197–214.
  • Sargent, T. J., & Wallace, N. (1975). “Rational” expectations, the optimal monetary instrument, and the optimal money supply rule. Journal of Political Economy, 83(2), 241–254.
  • Shell, K. (1971). Notes on the economics of infinity. Journal of Political Economy, 79(5), 1002–1011.
  • Shell, K. (1977). Monnaie et allocation intertemporelle [Money and intertemporal allocation] (Mimeo). Malinvaud Seminar. Paris, France: CNRS.
  • Shiller, R. J. (1981). Do stock prices move too much to be justified by subsequent changes in dividends? American Economic Review, 71, 421–436.
  • Sims, C. A. (1980). Macroeconomics and reality. Econometrica, 48(1), 1–48.
  • Sims, C. A. (1989). Models and their uses. American Journal of Agricultural Economics, 71(2), 489–494.
  • Sims, C. A. (1994). A simple model for study of the determination of the price level and the interaction of monetary and fiscal policy. Economic Theory, 4, 381–399.
  • Spear, S. E. (1984). Rational expectations in the overlapping generations model. Journal of Economic Theory, 34(2), 251–275.
  • Spear, S. E., & Srivastava, S. (1986). Markov rational expectations equilibria in an overlapping generations model. Journal of Economic Theory, 38(1), 35–62.
  • Taylor, J. B. (1999). Monetary policy rules. Chicago, IL: University of Chicago Press.
  • Walras, L. (1899, English Translation 1954). Éléments d’économie politique pure (Elements of pure economics, W. Jaffé, Trans.). New York, NY: A. M. Kelly.
  • Wen, Y. (1998). Capital utilization under increasing returns to scale. Journal of Economic Theory, 81(1), 7–36.
  • Williamson, S. D. (2015). Keynesian inefficiency and optimal policy: A new monetarist approach. Journal of Money, Credit and Banking, 47(S2), 197–222.
  • Woodford, M. (1995). Price-level determinacy without control of a monetary aggregate. Carnegie-Rochester Conference Series on Public Policy, 43, 1–46.

Notes

  • 1. Azariadis (1981) used the term self-fulfilling prophecy to describe this idea. Farmer and Woodford (1984) extended the concept to examples of the kind of equilibria discussed in this article, in which equilibria are randomizations across indeterminate sequences of perfect foresight equilibria. Shell (1977) and Cass and Shell (1983) referred to the phenomenon of random allocations driven solely by beliefs as sunspots. In Paris, Grandmont (1985) was working on endogenous cycles and Guesnerie collaborated with Azariadis to explore the relationship between sunspots and cycles (Azariadis & Guesnerie, 1986). Other early models that studied self-fulfilling beliefs in overlapping generations models include papers by Spear (1984) and Spear and Srivastava (1986). Farmer’s (1993) graduate textbook summarized these ideas.

  • 2. The initial work on models with increasing returns-to-scale (Benhabib & Farmer, 1994; Farmer & Guo, 1994) was criticized for assuming a degree of increasing returns that some considered unrealistic. In a response to the critics, Wen (1998) showed that by assuming that capital utilization is variable over the business cycle, the increasing-returns explanation of endogenous fluctuations is fully consistent with empirical evidence.

  • 3. An allocation of commodities to people is Pareto efficient if there is no way of rearranging them that makes one person better off without making someone else worse off (Pareto, 1906). It would be possible for models with increasing returns-to-scale to have very different welfare implications from RBC models, but in practice the welfare losses that occur in these models are small. The fact that welfare losses are small in many business cycle models was pointed out by Lucas in his 1987 book Models of Business Cycles (Lucas, 1987) and was updated in 2003 in an article in the American Economic Review (Lucas, 2003) to reflect a new generation of models that include incomplete markets and possibly sticky prices. Welfare losses are small in models of dynamic indeterminacy, as they are in RBC models, because both kinds of models generate fluctuations around a socially efficient steady state in which the demand and supply of labor are always equal.

  • 4. See Farmer (2016) and the references therein. Papers that rely on static indeterminacy include Farmer (2008, 2010c, 2012a, 2013), Gelain and Guerrazzi (2010), Guerrazzi (2011, 2012), Kocherlakota (2012), Williamson (2015), Farmer and Nicolò (2018, 2019), Plotnikov (2019), and Farmer and Platonov (2019). This literature is discussed in De Vroey (2016).

  • 5. For an introduction to the history of these ideas, see Farmer (2010b, p. 68).

  • 6. See Kehoe and Levine (1985) for a proof of this assertion and a discussion of the difference between the two classes of model.

  • 7. An allocation is Pareto optimal, named after the Italian scholar Vilfredo Pareto, if there is no way of reorganizing the social allocation of commodities to make at least one person better off without simultaneously making someone else worse off.

  • 8. Following the publication of Samuelson’s paper on the OLG model (Samuelson, 1958), it was widely believed that the difference between RA and OLG models was a consequence of their different timing assumptions. Shell (1971) showed that even if everyone who will ever be born can participate in a market at the beginning of time, the OLG model still leads to inefficient equilibria.

  • 9. Hicks (1939) provided the first developed account of TE theory. Later developments include those by Patinkin (1956), Radner (1972), and Grandmont (1977). Although for Hicks a period was a week, there is nothing special about that length of time; more commonly, the period of a TE model is identified with the frequency of data availability, which is often a quarter or a year.

  • 10. The notation $x^{T}y$, for two $n \times 1$ vectors $x$ and $y$, denotes the inner product of $x$ and $y$, where $x^{T}$ is the transpose of $x$.

  • 11. The space of nonnegative bounded sequences is defined as $B \equiv \left\{ \{p_t\}_{t=1}^{\infty} \,\middle|\, p_t \in \mathbb{R}_{+},\ \lVert p_t \rVert \le 1, \text{ for all } t \right\}$.

  • 12. The assumptions required to generate this result are strong; for example, the uniqueness result does not survive the introduction of inflationary finance to pay for budget deficits. It has been known at least since the work of Brock (1974) that MGE models have at least two steady states and one of these steady states is generically indeterminate.

  • 13. Although there have been attempts to deal with dynamic indeterminacy in estimated NK models, notably in work by Lubik and Schorfheide (2004), the core model still adopts a menu-cost approach to sticky prices and, whenever possible, introduces assumptions to render a single stationary equilibrium locally unique. Farmer, Khramov, and Nicolò (2015) and Bianchi and Nicolò (2017) provided methods to solve and estimate GE models with indeterminate equilibria using standard software packages.

  • 14. It is defined as $Q_t^{\tau} = \prod_{S=t}^{\tau-1} (1+i_S)\,\frac{P_S}{P_{S+1}}$, where $P_t$ is the price level and $i_t$ is the money interest rate.

  • 15. Monetary GE models assumed initially that the central bank adopts a money supply rule in which the quantity of money grows at a fixed rate. This, for example, is the assumption in Brock (1974). In part, this assumption was motivated by the fact that monetary models where the central bank pegs the money interest rate lead to price-level indeterminacy (Sargent & Wallace, 1975). Since central banks appear to use the interest rate as their main instrument to influence the economy, the assumption that the central bank sets a money growth target was problematic for attempts to build a realistic monetary theory. McCallum (1981) showed that determinacy of the price level is restored if the central bank adjusts the interest rate aggressively enough in response to inflation, where “aggressively enough” is defined as an interest rate response coefficient, $\eta_{\pi}$, greater than 1.
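    A stylized example may help fix ideas (this is a sketch for illustration only; the exact functional form, and the symbols $\bar{i}$ and $\pi^{*}$, are assumptions, not taken from the works cited here): a rule of the form $i_t = \bar{i} + \eta_{\pi}(\pi_t - \pi^{*})$, where $\pi_t$ is inflation and $\pi^{*}$ is the inflation target, delivers a locally unique price level when $\eta_{\pi} > 1$, because a one-percentage-point rise in inflation then raises the nominal interest rate by more than one percentage point.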

  • 16. It will be assumed, for the purpose of exposition, that $\bar{y}_t$ is constant. The model is easily adapted to allow for growth in potential output.

  • 17. Lubik and Schorfheide (2004) showed in an estimated GE model that policy was passive before 1979 and active afterward. The period from 2009 to 2017 is characterized by an interest rate peg at, effectively, zero.

  • 18. To some, the classification into active and passive rules may appear natural; to others, it may appear artificial. Whatever one’s view of the elegance of the theory, it is not sufficient to determine the price level globally, even in the NK model with the correct combination of active and passive policies. Benhabib, Schmitt-Grohé, and Uribe (2001) pointed out that a linear Taylor rule is inconsistent with the fact that the nominal interest rate cannot be negative. When the model is amended to allow the Taylor rule to respect the zero lower bound, the model always has at least two steady-state equilibria. If the Taylor rule is active at the steady state that the central bank prefers, there will always exist a second steady state with a low and possibly negative real interest rate at which the Taylor rule is passive and the initial price level is indeterminate.
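    A minimal sketch of why the second steady state arises, assuming for simplicity a constant real interest rate $r > 0$ and a rule truncated at zero (both simplifications are mine, not the exact setup of Benhabib, Schmitt-Grohé, and Uribe): in any steady state the Fisher equation requires $i = r + \pi$, while the policy rule sets $i = \max\{0,\ r + \pi^{*} + \eta_{\pi}(\pi - \pi^{*})\}$. With $\eta_{\pi} > 1$ the two schedules intersect at the intended steady state $\pi = \pi^{*}$ and again at the deflationary rate $\pi = -r$, where the zero bound binds and the rule is passive.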

  • 19. Kehoe and Levine’s work was largely ignored by macroeconomists. This was due, in part, to resistance from leading figures in the development of the rational expectations school, who advocated for a research agenda in which expectations are endogenously determined by preferences, endowments, and technology. See Cherrier and Saïdi (2018) for a discussion of the history of these ideas.

  • 20. See the agenda described by Robert Lucas and Thomas J. Sargent in their essay, “After Keynesian Macroeconomics” (Lucas & Sargent, 1981).

  • 21. To resolve the indeterminacy problem in MGE models, Farmer (1993) suggested the introduction of a new independent equation to characterize the way people form their beliefs about future variables. That approach has been shown empirically to outperform the NK model (Farmer & Nicolò, 2018, 2019).
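    A minimal sketch of what such an equation might look like (the specific functional form here is an illustrative assumption, not a claim about the cited papers): if $x_t$ denotes the growth rate of nominal income, beliefs could be modeled as a martingale, $\mathbb{E}_t[x_{t+1}] = x_t$, with an equation of broadly this kind taking the place of the New Keynesian Phillips curve in closing the model.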

  • 22. This was the message that John Cochrane pushed home in his Presidential Address to the American Finance Association (Cochrane, 2011).

  • 23. Some economists point to the costs of establishing contingent markets. They argue that it is pure fiction to think that real-world financial markets are complete, and they study economic models with fewer Arrow securities than states of nature. Others argue that the world is well approximated by the complete-markets assumption. In their view, a large divergence from complete markets would create a big incentive for an arbitrageur to create a new security.

  • 24. The term Keynesian search theory was coined by Farmer (Farmer, 2016, p. 77) to differentiate it from classical search models closed with Nash bargaining. Related papers that use this approach include Farmer (2010c, 2012a, 2013), Gelain and Guerrazzi (2010), Guerrazzi (2011, 2012), and Williamson (2015). In earlier work, Farmer (2008) referred to models where the labor market is closed by Keynesian search theory as Old Keynesian economics. Farmer (2010a, Chapter 6) uses the Keynesian search model to understand the Great Depression.

  • 25. The unemployment rate cannot be an exact random walk because it is bounded between 0 and 1. The papers cited here apply a logistic transformation that maps the unemployment rate into the real line. It is the transformed unemployment rate that passes nonstationarity tests in the data.
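    For concreteness, one standard transformation of this kind (shown as an illustration; the cited papers may use a different variant) is the logit map $u_t^{*} = \ln\bigl(u_t/(1-u_t)\bigr)$, which sends an unemployment rate $u_t \in (0,1)$ onto the whole real line, so that the transformed series $u_t^{*}$ can follow a random walk even though $u_t$ itself is bounded.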