Global Spillovers in a Low Interest Rate Environment
Sushant Acharya and Paolo Pesenti
Global policy spillovers can be defined as the effect of policy changes in one country on economic outcomes in other countries. The literature has mainly focused on monetary policy interdependencies and has identified three channels through which policy spillovers can materialize. The first is the expenditure-shifting channel—a monetary expansion in one country depreciates its currency, making its goods cheaper relative to those in other countries and shifting global demand toward its tradable goods. The second is the expenditure-changing channel—expansionary monetary policy in one country raises both domestic and foreign expenditure. The third is the financial spillovers channel—expansionary monetary policy in one country eases financial conditions in other economies. The literature generally finds that the net transmission effect is positive but small. However, estimated spillovers vary widely across countries and over time. In the aftermath of the Great Recession, the policy debate has devoted special attention to the possibility that the magnitude and sign of international spillovers may have changed in an environment of low interest rates worldwide, as the expenditure-shifting channel becomes more relevant when the effective lower bound reduces the effectiveness of conventional monetary policies.
Health and Economic Growth
David E. Bloom, Michael Kuhn, and Klaus Prettner
The strong observable correlation between health and economic growth is crucial for economic development and sustained well-being, but the underlying causality and mechanisms are difficult to conceptualize. Three issues are of central concern. First, assessing and disentangling causality between health and economic growth are empirically challenging. Second, the relation between health and economic growth changes over the process of economic development. In less developed countries, poor health often reduces labor force participation, particularly among women, and deters investments in education such that fertility stays high and the economy remains trapped in a stagnation equilibrium. By contrast, in more developed countries, health investments primarily lead to rising longevity, which may not significantly affect labor force participation and workforce productivity. Third, different dimensions of health (mortality vs. morbidity, children’s and women’s health, and health at older ages) relate to different economic effects. By changing the duration and riskiness of the life course, mortality affects individual investment choices, whereas morbidity relates more directly to work productivity and education. Children’s health affects their education and has long-lasting implications for labor force participation and productivity later in life. Women’s health is associated with substantial intergenerational spillover effects and influences women’s empowerment and fertility decisions. Finally, health at older ages has implications for retirement and care.
The History of Central Banks
The historical evolution of the role of central banks has been shaped by two major characteristics of these institutions: they are banks, and they are linked—in various legal, administrative, and political ways—to the state. The history of central banking is thus an analysis of how central banks have ensured, or failed to ensure, the stability of the value of money and the credit system while maintaining supportive or conflicting relationships with governments and private banks. Opening the black box of central banks is necessary for understanding the political economy issues that emerge from the implementation of monetary and credit policy, and why, in addition to their macroeconomic effects, these policies have major consequences for the structure of financial systems and the financing of public debt. It is also important to read the history of the evolution of central banks since the end of the 19th century as a game of countries wanting to adopt a dominant institutional model. Each historical period was characterized by a dominant model that other countries imitated—or pretended to imitate while retaining substantial national characteristics—with a view to greater international political and financial integration. Recent academic research has explored several issues that underline the importance of central banks to the development of the state and the financial system, and their influence on macroeconomic fluctuations: (a) the origin of central banks; (b) their role as lender of last resort and banking supervisor; (c) the justifications and consequences of the domestic macroeconomic policy objectives—inflation, output, etc.—of central banks (monetary policy); (d) the special loans of central banks and their role in the allocation of credit (credit policy); (e) the legal and political links between the central bank and the government (independence); (f) the role of central banks concerning exchange rates and the international monetary system; and (g) the production of economic research and statistics.
Home Bias in International Macroeconomics
Home bias in international macroeconomics refers to the fact that investors around the world tend to allocate the majority of their portfolios to domestic assets, despite the potential benefits of international diversification. This phenomenon is observed across countries, over time, and across both equity and bond portfolios. The bias toward domestic assets tends to be larger in developing countries than in developed economies, with Europe characterized by the lowest equity home bias and Central and South America by the highest. In addition, despite the secular decline in the level of equity home bias over time in all countries and regions, home bias remains a robust feature of the data. Whether home bias is a puzzle depends on the portfolio allocation that one uses as a theoretical benchmark. For instance, home bias in equity portfolios is a puzzle when assessed through the lens of a simple international capital asset pricing model (CAPM) with homogeneous investors. This model predicts that investors should hold the world market portfolio, namely a portfolio in which the share of domestic assets equals the share of those assets in world market capitalization. For instance, since the share of US equity in world capitalization in 2016 was 56%, US investors should have allocated 56% of their equity portfolio to local assets and the remaining 44% to foreign equities. Instead, foreign equity comprised just 23% of the US equity portfolio in 2016—hence the equity home bias. An alternative portfolio benchmark comes from theories that emphasize the costs of trading assets in international financial markets. These include transaction and information costs, differential tax treatments, and, more broadly, differences in institutional environments. This research, however, has so far been unable to reach a consensus on the explanatory power of such costs.
Yet another theory argues that equity home bias can arise from the hedging properties of local equity. In particular, local equity can provide insurance against real exchange rate risk and non-tradable income risk (such as labor income risk), so a preference for home equities is not a puzzle but rather an optimal response to such risks. These theories, and the main advances and results in the macroeconomic literature on home bias, are discussed in this article. It starts by presenting empirical facts on the extent and dynamics of equity home bias in developed and developing countries. It then shows how home bias can arise as an equilibrium outcome of hedging demand in a model with real exchange rate and non-tradable labor income risk. Since solving models with portfolio choice is challenging, recent advances in solving such models are also outlined. Integrating portfolio dynamics into models that can generate realistic asset price and exchange rate dynamics remains a fruitful avenue for future research. A discussion of additional open questions in this research agenda and suggestions for further reading are also provided.
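As a concrete illustration of the CAPM benchmark comparison above, the commonly used home bias index (one minus the ratio of the actual foreign share of a portfolio to the benchmark foreign share implied by world market weights) can be computed from the 2016 US figures cited in the abstract. This is a minimal sketch under those figures; the function name is illustrative, not drawn from the article.

```python
# Sketch of the standard equity home bias index implied by the CAPM
# world-market benchmark: 0 means no bias (holdings match world market
# weights); 1 means a purely domestic portfolio.

def equity_home_bias(foreign_share_portfolio: float,
                     foreign_share_world: float) -> float:
    """Return 1 minus the ratio of actual to benchmark foreign holdings."""
    return 1.0 - foreign_share_portfolio / foreign_share_world

# US equity was ~56% of world capitalization in 2016, so the benchmark
# foreign share is ~44%; actual foreign holdings were ~23% of US
# equity portfolios.
print(round(equity_home_bias(0.23, 0.44), 2))  # about 0.48
```

On these numbers, US investors held roughly half as much foreign equity as the world-market benchmark prescribes, which is the sense in which the 23% holding constitutes substantial home bias.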
Housing and Macroeconomics
Charles Ka Yui Leung
The earlier literature on macroeconomics focused on determining aggregate variables such as gross domestic product (GDP), the inflation rate, and the unemployment rate. It had little interaction with the literature on housing. The importance of housing in the macroeconomy has only recently been recognized, and the macro-housing field is still developing. The recent literature addresses several policy-relevant issues that matter to both the macroeconomics and housing strands of the literature. One significant development is research on the rental market, as a considerable portion of the world's population are renters. For instance, the impact of some macroeconomic policies depends on how easily a unit can be converted between rental and owner-occupied housing. Just as failure to keep up with mortgage payments on owner-occupied housing can lead to bankruptcy, failure to pay rent as the contract describes can lead to eviction. The literature has started to investigate the causes and costs of such displacement. Some authors also explore whether public rental housing is a desirable policy. Another active research area is affordability. In some cities, some people can afford to rent but not to own housing. Some may move to places where they can become homeowners. The literature has started to explore such interactions between locational choice and tenure choice (i.e., whether to rent or to own). The durability of housing makes it a long-term investment. Hence, the timing and pricing of current-period housing transactions depend on expectations of future prices. Moreover, recent transactions in the housing market can, in turn, affect future prices. Therefore, self-fulfilling prophecies are possible, and it is crucial to study the formation and evolution of expectations in the housing market. Some researchers have taken up these challenges and made progress. Last but not least, the literature has extended from a single-market to a multi-market setting.
An emerging literature studies local housing and labor markets, for example at the county level, and produces results that challenge conventional wisdom. In response, a few authors have developed sophisticated multi-regional dynamic general equilibrium models that match the cross-sectional and time series facts while maintaining the forward-looking assumption of the macroeconomics tradition. These new models also help identify shocks that are not directly observable to econometricians yet are essential for accounting for cross-sectional economic facts, bringing the models closer to the data. In sum, the recent developments in the macro-housing literature are exciting and encouraging. They will support scholars in the formation of evidence-based public policy.
Human Capital Inequality: Empirical Evidence
Brant Abbott and Giovanni Gallipoli
This article focuses on the distribution of human capital and its implications for the accrual of economic resources to individuals and households. Human capital inequality can be thought of as measuring disparity in the ownership of labor factors of production, which are usually compensated in the form of wage income. Earnings inequality is tightly related to human capital inequality. However, it only measures disparity in payments to labor rather than dispersion in the market value of the underlying stocks of human capital. Hence, measures of earnings dispersion provide a partial and incomplete view of the underlying distribution of productive skills and of the income generated by way of them. Despite its shortcomings, a fairly common way to gauge the distributional implications of human capital inequality is to examine the distribution of labor income. While it is not always obvious what accounts for returns to human capital, an established approach in the empirical literature is to decompose measured earnings into permanent and transitory components. A second approach focuses on the lifetime present value of earnings. Lifetime earnings are, by definition, an ex post measure only observable at the end of an individual’s working lifetime. One limitation of this approach is that it assigns a value based on one of the many possible realizations of human capital returns. Arguably, this ignores the option value associated with alternative, but unobserved, potential earning paths that may be valuable ex ante. Hence, ex post lifetime earnings reflect both the genuine value of human capital and the impact of the particular realization of unpredictable shocks (luck). A different but related measure focuses on the ex ante value of expected lifetime earnings, which differs from ex post (realized) lifetime earnings insofar as it accounts for the value of yet-to-be-realized payoffs along different potential earning paths.
Ex ante expectations reflect how much an individual reasonably anticipates earning over the rest of their life based on their current stock of human capital, averaging over possible realizations of luck and other income shifters that may arise. The discounted value of different potential paths of future earnings can be computed using risk-less or state-dependent discount factors.
The Indeterminacy School in Macroeconomics
Roger E. A. Farmer
The indeterminacy school in macroeconomics exploits the fact that macroeconomic models often display multiple equilibria to understand real-world phenomena. There are two distinct phases in the evolution of its history. The first phase began as a research agenda at the University of Pennsylvania in the United States and at CEPREMAP in Paris in the early 1980s. This phase used models of dynamic indeterminacy to explain how shocks to beliefs can temporarily influence economic outcomes. The second phase was developed at the University of California, Los Angeles in the 2000s. This phase used models of incomplete factor markets to explain how shocks to beliefs can permanently influence economic outcomes. The first phase of the indeterminacy school has been used to explain volatility in financial markets. The second phase has been used to explain periods of persistently high unemployment. Together, the two phases of the indeterminacy school provide a microeconomic foundation for Keynes's general theory that does not rely on the assumption that prices and wages are sticky.
International Trade and the Environment: Three Remaining Empirical Challenges
Jevan Cherniwchan and M. Scott Taylor
Considerable progress has been made in understanding the relationship between international trade and the environment since Gene Grossman and Alan Krueger published their now seminal working paper examining the potential environmental effects of the North American Free Trade Agreement in 1991. Their work articulated a simple framework through which international trade and economic growth could affect the environment by impacting the scale of economic activity (the scale effect), the composition of production across industries (the composition effect), or the emission intensity of individual industries (the technique effect). Grossman and Krueger provided preliminary evidence of the relative magnitudes of the scale, composition, and technique effects, and reached a striking conclusion: international trade would not necessarily harm the environment. Much of the subsequent literature examining the effects of international trade on the environment has adopted Grossman and Krueger’s simple framework and builds directly from their initial foray into the area. We now have better empirical evidence of the relationship between economic growth and environmental quality, of how environmental regulations affect international trade and investment flows, and of the relative magnitudes of the scale, composition, and technique effects. Yet, the need for further progress remains along three key fronts. First, despite significant advances in our understanding of how economic growth affects environmental quality, evidence of the interaction between international trade, economic growth, and environmental outcomes remains scarce. Second, while a growing body of evidence suggests that environmental regulations significantly alter trade flows, it is still unclear if these policies have a larger or smaller effect than traditional determinants of comparative advantage.
Third, although it is clear the technique effect is the primary driver of changes in pollution, evidence as to how trade has contributed to the technique effect is limited. Addressing these three remaining challenges is necessary for assessing whether Grossman and Krueger’s conclusion that international trade need not necessarily harm the environment still holds today.
International Trade With Heterogeneous Firms: Theory and Evidence
Alessandra Bonfiglioli, Rosario Crinò, and Gino Gancia
International trade is dominated by a small number of very large firms. Models of trade with heterogeneous firms have been developed to study the causes and consequences of this observation. The canonical model of trade with heterogeneous firms shows that trade leads to between-firm reallocations and selection: It shifts employment toward firms with the best attributes and forces marginal firms to exit. The model also illustrates the role of heterogeneity, and its various sources, in explaining the volume of trade and the firm-level margins of adjustment. Consistent with the model, the empirical literature has documented that exporting is a rare activity, that exporting firms are larger and more productive than other firms, and that trade liberalization reallocates market shares toward the best-performing firms in various countries. Studies using transaction-level data have unveiled additional salient features of trade flows. First, sales by foreign firms are very heterogeneous and highly concentrated. Second, both the extensive margin (the number of exporting firms) and the intensive margin (average exports per firm) are important in explaining the level of exports and its changes over time. More heterogeneity in sales across firms is associated with a higher volume of trade along both margins. Third, increased foreign competition reallocates market shares toward top firms and hence can increase the concentration of exports from any country of origin. Numerous extensions of the benchmark model have been proposed to study other important aspects, such as the relevance of multi-product and multinational firms, the import behavior of firms, and the extent to which heterogeneity is endogenous to firms’ choices, but some open challenges still remain.
International Reserves, Exchange Rates, and Monetary Policy: From the Trilemma to the Quadrilemma
The links between international reserves, exchange rates, and monetary policy can be understood through the lens of a modern incarnation of the “impossible trinity” (also known as the “trilemma”), based on Mundell and Fleming’s hypothesis that a country may simultaneously choose any two, but not all, of the following three policy goals: monetary independence, exchange rate stability, and financial integration. The original economic trilemma was framed in the 1960s, during the Bretton Woods regime, as a binary choice of two of the three possible policy goals. However, in the 1990s and 2000s, emerging markets and developing countries found that deeper financial integration comes with growing exposure to financial instability and an increased risk of “sudden stops” of capital inflows and capital flight crises. These crises have been characterized by exchange rate instability triggered by countries’ balance sheet exposure to external hard currency debt—exposures that have propagated banking instabilities and crises. Such events have frequently morphed into deep internal and external debt crises, ending with bailouts of systemic banks and powerful macro players. The resultant domestic debt overhang led to fiscal dominance and a reduction of the scope of monetary policy. With varying lags, these crises induced economic and political changes, in which a growing share of emerging markets and developing countries converged to “in-between” regimes in the trilemma middle range—that is, managed exchange rate flexibility, controlled financial integration, and limited but viable monetary autonomy. Emerging research has validated a modern version of the trilemma: countries face a continuous trade-off in which raising one trilemma policy goal is “traded off” against a drop in the weighted average of the other two.
The concerns associated with exposure to financial instability have been addressed by varying configurations of managing public buffers (international reserves, sovereign wealth funds), as well as the growing application of macro-prudential measures aimed at inducing systemic players to internalize the impact of their balance sheet exposure on a country’s financial stability. Consequently, the original trilemma has morphed into a quadrilemma, wherein financial stability has been added to the trilemma’s original policy goals. Size does matter, and there is no way for smaller countries to insulate themselves fully from exposure to global cycles and shocks. Yet successful navigation of the open-economy quadrilemma helps in reducing the transmission of external shocks to the domestic economy, as well as the costs of domestic shocks. These observations explain the relative resilience of emerging markets—especially in countries with more mature institutions—as they have been buffered by deeper precautionary management of reserves and greater fiscal and monetary space. We close the discussion by noting that the global financial crisis, and the subsequent Eurozone crisis, have shown that no country is immune from exposure to financial instability and from the modern quadrilemma. However, countries with mature institutions, deeper fiscal capabilities, and more fiscal space may substitute reliance on costly precautionary buffers with bilateral swap lines coordinated among their central banks. While the benefits of such arrangements are clear, they may hinge on the presence and credibility of their fiscal backstop mechanisms, and on curbing the resultant moral hazard. Time will test this credibility, and the degree to which risk-pooling arrangements can be extended to cover the growing share of emerging markets and developing countries.
Macroeconomic Aspects of Housing
Charles Ka Yui Leung and Cho Yiu Joe Ng
This article summarizes research on the macroeconomic aspects of the housing market. In terms of macroeconomic stylized facts, the article demonstrates that, at the business cycle frequency, there was a general decrease in the association between macroeconomic variables (MV), such as real GDP and the inflation rate, and housing market variables (HMV), such as the housing price and the vacancy rate, following the global financial crisis (GFC). However, some macro-finance variables, such as different interest rate spreads, exhibited a strong association with the HMV following the GFC. At the medium-term business cycle frequency, some, but not all, of these patterns prevail. These “new stylized facts” suggest that a reconsideration and refinement of existing “macro-housing” theories would be appropriate. The article also reviews the corresponding academic literature, which may enhance our understanding of the evolving macro-housing–finance linkage.
The Macroeconomics of Stratification
Stratification economics, which has emerged as a new subfield of research on inequality, is distinguished by a system-level analysis. It explores the role of power in influencing the processes and institutions that produce hierarchical economic and social orderings based on ascriptive characteristics. Macroeconomic factors play a role in buttressing stratification, especially by race and gender. Among the macroeconomic policy levers that produce and perpetuate intergroup inequality are monetary policy, fiscal expenditures, exchange rate policy, industrial policy, and trade, investment, and financial policies. These policies interact with a stratification “infrastructure,” composed of racial and gender ideologies, norms, and stereotypes that are internalized at the individual level and act as a “stealth” factor in reproducing hierarchies. In stratified societies, racial and gender norms and stereotypes act to justify various forms of exclusion from prized economic assets such as good jobs. For example, gendered and racial stereotypes contribute to job segregation, with subordinated groups largely sequestered in the secondary labor market, where wages are low and jobs are insecure. The net effect is that subordinated groups serve as shock absorbers that insulate members of the dominant group from the impact of negative macroeconomic phenomena such as unemployment and economic volatility. Further, racial and gender inequality have economy-wide effects, and play a role in determining the rate of economic growth and overall performance of an economy. The impact of intergroup inequality on macro-level outcomes depends on a country’s economic structure. While under some conditions intergroup inequality acts as a stimulus to economic growth, under other conditions it undermines societal well-being. Countries are not locked into a path whereby inequality has a positive or negative effect on growth.
Rather, through their policy decisions, countries can choose the low road (stratification) or the high road (intergroup equality). Thus, even if intergroup inequality has been a stimulus to growth in the past, it is possible to choose an equity-led growth path.
Macroeconomics of the Euro
While it is a long-standing idea in international macroeconomic theory that flexible nominal exchange rates have the potential to facilitate adjustment in international relative prices, a monetary union necessarily forgoes this mechanism for facilitating macroeconomic adjustment among its regions. Twenty years of experience in the eurozone monetary union, including the eurozone crisis, have spurred new macroeconomic research on the costs of giving up nominal exchange rates as a tool of adjustment and on the possibility of alternative policies to promote macroeconomic adjustment. Empirical evidence paints a mixed picture regarding the usefulness of nominal exchange rate flexibility: in many historical settings, flexible nominal exchange rates have tended to create more relative price distortions than they have helped resolve; yet, in some contexts, exchange rate devaluations can serve as a useful correction to severe relative price misalignments. Theoretical advances in studying open economy models either support the usefulness of exchange rate movements or find them irrelevant, depending on the specific characteristics of the model economy, including the particular specification of nominal rigidities, international openness in goods markets, and international financial integration. Yet in models that embody certain key aspects of the countries suffering the brunt of the eurozone crisis, such as over-borrowing and persistently high wages, nominal devaluation is found to be useful in preventing the type of excessive rise in unemployment observed. This theoretical research also raises alternative policies and mechanisms to substitute for nominal exchange rate adjustment. These policies include the standard fiscal tools of optimal currency area theory but also extend to a broader set of tools, including import tariffs, export subsidies, and prudential taxes on capital flows.
Certain combinations of these policies, labeled a “fiscal devaluation,” have been found in theory to replicate the effects of a currency devaluation in the context of a monetary union such as the eurozone. These theoretical developments are helpful for understanding the history of experiences in the eurozone, such as the eurozone crisis. They are also helpful for thinking about options for preventing such crises in the future.
Making Institutions Work From the Bottom Up in Africa
Moussa P. Blimpo, Admasu Asfaw Maruta, and Josephine Ofori Adofo
Well-functioning institutions are essential for stable and prosperous societies. Despite significant improvement during the past three decades, the consolidation of coherent and stable institutions remains a challenge in many African countries. There is a persistent wedge between the de jure rules, the observance of the rules, and practices at many levels. The wedge largely stems from the fact that the analysis and design of institutions have focused mainly on a top-down approach, which gives more prominence to written laws. During the past two decades, however, a new strand of literature has emerged, focusing on accountability from the bottom up and making institutions more responsive to citizens’ needs. This literature designs and evaluates a mix of interventions, including information provision to local communities, training, and outright decentralization of decision-making to the local level. In theory, accountability from the bottom up may pave the way in shaping the nature of institutions at the top, driven by superior localized knowledge. The empirical findings, however, have shown limited positive impacts or remained mixed at best. Some of the early emerging regularities showed that information and transparency alone are not enough to generate accountability. The reasons include the lack of local ownership and the power asymmetry between local elites and the people. Some of the studies have addressed many of these constraints to varying degrees, without much improvement in outcomes. A simple theoretical framework with multiple equilibria helps to better understand this literature. In this framework, the literature consists of attempts to mobilize, gradually or at once, a critical mass to shift from existing norms and practices (an inferior equilibrium) to another set of norms and practices (a superior equilibrium). Shifting an equilibrium requires large and/or sustained shocks, whereas most interventions tend to be smaller in scope and short-lived.
In addition, accountability at the bottom is often neglected relative to rights. If norms and practices within families and communities carry features similar to those observed at the top (e.g., the abuse of one's power), then the core of the problem goes beyond a wedge between the ruling elite and the citizens.
Methodology of Macroeconometrics
The current discontent with the dominant macroeconomic theory paradigm, known as Dynamic Stochastic General Equilibrium (DSGE) modeling, calls for an appraisal of the methods and strategies employed in studying and modeling macroeconomic phenomena using aggregate time series data. The appraisal pertains to the effectiveness of these methods and strategies in accomplishing the primary objective of empirical modeling: to learn from data about phenomena of interest. The co-occurring developments in macroeconomics and econometrics since the 1930s provide the backdrop for the appraisal, with the Keynes vs. Tinbergen controversy at center stage. The overall appraisal is that the DSGE paradigm gives rise to estimated structural models that are both statistically and substantively misspecified, yielding untrustworthy evidence that contributes very little, if anything, to real learning from data about macroeconomic phenomena. A primary contributor to the untrustworthiness of evidence is the traditional econometric perspective of viewing empirical modeling as curve-fitting (structural models), guided by impromptu error term assumptions and evaluated on goodness-of-fit grounds. Regrettably, excellent fit is neither necessary nor sufficient for the reliability of inference and the trustworthiness of the ensuing evidence. Recommendations on how to improve the trustworthiness of empirical evidence revolve around a broader, model-based (non-curve-fitting) modeling framework that attributes cardinal roles to both theory and data without undermining the credibility of either source of information. Two crucial distinctions hold the key to securing the trustworthiness of evidence. The first distinguishes between modeling (specification, misspecification testing, and respecification) and inference, and the second between a substantive (structural) model and a statistical model (the probabilistic assumptions imposed on the particular data).
This enables one to establish statistical adequacy (the validity of these assumptions) before relating it to the structural model and posing questions of interest to the data. The greatest enemy of learning from data about macroeconomic phenomena is not the absence of an alternative and more coherent empirical modeling framework, but the illusion that foisting highly formal structural models on the data can give rise to such learning just because their construction and curve-fitting rely on seemingly sophisticated tools. Regrettably, applying sophisticated tools to a statistically and substantively misspecified DSGE model does nothing to restore the trustworthiness of the evidence stemming from it.
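To make the notion of statistical adequacy concrete, here is a minimal illustration (my own sketch, not from the article, using a pure-NumPy Ljung–Box statistic): when a model ignores the dynamics in the data, its residuals violate the no-autocorrelation assumption, and a misspecification test flags this before any inference is attempted.

```python
import numpy as np

def ljung_box_q(resid, max_lag=10):
    """Ljung-Box Q statistic: tests whether residuals are uncorrelated,
    one of the probabilistic assumptions a statistical model imposes on the data.
    Under the null of white-noise residuals, Q ~ chi-squared(max_lag)."""
    n = len(resid)
    r = resid - resid.mean()
    denom = np.sum(r ** 2)
    q = 0.0
    for k in range(1, max_lag + 1):
        rho_k = np.sum(r[k:] * r[:-k]) / denom  # lag-k autocorrelation
        q += rho_k ** 2 / (n - k)
    return n * (n + 2) * q

rng = np.random.default_rng(0)

# Data actually generated by a persistent AR(1) process
n, phi = 500, 0.8
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.standard_normal()

# Substantively "nice" but statistically misspecified model: y_t = mu + error
resid_static = y - y.mean()

# Statistically adequate alternative: AR(1) fitted by least squares
phi_hat = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
resid_ar1 = y[1:] - phi_hat * y[:-1]

print(ljung_box_q(resid_static))  # far above the chi2(10) critical value: misspecified
print(ljung_box_q(resid_ar1))     # near the chi2(10) mean: assumptions not rejected
```

The point of the sketch is the order of operations the article argues for: validate the probabilistic assumptions first, and only then pose substantive questions to the data.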
New Monetarist Economics
Chao Gu, Han Han, and Randall Wright
This article provides an introduction to New Monetarist Economics. This branch of macro and monetary theory emphasizes imperfect commitment, information problems, and spatial separation (sometimes endogenized) as the key frictions from which institutions such as monetary exchange and financial intermediation are derived endogenously. We present three generations of models in the development of New Monetarism. The first studies an environment in which agents meet bilaterally and lack commitment, which allows money to be valued endogenously as a means of payment. In this setup both goods and money are indivisible to keep things tractable. Second-generation models relax the assumption of indivisible goods and use bargaining theory (or related mechanisms) to endogenize prices. Variations of these models are applied to financial asset markets and intermediation. Assets and goods are both divisible in third-generation models, which makes them better suited to policy analysis and empirical work. This framework can also be used to help understand financial markets and liquidity.
Nonlinear Models in Macroeconometrics
Many nonlinear time series models have been around for a long time and originated outside of time series econometrics. The popular stochastic models, univariate, dynamic single-equation, and vector autoregressive, are presented and their properties considered; deterministic nonlinear models are not reviewed. The use of nonlinear vector autoregressive models in macroeconometrics seems to be increasing, and because this may be viewed as a rather recent development, they receive somewhat more attention than their univariate counterparts. Vector threshold autoregressive, smooth transition autoregressive, Markov-switching, and random coefficient autoregressive models are covered, along with nonlinear generalizations of vector autoregressive models with cointegrated variables. Two nonlinear panel models, although not typically macroeconometric models, have also frequently been applied to macroeconomic data. The use of all these models in macroeconomics is highlighted with applications in which model selection, an often difficult issue in nonlinear models, has received due attention. Given the large number of nonlinear time series models, no unique best method of choosing among them seems to be available.
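As a small illustration of the simplest class mentioned above, the following sketch (illustrative parameters of my own choosing, not from the article) simulates a two-regime self-exciting threshold autoregressive (SETAR) model and recovers the regime-specific coefficients by least squares at the known threshold:

```python
import numpy as np

# Two-regime SETAR(1) model with threshold at zero:
#   y_t = 0.7 * y_{t-1} + e_t   if y_{t-1} <= 0   (persistent regime)
#   y_t = -0.4 * y_{t-1} + e_t  if y_{t-1} >  0   (mean-reverting regime)
rng = np.random.default_rng(42)

def simulate_setar(n, phi_low=0.7, phi_high=-0.4, threshold=0.0):
    y = np.zeros(n)
    for t in range(1, n):
        phi = phi_low if y[t - 1] <= threshold else phi_high
        y[t] = phi * y[t - 1] + rng.standard_normal()
    return y

y = simulate_setar(2000)

# Regime-specific least squares (threshold known) recovers both slopes
low = y[:-1] <= 0.0
phi_low_hat = np.sum(y[1:][low] * y[:-1][low]) / np.sum(y[:-1][low] ** 2)
phi_high_hat = np.sum(y[1:][~low] * y[:-1][~low]) / np.sum(y[:-1][~low] ** 2)
print(phi_low_hat, phi_high_hat)  # close to 0.7 and -0.4
```

In practice the threshold is unknown and must itself be estimated (typically by grid search), which is one reason model selection is harder for nonlinear than for linear autoregressions.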
Political Economy of Reform
“Reform” in the economics literature refers to changes in government policies or institutional rules because status-quo policies and institutions are not working well to achieve the goals of economic wellbeing and development. Reform further presumes that alternative policies and institutions are available that would most likely perform better than the status quo. The main question examined in the “political economy of reform” literature has been why reforms are not undertaken when they are needed for the good of society. The succinct answer from the first generation of research is that conflict of interest between organized socio-political groups enables some groups to stall reforms in order to extract greater private rents from status-quo policies. The next generation of research is tackling more fundamental and enduring questions: Why does conflict of interest persist? How are some interest groups able to exert influence against reforms if there are indeed large gains to be had for society? What institutions are needed to overcome the problem of credible commitment so that interest groups can be compensated or persuaded to support reforms? Game theory—or the analysis of strategic interactions among individuals and groups—is being used more extensively, going beyond the first generation of research, which focused on the interaction between “winners” and “losers” from reforms. Widespread expectations, or norms, in society at large (not just within organized interest groups) about how others behave in the political sphere of making demands upon government, together with beliefs about the role of public policies, or preferences for public goods, shape these strategic interactions and hence reform outcomes. Examining where these norms and preferences for public goods come from, and how they evolve, is key to understanding why conflict of interest persists and how reformers can commit to finding common ground for socially beneficial reforms.
Political markets and institutions, through which the leaders who wield power over public policy are selected and sanctioned, shape norms and preferences for public goods. Leaders who want to pursue reforms need to use the evidence in favor of reforms to build broad-based support in political markets. Contrary to the first-generation view of reform by stealth, the next generation of research suggests that public communication in political markets is needed to develop a shared understanding of policies for the public good. Concomitantly, the areas of reform have come full circle, from the market liberalization that dominated the 20th century back to strengthening governments to address problems of market failure and public goods in the 21st century. Reforms involve anti-corruption and public sector management in developing countries; improving health, education, and social protection to address persistent inequality in developed countries; and regulation to preserve competition and to price externalities (such as pollution and environmental depletion) in markets around the world. Understanding the functioning of politics is more important than ever before in determining whether governments are able to pursue reforms for public goods or fall prey to corruption and populism.
Purchasing Power Parity and Real Exchange Rates
Menzie D. Chinn
The idea that prices and exchange rates adjust so as to equalize the common-currency price of identical bundles of goods—purchasing power parity (PPP)—is a topic of central importance in international finance. If PPP holds continuously, then nominal exchange rate changes do not influence trade flows. If PPP does not hold in the short run, but does in the long run, then monetary factors can affect the real exchange rate only temporarily. Substantial evidence has accumulated—with the advent of new statistical tests, alternative data sets, and longer spans of data—that purchasing power parity does not typically hold in the short run. One reason PPP fails to hold in the short run may be sticky prices, in combination with other factors such as trade barriers. The evidence is mixed for the longer run. Variations in the real exchange rate in the longer run can also be driven by shocks to demand, arising from changes in government spending, the terms of trade, as well as wealth and debt stocks. At time horizons of decades, trend movements in the real exchange rate—that is, systematically trending deviations from PPP—could be due to the presence of nontraded goods, combined with real factors such as differentials in productivity growth. The well-known positive association between the price level and income levels—also known as the “Penn Effect”—is consistent with this channel. Whether PPP holds then depends on the time period, the time horizon, and the currencies examined.
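In the log-linear notation conventional in this literature (a sketch of standard definitions, not taken verbatim from the article), the relationships above can be written as:

```latex
\begin{align*}
\text{Absolute PPP:}\quad & s_t = p_t - p_t^{*}, \\
\text{Real exchange rate:}\quad & q_t \equiv s_t + p_t^{*} - p_t,
\end{align*}
% s_t: log nominal exchange rate (home-currency price of foreign currency)
% p_t, p_t^*: log home and foreign price levels
```

PPP holds exactly when $q_t = 0$; long-run PPP corresponds to $q_t$ being stationary (mean-reverting), so that deviations from parity die out over time rather than trend.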
Q-Factors and Investment CAPM
The Hou–Xue–Zhang q-factor model says that the expected return of an asset in excess of the risk-free rate is described by its sensitivities to the market factor, a size factor, an investment factor, and a return on equity (ROE) factor. Empirically, the q-factor model shows strong explanatory power and largely summarizes the cross-section of average stock returns. Most important, it fully subsumes the Fama–French 6-factor model in head-to-head spanning tests. The q-factor model is an empirical implementation of the investment-based capital asset pricing model (the Investment CAPM). The basic philosophy is to price risky assets from the perspective of their suppliers (firms), as opposed to their buyers (investors). Mathematically, the investment CAPM is a restatement of the net present value (NPV) rule in corporate finance. Intuitively, high investment relative to low expected profitability must imply low costs of capital, and low investment relative to high expected profitability must imply high costs of capital. In a multiperiod framework, if investment is high next period, the present value of cash flows from next period onward must be high. Consisting mostly of this next period present value, the benefits to investment this period must also be high. As such, high investment next period relative to current investment (high expected investment growth) must imply high costs of capital (to keep current investment low). As a disruptive innovation, the investment CAPM has broad-ranging implications for academic finance and asset management practice. First, the consumption CAPM, of which the classic Sharpe–Lintner CAPM is a special case, is conceptually incomplete. The crux is that it blindly focuses on the demand of risky assets, while abstracting from the supply altogether. Alas, anomalies are primarily relations between firm characteristics and expected returns. By focusing on the supply, the investment CAPM is the missing piece of equilibrium asset pricing. 
Second, the investment CAPM retains efficient markets, with cross-sectionally varying expected returns, depending on firms’ investment, profitability, and expected growth. As such, capital markets follow standard economic principles, in sharp contrast to the teachings of behavioral finance. Finally, the investment CAPM validates Graham and Dodd’s security analysis on equilibrium grounds, within efficient markets.
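As a sketch in the standard notation of this literature (factor and parameter names are conventions, not quoted from the article), the q-factor model and a two-period version of the investment CAPM with quadratic adjustment costs can be written as:

```latex
\begin{align*}
E\!\left[R^{i}\right] - R^{f}
  &= \beta^{i}_{\mathrm{MKT}}\,E[\mathrm{MKT}]
   + \beta^{i}_{\mathrm{ME}}\,E\!\left[R_{\mathrm{ME}}\right]
   + \beta^{i}_{I/A}\,E\!\left[R_{I/A}\right]
   + \beta^{i}_{\mathrm{ROE}}\,E\!\left[R_{\mathrm{ROE}}\right], \\[4pt]
E_t\!\left[R^{S}_{t+1}\right]
  &= \frac{E_t\!\left[\Pi_{t+1}\right]}{1 + a\,(I_t/A_t)},
\end{align*}
% MKT, ME, I/A, ROE: market, size, investment, and profitability factors
% Pi: profitability; I/A: investment-to-assets; a: adjustment-cost parameter
```

The second line captures the intuition stated above directly: the expected return rises with expected profitability $\Pi_{t+1}$ (the numerator) and falls with investment $I_t/A_t$ (the denominator), so high investment relative to expected profitability implies a low cost of capital, and vice versa.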