Asset returns change over time with fundamentals and other factors, such as technical information and sentiment. In modeling time-varying expected returns, this article focuses on the out-of-sample predictability of the aggregate stock market return via extensions of the conventional predictive regression approach.
The extensions are designed to improve out-of-sample performance in realistic environments characterized by large information sets and noisy data. Large information sets are relevant because there is a plethora of plausible stock return predictors. The information sets include variables typically associated with a rational time-varying market risk premium, as well as variables more likely to reflect market inefficiencies resulting from behavioral influences and information frictions. Noisy data stem from the intrinsically large unpredictable component in stock returns. When forecasting with large information sets and noisy data, it is vital to employ methods that incorporate the relevant information in the large set of predictors in a manner that guards against overfitting the data.
Methods that improve out-of-sample market return prediction include forecast combination, principal component regression, partial least squares, the LASSO and elastic net from machine learning, and a newly developed C-ENet approach that relies on the elastic net to refine the simple combination forecast. Employing these methods, a number of studies provide statistically and economically significant evidence that the aggregate market return is predictable on an out-of-sample basis. Out-of-sample market return predictability based on a rich set of predictors thus appears to be a well-established empirical result in asset pricing.
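To make the forecasting approach concrete, the sketch below shows, under stated assumptions, how a simple equal-weight combination forecast and an elastic-net forecast of the market return could be produced recursively out of sample. It is a minimal Python illustration, not the exact procedure of any study summarized here; the data arrays are hypothetical, and applying the elastic net directly to the predictors (rather than to the individual forecasts, as in the C-ENet refinement) is a simplifying assumption.

```python
# Minimal sketch (hypothetical data; not any paper's exact procedure): recursive
# out-of-sample forecasts of the market return from K lagged predictors, comparing a
# simple equal-weight combination of univariate forecasts with an elastic-net forecast.
import numpy as np
from sklearn.linear_model import ElasticNetCV

def recursive_forecasts(returns, predictors, start):
    """returns: (T,) realized returns; predictors: (T, K) lagged predictors.
    `start` should leave enough observations for estimation (and cross-validation)."""
    comb, enet_fc = [], []
    for t in range(start, len(returns)):
        X_tr, y_tr = predictors[:t], returns[:t]
        x_t = predictors[t]
        # One univariate predictive regression per predictor, then average the forecasts.
        ind = []
        for k in range(X_tr.shape[1]):
            slope, intercept = np.polyfit(X_tr[:, k], y_tr, 1)
            ind.append(intercept + slope * x_t[k])
        comb.append(np.mean(ind))  # equal-weight combination guards against overfitting
        # Elastic net on all predictors jointly: shrinkage plus variable selection.
        model = ElasticNetCV(l1_ratio=0.5, cv=5).fit(X_tr, y_tr)
        enet_fc.append(model.predict(x_t.reshape(1, -1))[0])
    return np.array(comb), np.array(enet_fc)
```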
Article
Henrik Cronqvist and Désirée-Jessica Pély
Corporate finance is about understanding the determinants and consequences of the investment and financing policies of corporations. In a standard neoclassical profit maximization framework, rational agents, that is, managers, make corporate finance decisions on behalf of rational principals, that is, shareholders. Over the past two decades, there has been a rapidly growing interest in augmenting standard finance frameworks with novel insights from cognitive psychology, and more recently, social psychology and sociology. This emerging subfield in finance research has been dubbed behavioral corporate finance, which differentiates between rational and behavioral agents and principals.
The presence of behavioral shareholders, that is, principals, may lead to market timing and catering behavior by rational managers. Such managers will opportunistically time the market and exploit mispricing by investing capital, issuing securities, or taking on debt when costs of capital are low, and by shunning equity, divesting assets, repurchasing securities, and paying back debt when costs of capital are high. Rational managers will also incite mispricing, for example, by catering to non-standard preferences of shareholders through earnings management or by transitioning their firms into an in-fashion category to boost the stock’s price.
The interaction of behavioral managers, that is, agents, with rational shareholders can also lead to distortions in corporate decision making. For example, managers may perceive fundamental values differently and systematically diverge from optimal decisions. Several personal traits, for example, overconfidence or narcissism, and environmental factors, for example, fatal natural disasters, shape behavioral managers’ preferences and beliefs over the short or long term. These factors may bias managers’ perception of value and thus lead to inferior decision making.
An extension of behavioral corporate finance is social corporate finance, where agents and principals do not make decisions in a vacuum but rather are embedded in a dynamic social environment. Since managers and shareholders take a social position within and across markets, social psychology and sociology can be useful to understand how social traits, states, and activities shape corporate decision making if an individual’s psychology is not directly observable.
Article
Marius Guenzel and Ulrike Malmendier
One of the fastest-growing areas of finance research is the study of managerial biases and their implications for firm outcomes. Since the mid-2000s, this strand of behavioral corporate finance has provided theoretical and empirical evidence on the influence of biases in the corporate realm, such as overconfidence, experience effects, and the sunk-cost fallacy. The field has been a leading force in dismantling the argument that traditional economic mechanisms—selection, learning, and market discipline—would suffice to uphold the rational-manager paradigm. Instead, the evidence reveals that behavioral forces exert a significant influence at every stage of a chief executive officer’s (CEO’s) career. First, at the appointment stage, selection does not impede the promotion of behavioral managers. Instead, competitive environments oftentimes promote their advancement, even under value-maximizing selection mechanisms. Second, while at the helm of the company, learning opportunities are limited, since many managerial decisions occur at low frequency, and their causal effects are clouded by self-attribution bias and difficult to disentangle from those of concurrent events. Third, at the dismissal stage, market discipline does not ensure the firing of biased decision-makers as board members themselves are subject to biases in their evaluation of CEOs.
By documenting how biases affect even the most educated and influential decision-makers, such as CEOs, the field has generated important insights into the hard-wiring of biases. Biases do not simply stem from a lack of education, nor are they restricted to low-ability agents. Instead, biases are significant elements of human decision-making at the highest levels of organizations.
An important question for future research is how to limit, in each CEO career phase, the adverse effects of managerial biases. Potential approaches include refining selection mechanisms, designing and implementing corporate repairs, and reshaping corporate governance to account not only for incentive misalignments, but also for biased decision-making.
Article
Mahendrarajah Nimalendran and Giovanni Petrella
The most important friction studied in the microstructure literature is the adverse selection borne by liquidity providers when facing better-informed traders, and the bid-ask spread quoted by market makers is the most extensively studied measure of this and other frictions in securities markets. In the early 1980s, the transparency of U.S. stock markets was limited to post-trade, end-of-day transaction prices, and there were no easily available market quotes for researchers and market participants to study the effects of the bid-ask spread on the liquidity and quality of markets. This led to models that used the autocovariance of daily transaction prices to estimate the bid-ask spread. In the early 1990s, the U.S. stock markets (NYSE/AMEX/NASDAQ) began providing pre-trade quotes and transaction sizes to researchers and market participants. The increased transparency and access to quotes and trades led to the development of theoretical models and empirical methods to decompose the bid-ask spread into its components: adverse selection, inventory, and order processing. These models and methods can be broadly classified into those that use the serial covariance properties of quotes and transaction prices and those that use a trade direction indicator and a regression approach to decompose the bid-ask spread. Covariance and trade indicator models are equivalent in structural form, but they differ in how the parameters are estimated (the reduced form). The basic microstructure model is composed of two equations: the first defines the law of motion of the “true” price, while the second defines the process generating transaction prices. From these two equations, a relation for transaction-price changes is derived in terms of observed variables. A crucial point that differentiates the two approaches is the assumption made, for estimation purposes, about the behavior of order arrival, that is, the probability of order reversal or continuation. Thus, the most general specifications include an additional parameter that accounts for order behavior. The article provides a unified framework to compare the different models with respect to the restrictions they impose and how these restrictions affect the relative proportions of the different components of the spread.
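As an illustration of the two-equation structure described above, the following sketch writes down a Roll-type covariance model and a generic trade-indicator regression; the notation is generic and not taken from the article itself.

```latex
% Illustrative sketch (generic notation, in the spirit of Roll's covariance model):
\begin{align}
  m_t &= m_{t-1} + \varepsilon_t,            &&\text{law of motion of the ``true'' price,}\\
  p_t &= m_t + \tfrac{S}{2}\, q_t,           &&q_t \in \{+1,-1\}\ \text{(buy/sell indicator),}\\
  \Delta p_t &= \tfrac{S}{2}\,\Delta q_t + \varepsilon_t, &&\text{price changes in terms of observables.}
\end{align}
% If q_t is serially independent, equally likely to be +1 or -1, and independent of the
% innovation, then Cov(\Delta p_t, \Delta p_{t-1}) = -S^2/4, so the spread can be recovered
% from transaction prices alone as S = 2\sqrt{-\operatorname{Cov}(\Delta p_t,\Delta p_{t-1})}.
% Trade-indicator models instead estimate a regression such as
% \Delta p_t = \lambda q_t + \psi \Delta q_t + u_t, where the coefficient on q_t captures the
% permanent (adverse-selection) component and the coefficient on \Delta q_t the transitory
% (order-processing) component; allowing for serial dependence in q_t adds the order-reversal
% parameter mentioned above.
```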
Article
Xudong An, Larry Cordell, Raluca A. Roman, and Calvin Zhang
Central banks around the world use monetary policy tools to promote economic growth and stability; for example, in the United States, the Federal Reserve (Fed) uses federal funds rate adjustments, quantitative easing (QE) or tightening, forward guidance, and other tools “to promote effectively the goals of maximum employment, stable prices, and moderate long-term interest rates.” Changes in monetary policy affect both businesses and consumers. For consumers, changes in monetary policy affect bank credit supply, refinancing activity, and home purchases, which in turn affect household consumption and thus economic growth and price stability. The Fed’s rate cuts and QE programs during COVID-19 led to historically low interest rates, which spurred a huge wave of refinancings. However, the pass-through of rate savings in the mortgage market declined during the pandemic. The weaker pass-through can be linked to the extraordinary growth of shadow bank mortgage lenders during the COVID-19 pandemic: shadow bank mortgage lenders charged mortgage borrowers higher rates and fees, so a higher market share for these lenders means a smaller overall pass-through of rate savings to mortgage borrowers. It is important to note that these shadow banks did provide convenience to consumers, and they originated loans faster than banks. The convenience and speed could be valuable to borrowers and important in transmitting monetary policy in a timelier way, especially during a crisis.
Article
Florian Exler and Michèle Tertilt
Consumer debt is an important means for consumption smoothing. In the United States, 70% of households own a credit card, and 40% borrow on it. When borrowers cannot (or do not want to) repay their debts, they can declare bankruptcy, which provides additional insurance in tough times. Since the 2000s, up to 1.5% of households have declared bankruptcy per year. Clearly, the option to default affects borrowing interest rates in equilibrium. Consequently, when assessing the (welfare) consequences of different bankruptcy regimes or providing policy recommendations, structural models with equilibrium default and endogenous interest rates are needed. At the same time, many questions are quantitative in nature: the benefits of a certain bankruptcy regime critically depend on the nature and amount of risk that households bear. Hence, models for normative or positive analysis should quantitatively match some important data moments.
Four important empirical patterns are identified: First, since 1950, consumer debt has risen constantly, and it amounted to 25% of disposable income by 2016. Defaults have risen since the 1980s. Interestingly, interest rates remained roughly constant over the same time period. Second, borrowing and default clearly depend on age: both measures exhibit a distinct hump, peaking around 50 years of age. Third, ownership of credit cards and borrowing clearly depend on income: high-income households are more likely to own a credit card and to use it for borrowing. However, this pattern was stronger in the 1980s than in the 2010s. Finally, interest rates became more dispersed over time: the number of observed interest rates more than quadrupled between 1983 and 2016.
These data have clear implications for theory: First, considering the importance of age, life cycle models seem most appropriate when modeling consumer debt and default. Second, bankruptcy must be costly to support any debt in equilibrium. While many types of costs are theoretically possible, only partial repayment requirements are able to quantitatively match the data on filings, debt levels, and interest rates simultaneously. Third, to account for the long-run trends in debts, defaults, and interest rates, several quantitative theory models identify a credit expansion along the intensive and extensive margin as the most likely source. This expansion is a consequence of technological advancements.
Many of the quantitative macroeconomic models in this literature assess welfare effects of proposed reforms or of granting bankruptcy at all. These welfare consequences critically hinge on the types of risk that households face—because households incur unforeseen expenditures, not-too-stringent bankruptcy laws are typically found to be welfare superior to banning bankruptcy (or making it extremely costly) but also to extremely lax bankruptcy rules.
There are very promising opportunities for future research related to consumer debt and default. Newly available data in the United States and internationally, more powerful computational resources allowing for more complex modeling of household balance sheets, and new loan products are just some of many promising avenues.
Article
George Batta and Fan Yu
Corporate credit derivatives are over-the-counter (OTC) contracts whose payoffs are determined by a single corporate credit event or a portfolio of such events. Credit derivatives became popular in the late 1990s and early 2000s as a way for financial institutions to reduce their regulatory capital requirements, and early research treated them as redundant securities whose pricing is tied to the underlying corporate bonds and equities, with liquidity and counterparty risk factors playing supplementary roles. Research in the 2010s and beyond, however, increasingly focused on the effects of market frictions on the pricing of credit default swaps (CDSs), how CDS trading has affected corporate behaviors and outcomes as well as the price efficiency and liquidity of other related markets, and the microstructure of the CDS market itself. This was made possible by the availability of market statistics and more granular trade and quote data as a result of the broad movement of the OTC derivatives market toward central clearing.
Article
Alon Brav, Andrey Malenko, and Nadya Malenko
Passively managed (index) funds have grown to become among the largest shareholders in many publicly traded companies. Their large ownership stakes and voting power have attracted the attention of market participants, academics, and regulators and have sparked an active debate about their corporate governance role. While many studies explore the governance implications of passive fund growth, they often come to conflicting conclusions.
To understand how the growth in indexing can affect governance, it is important to understand fund managers’ incentives to be engaged shareholders. These incentives depend on fund managers’ compensation contracts, ownership stakes, assets under management, and costs of engagement. Major passive asset managers, such as the Big Three (BlackRock, State Street, and Vanguard), may have incentives to be engaged even though they track the indices and their engagement efforts benefit all other funds that track the same indices. This is because such funds’ substantial ownership stakes in multiple firms can both increase the effectiveness of their engagement and create relatively large financial benefits from engagement despite the low fees they collect. However, there is a difference between large and small index fund families: the incentives of the latter are likely to be substantially smaller, and the empirical evidence appears to be consistent with this distinction.
The governance effects of passive fund growth also depend on where flows to passive funds come from, which investors are replaced by passive funds in firms’ ownership structures, how passive funds interact with other shareholders, and how their growth affects other asset managers’ compensation structures. Considering such aggregate effects and interactions can help reconcile the seemingly conflicting findings in the empirical literature. It also suggests that policymakers should be careful in using the existing studies to understand the aggregate governance effects of passive fund growth over the past decades.
Overall, the literature has made important progress in understanding and quantifying passive funds’ incentives to engage, their monitoring activities and voting practices, and their interactions with other shareholders. Based on the findings in the literature, there is yet no clear answer to whether passive fund growth has been beneficial or detrimental for governance, and there are many open questions remaining. These open questions suggest several important directions for future research in this area.
Article
Hao Liang and Luc Renneboog
Corporate social responsibility (CSR) refers to the incorporation of environmental, social, and governance (ESG) considerations into corporate management, financial decision-making, and investors’ portfolio decisions. Socially responsible firms are expected to internalize the externalities they create (e.g., pollution) and be accountable to shareholders and other stakeholders (employees, customers, suppliers, local communities, etc.). Rating agencies have developed firm-level measures of ESG performance that are widely used in the literature. However, these ratings show inconsistencies that result from differences in the rating agencies’ preferences, the weights assigned to the constituent factors, and the rating methodologies.
CSR also deals with sustainable, responsible, and impact (SRI) investing. The return implications of investing in the stocks of socially responsible firms include the search for an ESG factor and the performance of SRI funds. SRI funds apply negative screening (exclusion of “sin” industries), positive screening, and activism through engagement or proxy voting. In this context, one wonders whether responsible investors are willing to trade off financial returns for a “moral” dividend (the return given up in exchange for an increase in utility driven by the knowledge that an investment is ethical). Related to the analysis of externalities and the ethical dimension of corporate decisions is the literature on green financing (the financing of environmentally friendly investment projects by means of green bonds) and on how to foster economic decarbonization as climate change affects financial markets and investor behavior.
Article
Daniel Greene, Omesh Kini, Mo Shen, and Jaideep Shenoy
A large body of work has examined the impact of corporate takeovers on the financial stakeholders (shareholders and bondholders) of the merging firms. Since the late 2000s, empirical research has increasingly highlighted the crucial role played by the non-financial stakeholders (labor, suppliers, customers, government, and communities) in these transactions. It is, therefore, important to understand the interplay between corporate takeovers and the non-financial stakeholders of the firm.
Financial economists have long viewed the firm as a nexus of contracts between various stakeholders connected to the firm. Corporate takeovers not only play an important role in redefining the broad boundaries of the firm but also result in major changes to corporate ownership and structure. In the process, takeovers can significantly alter the contractual relationships with non-financial stakeholders. Because the firm’s relationships with these stakeholders are governed by implicit and explicit contracts, circumstances can arise that allow acquiring firms to fully or partially abrogate these contracts and extract rents from non-financial stakeholders after deal completion. In contrast, non-financial stakeholders can also potentially benefit from a takeover if they get to share in any efficiency gains that are generated in the deal.
Given this framework, the ex-ante importance of these contractual relationships can have a bearing on the efficacy of takeovers. The ability to alter contractual relationships ex post can affect the likelihood of a takeover and the outcomes for merging firms’ shareholders and, in turn, impact non-financial stakeholders. Non-financial stakeholders will be more invested in post-takeover success if they can trust the acquiring firm not to take actions that are detrimental to them. The big picture that emerges from the surveyed literature is that non-financial stakeholder considerations affect takeover decisions and post-takeover outcomes. Moreover, takeovers also have an impact on non-financial stakeholders. The directions of all these effects, however, depend on the economic environment in which the merging firms operate.
Article
Miles Livingston and Lei Zhou
Credit rating agencies have developed as information intermediaries in the credit market because there are very large numbers of bonds outstanding with many different features. The Securities Industry and Financial Markets Association reports over $20 trillion of corporate bonds, mortgage-backed securities, and asset-backed securities in the United States. The vast size of the bond markets, the number of different bond issues, and the complexity of these securities result in a massive amount of information for potential investors to evaluate. The magnitude of this information creates the need for independent companies to provide objective evaluations of the ability of bond issuers to pay their contractually binding obligations. The result is credit rating agencies (CRAs), private companies that monitor debt securities/issuers and provide information to investors about the potential default risk of individual bond issues and issuing firms.
Rating agencies provide ratings for many types of debt instruments including corporate bonds, debt instruments backed by assets such as mortgages (mortgage-backed securities), short-term debt of corporations, municipal government debt, and debt issued by central governments (sovereign bonds).
The three largest rating agencies are Moody’s, Standard & Poor’s, and Fitch. These agencies provide ratings that are indicators of the relative probability of default. Bonds with the highest rating of AAA have very low probabilities of default and consequently the yields on these bonds are relatively low. As the ratings decline, the probability of default increases and the bond yields increase.
Ratings are important to institutional investors such as insurance companies, pension funds, and mutual funds. These large investors are often restricted to purchasing exclusively or primarily bonds in the highest rating categories. Consequently, the highest ratings are usually called investment grade. The lower ratings are usually designated as high-yield or “junk bonds.”
There is a controversy about the possibility of inflated ratings. Since issuers pay rating agencies for providing ratings, there may be an incentive for the rating agencies to provide inflated ratings in exchange for fees. In the U.S. corporate bond market, at least two and often three agencies provide ratings. Multiple ratings make it difficult for one rating agency to provide inflated ratings. Rating agencies are regulated by the Securities and Exchange Commission to ensure that agencies follow reasonable procedures.
Article
Carlos Garriga and Aaron Hedlund
The global financial crisis of 2007–2009 helped usher in a stronger consensus about the central role that housing plays in shaping economic activity, particularly during large boom and bust episodes. The latest research regards the causes, consequences, and policy implications of housing crises with a broad focus that includes empirical and structural analysis, insights from the 2000s experience in the United States, and perspectives from around the globe. Even with the significant degree of heterogeneity in legal environments, institutions, and economic fundamentals over time and across countries, several common themes emerge.
Research indicates that fundamentals such as productivity, income, and demographics play an important role in generating sustained movements in house prices. While these forces can also contribute to boom-bust episodes, periods of large house price swings often reflect an evolving housing premium caused by financial innovation and shifts in expectations, which are in turn amplified by changes to the liquidity of homes. Regarding credit, the latest evidence indicates that expansions in lending to marginal borrowers via the subprime market may not be entirely to blame for the run-up in mortgage debt and prices that preceded the 2007–2009 financial crisis. Instead, the expansion in credit manifested by lower mortgage rates was broad-based and caused borrowers across a wide range of incomes and credit scores to dramatically increase their mortgage debt. To whatever extent changing beliefs about future housing appreciation may have contributed to higher realized house price growth in the 2000s, it appears that neither borrowers nor lenders anticipated the subsequent collapse in house prices. However, expectations about future credit conditions—including the prospect of rising interest rates—may have contributed to the downturn.
For macroeconomists and those otherwise interested in the broader economic implications of the housing market, a growing body of evidence combining micro data and structural modeling finds that large swings in house prices can produce large disruptions to consumption, the labor market, and output. Central to this transmission is the composition of household balance sheets—not just the amount of net worth, but also how that net worth is allocated between short-term liquid assets, illiquid housing wealth, and long-term defaultable mortgage debt. By shaping the incentive to default, foreclosure laws have a profound ex-ante effect on the supply of credit as well as on the ex-post economic response to large shocks that affect households’ degree of financial distress.
On the policy front, research finds mixed results for some of the crisis-related interventions implemented in the U.S. while providing guidance for future measures should another housing bust of similar or greater magnitude recur. Lessons are also provided for the development of macroprudential policy aimed at preventing such a future crisis without unduly constraining economic performance in good times.
Article
Ling Cen and Sudipto Dasgupta
The interrelationships between upstream supplier firms and downstream customer firms—popularly referred to as supply-chain relationships—constitute one of the most important linkages in the economy. Suppliers not only provide production inputs for their customers but, increasingly, also engage in R&D and innovation activity that is beneficial to the customers. Yet, the high degree of relationship specificity that such activities involve, and the difficulty of writing complete contracts, expose suppliers to potential hold-up problems. Mechanisms that mitigate opportunism have implications for the origins of such relationships, firm boundaries, and organizational structure. Smaller supplier firms benefit from relationships with large customer firms in many ways, such as knowledge sharing, operational efficiency, insulation from competition, and reputation in capital markets. However, customer bargaining power, an undiversified customer base, and innovation strategy also expose suppliers to disruption risk. Relationship specificity of investment, customer bargaining power, and customer concentration associated with a less diversified customer base have important consequences for financing decisions of suppliers and customers, such as capital structure choice and the provision and role of trade credit. Changes in the risk of disruption (e.g., bankruptcy filings, takeover activity, and credit market shocks) have spillover effects along the supply chain. The correlation of economic fundamentals of suppliers and customers and the co-attention that they receive from market participants translate into return predictability (with implications for trading strategies), information diffusion along the supply chain, and stock-price informativeness of supply-chain partners.
Article
Chao Gu, Han Han, and Randall Wright
The effects of news (i.e., information innovations) are studied in dynamic general equilibrium models where liquidity matters. As a leading example, news can be announcements about monetary policy directions. In three standard theoretical environments—an overlapping generations model of fiat currency, a new monetarist model accommodating multiple payment methods, and a model of unsecured credit—transition paths are constructed between an announcement and the date at which events are realized. Although the economics is different, in each case news about monetary policy can induce volatility in financial and other markets, with transitions displaying booms, crashes, and cycles in prices, quantities, and welfare. This is not the same as volatility based on self-fulfilling prophecies (e.g., cyclic or sunspot equilibria) studied elsewhere. Instead, the focus is on the unique equilibrium that is stationary when parameters are constant but still delivers complicated dynamics in simple environments due to information and liquidity effects. This is true even for classically neutral policy changes. The induced volatility can be bad or good for welfare, but using policy to exploit this in practice seems difficult because outcomes are very sensitive to timing and parameters. The approach can be extended to include news about real factors, as illustrated by examples.
Article
William Megginson, Herber Farnsworth, and Bing (Violet) Xu
Defined as a single industrial sector, the global production, distribution, and consumption of energy is the world’s largest in terms of annual capital investment (US$1.83 trillion in 2019, the last prepandemic year for which full data are available) and the second largest nonfinancial industry in terms of sales revenue (US$4.51 trillion). Production and consumption of more than 100 million barrels of oil occurs each day—with 70% being traded across borders. Each of the world’s 7.5 billion citizens consumes an average of 3,181 kilowatt-hours per year, although per capita energy consumption varies enormously and is much higher in rich than in poor countries.
Properly analyzing the financial economics of the global energy industry requires focusing on both the physical aspects of production and distribution—how, where, and with what type of fuel energy is produced and consumed—and the capital investment required to support each energy segment. The global energy “industry” can be broadly categorized into two main segments: (a) provision of fuels for transportation and production and (b) distribution of electricity for residential and industrial consumption. The fuels sector encompasses the production, processing, and distribution of crude oil and its refined products, mostly gasoline, kerosene (which becomes jet fuel), diesel, gas oil, and residual fuel oil. The electric power sector includes four related businesses: generation, transmission, distribution, and supply.
Two imperatives drive the ongoing transformation of the global energy industry. These are (a) meeting rising demand due to population growth and rising wealth and (b) addressing climate change through greener energy policies and massive capital investments by corporations and governments. The pathway to decarbonizing electricity production and distribution by 2050 is fairly straightforward technologically; however, doing so will require both scientific innovations (particularly regarding scalable battery storage) and sustained multitrillion dollar annual investments for the next three decades. Decarbonizing transportation is a far more difficult and expensive proposition, which will require fundamental breakthroughs in multiple technologies, coupled with unusually farsighted policy action. Extant academic research already provides useful guidance for policymakers in many areas, but far more is required to help shape the future policy agenda.
Article
Markowitz showed that an investor who cares only about the mean and variance of portfolio returns should hold a portfolio on the efficient frontier. The application of this investment strategy proceeds in two steps: first, the statistical moments of asset returns are estimated from historical time series, and second, the mean-variance portfolio selection problem is solved separately, as if the estimates were the true parameters. The literature on portfolio decisions acknowledges the difficulty of estimating means and covariances in many instances, particularly in high-dimensional settings. Merton notes that it is more difficult to estimate means than covariances and that errors in estimates of means have a larger impact on portfolio weights than errors in covariance estimates. Recent developments in high-dimensional settings have stressed the importance of correcting the estimation error of traditional sample covariance estimators for portfolio allocation. The literature has proposed shrinkage estimators of the sample covariance matrix and regularization methods founded on the principle of sparsity. Both methodologies are nested in a more general framework that constructs optimal portfolios under constraints on different norms of the portfolio weights, including short-sale restrictions. On the one hand, shrinkage methods use a target covariance matrix and trade off bias and variance between the standard sample covariance matrix and the target; particular prominence has been given to low-dimensional factor models that incorporate theoretical insights from asset pricing models, in which case estimation risk is traded off against model risk. On the other hand, the literature on regularization of the sample covariance matrix uses different penalty functions to reduce the number of parameters to be estimated. Recent methods extend the idea of regularization to a conditional setting based on factor models and apply regularization methods to the residual covariance matrix, whose dimension increases with the number of assets.
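As a concrete illustration of the shrinkage approach discussed above, the Python sketch below compares global minimum-variance weights built from the raw sample covariance matrix with weights built from a Ledoit-Wolf shrinkage estimate. It is a minimal sketch: the simulated return matrix and the use of a minimum-variance (rather than full mean-variance) objective are simplifying assumptions.

```python
# Minimal sketch (simulated data): global minimum-variance weights from a raw sample
# covariance versus a Ledoit-Wolf shrinkage estimate, illustrating why shrinkage helps
# when the number of assets is large relative to the sample size.
import numpy as np
from sklearn.covariance import LedoitWolf

def gmv_weights(cov):
    """Global minimum-variance weights: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

rng = np.random.default_rng(0)
R = rng.normal(size=(120, 50))             # hypothetical returns: 120 months, 50 assets

sample_cov = np.cov(R, rowvar=False)       # standard sample covariance (noisy when N is large)
lw_cov = LedoitWolf().fit(R).covariance_   # shrinkage toward a structured target

w_sample = gmv_weights(sample_cov)
w_shrunk = gmv_weights(lw_cov)             # typically less extreme positions out of sample
```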
Article
Knut Are Aastveit, James Mitchell, Francesco Ravazzolo, and Herman K. van Dijk
Increasingly, professional forecasters and academic researchers in economics present model-based and subjective or judgment-based forecasts that are accompanied by some measure of uncertainty. In its most complete form, this measure is a probability density function for future values of the variable or variables of interest. At the same time, combinations of forecast densities are being used to integrate information coming from multiple sources such as experts, models, and large micro-data sets. Given the increased relevance of forecast density combinations, this article explores their genesis and evolution both inside and outside economics. A fundamental density combination equation is specified, which shows that various frequentist as well as Bayesian approaches give different specific contents to this density. In its simplest case, it is a restricted finite mixture, giving fixed equal weights to the various individual densities. The specification of the fundamental density combination equation has been made more flexible in recent literature. It has evolved from using simple average weights to optimized weights to “richer” procedures that allow for time variation, learning features, and model incompleteness. The recent history and evolution of forecast density combination methods, together with their potential and benefits, are illustrated in the policymaking environment of central banks.
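A stylized version of the fundamental density combination equation described above can be written as follows; the notation here is generic, and the equal-weight case is only the simplest special case.

```latex
% Stylized density combination (generic notation):
\begin{equation}
  p(y_{t+1}\mid I_t) \;=\; \sum_{i=1}^{N} w_{i,t}\, p_i(y_{t+1}\mid I_t),
  \qquad w_{i,t}\ge 0,\quad \sum_{i=1}^{N} w_{i,t}=1,
\end{equation}
% where p_i(. | I_t) is the predictive density from source i (a model, expert, or survey)
% conditional on the information set I_t. The simplest case fixes w_{i,t} = 1/N, giving a
% restricted finite mixture with equal weights; richer schemes let the weights be optimized,
% vary over time, or be learned, and can add a separate component to capture model
% incompleteness.
```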
Article
Craig Burnside
The uncovered interest parity (UIP) condition states that the interest rate differential between two currencies is the expected rate of change of their exchange rate. Empirically, however, in the 1976–2018 period, exchange rate changes were approximately unpredictable over short horizons, with a slight tendency for currencies with higher interest rates to appreciate against currencies with lower interest rates. If the UIP condition held exactly, carry trades, in which investors borrow low-interest-rate currencies and lend high-interest-rate currencies, would earn zero average profits. The violation of UIP is therefore a necessary condition for explaining the significantly positive profits that carry trades earned in the 1976–2018 period. A large literature has documented the failure of UIP, as well as the profitability of carry trades, and is surveyed here. Additionally, summary evidence is provided here for the G10 currencies. This evidence shows that carry trades have been significantly less profitable since 2007–2008 and that there was an apparent structural break in exchange rate predictability around the same time.
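For reference, a stylized statement of the UIP condition and of the carry-trade payoff it rules out is given below; the notation is generic, with the log exchange rate expressed as the price of foreign currency in units of domestic currency.

```latex
% Stylized UIP condition and carry-trade payoff (generic notation; s_t is the log price of
% foreign currency in domestic units, i_t and i_t^* are the domestic and foreign rates):
\begin{align}
  i_t - i_t^{*} &= \mathbb{E}_t\!\left[s_{t+1} - s_t\right]
      && \text{(uncovered interest parity)} \\
  z_{t+1} &= \left(i_t^{*} - i_t\right) + \left(s_{t+1} - s_t\right)
      && \text{(excess payoff to borrowing domestic, lending foreign)}
\end{align}
% Under UIP, E_t[z_{t+1}] = 0, so a carry trade that borrows the low-interest-rate currency
% and lends the high-interest-rate currency earns zero expected profit; the documented
% profitability of carry trades therefore requires a violation of UIP.
```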
A large theoretical literature explores economic explanations of this phenomenon and is briefly surveyed here. Prominent among the theoretical models are ones based on risk aversion, peso problems, rare disasters, biases in investor expectations, information frictions, incomplete financial markets, and financial market segmentation.
Article
William Quinn and John Turner
Financial bubbles constitute some of history’s most significant economic events, but academic research into the phenomenon has often been narrow, with an excessive focus on whether bubble episodes invalidate or confirm the efficient markets hypothesis. The literature on the topic has also been somewhat siloed, with theoretical, experimental, qualitative, and quantitative methods used to develop relatively discrete bodies of research.
In order to overcome these deficiencies, future research needs to move beyond the rational/irrational dichotomy and holistically examine the causes and consequences of bubbles. Future research in financial bubbles should thus use a wider range of investigative tools to answer key questions or attempt to synthesize the findings of multiple research programs.
There are three areas in particular that future research should focus on: the role of information in a bubble, the aftermath of bubbles, and possible regulatory responses. While bubbles are sometimes seen as an inevitable part of capitalism, there have been long historical eras in which they were extremely rare, and these eras are likely to contain lessons for alleviating the negative effects of bubbles in the 21st century. Finally, the literature on bubbles has tended to neglect certain regions, and future research should hunt for undiscovered episodes outside of Europe and North America.
Article
Rajesh P. Narayanan and Jonathan Pritchett
Financial economics reveals that slaves were profitable investments and that the rate of return from owning slaves was at least as high as the return on comparable investments. The profitability of slavery depended on both the productivity and the market valuation of slaves. Owners increased the productivity of slaves by developing better strains of cotton, employing more efficient systems of production (gang labor), and using force and coercion (whippings). Efficient markets facilitated the interregional transfer of labor, and selective sales devastated slave families. Market studies show that slave prices reflected the capitalized value of labor and that they varied based on labor productivity. The profitability of slaves and the availability of efficient markets made slaves attractive investment vehicles for storing wealth. Their attractiveness as investments, however, may have had some other costs. Several studies argue and provide evidence that investment in slaves supplanted investment in other forms of physical and human capital, much to the detriment of southern industrialization and development. Besides serving as investment vehicles, slaves also facilitated financing. A growing body of work provides evidence that slaves were pledged as collateral to obtain credit.