1-20 of 23 Results for: Financial Economics

Article

Henrik Cronqvist and Désirée-Jessica Pély

Corporate finance is about understanding the determinants and consequences of the investment and financing policies of corporations. In a standard neoclassical profit maximization framework, rational agents, that is, managers, make corporate finance decisions on behalf of rational principals, that is, shareholders. Over the past two decades, there has been a rapidly growing interest in augmenting standard finance frameworks with novel insights from cognitive psychology, and more recently, social psychology and sociology. This emerging subfield in finance research has been dubbed behavioral corporate finance, which differentiates between rational and behavioral agents and principals. The presence of behavioral shareholders, that is, principals, may lead to market timing and catering behavior by rational managers. Such managers will opportunistically time the market and exploit mispricing by investing capital, issuing securities, or borrowing when costs of capital are low, and by shunning equity, divesting assets, repurchasing securities, and paying back debt when costs of capital are high. Rational managers will also incite mispricing, for example, by catering to non-standard preferences of shareholders through earnings management or by transitioning their firms into an in-fashion category to boost the stock’s price. The interaction of behavioral managers, that is, agents, with rational shareholders can also lead to distortions in corporate decision making. For example, managers may perceive fundamental values differently and systematically diverge from optimal decisions. Several personal traits, for example, overconfidence or narcissism, and environmental factors, for example, fatal natural disasters, shape behavioral managers’ preferences and beliefs, in the short or long term. These factors may bias managers’ perception of value and thus lead to inferior decision making. An extension of behavioral corporate finance is social corporate finance, where agents and principals do not make decisions in a vacuum but rather are embedded in a dynamic social environment. Since managers and shareholders take a social position within and across markets, social psychology and sociology can be useful for understanding how social traits, states, and activities shape corporate decision making if an individual’s psychology is not directly observable.

Article

Marius Guenzel and Ulrike Malmendier

One of the fastest-growing areas of finance research is the study of managerial biases and their implications for firm outcomes. Since the mid-2000s, this strand of behavioral corporate finance has provided theoretical and empirical evidence on the influence of biases in the corporate realm, such as overconfidence, experience effects, and the sunk-cost fallacy. The field has been a leading force in dismantling the argument that traditional economic mechanisms—selection, learning, and market discipline—would suffice to uphold the rational-manager paradigm. Instead, the evidence reveals that behavioral forces exert a significant influence at every stage of a chief executive officer’s (CEO’s) career. First, at the appointment stage, selection does not impede the promotion of behavioral managers. Instead, competitive environments oftentimes promote their advancement, even under value-maximizing selection mechanisms. Second, while at the helm of the company, learning opportunities are limited, since many managerial decisions occur at low frequency, and their causal effects are clouded by self-attribution bias and difficult to disentangle from those of concurrent events. Third, at the dismissal stage, market discipline does not ensure the firing of biased decision-makers as board members themselves are subject to biases in their evaluation of CEOs. By documenting how biases affect even the most educated and influential decision-makers, such as CEOs, the field has generated important insights into the hard-wiring of biases. Biases do not simply stem from a lack of education, nor are they restricted to low-ability agents. Instead, biases are significant elements of human decision-making at the highest levels of organizations. An important question for future research is how to limit, in each CEO career phase, the adverse effects of managerial biases. Potential approaches include refining selection mechanisms, designing and implementing corporate repairs, and reshaping corporate governance to account not only for incentive misalignments, but also for biased decision-making.

Article

Mahendrarajah Nimalendran and Giovanni Petrella

The most important friction studied in the microstructure literature is the adverse selection borne by liquidity providers when facing traders who are better informed, and the bid-ask spread quoted by market makers is one of the most extensively studied manifestations of this friction in securities markets. In the early 1980s, the transparency of U.S. stock markets was limited to post-trade, end-of-day transaction prices, and there were no easily available market quotes for researchers and market participants to study the effects of the bid-ask spread on the liquidity and quality of markets. This led to models that used the autocovariance of daily transaction prices to estimate the bid-ask spread. In the early 1990s, the U.S. stock markets (NYSE/AMEX/NASDAQ) provided pre-trade quotes and transaction sizes for researchers and market participants. The increased transparency and access to quotes and trades led to the development of theoretical models and empirical methods to decompose the bid-ask spread into its components: adverse selection, inventory, and order processing. These models and methods can be broadly classified into those that use the serial covariance properties of quotes and transaction prices, and others that use a trade direction indicator and a regression approach to decompose the bid-ask spread. Covariance and trade indicator models are equivalent in structural form, but they differ in how their parameters are estimated (their reduced form). The basic microstructure model is composed of two equations; the first defines the law of motion of the “true” price, while the second defines the process generating transaction prices. From these two equations, an appropriate relation for transaction price changes is derived in terms of observed variables. A crucial point that differentiates the two approaches is the assumption made for estimation purposes about the behavior of order arrivals, that is, the probability of order reversal or continuation. Thus, the most general specifications include an additional parameter that accounts for order behavior. The article provides a unified framework to compare the different models with respect to the restrictions that are imposed, and how this affects the relative proportions of the different components of the spread.
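
As a minimal sketch of the two-equation structure just described (illustrative notation, assuming the simplest Roll-type case rather than the more general decompositions), let $m_t$ denote the unobserved “true” price, $p_t$ the transaction price, $q_t \in \{+1, -1\}$ the trade direction (buyer- or seller-initiated), and $S$ the spread:

\[
m_t = m_{t-1} + \varepsilon_t, \qquad p_t = m_t + \tfrac{S}{2}\, q_t
\quad\Longrightarrow\quad
\Delta p_t = \tfrac{S}{2}\,(q_t - q_{t-1}) + \varepsilon_t .
\]

If trade direction is serially uncorrelated and independent of public information $\varepsilon_t$, then $\operatorname{Cov}(\Delta p_t, \Delta p_{t-1}) = -S^2/4$, which motivates the covariance estimator $S = 2\sqrt{-\operatorname{Cov}(\Delta p_t, \Delta p_{t-1})}$. Trade indicator approaches instead regress $\Delta p_t$ on the observed $q_t$, and the richer decompositions split $S$ into adverse selection, inventory, and order processing components and add a parameter for the probability of order reversal or continuation.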

Article

Florian Exler and Michèle Tertilt

Consumer debt is an important means for consumption smoothing. In the United States, 70% of households own a credit card, and 40% borrow on it. When borrowers cannot (or do not want to) repay their debts, they can declare bankruptcy, which provides additional insurance in tough times. Since the 2000s, up to 1.5% of households have declared bankruptcy per year. Clearly, the option to default affects borrowing interest rates in equilibrium. Consequently, when assessing (welfare) consequences of different bankruptcy regimes or providing policy recommendations, structural models with equilibrium default and endogenous interest rates are needed. At the same time, many questions are quantitative in nature: the benefits of a certain bankruptcy regime critically depend on the nature and amount of risk that households bear. Hence, models for normative or positive analysis should quantitatively match some important data moments. Four important empirical patterns are identified: First, since 1950, consumer debt has risen steadily, and it amounted to 25% of disposable income by 2016. Defaults have risen since the 1980s. Interestingly, interest rates remained roughly constant over the same time period. Second, borrowing and default clearly depend on age: both measures exhibit a distinct hump, peaking around 50 years of age. Third, ownership of credit cards and borrowing clearly depend on income: high-income households are more likely to own a credit card and to use it for borrowing. However, this pattern was stronger in the 1980s than in the 2010s. Finally, interest rates became more dispersed over time: the number of observed interest rates more than quadrupled between 1983 and 2016. These data have clear implications for theory: First, considering the importance of age, life cycle models seem most appropriate when modeling consumer debt and default. Second, bankruptcy must be costly to support any debt in equilibrium. While many types of costs are theoretically possible, only partial repayment requirements are able to quantitatively match the data on filings, debt levels, and interest rates simultaneously. Third, to account for the long-run trends in debts, defaults, and interest rates, several quantitative theory models identify a credit expansion along the intensive and extensive margins as the most likely source. This expansion is a consequence of technological advancements. Many of the quantitative macroeconomic models in this literature assess welfare effects of proposed reforms or of granting bankruptcy at all. These welfare consequences critically hinge on the types of risk that households face—because households incur unforeseen expenditures, not-too-stringent bankruptcy laws are typically found to be welfare superior both to banning bankruptcy (or making it extremely costly) and to extremely lax bankruptcy rules. There are very promising opportunities for future research related to consumer debt and default. Newly available data in the United States and internationally, more powerful computational resources allowing for more complex modeling of household balance sheets, and new loan products are just some of many promising avenues.

Article

Miles Livingston and Lei Zhou

Credit rating agencies have developed as an information intermediary in the credit market because there are very large numbers of bonds outstanding with many different features. The Securities Industry and Financial Markets Association reports over $20 trillion of corporate bonds, mortgaged-backed securities, and asset-backed securities in the United States. The vast size of the bond markets, the number of different bond issues, and the complexity of these securities result in a massive amount of information for potential investors to evaluate. The magnitude of the information creates the need for independent companies to provide objective evaluations of the ability of bond issuers to pay their contractually binding obligations. The result is credit rating agencies (CRAs), private companies that monitor debt securities/issuers and provide information to investors about the potential default risk of individual bond issues and issuing firms. Rating agencies provide ratings for many types of debt instruments including corporate bonds, debt instruments backed by assets such as mortgages (mortgage-backed securities), short-term debt of corporations, municipal government debt, and debt issued by central governments (sovereign bonds). The three largest rating agencies are Moody’s, Standard & Poor’s, and Fitch. These agencies provide ratings that are indicators of the relative probability of default. Bonds with the highest rating of AAA have very low probabilities of default and consequently the yields on these bonds are relatively low. As the ratings decline, the probability of default increases and the bond yields increase. Ratings are important to institutional investors such as insurance companies, pension funds, and mutual funds. These large investors are often restricted to purchasing exclusively or primarily bonds in the highest rating categories. Consequently, the highest ratings are usually called investment grade. The lower ratings are usually designated as high-yield or “junk bonds.” There is a controversy about the possibility of inflated ratings. Since issuers pay rating agencies for providing ratings, there may be an incentive for the rating agencies to provide inflated ratings in exchange for fees. In the U.S. corporate bond market, at least two and often three agencies provide ratings. Multiple ratings make it difficult for one rating agency to provide inflated ratings. Rating agencies are regulated by the Securities and Exchange Commission to ensure that agencies follow reasonable procedures.

Article

The global financial crisis of 2007–2009 helped usher in a stronger consensus about the central role that housing plays in shaping economic activity, particularly during large boom and bust episodes. The latest research regards the causes, consequences, and policy implications of housing crises with a broad focus that includes empirical and structural analysis, insights from the 2000s experience in the United States, and perspectives from around the globe. Even with the significant degree of heterogeneity in legal environments, institutions, and economic fundamentals over time and across countries, several common themes emerge. Research indicates that fundamentals such as productivity, income, and demographics play an important role in generating sustained movements in house prices. While these forces can also contribute to boom-bust episodes, periods of large house price swings often reflect an evolving housing premium caused by financial innovation and shifts in expectations, which are in turn amplified by changes to the liquidity of homes. Regarding credit, the latest evidence indicates that expansions in lending to marginal borrowers via the subprime market may not be entirely to blame for the run-up in mortgage debt and prices that preceded the 2007–2009 financial crisis. Instead, the expansion in credit manifested by lower mortgage rates was broad-based and caused borrowers across a wide range of incomes and credit scores to dramatically increase their mortgage debt. To whatever extent changing beliefs about future housing appreciation may have contributed to higher realized house price growth in the 2000s, it appears that neither borrowers nor lenders anticipated the subsequent collapse in house prices. However, expectations about future credit conditions—including the prospect of rising interest rates—may have contributed to the downturn. For macroeconomists and those otherwise interested in the broader economic implications of the housing market, a growing body of evidence combining micro data and structural modeling finds that large swings in house prices can produce large disruptions to consumption, the labor market, and output. Central to this transmission is the composition of household balance sheets—not just the amount of net worth, but also how that net worth is allocated between short-term liquid assets, illiquid housing wealth, and long-term defaultable mortgage debt. By shaping the incentive to default, foreclosure laws have a profound ex-ante effect on the supply of credit as well as on the ex-post economic response to large shocks that affect households’ degree of financial distress. On the policy front, research finds mixed results for some of the crisis-related interventions implemented in the U.S., while providing guidance for future measures should another housing bust of similar or greater magnitude recur. Lessons are also provided for the development of macroprudential policy aimed at preventing such a future crisis without unduly constraining economic performance in good times.

Article

Chao Gu, Han Han, and Randall Wright

The effects of news (i.e., information innovations) are studied in dynamic general equilibrium models where liquidity matters. As a leading example, news can be announcements about monetary policy directions. In three standard theoretical environments—an overlapping generations model of fiat currency, a new monetarist model accommodating multiple payment methods, and a model of unsecured credit—transition paths are constructed between an announcement and the date at which events are realized. Although the economics is different, in each case, news about monetary policy can induce volatility in financial and other markets, with transitions displaying booms, crashes, and cycles in prices, quantities, and welfare. This is not the same as volatility based on self-fulfilling prophecies (e.g., cyclic or sunspot equilibria) studied elsewhere. Instead, the focus is on the unique equilibrium that is stationary when parameters are constant but still delivers complicated dynamics in simple environments due to information and liquidity effects. This is true even for classically neutral policy changes. The induced volatility can be bad or good for welfare, but using policy to exploit this in practice seems difficult because outcomes are very sensitive to timing and parameters. The approach can be extended to include news about real factors, as the examples illustrate.

Article

Knut Are Aastveit, James Mitchell, Francesco Ravazzolo, and Herman K. van Dijk

Increasingly, professional forecasters and academic researchers in economics present model-based and subjective or judgment-based forecasts that are accompanied by some measure of uncertainty. In its most complete form this measure is a probability density function for future values of the variable or variables of interest. At the same time, combinations of forecast densities are being used in order to integrate information coming from multiple sources such as experts, models, and large micro-data sets. Given the increased relevance of forecast density combinations, this article explores their genesis and evolution both inside and outside economics. A fundamental density combination equation is specified, which shows that various frequentist as well as Bayesian approaches give different specific contents to this density. In its simplest case, it is a restricted finite mixture, giving fixed equal weights to the various individual densities. The specification of the fundamental density combination equation has been made more flexible in recent literature. It has evolved from using simple average weights to optimized weights to “richer” procedures that allow for time variation, learning features, and model incompleteness. The recent history and evolution of forecast density combination methods, together with their potential and benefits, are illustrated in the policymaking environment of central banks.
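
As an illustrative rendering of the fundamental density combination equation in its simplest restricted form (generic notation, not taken from the article), a combined density for a variable of interest $y_{T+1}$, built from $N$ individual forecast densities $p_i(y_{T+1} \mid I_T)$ conditional on an information set $I_T$, is the finite mixture

\[
p(y_{T+1} \mid I_T) = \sum_{i=1}^{N} w_i \, p_i(y_{T+1} \mid I_T), \qquad w_i \ge 0, \quad \sum_{i=1}^{N} w_i = 1,
\]

with fixed equal weights $w_i = 1/N$ in the simplest case. The more flexible procedures described above replace the fixed weights with optimized, time-varying, or learned weights $w_{i,t}$, and allow for model incompleteness when none of the individual densities is assumed to be correctly specified.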

Article

The uncovered interest parity (UIP) condition states that the interest rate differential between two currencies is the expected rate of change of their exchange rate. Empirically, however, in the 1976–2018 period, exchange rate changes were approximately unpredictable over short horizons, with a slight tendency for currencies with higher interest rates to appreciate against currencies with lower interest rates. If the UIP condition held exactly, carry trades, in which investors borrow low interest rate currencies and lend high interest rate currencies, would earn zero average profits. The fact that UIP is violated, therefore, is a necessary condition to explain the fact that carry trades earned significantly positive profits in the 1976–2018 period. A large literature has documented the failure of UIP, as well as the profitability of carry trades, and is surveyed here. Additionally, summary evidence is provided here for the G10 currencies. This evidence shows that carry trades have been significantly less profitable since 2007–2008, and that there was an apparent structural break in exchange rate predictability around the same time. A large theoretical literature explores economic explanations of this phenomenon and is briefly surveyed here. Prominent among the theoretical models are ones based on risk aversion, peso problems, rare disasters, biases in investor expectations, information frictions, incomplete financial markets, and financial market segmentation.
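
In standard notation (illustrative rather than taken from the article), with $s_t$ the log exchange rate quoted as the domestic price of foreign currency and $i_t$ and $i_t^{*}$ the one-period domestic and foreign interest rates, the UIP condition is

\[
i_t - i_t^{*} = E_t\left[s_{t+1} - s_t\right].
\]

The log excess return to lending in the domestic currency funded by borrowing in the foreign currency is approximately $(i_t - i_t^{*}) - (s_{t+1} - s_t)$, which has zero expectation under UIP, so a carry trade that takes this position whenever $i_t > i_t^{*}$ earns zero expected profit. The finding that exchange rate changes are nearly unpredictable at short horizons implies instead that the expected excess return is close to the interest differential itself, consistent with the positive carry trade profits documented for 1976–2018.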

Article

African financial history is often neglected in research on the history of global financial systems, and in its turn research on African financial systems in the past often fails to explore links with the rest of the world. However, African economies and financial systems have been linked to the rest of the world since ancient times. Sub-Saharan Africa was a key supplier of gold used to underpin the monetary systems of Europe and the North from the medieval period through the 19th century. It was West African gold rather than slaves that first brought Europeans to the Atlantic coast of Africa during the early modern period. Within sub-Saharan Africa, currency and credit systems reflected both internal economic and political structures as well as international links. Before the colonial period, indigenous currencies were often tied to particular trades or trade routes. These systems did not immediately cease to exist with the introduction of territorial currencies by colonial governments. Rather, both systems coexisted, often leading to shocks and localized crises during periods of global financial uncertainty. At independence, African governments had to contend with a legacy of financial underdevelopment left from the colonial period. Their efforts to address this have, however, been shaped by global economic trends. Despite recent expansion and innovation, limited financial development remains a hindrance to economic growth.

Article

Maria Soledad Martinez Peria and Mu Yang Shin

The link between financial inclusion and human development is examined here. Using cross-country data, the behavior of variables that try to capture these concepts is examined and preliminary evidence of a positive association is offered. However, because establishing a causal relationship with macro-data is difficult, a thorough review of the literature on the impact of financial inclusion, focusing on micro-studies that can better address identification, is conducted. The literature generally distinguishes between different dimensions of financial inclusion: access to credit, access to bank branches, and access to saving instruments (i.e., accounts). Despite promising results from a first wave of studies, the impact of expanding access to credit seems limited at best, with little evidence of transformative effects on human development outcomes. While there is more promising evidence on the impact of expanding access to bank branches and formal saving instruments, studies show that some interventions, such as one-time account-opening subsidies, are unlikely to have a sizable impact on social and economic outcomes. Instead, well-designed interventions catering to individuals’ specific needs in different contexts seem to be required to realize the full potential of formal financial services to enrich human lives.

Article

The indeterminacy school in macroeconomics exploits the fact that macroeconomic models often display multiple equilibria to understand real-world phenomena. There are two distinct phases in the evolution of its history. The first phase began as a research agenda at the University of Pennsylvania in the United States and at CEPREMAP in Paris in the early 1980s. This phase used models of dynamic indeterminacy to explain how shocks to beliefs can temporarily influence economic outcomes. The second phase was developed at the University of California Los Angeles in the 2000s. This phase used models of incomplete factor markets to explain how shocks to beliefs can permanently influence economic outcomes. The first phase of the indeterminacy school has been used to explain volatility in financial markets. The second phase of the indeterminacy school has been used to explain periods of high persistent unemployment. The two phases of the indeterminacy school provide a microeconomic foundation for Keynes’ general theory that does not rely on the assumption that prices and wages are sticky.

Article

Insider trading is not widely understood. Insiders of corporations can, in fact, buy and sell shares of those corporations. But, over time, Congress, the courts, and the Securities and Exchange Commission (SEC) have imposed significant limits on such trading. The limits are not always clearly marked, and the principles underlying them are not always consistent. The core principle is that it is illegal to trade if one is in possession of material, nonpublic information. But the rationality of this principle has been challenged by successive generations of law and economics scholars, most notably Manne, Easterbrook, Epstein, and Bainbridge. Their “economic” analysis of this contested area of the law provides, arguably, at least a more consistent basis upon which to decide when trades by insiders should, in fact, be disallowed. A return to genuine “first principles” generated by the nature of capitalism, however, allows for more powerful insights into the phenomenon and could lead to more effective regulation.

Article

The links of international reserves, exchange rates, and monetary policy can be understood through the lens of a modern incarnation of the “impossible trinity” (aka the “trilemma”), based on Mundell and Fleming’s hypothesis that a country may simultaneously choose any two, but not all, of the following three policy goals: monetary independence, exchange rate stability, and financial integration. The original economic trilemma was framed in the 1960s, during the Bretton Woods regime, as a binary choice of two out of the possible three policy goals. However, in the 1990s and 2000s, emerging markets and developing countries found that deeper financial integration comes with growing exposure to financial instability and the increased risk of “sudden stop” of capital inflows and capital flight crises. These crises have been characterized by exchange rate instability triggered by countries’ balance sheet exposure to external hard currency debt—exposures that have propagated banking instabilities and crises. Such events have frequently morphed into deep internal and external debt crises, ending with bailouts of systemic banks and powerful macro players. The resultant domestic debt overhang led to fiscal dominance and a reduction of the scope of monetary policy. With varying lags, these crises induced economic and political changes, in which a growing share of emerging markets and developing countries converged to “in-between” regimes in the trilemma middle range—that is, managed exchange rate flexibility, controlled financial integration, and limited but viable monetary autonomy. Emerging research has validated a modern version of the trilemma: that is, countries face a continuous trilemma trade-off in which a higher trilemma policy goal is “traded off” with a drop in the weighted average of the other two trilemma policy goals. The concerns associated with exposure to financial instability have been addressed by varying configurations of managing public buffers (international reserves, sovereign wealth funds), as well as growing application of macro-prudential measures aimed at inducing systemic players to internalize the impact of their balance sheet exposure on a country’s financial stability. Consequently, the original trilemma has morphed into a quadrilemma, wherein financial stability has been added to the trilemma’s original policy goals. Size does matter, and there is no way for smaller countries to insulate themselves fully from exposure to global cycles and shocks. Yet successful navigation of the open-economy quadrilemma helps in reducing the transmission of external shock to the domestic economy, as well as the costs of domestic shocks. These observations explain the relative resilience of emerging markets—especially in countries with more mature institutions—as they have been buffered by deeper precautionary management of reserves, and greater fiscal and monetary space. We close the discussion noting that the global financial crisis, and the subsequent Eurozone crisis, have shown that no country is immune from exposure to financial instability and from the modern quadrilemma. However, countries with mature institutions, deeper fiscal capabilities, and more fiscal space may substitute the reliance on costly precautionary buffers with bilateral swap lines coordinated among their central banks. While the benefits of such arrangements are clear, they may hinge on the presence and credibility of their fiscal backstop mechanisms, and on curbing the resultant moral hazard. 
Time will test this credibility, and the degree to which risk-pooling arrangements can be extended to cover the growing share of emerging markets and developing countries.
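
A sketch of how the continuous trade-off described above is commonly formalized in the empirical trilemma literature (the linear specification follows the Aizenman–Chinn–Ito approach; treating it as the form used here is an assumption rather than a quotation from the article): with $MI_{it}$, $ERS_{it}$, and $KAOPEN_{it}$ denoting normalized indexes of monetary independence, exchange rate stability, and financial openness for country $i$ in year $t$, the data are found to be well approximated by

\[
a \, MI_{it} + b \, ERS_{it} + c \, KAOPEN_{it} \approx 1,
\]

with positive weights $a$, $b$, and $c$, so that raising any one policy goal must be paid for by a drop in a weighted average of the other two.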

Article

Peter Robinson

Long memory models are statistical models that describe strong correlation or dependence across time series data. This kind of phenomenon is often referred to as “long memory” or “long-range dependence.” It refers to persistent correlation between distant observations in a time series. For scalar time series observed at equal intervals of time that are covariance stationary, so that the mean, variance, and autocovariances (between observations separated by a lag j) do not vary over time, it typically implies that the autocovariances decay so slowly, as j increases, as not to be absolutely summable. However, it can also refer to certain nonstationary time series, including ones with an autoregressive unit root, that exhibit even stronger correlation at long lags. Evidence of long memory has often been found in economic and financial time series, where the noted extension to possible nonstationarity can cover many macroeconomic time series, as well as in such fields as astronomy, agriculture, geophysics, and chemistry. As long memory is now a technically well-developed topic, formal definitions are needed. But by way of partial motivation, long memory models can be thought of as complementary to the very well-known and widely applied stationary and invertible autoregressive and moving average (ARMA) models, whose autocovariances are not only summable but decay exponentially fast as a function of lag j. Such models are often referred to as “short memory” models, because there is negligible correlation across distant time intervals. These models are often combined with the most basic long memory ones, however, because together they offer the ability to describe both short and long memory features in many time series.
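
As an illustrative formalization (generic notation, not necessarily that of the article): for a covariance stationary series with autocovariance $\gamma_j$ at lag $j$, long memory is commonly defined by non-summable autocovariances,

\[
\sum_{j=0}^{\infty} |\gamma_j| = \infty, \qquad \text{typically with } \gamma_j \sim C\, j^{\,2d-1} \text{ as } j \to \infty, \quad 0 < d < \tfrac{1}{2},
\]

where $d$ is the memory parameter. Stationary and invertible ARMA (“short memory”) processes instead satisfy $|\gamma_j| \le K \rho^{\,j}$ for some $0 < \rho < 1$, so their autocovariances are summable and decay exponentially; fractionally integrated specifications such as ARFIMA$(p,d,q)$ are one standard way of combining the two to capture short and long memory features jointly.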

Article

Taxation and public spending are key policy levers the state has at its disposal to change the distribution of income determined both by market forces and institutions and the prevailing distribution of wealth and property. One of the most commonly used methods to measure the distributional impact of a country’s taxes and public spending is fiscal incidence analysis. Rooted in the field of public finance, fiscal incidence analysis is designed to measure who bears the burden of taxes and who receives the benefits of government spending, and who are the gainers and losers of particular tax reforms or changes to welfare programs. Fiscal incidence analysis can be used to assess the redistributive impact of a fiscal system as a whole or changes to specific fiscal instruments. In particular, fiscal incidence analysis is used to address the following questions: Who bears the burden of taxation and who receives the benefits of public spending? How much income redistribution is being accomplished through taxation and public spending? What is the impact of taxation and public spending on poverty and the poor? How equalizing are specific taxes and government welfare programs? How progressive is spending on education and health? How effective are taxes and government spending in reducing inequality and poverty? Who are the losers and winners of tax and welfare program reforms? A sample of key indicators meant to address these questions is discussed here. Real-time analysis of winners and losers plays an important role in shaping the policy debate in a number of countries. In practice, fiscal incidence analysis is the method utilized to allocate taxes and public spending to households so that one can compare incomes before taxes and transfers with incomes after them. Standard fiscal incidence analysis just looks at what is paid and what is received without assessing the behavioral responses that taxes and public spending may trigger in individuals or households. This is often referred to as the “accounting approach.” Although the theory is quite straightforward, its application can be fraught with complications. The salient ones are discussed here. While ignoring behavioral responses and general equilibrium effects is a limitation of the accounting approach, the effects calculated with this method are considered a reasonable approximation of the short-run welfare impact. Fiscal incidence analysis, however, can be designed to include behavioral responses as well as general equilibrium and intertemporal effects. This article focuses on the implementation of fiscal incidence analysis using the accounting approach.
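
A minimal sketch of the accounting approach described above (the income concepts follow common practice in this literature, for example the CEQ framework; the exact definitions used in any given study may differ): starting from market income $Y^{m}$, each tax and transfer is allocated to households to build successive income concepts,

\[
Y^{d} = Y^{m} - \text{direct taxes} + \text{cash transfers}, \qquad
Y^{c} = Y^{d} - \text{indirect taxes} + \text{subsidies}, \qquad
Y^{f} = Y^{c} + \text{in-kind education and health benefits},
\]

and inequality and poverty indicators computed for disposable income $Y^{d}$, consumable income $Y^{c}$, and final income $Y^{f}$ are compared with those for $Y^{m}$ to measure how much redistribution and poverty reduction the fiscal system accomplishes.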

Article

Syed Abdul Hamid

Health microinsurance (HMI) has been used around the globe since the early 1990s for financial risk protection against health shocks in poverty-stricken rural populations in low-income countries. However, there is much debate in the literature on its impact on financial risk protection. There is also no clear answer to the critical policy question about whether HMI is a viable route to provide healthcare to the people of the informal economy, especially in rural areas. Findings show that HMI schemes are concentrated in low-income countries, especially in South Asia (about 43%) and East Africa (about 25.4%). India accounts for 30% of HMI schemes. Bangladesh and Kenya also have a substantial number of schemes. There is some evidence that HMI increases access to healthcare or utilization of healthcare. One strand of the literature shows that HMI provides financial protection against the costs of illness to its enrollees by reducing out-of-pocket payments and/or catastrophic spending. On the contrary, a large body of literature with strong methodological rigor shows that HMI fails to provide financial protection against health shocks to its clients. Some of the studies in the latter group even find that HMI contributes to a decline in financial risk protection. These findings seem logical, as copayments are high and a continuum of care is lacking in most cases. The findings also show that scale and dependence on subsidy are the major concerns. Low enrollment and low renewal are common concerns of the voluntary HMI schemes in South Asian countries. In addition, the declining trend of donor subsidies makes the HMI schemes supported by external donors more vulnerable. These challenges and constraints restrict the scale and profitability of HMI initiatives, especially those that are voluntary. Consequently, the existing organizations may cease HMI activities. Overall, although HMI can increase access to healthcare, it fails to provide financial risk protection against health shocks. The existing HMI practices in South Asia, especially in the HMIs owned by nongovernmental organizations and microfinance institutions, are not a viable route to provide healthcare to the rural population of the informal economy. However, HMI schemes may play some supportive role in the implementation of a nationalized scheme, if there is one. There is also concern about the institutional viability of HMI organizations (e.g., ownership and management efficiency). Future research may address this issue.

Article

Martin D. D. Evans and Dagfinn Rime

An overview of research on the microstructure of foreign exchange (FX) markets is presented. We begin by summarizing the institutional features of FX trading and describe how they have evolved since the 1980s. We then explain how these features are represented in microstructure models of FX trading. Next, we describe the links between microstructure and traditional macro exchange-rate models and summarize how these links have been explored in recent empirical research. Finally, we provide a microstructure perspective on two recent areas of interest in exchange-rate economics: the behavior of returns on currency portfolios, and questions of competition and regulation.

Article

Chao Gu, Han Han, and Randall Wright

This article provides an introduction to New Monetarist Economics. This branch of macro and monetary theory emphasizes imperfect commitment, information problems, and sometimes spatial separation (which may be endogenous) as the key frictions from which institutions like monetary exchange or financial intermediation arise endogenously. We present three generations of models in the development of New Monetarism. The first model studies an environment in which agents meet bilaterally and lack commitment, which allows money to be valued endogenously as a means of payment. In this setup, both goods and money are indivisible to keep things tractable. Second-generation models relax the assumption of indivisible goods and use bargaining theory (or related mechanisms) to endogenize prices. Variations of these models are applied to financial asset markets and intermediation. Assets and goods are both divisible in third-generation models, which makes them better suited to policy analysis and empirical work. This framework can also be used to help understand financial markets and liquidity.

Article

The literature on optimum currency areas differs from that on other topics in economic theory in a number of notable respects. Most obviously, the theory is framed in verbal rather than mathematical terms. Mundell’s seminal article coining the term and setting out the theory’s basic propositions relied entirely on words rather than equations. The same was true of subsequent contributions focusing on the sectoral composition of activity and the role of fiscal flows. A handful of more recent articles specified and analyzed formal mathematical models of optimum currency areas. But it is safe to say that none of these has “taken off” in the sense of becoming the workhorse framework on which subsequent scholarship builds. The theoretical literature remains heavily qualitative and narrative compared to other areas of economic theory. While Mundell, McKinnon, Kenen, and the other founding fathers of optimum-currency-area theory provided powerful intuition, attempts to further formalize that intuition evidently contributed less to advances in economic understanding than has been the case for other theoretical literatures. Second, recent contributions to the literature on optimum currency areas are motivated to an unusual extent by a particular case, namely Europe’s monetary union. This was true already in the 1990s, when the EU’s unprecedented decision to proceed with the creation of the euro highlighted the question of whether Europe was an optimum currency area and, if not, how it might become one. That tendency was reinforced when Europe then descended into crisis starting in 2009. With only slight exaggeration it can be said that the literature on optimum currency areas became almost entirely a literature on Europe and on that continent’s failure to satisfy the relevant criteria. Third, the literature on optimum currency areas remains the product of its age. When the founders wrote, in the 1960s, banks were more strictly regulated, and financial markets were less internationalized than subsequently. Consequently, the connections between monetary integration and financial integration—whether monetary union requires banking union, as the point is now put—were neglected in the earlier literature. The role of cross-border financial flows as a destabilizing mechanism within a currency area did not receive the attention it deserved. Because much of that earlier literature was framed in a North American context—the question was whether the United States or Canada was an optimum currency area—and because it was asked by a trio of scholars, two of whom hailed from Canada and one of whom hailed from the United States, the challenges of reconciling monetary integration with political nationalism and the question of whether monetary union requires political union were similarly underplayed. Given the euro area’s descent into crisis, a number of analysts have asked why economists didn’t sound louder warnings in advance. The answer is that their outlooks were shaped by a literature that developed in an earlier era when the risks and context were different.