Chao Gu, Han Han, and Randall Wright
The effects of news (i.e., information innovations) are studied in dynamic general equilibrium models where liquidity matters. As a leading example, news can be announcements about monetary policy directions. In three standard theoretical environments—an overlapping generations model of fiat currency, a new monetarist model accommodating multiple payment methods, and a model of unsecured credit—transition paths are constructed between an announcement and the date at which the announced events are realized. Although the economics differs across the three cases, in each one news about monetary policy can induce volatility in financial and other markets, with transitions displaying booms, crashes, and cycles in prices, quantities, and welfare. This is not the same as volatility based on self-fulfilling prophecies (e.g., cyclic or sunspot equilibria) studied elsewhere. Instead, the focus is on the unique equilibrium that is stationary when parameters are constant but still delivers complicated dynamics in simple environments due to information and liquidity effects. This is true even for classically neutral policy changes. The induced volatility can be bad or good for welfare, but using policy to exploit this in practice seems difficult because outcomes are very sensitive to timing and parameters. The approach can be extended to include news about real factors, as illustrated with examples.
Knut Are Aastveit, James Mitchell, Francesco Ravazzolo, and Herman K. van Dijk
Increasingly, professional forecasters and academic researchers in economics present model-based and subjective or judgment-based forecasts that are accompanied by some measure of uncertainty. In its most complete form, this measure is a probability density function for future values of the variable or variables of interest. At the same time, combinations of forecast densities are being used to integrate information coming from multiple sources such as experts, models, and large micro-data sets. Given the increased relevance of forecast density combinations, this article explores their genesis and evolution both inside and outside economics. A fundamental density combination equation is specified, which shows that various frequentist as well as Bayesian approaches give different specific contents to this density. In its simplest case, it is a restricted finite mixture, giving fixed equal weights to the various individual densities. The specification of the fundamental density combination equation has been made more flexible in the recent literature. It has evolved from using simple average weights, to optimized weights, to “richer” procedures that allow for time variation, learning features, and model incompleteness. The recent history and evolution of forecast density combination methods, together with their potential and benefits, are illustrated in the policymaking environment of central banks.
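To fix notation (ours, not taken from the article), the fundamental density combination equation can be sketched in its simplest restricted form as a finite mixture,

$$ p(y_{t+1} \mid I_t) \;=\; \sum_{i=1}^{N} w_{i,t}\, p_i(y_{t+1} \mid I_t), \qquad w_{i,t} \ge 0, \quad \sum_{i=1}^{N} w_{i,t} = 1, $$

where $p_i$ is the forecast density from source $i$ given information $I_t$, and the simplest case fixes $w_{i,t} = 1/N$. The richer procedures referred to above allow the weights to be optimized, to vary over time, or to be learned, and add components capturing model incompleteness.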
Long memory models are statistical models that describe strong correlation or dependence across time series data. This kind of phenomenon is often referred to as “long memory” or “long-range dependence,” and it refers to persistent correlation between distant observations in a time series. For scalar time series observed at equal intervals of time that are covariance stationary, so that the mean, variance, and autocovariances (between observations separated by a lag j) do not vary over time, it typically implies that the autocovariances decay so slowly, as j increases, as not to be absolutely summable. However, it can also refer to certain nonstationary time series, including ones with an autoregressive unit root, that exhibit even stronger correlation at long lags. Evidence of long memory has often been found in economic and financial time series, where the noted extension to possible nonstationarity can cover many macroeconomic time series, as well as in such fields as astronomy, agriculture, geophysics, and chemistry.
As long memory is now a technically well developed topic, formal definitions are needed. But by way of partial motivation, long memory models can be thought of as complementary to the very well known and widely applied stationary and invertible autoregressive and moving average (ARMA) models, whose autocovariances are not only summable but decay exponentially fast as a function of lag j. Such models are often referred to as “short memory” models, because there is negligible correlation across distant time intervals. These models are often combined with the most basic long memory ones, however, because together they offer the ability to describe both short and long memory features in many time series.
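As one common formal definition (a sketch; definitions based on the behavior of the spectral density near frequency zero are also standard), let $\gamma(j)$ denote the lag-$j$ autocovariance of a covariance-stationary series. Long memory can then be defined as the failure of absolute summability,

$$ \sum_{j=0}^{\infty} |\gamma(j)| = \infty, $$

typically arising from hyperbolic decay $\gamma(j) \sim c\, j^{2d-1}$ as $j \to \infty$ with memory parameter $d \in (0, 1/2)$, whereas stationary and invertible ARMA models satisfy $|\gamma(j)| \le C r^j$ for some $0 < r < 1$, so their autocovariances are absolutely summable.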
Syed Abdul Hamid
Health microinsurance (HMI) has been used around the globe since the early 1990s for financial risk protection against health shocks in poverty-stricken rural populations in low-income countries. However, there is much debate in the literature on its impact on financial risk protection. There is also no clear answer to the critical policy question of whether HMI is a viable route to provide healthcare to people in the informal economy, especially in rural areas. Findings show that HMI schemes are spread across low-income countries but concentrated in South Asia (about 43%) and East Africa (about 25.4%). India accounts for 30% of HMI schemes, and Bangladesh and Kenya also host a sizable number. There is some evidence that HMI increases access to healthcare or utilization of healthcare. One set of the literature shows that HMI provides financial protection against the costs of illness to its enrollees by reducing out-of-pocket payments and/or catastrophic spending. On the contrary, a large body of literature with strong methodological rigor shows that HMI fails to provide financial protection against health shocks to its clients. Some studies in the latter group find, rather, that HMI contributes to a decline in financial risk protection. These findings seem plausible, given the high copayments and lack of a continuum of care in most cases. The findings also show that scale and dependence on subsidy are major concerns. Low enrollment and low renewal are common problems for voluntary HMI schemes in South Asian countries. In addition, the declining trend of donor subsidies makes HMI schemes supported by external donors more vulnerable. These challenges and constraints restrict the scale and profitability of HMI initiatives, especially voluntary ones. Consequently, existing organizations may cease HMI activities.
Overall, although HMI can increase access to healthcare, it fails to provide financial risk protection against health shocks. The existing HMI practices in South Asia, especially in schemes owned by nongovernmental organizations and microfinance institutions, are not a viable route to provide healthcare to the rural population of the informal economy. However, HMI schemes may play a supportive role in the implementation of a nationalized scheme, where one exists. There is also concern about the institutional viability of HMI organizations (e.g., ownership and management efficiency), an issue that future research may address.
Chao Gu, Han Han, and Randall Wright
This article provides an introduction to New Monetarist Economics. This branch of macroeconomic and monetary theory emphasizes imperfect commitment, information problems, and sometimes (endogenous) spatial separation as key frictions in the economy, in order to derive institutions like monetary exchange or financial intermediation endogenously. We present three generations of models in the development of New Monetarism. The first generation studies an environment in which agents meet bilaterally and lack commitment, which allows money to be valued endogenously as a means of payment. In this setup, both goods and money are indivisible to keep things tractable. Second-generation models relax the assumption of indivisible goods and use bargaining theory (or related mechanisms) to endogenize prices. Variations of these models are applied to financial asset markets and intermediation. Assets and goods are both divisible in third-generation models, which makes them better suited to policy analysis and empirical work. This framework can also be used to help understand financial markets and liquidity.
The literature on optimum currency areas differs from that on other topics in economic theory in a number of notable respects. Most obviously, the theory is framed in verbal rather than mathematical terms. Mundell’s seminal article coining the term and setting out the theory’s basic propositions relied entirely on words rather than equations. The same was true of subsequent contributions focusing on the sectoral composition of activity and the role of fiscal flows. A handful of more recent articles specified and analyzed formal mathematical models of optimum currency areas. But it is safe to say that none of these has “taken off” in the sense of becoming the workhorse framework on which subsequent scholarship builds. The theoretical literature remains heavily qualitative and narrative compared to other areas of economic theory. While Mundell, McKinnon, Kenen, and the other founding fathers of optimum-currency-area theory provided powerful intuition, attempts to further formalize that intuition evidently contributed less to advances in economic understanding than has been the case for other theoretical literatures.
Second, recent contributions to the literature on optimum currency areas are motivated to an unusual extent by a particular case, namely Europe’s monetary union. This was true already in the 1990s, when the EU’s unprecedented decision to proceed with the creation of the euro highlighted the question of whether Europe was an optimum currency area and, if not, how it might become one. That tendency was reinforced when Europe then descended into crisis starting in 2009. With only slight exaggeration it can be said that the literature on optimum currency areas became almost entirely a literature on Europe and on that continent’s failure to satisfy the relevant criteria.
Third, the literature on optimum currency areas remains the product of its age. When the founders wrote, in the 1960s, banks were more strictly regulated, and financial markets were less internationalized than subsequently. Consequently, the connections between monetary integration and financial integration—whether monetary union requires banking union, as the point is now put—were neglected in the earlier literature. The role of cross-border financial flows as a destabilizing mechanism within a currency area did not receive the attention it deserved. Because much of that earlier literature was framed in a North American context—the question was whether the United States or Canada was an optimum currency area—and because it was asked by a trio of scholars, two of whom hailed from Canada and one of whom hailed from the United States, the challenges of reconciling monetary integration with political nationalism and the question of whether monetary union requires political union were similarly underplayed. Given the euro area’s descent into crisis, a number of analysts have asked why economists didn’t sound louder warnings in advance. The answer is that their outlooks were shaped by a literature that developed in an earlier era when the risks and context were different.
Jesús Gonzalo and Jean-Yves Pitarakis
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Economics and Finance.
Predictive regressions refer to models whose aim is to assess the predictability of a typically noisy time series, such as stock returns or currency returns, using past values of a highly persistent predictor such as valuation ratios, interest rates, or volatilities, among other variables. Obtaining reliable inferences through conventional methods can be challenging in such environments, mainly due to the joint interactions of predictor persistence, potential endogeneity, and other econometric complications. Numerous methods have been developed in the literature, ranging from adjustments to the test statistics used in significance testing to alternative instrumental-variable-based estimation methods specifically designed to make inferences robust to the stochastic properties of the predictor(s).
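As a minimal sketch of the canonical setup (our notation, reflecting standard assumptions in this literature rather than any one article), the predictive regression pairs a forecasting equation with an autoregression for the predictor,

$$ y_t = \alpha + \beta x_{t-1} + u_t, \qquad x_t = \mu + \rho x_{t-1} + v_t, $$

with $\rho$ close to one (often modeled as local to unity, $\rho = 1 + c/T$) and $u_t$ correlated with $v_t$; it is this combination of persistence and endogeneity that distorts conventional $t$-tests of the null of no predictability, $\beta = 0$.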
Early developments in this area were mainly confined to linear and single predictor settings, but recent developments have raised the issue of adaptability of existing estimation and inference methods to more general environments so as to extend the use of predictive regressions to a wider range of potential applications.
An important extension involves allowing predictability to enter nonlinearly so as to capture time variation in the role of particular predictors. Economically interesting nonlinearities include, for instance, threshold effects that allow predictability to vanish or strengthen during particular episodes, creating pockets of predictability. Such effects may operate through the conditional mean, the conditional variance, or both, and may help uncover important phenomena such as the countercyclical nature of stock return predictability recently documented in the literature.
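A threshold specification of the kind described here might be sketched as follows (an illustration in our notation, not necessarily the exact specification used in any given study):

$$ y_t = (\alpha_1 + \beta_1 x_{t-1})\,\mathbf{1}\{q_{t-1} \le \gamma\} + (\alpha_2 + \beta_2 x_{t-1})\,\mathbf{1}\{q_{t-1} > \gamma\} + u_t, $$

where $q_{t-1}$ is an observed threshold variable (e.g., a business-cycle indicator), $\gamma$ is an unknown threshold parameter, and predictability may be present in one regime ($\beta_2 \neq 0$) while absent in the other ($\beta_1 = 0$).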
Due to the frequent need to consider multiple as opposed to single predictors, it also becomes important to evaluate the validity and feasibility of inferences about linear and nonlinear predictability when multiple predictors of potentially different degrees of persistence are allowed to coexist in such settings.