Alina Mungiu-Pippidi and Till Hartmann
Corruption and development are two mutually related concepts whose meanings have shifted across time. The predominant 21st-century view of government, which regards corruption as unacceptable, has its theoretical roots in ancient Western thought as well as Eastern thought. This condemning view of corruption has always coexisted with a more morally indifferent or neutral approach, expressed most notably by development scholars of the 1960s and 1970s who viewed corruption as an enabler of development rather than an obstacle. Research on the nexus between corruption and development has identified mechanisms that enable corruption and offered theories of change, which have informed practical development policies. Interventions adopting a principal-agent approach are better suited to advanced economies, where corruption is the exception, than to emerging economies, where the opposite of corruption, the norm of ethical universalism, has yet to be built. In such contexts corruption is better approached from a collective action perspective. A review of cross-national data for the period 1996–2017 shows that the control of corruption stagnated in most countries, with only a few exceptions. For a lasting improvement in the control of corruption, societies need to reduce the resources available for corruption while simultaneously increasing constraints on it. The evolution of a governance regime requires a multiple-stakeholder endeavor reaching beyond the sphere of government to involve the press, business, and a strong and activist civil society.
Eduardo Levy Yeyati
While traditional economic literature often sees nominal variables as irrelevant for the real economy, there is a vast body of analytical and empirical economic work that recognizes that, to the extent they exert a critical influence on the macroeconomic environment through a multiplicity of channels, exchange rate policies (ERP) have important consequences for development.
ERP influences economic development in various ways: through its incidence on real variables such as investment and growth (and growth volatility) and on nominal aspects such as relative prices or financial depth that, in turn, affect output growth or income distribution, among other development goals. Additionally, ERP indirectly influences, through the expected distribution of the real exchange rate, dimensions such as trade or financial fragility, and explains, at least partially, the adoption of the euro—an extreme case of a fixed exchange rate arrangement—or the preference for floating exchange rates in the absence of financial dollarization. Importantly, exchange rate pegs have been (and, in many countries, still are) widely used as a nominal anchor to contain inflation in economies where nominal volatility induces agents to use the exchange rate as an implicit unit of account. All of these channels have been reflected to varying degrees in the choice of exchange rate regimes in recent history.
The empirical literature on the consequences of ERP has been plagued by definitional and measurement problems. Whereas few economists would contest the textbook definition of canonical exchange rate regimes (fixed regimes involve a commitment to keep the nominal exchange rate at a given level; floating regimes imply no market intervention by the monetary authorities), reality is more nuanced: Pure floats are hard to find, and the empirical distinction between alternative flexible regimes is not always clear. Moreover, there are many different degrees of exchange rate commitments as well as many alternative anchors, sometimes undisclosed. Finally, it is not unusual for a country that officially pegs its currency to realign its parity if it finds the constraints on monetary policy or economic activity too taxing. By the same token, a country that commits to a float may choose to intervene in the foreign exchange market to dampen exchange rate fluctuations.
The regime of choice depends critically on the situation of each country at a given point in time as much as on the evolution of the global environment. Because both the ERP debate and real-life choices incorporate national and time-specific aspects, the focus of the debate has evolved over time. In the post-World War II years, under the Bretton Woods agreement, most countries pegged their currencies to the U.S. dollar, which in turn was kept convertible to gold. In the post-Bretton Woods years, after August 1971 when the United States unilaterally abandoned the convertibility of the dollar, thus bringing the Bretton Woods system to an end, the individual choices of ERP were intimately related to the global and local historical contexts, according to whether policy prioritized the use of the exchange rate as a nominal anchor (in favor of pegged or superfixed exchange rates, with dollarization or the launch of the euro as two extreme examples), as a tool to enhance price competitiveness (as in export-oriented developing countries like China in the 2000s), or as a countercyclical buffer (in favor of floating regimes with limited intervention, the prevalent view in the developed world). Similarly, the declining degree of financial dollarization, combined with the improved quality of monetary institutions, explains the growing popularity of inflation targeting with floating exchange rates in emerging economies. Finally, a prudential leaning-against-the-wind intervention to counter mean-reverting global financial cycles and exchange rate swings motivated a more active—and increasingly mainstream—ERP in the late 2000s.
The fact that most medium-sized and large developing economies (and virtually all industrial ones) revealed in the 2000s a preference for exchange rate flexibility simply reflects this evolution. Is the combination of inflation targeting (IT) and countercyclical exchange rate intervention a new paradigm? It is still too early to judge. On the one hand, pegs still represent more than half of the IMF reporting countries—particularly small ones—indicating that exchange rate anchors are still favored by small open economies that give priority to the trade dividend of stable exchange rates and find the conduct of an autonomous monetary policy too costly, due to lack of human capital, scale, or an important non-tradable sector. On the other hand, the work and the empirical evidence on the subject, particularly after the recession of 2008–2009, highlight a number of developments in the way advanced and emerging economies think of the impossible trinity that, in a context of deepening financial integration, cast doubt on the IT paradigm, place the dilemma between nominal and real stability back at the forefront, and postulate an IT 2.0, which includes selective exchange rate interventions as a workable compromise. At any rate, the exchange rate debate is still alive and open.
Thomas E. Getzen
During the 18th and 19th centuries, medical spending in the United States rose slowly, on average about 0.25% faster than gross domestic product (GDP), and varied widely between rural and urban regions. Accumulating scientific advances caused spending to accelerate by 1910. From 1930 to 1955, rapid per-capita income growth accommodated major medical expansion while keeping the health share of GDP almost constant. During the 1950s and 1960s, prosperity and investment in research, the workforce, and hospitals caused a rapid surge in spending and consolidated a truly national health system. Excess growth rates (above GDP growth) were above +5% per year from 1966 to 1970, which would have doubled the health-sector share in fifteen years had it not moderated, falling under +3% in the 1980s, +2% in the 1990s, and +1.5% since 2005. The question of when national health expenditure growth can be brought into line with GDP and made sustainable for the long run is still open. A review of historical data over three centuries forces confrontation with issues regarding what to include and how long events continue to affect national health accounting and policy. Empirical analysis at a national scale over multiple decades fails to support the position that many of the commonly discussed variables (obesity, aging, mortality rates, coinsurance) cause significant shifts in expenditure trends. What does become clear is that there are long and variable lags before macroeconomic and technological events affect spending: three to six years for business cycles and multiple decades for major recessions, scientific discoveries, and organizational change. Health-financing mechanisms, such as employer-based health insurance, Medicare, and the Affordable Care Act (Obamacare), are seen to be both cause and effect, taking years to develop and affecting spending for decades to come.
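The fifteen-year doubling claim follows from the standard rule that a share growing at a constant excess rate g doubles in ln(2)/ln(1+g) years; a minimal check in Python:

```python
import math

excess_growth = 0.05  # health spending growing 5% per year faster than GDP
doubling_time = math.log(2) / math.log(1 + excess_growth)
print(f"At +5% excess growth, the health share of GDP doubles in "
      f"about {doubling_time:.1f} years")  # roughly 14 years
```

The exact figure is just over 14 years, consistent with the "fifteen years" approximation in the text.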
The links of international reserves, exchange rates, and monetary policy can be understood through the lens of a modern incarnation of the “impossible trinity” (also known as the “trilemma”), based on Mundell and Fleming’s hypothesis that a country may simultaneously choose any two, but not all, of the following three policy goals: monetary independence, exchange rate stability, and financial integration. The original economic trilemma was framed in the 1960s, during the Bretton Woods regime, as a binary choice of two out of the possible three policy goals. However, in the 1990s and 2000s, emerging markets and developing countries found that deeper financial integration comes with growing exposure to financial instability and the increased risk of “sudden stop” of capital inflows and capital flight crises. These crises have been characterized by exchange rate instability triggered by countries’ balance sheet exposure to external hard currency debt—exposures that have propagated banking instabilities and crises. Such events have frequently morphed into deep internal and external debt crises, ending with bailouts of systemic banks and powerful macro players. The resultant domestic debt overhang led to fiscal dominance and a reduction of the scope of monetary policy. With varying lags, these crises induced economic and political changes, in which a growing share of emerging markets and developing countries converged to “in-between” regimes in the trilemma middle range—that is, managed exchange rate flexibility, controlled financial integration, and limited but viable monetary autonomy. Emerging research has validated a modern version of the trilemma: that is, countries face a continuous trilemma trade-off in which a higher trilemma policy goal is “traded off” with a drop in the weighted average of the other two trilemma policy goals.
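The continuous trade-off described above can be pictured as a linear budget constraint over the three policy-goal indexes: raising one index uses up room that the weighted sum of the other two must give back. A minimal numeric sketch, in which the weights a, b, c and the constant k are hypothetical values chosen purely for illustration:

```python
# Hypothetical linear trilemma constraint: a*MI + b*ERS + c*FI = k,
# where MI = monetary independence, ERS = exchange rate stability,
# FI = financial integration (all illustrative index values).
a, b, c, k = 1.0, 1.0, 1.0, 2.0

def remaining_weighted_sum(mi):
    """Room left for b*ERS + c*FI once a level of MI is chosen."""
    return k - a * mi

for mi in (0.2, 0.5, 0.8):
    print(f"MI = {mi:.1f} -> b*ERS + c*FI = {remaining_weighted_sum(mi):.1f}")
```

Raising MI from 0.2 to 0.8 shrinks the room available to the weighted average of the other two goals, which is the sense in which the modern trilemma is continuous rather than a binary two-out-of-three choice.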
The concerns associated with exposure to financial instability have been addressed by varying configurations of managing public buffers (international reserves, sovereign wealth funds), as well as growing application of macro-prudential measures aimed at inducing systemic players to internalize the impact of their balance sheet exposure on a country’s financial stability. Consequently, the original trilemma has morphed into a quadrilemma, wherein financial stability has been added to the trilemma’s original policy goals. Size does matter, and there is no way for smaller countries to insulate themselves fully from exposure to global cycles and shocks. Yet successful navigation of the open-economy quadrilemma helps in reducing the transmission of external shocks to the domestic economy, as well as the costs of domestic shocks. These observations explain the relative resilience of emerging markets—especially in countries with more mature institutions—as they have been buffered by deeper precautionary management of reserves, and greater fiscal and monetary space.
We close the discussion by noting that the global financial crisis, and the subsequent Eurozone crisis, have shown that no country is immune from exposure to financial instability and from the modern quadrilemma. However, countries with mature institutions, deeper fiscal capabilities, and more fiscal space may substitute bilateral swap lines coordinated among their central banks for reliance on costly precautionary buffers. While the benefits of such arrangements are clear, they may hinge on the presence and credibility of their fiscal backstop mechanisms, and on curbing the resultant moral hazard. Time will test this credibility, and the degree to which risk-pooling arrangements can be extended to cover the growing share of emerging markets and developing countries.
Noémi Kreif and Karla DiazOrdaz
While machine learning (ML) methods have received a lot of attention in recent years, these methods are primarily designed for prediction. Empirical researchers conducting policy evaluations are, on the other hand, preoccupied with causal problems, trying to answer counterfactual questions: what would have happened in the absence of a policy? Because these counterfactuals can never be directly observed (described as the “fundamental problem of causal inference”), prediction tools from the ML literature cannot be readily used for causal inference. In the last decade, major innovations have taken place incorporating supervised ML tools into estimators for causal parameters such as the average treatment effect (ATE). This holds the promise of attenuating model misspecification issues and increasing transparency in model selection. One particularly mature strand of the literature includes approaches that incorporate supervised ML approaches in the estimation of the ATE of a binary treatment, under the unconfoundedness and positivity assumptions (also known as the exchangeability and overlap assumptions).
This article begins by reviewing popular supervised machine learning algorithms, including tree-based methods and the lasso, as well as ensembles, with a focus on the Super Learner. Then, some specific uses of machine learning for treatment effect estimation are introduced and illustrated, namely (1) to create balance among treated and control groups, (2) to estimate so-called nuisance models (e.g., the propensity score, or conditional expectations of the outcome) in semi-parametric estimators that target causal parameters (e.g., targeted maximum likelihood estimation or the double ML estimator), and (3) the use of machine learning for variable selection in situations with a high number of covariates.
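Use (2) can be made concrete with a cross-fitted doubly robust (AIPW-style) estimator of the ATE, in the spirit of the double ML estimator mentioned above: nuisance models for the propensity score and the conditional outcome means are fit on one fold and evaluated on the other, and their predictions are combined in the efficient influence-function formula. A minimal sketch on simulated data, assuming scikit-learn is available and using simple (correctly specified) nuisance learners in place of the flexible ML learners a real application would plug in:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n = 4000
X = rng.normal(size=(n, 3))
# Simulated data: confounded treatment A, true ATE of 2.0
e_true = 1 / (1 + np.exp(-(0.4 * X[:, 0] - 0.3 * X[:, 1])))
A = rng.binomial(1, e_true)
Y = 2.0 * A + X @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=n)

# Cross-fitted AIPW: nuisances trained on one fold, evaluated on the other
psi = np.zeros(n)
for train, test in KFold(n_splits=2, shuffle=True, random_state=0).split(X):
    ps = LogisticRegression().fit(X[train], A[train])          # propensity score
    e = np.clip(ps.predict_proba(X[test])[:, 1], 0.01, 0.99)   # enforce overlap
    m1 = LinearRegression().fit(X[train][A[train] == 1], Y[train][A[train] == 1])
    m0 = LinearRegression().fit(X[train][A[train] == 0], Y[train][A[train] == 0])
    mu1, mu0 = m1.predict(X[test]), m0.predict(X[test])
    At, Yt = A[test], Y[test]
    # Efficient influence function: outcome model plus IPW residual corrections
    psi[test] = (mu1 - mu0
                 + At * (Yt - mu1) / e
                 - (1 - At) * (Yt - mu0) / (1 - e))

ate_hat = psi.mean()
se_hat = psi.std(ddof=1) / np.sqrt(n)
print(f"ATE estimate: {ate_hat:.2f} (SE {se_hat:.2f})")
```

In practice the `LogisticRegression` and `LinearRegression` placeholders would be replaced by data-adaptive learners (e.g., a Super Learner ensemble), which is precisely what the semi-parametric estimators discussed in the article accommodate while retaining valid inference.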
Since there is no universal best estimator, whether parametric or data-adaptive, it is best practice to incorporate a semi-automated approach that can select the models best supported by the observed data, thus attenuating the reliance on subjective choices.
Samuel Berlinski and Marcos Vera-Hernández
A set of policies is at the center of the agenda on early childhood development: parenting programs, childcare regulation and subsidies, cash and in-kind transfers, and parental leave policies. Incentives are embedded in these policies, and households react to them differently. They also have varying effects on child development, in both developed and developing countries. We have learned much about the impact of these policies in the past 20 years. We know that parenting programs can enhance child development, that center-based care might increase female labor force participation and child development, that extending parental leave beyond three months does not improve child outcomes, and that the effects of transfers depend much on their design. In this review, we focus on the incentives embedded in these policies, and how they interact with the context and decision makers, in order to understand the heterogeneity of effects and the mechanisms through which these policies work. We conclude by identifying areas of future research.