Lawrence J. Lau
Chinese real gross domestic product (GDP) grew from US$369 billion in 1978 to US$12.7 trillion in 2017 (in 2017 prices and exchange rate), at almost 10% per annum, making the country the second largest economy in the world, just behind the United States. During the same period, Chinese real GDP per capita grew from US$383 to US$9,137 (2017 prices), at 8.1% per annum.
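The growth rates quoted above are compound annual rates. A minimal sketch of the endpoint calculation, assuming the 1978 and 2017 figures given in the text:

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by two endpoint values."""
    return (end / start) ** (1 / years) - 1

# Aggregate real GDP, US$ billions, 1978 -> 2017 (39 years)
total = cagr(369, 12_700, 39)
# Real GDP per capita, US$ (2017 prices)
per_capita = cagr(383, 9_137, 39)
print(f"GDP growth:        {total:.1%} per annum")
print(f"Per-capita growth: {per_capita:.1%} per annum")
```

The endpoint calculation yields roughly 9.5% for aggregate GDP, consistent with "almost 10%"; the per-capita figure computed this way comes out somewhat above the 8.1% quoted, presumably because the source averages annual growth rates over the full series rather than taking the endpoint ratio.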
Chinese economic reform, which began in 1978, consists of two elements—introduction of free markets for goods and services, coupled with conditional producer autonomy, and opening to international trade and direct investment with the rest of the world. In its transition from a centrally planned to a market economy, China employed a “dual-track” approach—with the pre-existing mandatory central plan continuing in force and the establishment of free markets in parallel. In its opening to the world, China set a competitive exchange rate for its currency, made it current account convertible in 1994, and acceded to the World Trade Organisation (WTO) in 2001. In 2005, China became the second largest trading nation in the world, after the United States. Other Chinese policies complementary to its economic reform include the pre-existing low non-agricultural wage and the limit of one child per couple, introduced in 1979 and phased out in 2016.

The high rate of growth of Chinese real output since 1978 can be largely explained by the high rates of growth of inputs, but there were also other factors at work. Chinese economic growth since 1978 may be attributed as follows: (a) the elimination of the initial economic inefficiency (12.7%), (b) the growth of tangible capital (55.7%) and labor (9.7%) inputs, (c) technical progress (or growth of total factor productivity (TFP)) (8%), and (d) economies of scale (14%).
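The attribution in (a)–(d) is an exhaustive decomposition: up to rounding, the shares account for all of the growth. A small sketch making that explicit:

```python
# Attributed shares of Chinese economic growth since 1978, as quoted
# in the text (percent of total growth).
sources = {
    "elimination of initial inefficiency": 12.7,
    "tangible capital input growth": 55.7,
    "labor input growth": 9.7,
    "technical progress (TFP growth)": 8.0,
    "economies of scale": 14.0,
}
total = sum(sources.values())
print(f"Total attributed: {total:.1f}%")  # ~100%, up to rounding
```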
The Chinese economy also shares many commonalities with other East Asian economies in terms of their development experiences: the lack of natural endowments, the initial conditions (the low real GDP per capita and the existence of surplus agricultural labor), the cultural characteristics (thrift, industry, and high value for education), the economic policies (competitive exchange rate, export promotion, investment in basic infrastructure, and maintenance of macroeconomic stability), and the consistency, predictability, and stability resulting from continuous one-party rule.
Alina Mungiu-Pippidi and Till Hartmann
Corruption and development are two mutually related concepts whose meanings have shifted across time. The predominant 21st-century view of government that regards corruption as unacceptable has its theoretical roots in ancient Western thought, as well as Eastern thought. This condemning view of corruption has at all times coexisted with a more morally indifferent or neutral approach, which found its expression most notably among development scholars of the 1960s and 1970s who viewed corruption as an enabler of development rather than an obstacle. Research on the nexus between corruption and development has identified mechanisms that enable corruption and offered theories of change, which have informed practical development policies. Interventions adopting a principal–agent approach are better suited to advanced economies, where corruption is the exception, than to emerging economies, where the opposite of corruption, the norm of ethical universalism, has yet to be built. In such contexts corruption is better approached from a collective action perspective. Reviewing cross-national data for the period 1996–2017, it becomes apparent that the control of corruption stagnated in most countries, with only a few exceptions. For a lasting improvement in the control of corruption, societies need to reduce the resources for corruption while simultaneously increasing constraints. The evolution of a governance regime requires a multiple-stakeholder endeavor reaching beyond the sphere of government, involving the press, business, and a strong and activist civil society.
The origins of modern technological change provide the context necessary to understand present-day technological transformation, to investigate the impact of the new digital technologies, and to examine the phenomenon of digital disruption of established industries and occupations. How these contemporary technologies will transform industries and institutions, or serve to create new industries and institutions, will unfold in time. The implications of the relationships between these pervasive new forms of digital transformation and the accompanying new business models, business strategies, innovation, and capabilities are being worked through at global, national, corporate, and local levels. Whatever the technological future holds, it will be defined by continual adaptation, perpetual innovation, and the search for new potential.
Presently, the world is experiencing the impact of waves of innovation created by the rapid advance of digital networks, software, and information and communication technology systems that have transformed workplaces, cities, and whole economies. These digital technologies are converging and coalescing into intelligent technology systems that facilitate and structure our lives. Through creative destruction, digital technologies fundamentally challenge existing routines, capabilities, and structures by which organizations presently operate, adapt, and innovate. In turn, digital technologies stimulate a higher rate of both technological and business model innovation, moving from producer innovation toward more user-collaborative and open-collaborative innovation. However, as dominant global platform technologies emerge, some impending dilemmas associated with the concentration and monopolization of digital markets become salient. The extent of the contribution made by digital transformation to economic growth and environmental sustainability requires a critical appraisal.
Leandro Prados de la Escosura and Blanca Sánchez-Alonso
In assessments of modern-day Spain’s economic progress and living standards, inadequate natural resources, inefficient institutions, lack of education and entrepreneurship, and foreign dependency are frequently blamed for poor performance up to the mid-20th century, but no persuasive arguments have been provided to explain why such adverse circumstances reversed, giving way to the fast transformation that started in the 1950s. Hence, it is necessary first to inquire how much economic progress has been achieved in Spain and what impact it has had on living standards and income distribution from the end of the Peninsular War to the present day, and second to provide an interpretation.
Research published in the 2010s supports the view that income per person has improved remarkably, driven by increases in labor productivity, which derived, in turn, from a more intense and efficient use of physical and human capital per worker. Exposure to international competition represented a decisive element behind growth performance. From a European perspective, Spain underperformed until 1950. Thereafter, Spain’s economy managed to catch up with more advanced countries until 2007. Although the distribution of the fruits of growth did not follow a linear trend, but a Kuznetsian inverted-U pattern, higher levels of income per capita are matched by lower inequality, suggesting that Spaniards’ material wellbeing improved substantially during the modern era.
In the early 21st century, the U.S. economy stood at or very near the top of any ranking of the world’s economies, more obviously so in terms of gross domestic product (GDP), but also when measured by GDP per capita. The current standing of any country reflects three things: how well off it was when it began modern economic growth, how long it has been growing, and how rapidly productivity increased each year. Americans are inclined to think that it was the last of these items that accounted for their country’s success. And there is some truth to the notion that America’s lofty status was due to the continual increases in the efficiency of its factors of production—but that is not the whole story.
The rate at which the U.S. economy has grown over its long history—roughly 1.5% per year measured by output per capita—has been modest in comparison with most other advanced nations. The high value of GDP per capita in the United States is due in no small part to the fact that it was already among the world’s highest back in the early 19th century, when the new nation was poised to begin modern economic growth. The United States was also an early starter, so has experienced growth for a very long time—longer than almost every other nation in the world.
The sustained growth in real GDP per capita began sometime in the period 1790 to 1860, although the exact timing of the transition, and even its nature, are still uncertain. Continual efforts to improve the statistical record have narrowed down the time frame in which the transition took place and improved our understanding of the forces that facilitated the transition, but questions remain. In order to understand how the United States made the transition from a slow-growing British colony to a more rapidly advancing, free-standing economy, it is necessary to know more precisely when it made that transition.
Eduardo Levy Yeyati
While traditional economic literature often sees nominal variables as irrelevant for the real economy, there is a vast body of analytical and empirical economic work that recognizes that, to the extent they exert a critical influence on the macroeconomic environment through a multiplicity of channels, exchange rate policies (ERP) have important consequences for development.
ERP influences economic development in various ways: through its incidence on real variables such as investment and growth (and growth volatility) and on nominal aspects such as relative prices or financial depth that, in turn, affect output growth or income distribution, among other development goals. Additionally, through the expected distribution of the real exchange rate, ERP indirectly influences dimensions such as trade or financial fragility and explains, at least partially, the adoption of the euro—an extreme case of a fixed exchange rate arrangement—or the preference for floating exchange rates in the absence of financial dollarization. Importantly, exchange rate pegs have been (and, in many countries, still are) widely used as a nominal anchor to contain inflation in economies where nominal volatility induces agents to use the exchange rate as an implicit unit of account. All of these channels have been reflected to varying degrees in the choice of exchange rate regimes in recent history.
The empirical literature on the consequences of ERP has been plagued by definitional and measurement problems. Whereas few economists would contest the textbook definition of canonical exchange rate regimes (fixed regimes involve a commitment to keep the nominal exchange rate at a given level; floating regimes imply no market intervention by the monetary authorities), reality is more nuanced: Pure floats are hard to find, and the empirical distinction between alternative flexible regimes is not always clear. Moreover, there are many different degrees of exchange rate commitments as well as many alternative anchors, sometimes undisclosed. Finally, it is not unusual for a country that officially declares a peg to realign its parity if it finds the constraints on monetary policy or economic activity too taxing. By the same token, a country that commits to a float may choose to intervene in the foreign exchange market to dampen exchange rate fluctuations.
The regime of choice depends critically on the situation of each country at a given point in time as much as on the evolution of the global environment. Both the ERP debate and real-life choices incorporate national and time-specific aspects, and as these evolve over time, so does the focus of the debate. In the post-World War II years, under the Bretton Woods agreement, most countries pegged their currencies to the U.S. dollar, which in turn was kept convertible to gold. In the post-Bretton Woods years, after August 1971 when the United States unilaterally abandoned the convertibility of the dollar, thus bringing the Bretton Woods system to an end, the individual choices of ERP were intimately related to the global and local historical contexts, according to whether policy prioritized the use of the exchange rate as a nominal anchor (in favor of pegged or superfixed exchange rates, with dollarization or the launch of the euro as two extreme examples), as a tool to enhance price competitiveness (as in export-oriented developing countries like China in the 2000s), or as a countercyclical buffer (in favor of floating regimes with limited intervention, the prevalent view in the developed world). Similarly, the declining degree of financial dollarization, combined with the improved quality of monetary institutions, explains the growing popularity of inflation targeting with floating exchange rates in emerging economies. Finally, a prudential leaning-against-the-wind intervention to counter mean-reverting global financial cycles and exchange rate swings has motivated a more active—and increasingly mainstream—ERP since the late 2000s.
The fact that most medium and large developing economies (and virtually all industrial ones) revealed in the 2000s a preference for exchange rate flexibility simply reflects this evolution. Is the combination of inflation targeting (IT) and countercyclical exchange rate intervention a new paradigm? It is still too early to judge. On the one hand, pegs still represent more than half of the IMF reporting countries—particularly small ones—indicating that exchange rate anchors are still favored by small open economies that give priority to the trade dividend of stable exchange rates and find the conduct of an autonomous monetary policy too costly, due to lack of human capital, scale, or an important non-tradable sector. On the other hand, the work and the empirical evidence on the subject, particularly after the recession of 2008–2009, highlight a number of developments in the way advanced and emerging economies think of the impossible trinity that, in a context of deepening financial integration, cast doubt on the IT paradigm, place the dilemma between nominal and real stability back on the forefront, and postulate an IT 2.0, which includes selective exchange rate interventions as a workable compromise. At any rate, the exchange rate debate is still alive and open.
Maria Soledad Martinez Peria and Mu Yang Shin
The link between financial inclusion and human development is examined here. Using cross-country data, the behavior of variables that try to capture these concepts is examined and preliminary evidence of a positive association is offered. However, because establishing a causal relationship with macro-data is difficult, a thorough review of the literature on the impact of financial inclusion is conducted, focusing on micro-studies that can better address identification. The literature generally distinguishes between different dimensions of financial inclusion: access to credit, access to bank branches, and access to saving instruments (i.e., accounts). Despite promising results from a first wave of studies, the impact of expanding access to credit seems limited at best, with little evidence of transformative effects on human development outcomes. While there is more promising evidence on the impact of expanding access to bank branches and formal saving instruments, studies show that some interventions, such as one-time account opening subsidies, are unlikely to have a sizable impact on social and economic outcomes. Instead, well-designed interventions catering to individuals’ specific needs in different contexts seem to be required to realize the full potential of formal financial services to enrich human lives.
Thomas E. Getzen
During the 18th and 19th centuries, medical spending in the United States rose slowly, on average about 0.25% faster than gross domestic product (GDP), and varied widely between rural and urban regions. Accumulating scientific advances caused spending to accelerate by 1910. From 1930 to 1955, rapid per-capita income growth accommodated major medical expansion while keeping the health share of GDP almost constant. During the 1950s and 1960s, prosperity and investment in research, the workforce, and hospitals caused a rapid surge in spending and consolidated a truly national health system. Excess growth rates (above GDP growth) were above +5% per year from 1966 to 1970, which would have doubled the health-sector share in fifteen years had it not moderated, falling under +3% in the 1980s, +2% in the 1990s, and +1.5% since 2005. The question of when national health expenditure growth can be brought into line with GDP and made sustainable for the long run is still open. A review of historical data over three centuries forces confrontation with issues regarding what to include and how long events continue to affect national health accounting and policy. Empirical analysis at a national scale over multiple decades fails to support the position that many of the commonly discussed variables (obesity, aging, mortality rates, coinsurance) cause significant shifts in expenditure trends. What does become clear is that there are long and variable lags before macroeconomic and technological events affect spending: three to six years for business cycles and multiple decades for major recessions, scientific discoveries, and organizational change. Health-financing mechanisms, such as employer-based health insurance, Medicare, and the Affordable Care Act (Obamacare), are seen to be both cause and effect, taking years to develop and affecting spending for decades to come.
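The "doubled in fifteen years" figure follows from compounding the excess growth rate: the health share of GDP grows at the excess rate, so its doubling time is log 2 divided by the log of one plus that rate. A minimal sketch:

```python
import math

def doubling_time(excess_rate):
    """Years for the health share of GDP to double if it grows at a
    constant excess rate (spending growth minus GDP growth)."""
    return math.log(2) / math.log(1 + excess_rate)

# At +5% excess growth, the share doubles in roughly fifteen years.
print(f"{doubling_time(0.05):.1f} years")
```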
The Indian Union, from the time of independence from British colonial rule in 1947 until now, has undergone shifts in the trajectory of economic change and in the political context of economic change. One of these transitions was a ‘green revolution’ in farming that occurred in the 1970s. In the same decade, Indian migration to the Persian Gulf states began to increase. In the 1980s, the government of India seemed to abandon a strategy of economic development that had relied on public investment in heavy industries, and encouraged private enterprise in most fields. These shifts did not always follow announced policy, produced a deep impact on economic growth and standards of living, and generated new forms of inequality. Therefore, their causes and consequences are matters of discussion and debate. Most discussions and debates form around three larger questions. First, why was there a turnaround in the pace of economic change in the 1980s? The answer lies in a fortuitous rebalancing of the role of openness and private investment in the economy. Second, why did human development lag achievements in income growth after the turnaround? A preoccupation with state-aided industrialization, the essay answers, entailed neglect of infrastructure and human development, and some of that legacy persisted. If the quality of life failed to improve enough, then a third question follows: why did the democratic political system survive at all if it did not equitably distribute the benefits from growth? In answer, the essay discusses studies that question the extent of the failure.
In contrast with the existing cross-country literature on institutions and development, the overview in this article focuses instead on case studies of institutions at the disaggregated level that help or hinder productivity growth. It also shows how, along with rule-based systems, institutional systems based on social relations and networks and on community organizations can resolve some issues of collective action in development. At the level of the state, our discussion focuses on incentive issues in the internal organization of government and on how the nature of accountability structures at different levels of government can help or hinder development. In view of the breadth of the relevant literature, we have deliberately confined ourselves to the available empirical case studies in only the two largest developing countries, China and India.
The links of international reserves, exchange rates, and monetary policy can be understood through the lens of a modern incarnation of the “impossible trinity” (aka the “trilemma”), based on Mundell and Fleming’s hypothesis that a country may simultaneously choose any two, but not all, of the following three policy goals: monetary independence, exchange rate stability, and financial integration. The original economic trilemma was framed in the 1960s, during the Bretton Woods regime, as a binary choice of two out of the possible three policy goals. However, in the 1990s and 2000s, emerging markets and developing countries found that deeper financial integration comes with growing exposure to financial instability and the increased risk of “sudden stop” of capital inflows and capital flight crises. These crises have been characterized by exchange rate instability triggered by countries’ balance sheet exposure to external hard currency debt—exposures that have propagated banking instabilities and crises. Such events have frequently morphed into deep internal and external debt crises, ending with bailouts of systemic banks and powerful macro players. The resultant domestic debt overhang led to fiscal dominance and a reduction of the scope of monetary policy. With varying lags, these crises induced economic and political changes, in which a growing share of emerging markets and developing countries converged to “in-between” regimes in the trilemma middle range—that is, managed exchange rate flexibility, controlled financial integration, and limited but viable monetary autonomy. Emerging research has validated a modern version of the trilemma: that is, countries face a continuous trilemma trade-off in which a higher trilemma policy goal is “traded off” with a drop in the weighted average of the other two trilemma policy goals. 
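The continuous trade-off described above is often formalized, in the trilemma-index approach associated with Aizenman, Chinn, and Ito (an attribution assumed here, not stated in the text), as a linear constraint on three normalized policy-goal indices:

\[ 1 = a\,\mathrm{MI}_t + b\,\mathrm{ERS}_t + c\,\mathrm{FO}_t + \varepsilon_t \]

where \(\mathrm{MI}\) measures monetary independence, \(\mathrm{ERS}\) exchange rate stability, and \(\mathrm{FO}\) financial openness, each scaled between 0 and 1. A good fit of this constraint is evidence for the modern trilemma: raising any one index must come at the cost of a drop in the weighted average of the other two.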
The concerns associated with exposure to financial instability have been addressed by varying configurations of managing public buffers (international reserves, sovereign wealth funds), as well as growing application of macro-prudential measures aimed at inducing systemic players to internalize the impact of their balance sheet exposure on a country’s financial stability. Consequently, the original trilemma has morphed into a quadrilemma, wherein financial stability has been added to the trilemma’s original policy goals. Size does matter, and there is no way for smaller countries to insulate themselves fully from exposure to global cycles and shocks. Yet successful navigation of the open-economy quadrilemma helps in reducing the transmission of external shock to the domestic economy, as well as the costs of domestic shocks. These observations explain the relative resilience of emerging markets—especially in countries with more mature institutions—as they have been buffered by deeper precautionary management of reserves, and greater fiscal and monetary space.
We close the discussion noting that the global financial crisis, and the subsequent Eurozone crisis, have shown that no country is immune from exposure to financial instability and from the modern quadrilemma. However, countries with mature institutions, deeper fiscal capabilities, and more fiscal space may substitute the reliance on costly precautionary buffers with bilateral swap lines coordinated among their central banks. While the benefits of such arrangements are clear, they may hinge on the presence and credibility of their fiscal backstop mechanisms, and on curbing the resultant moral hazard. Time will test this credibility, and the degree to which risk-pooling arrangements can be extended to cover the growing share of emerging markets and developing countries.
Noémi Kreif and Karla DiazOrdaz
While machine learning (ML) methods have received a lot of attention in recent years, these methods are primarily designed for prediction. Empirical researchers conducting policy evaluations are, on the other hand, preoccupied with causal problems, trying to answer counterfactual questions: what would have happened in the absence of a policy? Because these counterfactuals can never be directly observed (described as the “fundamental problem of causal inference”), prediction tools from the ML literature cannot be readily used for causal inference. In the last decade, major innovations have taken place incorporating supervised ML tools into estimators for causal parameters such as the average treatment effect (ATE). This holds the promise of attenuating model misspecification issues and increasing transparency in model selection. One particularly mature strand of the literature includes approaches that incorporate supervised ML in the estimation of the ATE of a binary treatment, under the unconfoundedness and positivity assumptions (also known as the exchangeability and overlap assumptions).
This article begins by reviewing popular supervised machine learning algorithms, including tree-based methods and the lasso, as well as ensembles, with a focus on the Super Learner. Then, some specific uses of machine learning for treatment effect estimation are introduced and illustrated, namely (1) to create balance among treated and control groups, (2) to estimate so-called nuisance models (e.g., the propensity score, or conditional expectations of the outcome) in semi-parametric estimators that target causal parameters (e.g., targeted maximum likelihood estimation or the double ML estimator), and (3) the use of machine learning for variable selection in situations with a high number of covariates.
Since there is no universal best estimator, whether parametric or data-adaptive, it is best practice to adopt a semi-automated approach that can select the models best supported by the observed data, thus attenuating the reliance on subjective choices.
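As an illustration of use (2) above, the following is a minimal numpy-only sketch of a doubly robust (AIPW) estimator of the ATE on synthetic data. Simple parametric regressions stand in for the ML nuisance learners discussed in the article, and no cross-fitting is used (the double ML estimator adds sample splitting); all variable names and the data-generating process are assumptions for illustration, not the article's own example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 2))

# Synthetic data: treatment assigned by a known propensity, outcome
# with a constant true treatment effect of 2.
p_true = 1 / (1 + np.exp(-(0.5 * x[:, 0] - 0.25 * x[:, 1])))
t = rng.binomial(1, p_true)
y = 2.0 * t + x[:, 0] + 0.5 * x[:, 1] + rng.normal(size=n)

def ols_predict(X, y, Xnew):
    """Fit OLS with an intercept; return predictions on Xnew."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta = np.linalg.lstsq(X1, y, rcond=None)[0]
    return np.column_stack([np.ones(len(Xnew)), Xnew]) @ beta

def logit_fit(X, t, iters=25):
    """Logistic regression via Newton-Raphson; returns fitted propensities."""
    X1 = np.column_stack([np.ones(len(X)), X])
    b = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X1 @ b))
        grad = X1.T @ (t - p)
        hess = X1.T @ (X1 * (p * (1 - p))[:, None])
        b += np.linalg.solve(hess, grad)
    return 1 / (1 + np.exp(-X1 @ b))

# Nuisance models: outcome regressions fit within each arm, plus the
# propensity score (exchangeability and overlap assumed).
mu1 = ols_predict(x[t == 1], y[t == 1], x)
mu0 = ols_predict(x[t == 0], y[t == 0], x)
e = logit_fit(x, t)

# AIPW combines the outcome models with inverse-propensity-weighted
# residuals; it is consistent if either nuisance model is correct.
aipw = np.mean(mu1 - mu0 + t * (y - mu1) / e - (1 - t) * (y - mu0) / (1 - e))
print(f"AIPW ATE estimate: {aipw:.2f}")  # close to the true effect of 2
```

In the semi-parametric estimators the article reviews, the `ols_predict` and `logit_fit` steps would be replaced by data-adaptive learners such as the Super Learner, with cross-fitting to control overfitting bias.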
“Reform” in the economics literature refers to changes in government policies or institutional rules because status-quo policies and institutions are not working well to achieve the goals of economic wellbeing and development. Further, reform refers to alternative policies and institutions that are available which would most likely perform better than the status quo. The main question examined in the “political economy of reform” literature has been why reforms are not undertaken when they are needed for the good of society. The succinct answer from the first generation of research is that conflict of interest between organized socio-political groups is responsible for some groups being able to stall reforms to extract greater private rents from status-quo policies. The next generation of research is tackling more fundamental and enduring questions: Why does conflict of interest persist? How are some interest groups able to exert influence against reforms if there are indeed large gains to be had for society? What institutions are needed to overcome the problem of credible commitment so that interest groups can be compensated or persuaded to support reforms?
Game theory—or the analysis of strategic interactions among individuals and groups—is being used more extensively, going beyond the first generation of research, which focused on the interaction between “winners” and “losers” from reforms. Widespread expectations, or norms, in society at large (not just within organized interest groups) about how others behave in the political sphere of making demands upon government, together with beliefs about the role of public policies, or preferences for public goods, shape these strategic interactions and hence reform outcomes. Examining where these norms and preferences for public goods come from, and how they evolve, is key to understanding why conflict of interest persists and how reformers can commit to finding common ground for socially beneficial reforms. Political markets and institutions, through which the leaders who wield power over public policy are selected and sanctioned, shape norms and preferences for public goods. Leaders who want to pursue reforms need to use the evidence in favor of reforms to build broad-based support in political markets. Contrary to the first-generation view of reforms by stealth, the next generation of research suggests that public communication in political markets is needed to develop a shared understanding of policies for the public good.
Concomitantly, the areas of reform have come full circle, from the market liberalization that dominated the 20th century back to strengthening governments to address problems of market failure and public goods in the 21st century. Reforms involve anti-corruption and public sector management in developing countries; improving health, education, and social protection to address persistent inequality in developed countries; and regulation to preserve competition and to price externalities (such as pollution and environmental depletion) in markets around the world. Understanding the functioning of politics is more important than ever before in determining whether governments are able to pursue reforms for public goods or fall prey to corruption and populism.
Francisco H. G. Ferreira, Emanuela Galasso, and Mario Negre
“Shared prosperity” is a common phrase in current development policy discourse. Its most widely used operational definition—the growth rate in the average income of the poorest 40% of a country’s population—is a truncated measure of change in social welfare. A related concept, the shared prosperity premium—the difference between the growth rate of the mean for the bottom 40% and the growth rate in the overall mean—is similarly analogous to a measure of change in inequality. This article reviews the relationship between these concepts and the more established ideas of social welfare, poverty, inequality, and mobility.
Household survey data can be used to shed light on recent progress in terms of this indicator globally. During 2008–2013, mean incomes for the poorest 40% rose in 60 of the 83 countries for which we have data. In 49 of them, accounting for 65% of the sampled population, it rose faster than overall average incomes, thus narrowing the income gap.
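The operational definitions above reduce to two summary statistics of a survey income distribution. A minimal sketch computing both from hypothetical survey draws (the lognormal samples and all parameter values are assumptions for illustration):

```python
import numpy as np

def bottom40_mean(incomes):
    """Approximate mean income of the poorest 40% (quantile cutoff)."""
    cut = np.quantile(incomes, 0.4)
    return incomes[incomes <= cut].mean()

def shared_prosperity(incomes_t0, incomes_t1, years):
    """Annualized growth of the bottom-40% mean, and the shared
    prosperity premium over growth in the overall mean."""
    g_b40 = (bottom40_mean(incomes_t1) / bottom40_mean(incomes_t0)) ** (1 / years) - 1
    g_all = (incomes_t1.mean() / incomes_t0.mean()) ** (1 / years) - 1
    return g_b40, g_b40 - g_all

# Two hypothetical survey rounds five years apart: mean income rises
# while dispersion falls, so the bottom 40% gains faster than average.
rng = np.random.default_rng(1)
y0 = rng.lognormal(mean=8.0, sigma=1.0, size=10_000)
y1 = rng.lognormal(mean=8.2, sigma=0.9, size=10_000)
g, premium = shared_prosperity(y0, y1, years=5)
print(f"bottom-40 growth: {g:.1%} per year, premium: {premium:+.1%}")
```

A positive premium, as in this constructed example, corresponds to the inequality-narrowing cases described above; in practice the indicator requires comparable household surveys in both periods.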
In the policy space, there are examples both of “pre-distribution” policies (which promote human capital investment among the poor) and of “re-distribution” policies (such as targeted safety nets), which, when well designed, have a sound empirical track record of both raising productivity and improving well-being among the poor.
Studying Long-Term Changes in the Economy and Society Using the HISCO Family of Occupational Measures
Marco H.D. van Leeuwen
Occupations are a key characteristic for analyzing momentous changes in economy and society. Classical economists rooted their analyses in occupational divisions, emphasizing the division of work and its continuous evolution. Modern economists and economic historians also debate the wealth of nations by looking at global changes in the labor force, at changing labor force participation rates, at winners and losers in the class structure, and at variations in these across the globe—stressing the importance of human capital for work and of changes therein for economic growth. To study such momentous changes over past centuries, historical occupational data are needed, as well as measures and procedures to work with these data systematically and comparatively. The Historical International Standard Classification of Occupations (HISCO) maps occupational titles into a common coding scheme across the globe. HISCO-based measures of economic sector and economic specialization have been derived. To answer a number of interesting questions, the HISCO family has been extended to include HISCO-based measures of social status (HISCAM) and social classes (HISCLASS). Armed with this toolbox, scholars are able to study the development of the economy and society over past centuries.
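The core operation HISCO supports can be sketched as a mapping from source-language occupational titles to a common code, from which derived measures (such as economic sector) are computed. The codes and the sector grouping below are hypothetical placeholders for illustration, not authoritative HISCO assignments:

```python
# Titles from different sources and languages mapped to one
# 5-digit scheme (placeholder codes, not real HISCO assignments).
hisco = {
    "farmer": "61110",
    "boer": "61110",        # Dutch title, same occupation, same code
    "carpenter": "95410",
    "schoolteacher": "13270",
}

def economic_sector(code):
    """Toy derived measure: sector from the code's leading digit
    (an assumed convention for this sketch)."""
    return {"6": "agriculture", "9": "production", "1": "professional"}[code[0]]

# The shared coding makes cross-country comparison mechanical:
print(economic_sector(hisco["boer"]))  # agriculture
```

Status (HISCAM) and class (HISCLASS) measures work the same way: once titles carry HISCO codes, the derived score is a lookup keyed on the code.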
Samuel Berlinski and Marcos Vera-Hernández
A set of policies is at the center of the agenda on early childhood development: parenting programs, childcare regulation and subsidies, cash and in-kind transfers, and parental leave policies. Incentives are embedded in these policies, and households react to them differently. They also have varying effects on child development, in both developed and developing countries. We have learned much about the impact of these policies in the past 20 years. We know that parenting programs can enhance child development, that center-based care can increase female labor force participation and child development, that parental leave policies beyond three months do not improve child outcomes, and that the effects of transfers depend greatly on their design. In this review, we focus on the incentives embedded in these policies and on how they interact with the context and with decision makers, in order to understand the heterogeneity of effects and the mechanisms through which these policies work. We conclude by identifying areas for future research.