1–20 of 23 Results for: Economic Development

Article

Anthropometrics is a research program that explores the extent to which economic processes affect human biological processes, using height and weight as markers. This agenda differs from health economics in that, instead of studying diseases or longevity (macro manifestations of well-being), it focuses on cellular-level processes that determine the extent to which the organism thrives in its socio-economic and epidemiological environment. Thus, anthropometric indicators are used as proxy measures for the biological standard of living, complementing conventional measures based on monetary units. Using physical stature as a marker has enabled the profession to learn about the well-being of children and youth, for whom market-generated monetary data are not abundant even in contemporary societies. It is now clear that economic transformations such as the onset of the Industrial Revolution and modern economic growth were accompanied by negative externalities that were hitherto unknown. Moreover, there is plenty of evidence to indicate that the welfare states of Western and Northern Europe take better care of the biological needs of their citizens than the market-oriented health-care system of the United States. Obesity has reached pandemic proportions in the United States, affecting 40% of the population. It is fostered by a sedentary and harried lifestyle, the diminution in self-control, the spread of labor-saving technologies, and the rise of instant gratification characteristic of post-industrial society. The spread of television and of a fast-food culture in the 1950s were watershed developments in this regard that accelerated the process. Obesity poses serious health risks, including heart disease, stroke, diabetes, and some types of cancer, and its cost reaches $150 billion per annum in the United States, or about $1,400 per capita. We conclude that the economy influences not only mortality and health but reaches bone-deep into the cellular level of the human organism. In other words, the economy is inextricably intertwined with human biological processes.

Article

Lawrence J. Lau

Chinese real gross domestic product (GDP) grew from US$369 billion in 1978 to US$12.7 trillion in 2017 (in 2017 prices and exchange rate), at almost 10% per annum, making the country the second largest economy in the world, just behind the United States. During the same period, Chinese real GDP per capita grew from US$383 to US$9,137 (2017 prices), at 8.1% per annum. Chinese economic reform, which began in 1978, consists of two elements—introduction of free markets for goods and services, coupled with conditional producer autonomy, and opening to international trade and direct investment with the rest of the world. In its transition from a centrally planned to a market economy, China employed a “dual-track” approach—with the pre-existing mandatory central plan continuing in force and the establishment of free markets in parallel. In its opening to the world, China set a competitive exchange rate for its currency, made it current-account convertible in 1994, and acceded to the World Trade Organisation (WTO) in 2001. In 2005, China became the second largest trading nation in the world, after the United States. Other Chinese policies complementary to its economic reform include the pre-existing low non-agricultural wage and the limit of one child per couple, introduced in 1979 and phased out in 2016. The high rate of growth of Chinese real output since 1978 can be largely explained by the high rates of growth of inputs, but there were also other factors at work. Chinese economic growth since 1978 may be attributed as follows: (a) the elimination of the initial economic inefficiency (12.7%), (b) the growth of tangible capital (55.7%) and labor (9.7%) inputs, (c) technical progress (or growth of total factor productivity (TFP)) (8%), and (d) economies of scale (14%). The Chinese economy also shares many commonalities with other East Asian economies in terms of their development experiences: the lack of natural endowments, the initial conditions (the low real GDP per capita and the existence of surplus agricultural labor), the cultural characteristics (thrift, industry, and a high value placed on education), the economic policies (competitive exchange rate, export promotion, investment in basic infrastructure, and maintenance of macroeconomic stability), and the consistency, predictability, and stability resulting from continuous one-party rule.
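As a quick illustration of the growth arithmetic behind these figures, the short sketch below (my own back-of-the-envelope check, not code from the article) computes the compound annual growth rates implied by the quoted 1978 and 2017 levels, assuming a 39-year span.

```python
# Hypothetical check of the growth rates implied by the GDP levels quoted above;
# not taken from the article itself.
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two levels over a number of years."""
    return (end / start) ** (1 / years) - 1

years = 2017 - 1978                          # 39 years
gdp_growth = cagr(369e9, 12.7e12, years)     # real GDP, 2017 prices
gdp_pc_growth = cagr(383, 9137, years)       # real GDP per capita, 2017 prices
print(f"Implied real GDP growth:            {gdp_growth:.1%} per annum")
print(f"Implied real GDP per capita growth: {gdp_pc_growth:.1%} per annum")
# Yields roughly 9.5% and 8.5% per annum; small differences from the cited
# "almost 10%" and 8.1% figures presumably reflect rounding and the exact
# base-year convention used in the underlying series.
```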

Article

Florian Exler and Michèle Tertilt

Consumer debt is an important means for consumption smoothing. In the United States, 70% of households own a credit card, and 40% borrow on it. When borrowers cannot (or do not want to) repay their debts, they can declare bankruptcy, which provides additional insurance in tough times. Since the 2000s, up to 1.5% of households have declared bankruptcy per year. Clearly, the option to default affects borrowing interest rates in equilibrium. Consequently, when assessing (welfare) consequences of different bankruptcy regimes or providing policy recommendations, structural models with equilibrium default and endogenous interest rates are needed. At the same time, many questions are quantitative in nature: the benefits of a certain bankruptcy regime critically depend on the nature and amount of risk that households bear. Hence, models for normative or positive analysis should quantitatively match some important data moments. Four important empirical patterns are identified: First, since 1950, consumer debt has risen constantly, and it amounted to 25% of disposable income by 2016. Defaults have risen since the 1980s. Interestingly, interest rates remained roughly constant over the same time period. Second, borrowing and default clearly depend on age: both measures exhibit a distinct hump, peaking around 50 years of age. Third, ownership of credit cards and borrowing clearly depend on income: high-income households are more likely to own a credit card and to use it for borrowing. However, this pattern was stronger in the 1980s than in the 2010s. Finally, interest rates became more dispersed over time: the number of observed interest rates more than quadrupled between 1983 and 2016. These data have clear implications for theory: First, considering the importance of age, life cycle models seem most appropriate when modeling consumer debt and default. Second, bankruptcy must be costly to support any debt in equilibrium. While many types of costs are theoretically possible, only partial repayment requirements are able to quantitatively match the data on filings, debt levels, and interest rates simultaneously. Third, to account for the long-run trends in debts, defaults, and interest rates, several quantitative theory models identify a credit expansion along the intensive and extensive margin as the most likely source. This expansion is a consequence of technological advancements. Many of the quantitative macroeconomic models in this literature assess welfare effects of proposed reforms or of granting bankruptcy at all. These welfare consequences critically hinge on the types of risk that households face—because households incur unforeseen expenditures, not-too-stringent bankruptcy laws are typically found to be welfare superior both to banning bankruptcy (or making it extremely costly) and to extremely lax bankruptcy rules. There are very promising opportunities for future research related to consumer debt and default. Newly available data in the United States and internationally, more powerful computational resources allowing for more complex modeling of household balance sheets, and new loan products are just some of many promising avenues.
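To see why the option to default feeds back into borrowing rates in equilibrium, the display below sketches the zero-profit loan-pricing condition commonly used in this class of quantitative default models; the notation is generic and not taken from the article.

```latex
% Competitive lenders who refinance at the risk-free rate r_f price a one-period
% loan of face value b' at
q(b') = \frac{1 - \pi(b')}{1 + r_f},
% where \pi(b') is the probability that a household carrying debt b' files for
% bankruptcy next period. The implied borrowing rate is therefore
1 + r_b(b') = \frac{1 + r_f}{1 - \pi(b')},
% so bankruptcy rules that raise \pi(b') show up directly as higher interest rates.
```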

Article

Alina Mungiu-Pippidi and Till Hartmann

Corruption and development are two mutually related concepts, both of which have shifted in meaning over time. The predominant 21st-century view of government that regards corruption as unacceptable has its theoretical roots in ancient Western thought, as well as Eastern thought. This condemning view of corruption coexisted at all times with a more morally indifferent or neutral approach, which found its expression most notably among development scholars of the 1960s and 1970s who viewed corruption as an enabler of development rather than an obstacle. Research on the nexus between corruption and development has identified mechanisms that enable corruption and offered theories of change, which have informed practical development policies. Interventions adopting a principal–agent approach are better suited to advanced economies, where corruption is the exception, than to emerging economies, where the opposite of corruption, the norm of ethical universalism, has yet to be built. In such contexts corruption is better approached from a collective action perspective. A review of cross-national data for the period 1996–2017 shows that the control of corruption stagnated in most countries, with only a few exceptions. For a lasting improvement in the control of corruption, societies need to reduce the resources for corruption while simultaneously increasing constraints. The evolution of a governance regime requires a multiple-stakeholder endeavor reaching beyond the sphere of government, involving the press, business, and a strong and activist civil society.

Article

The origins of modern technological change provide the context necessary to understand present-day technological transformation, to investigate the impact of the new digital technologies, and to examine the phenomenon of digital disruption of established industries and occupations. How these contemporary technologies will transform industries and institutions, or serve to create new industries and institutions, will unfold in time. The implications of the relationships between these pervasive new forms of digital transformation and the accompanying new business models, business strategies, innovation, and capabilities are being worked through at global, national, corporate, and local levels. Whatever the technological future holds, it will be defined by continual adaptation, perpetual innovation, and the search for new potential. Presently, the world is experiencing the impact of waves of innovation created by the rapid advance of digital networks, software, and information and communication technology systems that have transformed workplaces, cities, and whole economies. These digital technologies are converging and coalescing into intelligent technology systems that facilitate and structure our lives. Through creative destruction, digital technologies fundamentally challenge existing routines, capabilities, and structures by which organizations presently operate, adapt, and innovate. In turn, digital technologies stimulate a higher rate of both technological and business model innovation, moving from producer innovation toward more user-collaborative and open-collaborative innovation. However, as dominant global platform technologies emerge, some impending dilemmas associated with the concentration and monopolization of digital markets become salient. The extent of the contribution made by digital transformation to economic growth and environmental sustainability requires a critical appraisal.

Article

Thilo R. Huning and Fabian Wahl

The study of the Holy Roman Empire, a medieval state on the territory of modern-day Germany and Central Europe, has attracted generations of qualitative economic historians and quantitative scholars from various fields. Its bordering position between Roman and Germanic legacies, its Carolingian inheritance, and the numerous small states emerging from 1150 onward are suspected, on the one hand, to have hindered market integration and, on the other, to have allowed states to compete. This has inspired many research questions around differences and commonalities in culture, the origin of the state, the integration of goods and financial markets, and technological inventions such as the printing press. While little is yet known about the economy of the rural population, cities and their economic conditions have been extensively studied from the angles of economic geography and institutionalism, and for their influence on early human capital accumulation. The literature has stressed that Germany at this time cannot be seen as a closed economy, but only in the context of Europe and the wider world. Global events, such as the Black Death, and European particularities, such as the Catholic Church, never stopped at countries’ borders. As such, the literature provides an understanding of the prelude to radical changes, such as the Lutheran Reformation, religious wars, and the coming of the modern age with its economic innovations.

Article

Leandro Prados de la Escosura and Blanca Sánchez-Alonso

In assessments of modern-day Spain’s economic progress and living standards, poor performance up to the mid-20th century is frequently blamed on inadequate natural resources, inefficient institutions, lack of education and entrepreneurship, and foreign dependency, but no persuasive arguments have been provided to explain why such adverse circumstances reversed, giving way to the fast transformation that started in the 1950s. Hence, it is necessary first to inquire how much economic progress has been achieved in Spain and what impact it had on living standards and income distribution from the end of the Peninsular War to the present day, and second to provide an interpretation. Research published in the 2010s supports the view that income per person has improved remarkably, driven by increases in labor productivity, which derived, in turn, from a more intense and efficient use of physical and human capital per worker. Exposure to international competition represented a decisive element behind growth performance. From a European perspective, Spain underperformed until 1950. Thereafter, Spain’s economy managed to catch up with more advanced countries until 2007. Although the distribution of the fruits of growth did not follow a linear trend, but rather a Kuznetsian inverted-U pattern, higher levels of income per capita are matched by lower inequality, suggesting that Spaniards’ material wellbeing improved substantially during the modern era.

Article

In the early 21st century, the U.S. economy stood at or very near the top of any ranking of the world’s economies, more obviously so in terms of gross domestic product (GDP), but also when measured by GDP per capita. The current standing of any country reflects three things: how well off it was when it began modern economic growth, how long it has been growing, and how rapidly productivity increased each year. Americans are inclined to think that it was the last of these items that accounted for their country’s success. And there is some truth to the notion that America’s lofty status was due to the continual increases in the efficiency of its factors of production—but that is not the whole story. The rate at which the U.S. economy has grown over its long history—roughly 1.5% per year measured by output per capita—has been modest in comparison with most other advanced nations. The high value of GDP per capita in the United States is due in no small part to the fact that it was already among the world’s highest back in the early 19th century, when the new nation was poised to begin modern economic growth. The United States was also an early starter, and so has experienced growth for a very long time—longer than almost every other nation in the world. The sustained growth in real GDP per capita began sometime in the period 1790 to 1860, although the exact timing of the transition, and even its nature, are still uncertain. Continual efforts to improve the statistical record have narrowed down the time frame in which the transition took place and improved our understanding of the forces that facilitated the transition, but questions remain. In order to understand how the United States made the transition from a slow-growing British colony to a more rapidly advancing, free-standing economy, it is necessary to know more precisely when it made that transition.

Article

Samuel Berlinski and Marcos Vera-Hernández

A set of policies is at the center of the agenda on early childhood development: parenting programs, childcare regulation and subsidies, cash and in-kind transfers, and parental leave policies. Incentives are embedded in these policies, and households react to them differently. They also have varying effects on child development, in both developed and developing countries. We have learned much about the impact of these policies in the past 20 years. We know that parenting programs can enhance child development, that centre-based care might increase female labor force participation and child development, that parental leave policies beyond three months do not improve child outcomes, and that the effects of transfers depend greatly on their design. In this review, we focus on the incentives embedded in these policies and on how they interact with the context and with decision makers, in order to understand the heterogeneity of effects and the mechanisms through which these policies work. We conclude by identifying areas of future research.

Article

While traditional economic literature often sees nominal variables as irrelevant for the real economy, there is a vast body of analytical and empirical economic work that recognizes that, to the extent they exert a critical influence on the macroeconomic environment through a multiplicity of channels, exchange rate policies (ERP) have important consequences for development. ERP influences economic development in various ways: through its incidence on real variables such as investment and growth (and growth volatility) and on nominal aspects such as relative prices or financial depth that, in turn, affect output growth or income distribution, among other development goals. Additionally, through the expected distribution of the real exchange rate, ERP indirectly influences dimensions such as trade or financial fragility and explains, at least partially, the adoption of the euro—an extreme case of a fixed exchange rate arrangement—or the preference for floating exchange rates in the absence of financial dollarization. Importantly, exchange rate pegs have been (and, in many countries, still are) widely used as a nominal anchor to contain inflation in economies where nominal volatility induces agents to use the exchange rate as an implicit unit of account. All of these channels have been reflected to varying degrees in the choice of exchange rate regimes in recent history. The empirical literature on the consequences of ERP has been plagued by definitional and measurement problems. Whereas few economists would contest the textbook definition of canonical exchange rate regimes (fixed regimes involve a commitment to keep the nominal exchange rate at a given level; floating regimes imply no market intervention by the monetary authorities), reality is more nuanced: Pure floats are hard to find, and the empirical distinction between alternative flexible regimes is not always clear. Moreover, there are many different degrees of exchange rate commitments as well as many alternative anchors, sometimes undisclosed. Finally, it is not unusual for a country that officially declares a peg to realign its parity if it finds the constraints on monetary policy or economic activity too taxing. By the same token, a country that commits to a float may choose to intervene in the foreign exchange market to dampen exchange rate fluctuations. The regime of choice depends critically on the situation of each country at a given point in time as much as on the evolution of the global environment. Because both the ERP debate and real-life choices incorporate national and time-specific aspects that tend to evolve over time, the focus of the debate has shifted as well. In the post-World War II years, under the Bretton Woods agreement, most countries pegged their currencies to the U.S. dollar, which in turn was kept convertible to gold.
In the post-Bretton Woods years, after August 1971 when the United States unilaterally abandoned the convertibility of the dollar, thus bringing the Bretton Woods system to an end, the individual choices of ERP were intimately related to the global and local historical contexts, according to whether policy prioritized the use of the exchange rate as a nominal anchor (in favor of pegged or superfixed exchange rates, with dollarization or the launch of the euro as two extreme examples), as a tool to enhance price competitiveness (as in export-oriented developing countries like China in the 2000s), or as a countercyclical buffer (in favor of floating regimes with limited intervention, the prevalent view in the developed world). Similarly, the declining degree of financial dollarization, combined with the improved quality of monetary institutions, explains the growing popularity of inflation targeting with floating exchange rates in emerging economies. Finally, a prudential leaning-against-the-wind intervention to counter mean-reverting global financial cycles and exchange rate swings motivates a more active—and increasingly mainstream—ERP in the late 2000s. The fact that most medium and large developing economies (and virtually all industrial ones) revealed in the 2000s a preference for exchange rate flexibility simply reflects this evolution. Is the combination of inflation targeting (IT) and countercyclical exchange rate intervention a new paradigm? It is still too early to judge. On the one hand, pegs still represent more than half of the IMF reporting countries—particularly small ones—indicating that exchange rate anchors are still favored by small open economies that give priority to the trade dividend of stable exchange rates and find the conduct of an autonomous monetary policy too costly, due to lack of human capital, scale, or an important non-tradable sector. On the other hand, the work and the empirical evidence on the subject, particularly after the recession of 2008–2009, highlight a number of developments in the way advanced and emerging economies think of the impossible trinity that, in a context of deepening financial integration, cast doubt on the IT paradigm, place the dilemma between nominal and real stability back at the forefront, and postulate an IT 2.0, which includes selective exchange rate interventions as a workable compromise. At any rate, the exchange rate debate is still alive and open.

Article

African financial history is often neglected in research on the history of global financial systems, and in its turn research on African financial systems in the past often fails to explore links with the rest of the world. However, African economies and financial systems have been linked to the rest of the world since ancient times. Sub-Saharan Africa was a key supplier of gold used to underpin the monetary systems of Europe and the North from the medieval period through the 19th century. It was West African gold rather than slaves that first brought Europeans to the Atlantic coast of Africa during the early modern period. Within sub-Saharan Africa, currency and credit systems reflected both internal economic and political structures as well as international links. Before the colonial period, indigenous currencies were often tied to particular trades or trade routes. These systems did not immediately cease to exist with the introduction of territorial currencies by colonial governments. Rather, both systems coexisted, often leading to shocks and localized crises during periods of global financial uncertainty. At independence, African governments had to contend with a legacy of financial underdevelopment left from the colonial period. Their efforts to address this have, however, been shaped by global economic trends. Despite recent expansion and innovation, limited financial development remains a hindrance to economic growth.

Article

Maria Soledad Martinez Peria and Mu Yang Shin

The link between financial inclusion and human development is examined here. Using cross-country data, the behavior of variables that try to capture these concepts is examined and preliminary evidence of a positive association is offered. However, because establishing a causal relationship with macro-data is difficult, a thorough review of the literature on the impact of financial inclusion, focusing on micro-studies that can better address identification, is conducted. The literature generally distinguishes between different dimensions of financial inclusion: access to credit, access to bank branches, and access to saving instruments (i.e., accounts). Despite promising results from a first wave of studies, the impact of expanding access to credit seems limited at best, with little evidence of transformative effects on human development outcomes. While there is more promising evidence on the impact of expanding access to bank branches and formal saving instruments, studies show that some interventions, such as one-time account-opening subsidies, are unlikely to have a sizable impact on social and economic outcomes. Instead, well-designed interventions catering to individuals’ specific needs in different contexts seem to be required to realize the full potential of formal financial services to enrich human lives.

Article

During the 18th and 19th centuries, medical spending in the United States rose slowly, on average about 0.25% faster than gross domestic product (GDP), and varied widely between rural and urban regions. Accumulating scientific advances caused spending to accelerate by 1910. From 1930 to 1955, rapid per-capita income growth accommodated major medical expansion while keeping the health share of GDP almost constant. During the 1950s and 1960s, prosperity and investment in research, the workforce, and hospitals caused a rapid surge in spending and consolidated a truly national health system. Excess growth rates (above GDP growth) were above +5% per year from 1966 to 1970, which would have doubled the health-sector share in fifteen years had it not moderated, falling under +3% in the 1980s, +2% in the 1990s, and +1.5% since 2005. The question of when national health expenditure growth can be brought into line with GDP and made sustainable for the long run is still open. A review of historical data over three centuries forces confrontation with issues regarding what to include and how long events continue to affect national health accounting and policy. Empirical analysis at a national scale over multiple decades fails to support the position that many of the commonly discussed variables (obesity, aging, mortality rates, coinsurance) cause significant shifts in expenditure trends. What does become clear is that there are long and variable lags before macroeconomic and technological events affect spending: three to six years for business cycles and multiple decades for major recessions, scientific discoveries, and organizational change. Health-financing mechanisms, such as employer-based health insurance, Medicare, and the Affordable Care Act (Obamacare), are seen to be both cause and effect, taking years to develop and affecting spending for decades to come.
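As a quick check on the arithmetic behind the claim that roughly 5% annual excess growth would double the health-sector share in about fifteen years (a back-of-the-envelope calculation, not the article's own), the doubling time follows from compounding the excess growth rate:

```latex
% If the health share of GDP grows at the excess rate g_x, then s_t = s_0 (1 + g_x)^t.
% The share doubles when (1 + g_x)^t = 2:
t_{\text{double}} \;=\; \frac{\ln 2}{\ln(1 + g_x)}
                 \;\approx\; \frac{0.693}{\ln(1.05)}
                 \;\approx\; 14.2 \text{ years},
% which is consistent with the "fifteen years" figure quoted above for g_x = 5\%.
```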

Article

The Indian Union, from the time of independence from British colonial rule in 1947 until now, has undergone shifts in the trajectory of economic change and in the political context of economic change. One of these transitions was a ‘green revolution’ in farming that occurred in the 1970s. In the same decade, Indian migration to the Persian Gulf states began to increase. In the 1980s, the government of India seemed to abandon a strategy of economic development that had relied on public investment in heavy industries and to encourage private enterprise in most fields. These shifts did not always follow announced policy, had a deep impact on economic growth and standards of living, and generated new forms of inequality. Therefore, their causes and consequences are matters of discussion and debate. Most discussions and debates form around three larger questions. First, why was there a turnaround in the pace of economic change in the 1980s? The answer lies in a fortuitous rebalancing of the role of openness and private investment in the economy. Second, why did human development lag behind achievements in income growth after the turnaround? A preoccupation with state-aided industrialization, the essay answers, entailed neglect of infrastructure and human development, and some of that legacy persisted. If the quality of life failed to improve enough, then a third question follows: why did the democratic political system survive at all if it did not equitably distribute the benefits from growth? In answer, the essay discusses studies that question the extent of the failure.

Article

In contrast with the existing cross-country literature on institutions and development, the overview in this article focuses on case studies of institutions at the disaggregated level that help or hinder productivity growth. It also shows how, along with rule-based systems, institutional systems based on social relations and networks, as well as community organizations, can resolve some issues of collective action in development. At the level of the state, our discussion focuses on incentive issues in the internal organization of government and on how the nature of accountability structures at different levels of government can help or hinder development. In view of the breadth of the relevant literature, we have deliberately confined ourselves to the available empirical case studies in only the two largest developing countries, China and India.

Article

The links among international reserves, exchange rates, and monetary policy can be understood through the lens of a modern incarnation of the “impossible trinity” (aka the “trilemma”), based on Mundell and Fleming’s hypothesis that a country may simultaneously choose any two, but not all, of the following three policy goals: monetary independence, exchange rate stability, and financial integration. The original economic trilemma was framed in the 1960s, during the Bretton Woods regime, as a binary choice of two out of the possible three policy goals. However, in the 1990s and 2000s, emerging markets and developing countries found that deeper financial integration comes with growing exposure to financial instability and the increased risk of “sudden stop” of capital inflows and capital flight crises. These crises have been characterized by exchange rate instability triggered by countries’ balance sheet exposure to external hard currency debt—exposures that have propagated banking instabilities and crises. Such events have frequently morphed into deep internal and external debt crises, ending with bailouts of systemic banks and powerful macro players. The resultant domestic debt overhang led to fiscal dominance and a reduction of the scope of monetary policy. With varying lags, these crises induced economic and political changes, in which a growing share of emerging markets and developing countries converged to “in-between” regimes in the trilemma middle range—that is, managed exchange rate flexibility, controlled financial integration, and limited but viable monetary autonomy. Emerging research has validated a modern version of the trilemma: that is, countries face a continuous trilemma trade-off in which a higher trilemma policy goal is “traded off” with a drop in the weighted average of the other two trilemma policy goals. The concerns associated with exposure to financial instability have been addressed by varying configurations of managing public buffers (international reserves, sovereign wealth funds), as well as growing application of macro-prudential measures aimed at inducing systemic players to internalize the impact of their balance sheet exposure on a country’s financial stability. Consequently, the original trilemma has morphed into a quadrilemma, wherein financial stability has been added to the trilemma’s original policy goals. Size does matter, and there is no way for smaller countries to insulate themselves fully from exposure to global cycles and shocks. Yet successful navigation of the open-economy quadrilemma helps in reducing the transmission of external shocks to the domestic economy, as well as the costs of domestic shocks. These observations explain the relative resilience of emerging markets—especially in countries with more mature institutions—as they have been buffered by deeper precautionary management of reserves and greater fiscal and monetary space. We close the discussion by noting that the global financial crisis, and the subsequent Eurozone crisis, have shown that no country is immune from exposure to financial instability and from the modern quadrilemma. However, countries with mature institutions, deeper fiscal capabilities, and more fiscal space may replace the reliance on costly precautionary buffers with bilateral swap lines coordinated among their central banks. While the benefits of such arrangements are clear, they may hinge on the presence and credibility of their fiscal backstop mechanisms, and on curbing the resultant moral hazard.
Time will test this credibility, and the degree to which risk-pooling arrangements can be extended to cover the growing share of emerging markets and developing countries.
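The continuous, "weighted average" version of the trilemma described above has been operationalized in the empirical literature (notably in the Aizenman–Chinn–Ito trilemma indexes) roughly as a linear trade-off among normalized policy-goal indexes; the display below is a sketch of that idea with illustrative notation, not the exact specification used in the article.

```latex
% With indexes of monetary independence (MI), exchange rate stability (ERS), and
% financial openness (KAOPEN), each normalized to [0, 1], the data approximately satisfy
1 \;\approx\; a \, MI_{it} + b \, ERS_{it} + c \, KAOPEN_{it} + \varepsilon_{it},
% so that raising any one policy goal must be traded off against a drop in the
% weighted average of the other two.
```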

Article

While machine learning (ML) methods have received a lot of attention in recent years, these methods are primarily for prediction. Empirical researchers conducting policy evaluations are, on the other hand, preoccupied with causal problems, trying to answer counterfactual questions: what would have happened in the absence of a policy? Because these counterfactuals can never be directly observed (described as the “fundamental problem of causal inference”), prediction tools from the ML literature cannot be readily used for causal inference. In the last decade, major innovations have taken place incorporating supervised ML tools into estimators for causal parameters such as the average treatment effect (ATE). This holds the promise of attenuating model misspecification issues and of increasing transparency in model selection. One particularly mature strand of the literature includes approaches that incorporate supervised ML approaches in the estimation of the ATE of a binary treatment, under the unconfoundedness and positivity assumptions (also known as exchangeability and overlap assumptions). This article begins by reviewing popular supervised machine learning algorithms, including tree-based methods and the lasso, as well as ensembles, with a focus on the Super Learner. Then, some specific uses of machine learning for treatment effect estimation are introduced and illustrated, namely (1) to create balance among treated and control groups, (2) to estimate so-called nuisance models (e.g., the propensity score, or conditional expectations of the outcome) in semi-parametric estimators that target causal parameters (e.g., targeted maximum likelihood estimation or the double ML estimator), and (3) the use of machine learning for variable selection in situations with a high number of covariates. Since there is no universal best estimator, whether parametric or data-adaptive, it is best practice to incorporate a semi-automated approach that can select the models best supported by the observed data, thus attenuating the reliance on subjective choices.
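To make the second use concrete, here is a minimal, self-contained sketch (my own illustration on simulated data, not code from the article) of plugging ML-based nuisance models, a propensity score and two outcome regressions, into a doubly robust (AIPW) estimator of the ATE with cross-fitting, in the spirit of the double ML approach mentioned above.

```python
# Illustrative AIPW / cross-fitted ATE estimation under unconfoundedness and
# positivity; data-generating process and variable names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, p = 2000, 5
X = rng.normal(size=(n, p))                      # observed covariates
e_true = 1 / (1 + np.exp(-X[:, 0]))              # true propensity score
D = rng.binomial(1, e_true)                      # binary treatment
Y = 1.0 * D + X[:, 0] + rng.normal(size=n)       # outcome; true ATE = 1.0

scores = np.zeros(n)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    # Nuisance models are fit on the training fold only (cross-fitting).
    ps_model = RandomForestClassifier(n_estimators=200, min_samples_leaf=20,
                                      random_state=0).fit(X[train], D[train])
    m1 = RandomForestRegressor(n_estimators=200, min_samples_leaf=20,
                               random_state=0).fit(X[train][D[train] == 1],
                                                   Y[train][D[train] == 1])
    m0 = RandomForestRegressor(n_estimators=200, min_samples_leaf=20,
                               random_state=0).fit(X[train][D[train] == 0],
                                                   Y[train][D[train] == 0])
    ps = np.clip(ps_model.predict_proba(X[test])[:, 1], 0.01, 0.99)  # trim for positivity
    mu1, mu0 = m1.predict(X[test]), m0.predict(X[test])
    d, y = D[test], Y[test]
    # Doubly robust (AIPW) score for each held-out observation.
    scores[test] = (mu1 - mu0
                    + d * (y - mu1) / ps
                    - (1 - d) * (y - mu0) / (1 - ps))

ate = scores.mean()
se = scores.std(ddof=1) / np.sqrt(n)
print(f"Cross-fitted AIPW ATE estimate: {ate:.3f} (se {se:.3f})")
```

In practice the random forests here would typically be replaced by an ensemble such as the Super Learner, and packaged implementations (e.g., of TMLE or double ML) would be preferred over hand-rolled code.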

Article

Despite the aggregate value of M&A market transactions amounting to several trillion dollars on an annual basis, acquiring firms often underperform relative to non-acquiring firms, especially in public takeovers. Although hundreds of academic studies have investigated the deal- and firm-level factors associated with M&A announcement returns, many factors that increase M&A performance in the short run fail to relate to sustained long-run returns. In order to understand value creation in M&As, it is key to identify the firm and deal characteristics that can reliably predict long-run performance. Broadly speaking, long-run underperformance in M&A deals results from poor acquirer governance (reflected by CEO overconfidence and a lack of (institutional) shareholder monitoring) as well as from poor merger execution and integration (as captured by the degree of acquirer-target relatedness in the post-merger integration process). Although many more dimensions affect immediate deal transaction success, their effect on long-run performance is non-existent, or mixed at best.

Article

Ching-mu Chen and Shin-Kun Peng

For research attempting to investigate why economic activities are distributed unevenly across geographic space, new economic geography (NEG) provides a general equilibrium-based and microfounded approach to modeling a spatial economy characterized by a large variety of economic agglomerations. NEG emphasizes how agglomeration (centripetal) and dispersion (centrifugal) forces interact to generate observed spatial configurations and uneven distributions of economic activity. However, numerous economic geographers prefer to use the term new economic geographies for the vigorous and diversified academic output inspired by the institutional-cultural turn of economic geography. Accordingly, the term geographical economics has been suggested as an alternative to NEG. Approaches for modeling a spatial economy through the use of a general equilibrium framework have not only rendered existing concepts amenable to empirical scrutiny and policy analysis but also drawn economic geography and location theories from the periphery to the center of mainstream economic theory. Reduced-form empirical studies have attempted to test certain implications of NEG. However, due to NEG’s simplified geographic settings, the developed NEG models cannot be easily applied to observed data. The recent development of quantitative spatial models based on the mechanisms formalized by previous NEG theories has been a breakthrough in building an empirically relevant framework for implementing counterfactual policy exercises. If quantitative spatial models can connect with observed data in an empirically meaningful manner, they can enable the decomposition of key theoretical mechanisms and afford specificity in the evaluation of the general equilibrium effects of policy interventions in particular settings. In the several decades since its proposal, NEG has been criticized for its parsimonious assumptions about the economy across space and time. Therefore, existing challenges still call for theoretical and quantitative models built on new microfoundations pertaining to the interactions between economic agents across geographical space and the relationship between geography and economic development.
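As a stylized illustration of the kind of structure that quantitative spatial models typically build on (the notation below is generic, not taken from the article), bilateral trade or commuting flows are usually disciplined by a gravity equation, and counterfactual exercises re-solve the model after changing its primitives:

```latex
% Share of location n's expenditure sourced from location i, given productivity A_i,
% wage w_i, iceberg trade cost \tau_{ni} \ge 1, and trade elasticity \theta:
\pi_{ni} \;=\; \frac{\left( \tau_{ni}\, w_i / A_i \right)^{-\theta}}
                    {\sum_{k} \left( \tau_{nk}\, w_k / A_k \right)^{-\theta}} .
% Agglomeration and dispersion forces enter through the way A_i and local prices
% respond to the spatial distribution of workers and firms; a counterfactual
% (e.g., a lower \tau_{ni} from new infrastructure) re-solves the system for wages
% and population and compares the new spatial equilibrium to the baseline.
```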

Article

Stuti Khemani

“Reform” in the economics literature refers to changes in government policies or institutional rules because status-quo policies and institutions are not working well to achieve the goals of economic wellbeing and development. Further, reform refers to alternative policies and institutions that are available and that would most likely perform better than the status quo. The main question examined in the “political economy of reform” literature has been why reforms are not undertaken when they are needed for the good of society. The succinct answer from the first generation of research is that conflict of interest between organized socio-political groups is responsible for some groups being able to stall reforms to extract greater private rents from status-quo policies. The next generation of research is tackling more fundamental and enduring questions: Why does conflict of interest persist? How are some interest groups able to exert influence against reforms if there are indeed large gains to be had for society? What institutions are needed to overcome the problem of credible commitment so that interest groups can be compensated or persuaded to support reforms? Game theory—or the analysis of strategic interactions among individuals and groups—is being used more extensively, going beyond the first generation of research, which focused on the interaction between “winners” and “losers” from reforms. Widespread expectations, or norms, in society at large (not just within organized interest groups) about how others behave in the political sphere of making demands upon government, together with beliefs about the role of public policies, or preferences for public goods, shape these strategic interactions and hence reform outcomes. Examining where these norms and preferences for public goods come from, and how they evolve, is key to understanding why conflict of interest persists and how reformers can commit to finding common ground for socially beneficial reforms. Political markets and institutions, through which the leaders who wield power over public policy are selected and sanctioned, shape norms and preferences for public goods. Leaders who want to pursue reforms need to use the evidence in favor of reforms to build broad-based support in political markets. Contrary to the first-generation view of reforms by stealth, the next generation of research suggests that public communication in political markets is needed to develop a shared understanding of policies for the public good. Concomitantly, the areas of reform have come full circle, from market liberalization, which dominated the 20th century, back to strengthening governments to address problems of market failure and public goods in the 21st century. Reforms involve anti-corruption and public sector management in developing countries; improving health, education, and social protection to address persistent inequality in developed countries; and regulation to preserve competition and to price externalities (such as pollution and environmental depletion) in markets around the world. Understanding the functioning of politics is more important than ever before in determining whether governments are able to pursue reforms for public goods or fall prey to corruption and populism.