Helen Hayes and Matt Sutton
Contracts and working conditions are important influences on the medical workforce and must be carefully constructed and considered by policymakers. Contracts involve an enforceable agreement of the rights and responsibilities of both employer and employee. The principal-agent relationship and the presence of asymmetric information in healthcare mean that contracts must be incentive compatible and create sufficient incentive for doctors to act in the payer’s best interests. Within medicine, there are special characteristics that are believed to be particularly pertinent to doctors, who act as agents to both the patient and the payer. These include intrinsic motivation, professionalism, altruism, and multitasking, and they influence the success of these contracts. The three most popular methods of payment are fee-for-service, capitation, and salaries. In most contexts a blend of these three payment methods is used; however, guidance on the most appropriate blend is unclear and the evidence on the special nature of doctors is insubstantial. The role of skill mix and teamwork in a healthcare setting is an important consideration, as it affects the success of incentives and payment systems and the efficiency of workers. Additionally, with increasing demand for healthcare, changing the skill mix is one response to problems with recruitment and retention in health services. Health systems in many settings depend on a large proportion of foreign-born workers, so migration is a key consideration in the retention and recruitment of health workers. Finally, forms of external regulation such as accreditation, inspection, and revalidation are widely used in healthcare systems; however, robust evidence of their effectiveness is lacking.
Corporate governance includes legal, contractual, and market mechanisms that structure decision-making within business corporations. Most attention has focused on corporate governance in large U.S. public corporations with dispersed shareholding. The separation of ownership from control in those corporations creates a unique problem, as shareholders typically have weak individual incentive to monitor managers. Mechanisms that have been developed to address this agency problem include independent directors, fiduciary duty, securities law disclosure, executive compensation, various professional gatekeepers, the market for corporate control, and shareholder activism. In most countries outside the United States, there are few companies with dispersed shareholding. Instead, most companies have a controlling shareholder or group. These companies face a different agency problem, the possibility that controlling shareholders may use their power to gain at the expense of minority shareholders.
Enterprise governance refers to mechanisms aimed at related agency problems that occur in closely held companies without publicly traded equity interests. Here too the agency problem typically encountered is the potential conflict between controllers and minority investors, with the added twist that share illiquidity removes an important protection for the minority. Closely held companies have adopted a variety of contractual mechanisms to address these concerns. Other than the important but special cases of venture capital and private equity fund investments, there is less empirical evidence on governance in closely held companies because information is generally much harder to find.
Alina Mungiu-Pippidi and Till Hartmann
Corruption and development are two mutually related concepts that have both shifted in meaning across time. The predominant 21st-century view of government that regards corruption as unacceptable has its theoretical roots in ancient Western as well as Eastern thought. This condemning view of corruption has always coexisted with a more morally indifferent or neutral approach, which found its expression most notably among development scholars of the 1960s and 1970s who viewed corruption as an enabler of development rather than an obstacle. Research on the nexus between corruption and development has identified mechanisms that enable corruption and offered theories of change, which have informed practical development policies. Interventions adopting a principal–agent approach fit advanced economies, where corruption is the exception, better than emerging economies, where the opposite of corruption—the norm of ethical universalism—has yet to be built. In such contexts, corruption is better approached from a collective action perspective. Reviewing cross-national data for the period 1996–2017, it becomes apparent that the control of corruption stagnated in most countries, with only a few exceptions. For a lasting improvement in the control of corruption, societies need to reduce the resources for corruption while simultaneously increasing constraints. The evolution of a governance regime requires a multi-stakeholder endeavor reaching beyond the sphere of government to involve the press, business, and a strong and activist civil society.
The origins of modern technological change provide the context necessary to understand present-day technological transformation, to investigate the impact of the new digital technologies, and to examine the phenomenon of digital disruption of established industries and occupations. How these contemporary technologies will transform industries and institutions, or serve to create new industries and institutions, will unfold in time. The implications of the relationships between these pervasive new forms of digital transformation and the accompanying new business models, business strategies, innovation, and capabilities are being worked through at global, national, corporate, and local levels. Whatever the technological future holds it will be defined by continual adaptation, perpetual innovation, and the search for new potential.
Presently, the world is experiencing the impact of waves of innovation created by the rapid advance of digital networks, software, and information and communication technology systems that have transformed workplaces, cities, and whole economies. These digital technologies are converging and coalescing into intelligent technology systems that facilitate and structure our lives. Through creative destruction, digital technologies fundamentally challenge existing routines, capabilities, and structures by which organizations presently operate, adapt, and innovate. In turn, digital technologies stimulate a higher rate of both technological and business model innovation, moving from producer innovation toward more user-collaborative and open-collaborative innovation. However, as dominant global platform technologies emerge, some impending dilemmas associated with the concentration and monopolization of digital markets become salient. The extent of the contribution made by digital transformation to economic growth and environmental sustainability requires a critical appraisal.
Carlos Garriga and Aaron Hedlund
The global financial crisis of 2007–2009 helped usher in a stronger consensus about the central role that housing plays in shaping economic activity, particularly during large boom and bust episodes. The latest research regards the causes, consequences, and policy implications of housing crises with a broad focus that includes empirical and structural analysis, insights from the 2000s experience in the United States, and perspectives from around the globe. Even with the significant degree of heterogeneity in legal environments, institutions, and economic fundamentals over time and across countries, several common themes emerge. Research indicates that fundamentals such as productivity, income, and demographics play an important role in generating sustained movements in house prices. While these forces can also contribute to boom-bust episodes, periods of large house price swings often reflect an evolving housing premium caused by financial innovation and shifts in expectations, which are in turn amplified by changes to the liquidity of homes. Regarding credit, the latest evidence indicates that expansions in lending to marginal borrowers via the subprime market may not be entirely to blame for the run-up in mortgage debt and prices that preceded the 2007–2009 financial crisis. Instead, the expansion in credit manifested by lower mortgage rates was broad-based and caused borrowers across a wide range of incomes and credit scores to dramatically increase their mortgage debt. To whatever extent changing beliefs about future housing appreciation may have contributed to higher realized house price growth in the 2000s, it appears that neither borrowers nor lenders anticipated the subsequent collapse in house prices. However, expectations about future credit conditions—including the prospect of rising interest rates—may have contributed to the downturn. 
For macroeconomists and those otherwise interested in the broader economic implications of the housing market, a growing body of evidence combining micro data and structural modeling finds that large swings in house prices can produce large disruptions to consumption, the labor market, and output. Central to this transmission is the composition of household balance sheets—not just the amount of net worth, but also how that net worth is allocated between short-term liquid assets, illiquid housing wealth, and long-term defaultable mortgage debt. By shaping the incentive to default, foreclosure laws have a profound ex-ante effect on the supply of credit as well as on the ex-post economic response to large shocks that affect households’ degree of financial distress. On the policy front, research finds mixed results for some of the crisis-related interventions implemented in the U.S. while providing guidance for future measures should another housing bust of similar or greater magnitude recur. Lessons are also provided for the development of macroprudential policy aimed at preventing such a future crisis without unduly constraining economic performance in good times.
Michael P. Clements and Ana Beatriz Galvão
At a given point in time, a forecaster will have access to data on macroeconomic variables that have been subject to different numbers of rounds of revisions, leading to varying degrees of data maturity. Observations referring to the very recent past will be first-release data, or data that have so far been revised only a few times. Observations referring to a decade ago will typically have been subject to many rounds of revisions. How should the forecaster use the data to generate forecasts of the future? The conventional approach would be to estimate the forecasting model using the latest vintage of data available at that time, implicitly ignoring the differences in data maturity across observations.
The conventional approach for real-time forecasting treats the data as given, that is, it ignores the fact that it will be revised. In some cases, the costs of this approach are point predictions and assessments of forecasting uncertainty that are less accurate than approaches to forecasting that explicitly allow for data revisions. There are several ways to “allow for data revisions,” including modeling the data revisions explicitly, an agnostic or reduced-form approach, and using only largely unrevised data. The choice of method partly depends on whether the aim is to forecast an earlier release or the fully revised values.
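The cost of the conventional approach can be illustrated with a minimal simulation, assuming an AR(1) data-generating process and first releases that add measurement noise which later revisions remove (all parameter values here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a "fully revised" AR(1) series: y_t = 0.8 * y_{t-1} + e_t
n, phi = 500, 0.8
y_final = np.zeros(n)
for t in range(1, n):
    y_final[t] = phi * y_final[t - 1] + rng.normal()

# First-release data = final data plus measurement noise that later
# revisions remove; recent observations are therefore less mature.
y_first = y_final + rng.normal(scale=0.7, size=n)

def ar1_slope(y):
    """OLS estimate of the AR(1) coefficient."""
    x, z = y[:-1], y[1:]
    x = x - x.mean()
    return float((x @ (z - z.mean())) / (x @ x))

slope_final = ar1_slope(y_final)  # close to the true 0.8
slope_first = ar1_slope(y_first)  # attenuated by revision noise
```

In this sketch the coefficient estimated on noisy first releases is attenuated relative to the estimate on fully revised data—one concrete way in which treating unrevised data as final can degrade point predictions.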
Deborah J. Street and Rosalie Viney
Discrete choice experiments (DCEs) are a popular stated preference tool in health economics and have been used to address policy questions, establish consumer preferences for health and healthcare, and value health states, among other applications. They are particularly useful when revealed preference data are not available. Most commonly in choice experiments, respondents are presented with a situation in which a choice must be made and with a set of possible options. The options are described by a number of attributes, each of which takes a particular level for each option. The set of possible options is called a “choice set,” and a set of choice sets comprises the choice experiment. The attributes and levels are chosen by the analyst to allow modeling of the underlying preferences of respondents. Respondents are assumed to make utility-maximizing decisions, and the goal of the choice experiment is to estimate how the attribute levels affect the utility of the individual. Utility is assumed to have a systematic component (related to the attributes and levels) and a random component (which may relate to unobserved determinants of utility, individual characteristics, or random variation in choices), and an assumption must be made about the distribution of the random component. The structure of the set of choice sets, from the universe of possible choice sets represented by the attributes and levels, that is shown to respondents determines which models can be fitted to the observed choice data and how accurately the effect of the attribute levels can be estimated. Important structural issues include the number of options in each choice set and whether or not options in the same choice set have common attribute levels.
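As a minimal sketch of this random-utility setup, consider one choice set under a conditional logit model, which arises when the random component is i.i.d. extreme value (the attributes, levels, and preference weights below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# One choice set with three options described by two attributes,
# e.g. (out-of-pocket cost, waiting time) -- illustrative values.
X = np.array([[10.0, 2.0],
              [ 5.0, 6.0],
              [ 0.0, 9.0]])
beta = np.array([-0.2, -0.1])  # assumed preference weights

v = X @ beta                     # systematic utility component
p = np.exp(v) / np.exp(v).sum()  # logit choice probabilities

# Simulate one utility-maximizing respondent: add i.i.d. Gumbel noise
# (the random component) and pick the option with highest total utility.
u = v + rng.gumbel(size=3)
choice = int(np.argmax(u))
```

Fitting the model to many such observed choices recovers the preference weights, which is what determines how accurately attribute effects can be estimated under a given design.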
Two broad approaches exist for constructing the set of choice sets that make up a DCE—theoretical and algorithmic. Although simulation studies and in-field comparisons of designs constructed by both approaches have been carried out, there is no consensus about which approach consistently delivers better designs.
Gabriella Conti, Giacomo Mason, and Stavros Poupakis
Building on early animal studies, 20th-century researchers increasingly explored the fact that early events—ranging from conception to childhood—affect a child’s health trajectory in the long term. By the 21st century, a wide body of research had emerged, incorporating the original fetal origins hypothesis into the developmental origins of health and disease. Evidence from Organization for Economic Cooperation and Development (OECD) countries suggests that health inequalities are strongly correlated with many dimensions of socioeconomic status, such as educational attainment, and that they tend to increase with age and carry stark intergenerational implications. Different economic theories have been developed to rationalize this evidence, with an overarching comprehensive framework still lacking. Existing models widely rely on human capital theory, which has given rise to separate dynamic models of adult and child health capital within a production function framework. A large body of empirical evidence has also found support for the developmental origins of inequalities in health. On the one hand, studies exploiting quasi-random exposure to adverse events have shown long-term physical and mental health impacts of exposure to early shocks, including pandemics or maternal illness, famine, malnutrition, stress, vitamin deficiencies, maltreatment, pollution, and economic recessions. On the other hand, studies from the 20th century have shown that early interventions of various content and delivery formats improve life course health. Further, given that the most socioeconomically disadvantaged groups show the greatest gains, such measures can potentially reduce health inequalities. However, studies of long-term impacts as well as the mechanisms via which shocks or policies affect health, and the dynamic interaction among them, are still lacking. Mapping the complexities of those early event dynamics is an important avenue for future research.
While definitional and measurement problems pose a challenge, there is no doubt that disability affects a noticeable share of the population, the vast majority of whom live in low- and middle-income countries (LMICs). The still comparatively scarce empirical data and evidence suggest that disability is closely associated with poverty and other indicators of economic deprivation at both the country and—albeit with slightly greater nuance—the individual/household level. There is also a growing body of evidence documenting the sizeable additional costs incurred by persons with disabilities (PwDs) as a direct or indirect consequence of their disability, underlining the increased risk of PwDs (and the households they are part of) falling under the absolute poverty line in any given LMIC.
Looking ahead, there remains considerable scope for more evidence on the causal nature of the link between disability and poverty, as well as on the (cost-)effectiveness of interventions and policies attempting to improve the well-being of PwDs.
Denzil G. Fiebig and Hong Il Yoo
Stated preference methods are used to collect individual level data on what respondents say they would do when faced with a hypothetical but realistic situation. The hypothetical nature of the data has long been a source of concern among researchers, as such data stand in contrast to revealed preference data, which record the choices made by individuals in actual market situations. But there is considerable support for stated preference methods as they are a cost-effective means of generating data that can be specifically tailored to a research question and, in some cases, such as gauging preferences for a new product or non-market good, there may be no practical alternative source of data. While stated preference data come in many forms, the primary focus in this article will be on data generated by discrete choice experiments, and thus the econometric methods will be those associated with modeling binary and multinomial choices with panel data.
Michael Drummond, Rosanna Tarricone, and Aleksandra Torbica
There are a number of challenges in the economic evaluation of medical devices (MDs). They are typically less regulated than pharmaceuticals, and the clinical evidence requirements for market authorization are generally lower. There are also specific characteristics of MDs, such as the device–user interaction (learning curve), the incremental nature of innovation, the dynamic nature of pricing, and the broader organizational impact. Therefore, a number of initiatives need to be taken in order to facilitate the economic evaluation of MDs. First, the regulatory processes for MDs need to be strengthened and more closely aligned to the needs of economic evaluation. Second, the methods of economic evaluation need to be enhanced by improving the analysis of the available clinical data, establishing high-quality clinical registries, and better recognizing MDs’ specific characteristics. Third, the market entry and diffusion of MDs need to be better managed by understanding the key influences on MD diffusion and linking diffusion with cost-effectiveness evidence through the use of performance-based risk-sharing arrangements.
Eline Aas, Emily Burger, and Kine Pedersen
The objective of medical screening is to prevent future disease (secondary prevention) or to improve prognosis by detecting the disease at an earlier stage (early detection). This involves examination of individuals with no symptoms of disease. Introducing a screening program is resource demanding; therefore, stakeholders emphasize the need for comprehensive evaluation, in which costs and health outcomes are reasonably balanced, prior to population-based implementation.
Economic evaluation of population-based screening programs involves quantifying the health benefits (e.g., life-years gained) and monetary costs of all relevant screening strategies. The alternative strategies can vary by starting and stopping ages, screening frequency, and follow-up regimens after a positive test result. Following evaluation of all strategies, the efficiency frontier displays the efficient strategies, and the country-specific cost-effectiveness threshold is used to determine the optimal, i.e., most cost-effective, screening strategy.
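A minimal sketch of how the efficiency frontier and the optimal strategy can be computed from cost-effect pairs (strategy names, costs, effects, and the threshold are all invented numbers):

```python
# Each strategy: (name, cost, effect in life-years gained) -- illustrative.
strategies = [
    ("no screening", 0.0, 20.00),
    ("every 5 years", 500.0, 20.10),
    ("every 2 years", 1200.0, 20.14),   # dominated in this example
    ("every 3 years", 900.0, 20.15),
    ("annual", 2500.0, 20.17),
]

def efficiency_frontier(strategies):
    """Drop strictly dominated strategies, then those with non-increasing
    ICERs (extended dominance), leaving the efficient frontier."""
    pts = sorted(strategies, key=lambda s: (s[1], -s[2]))
    frontier, best = [], float("-inf")
    for s in pts:                       # strict dominance
        if s[2] > best:
            frontier.append(s)
            best = s[2]
    changed = True
    while changed:                      # extended dominance
        changed = False
        for i in range(1, len(frontier) - 1):
            a, b, c = frontier[i - 1], frontier[i], frontier[i + 1]
            if (b[1] - a[1]) / (b[2] - a[2]) >= (c[1] - b[1]) / (c[2] - b[2]):
                frontier.pop(i)
                changed = True
                break
    return frontier

def optimal(frontier, threshold):
    """Most effective frontier strategy whose ICER is below the threshold
    (frontier ICERs are increasing, so a single pass suffices)."""
    best = frontier[0]
    for prev, cur in zip(frontier, frontier[1:]):
        icer = (cur[1] - prev[1]) / (cur[2] - prev[2])
        if icer <= threshold:
            best = cur
    return best

frontier = efficiency_frontier(strategies)
chosen = optimal(frontier, threshold=30000.0)  # country-specific threshold
```

In this toy example "every 2 years" is strictly dominated (it costs more and yields less than "every 3 years"), and the threshold of 30,000 per life-year selects the 3-yearly strategy as most cost-effective.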
Similar to other preventive interventions, the costs of screening are immediate, while the health benefits accumulate only after several years. Hence, the effect of discounting can be substantial when estimating the net present value (NPV) of each strategy. Reporting both discounted and undiscounted results is recommended. In addition, intermediate outcome measures, such as the number of positive tests, cases detected, and events prevented, can be valuable supplemental outcomes to report.
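A small sketch of the discounting point, assuming illustrative cost and benefit streams and a 3.5% annual discount rate:

```python
def present_value(stream, rate):
    """Discount a {year: amount} stream to its net present value."""
    return sum(amount / (1 + rate) ** year for year, amount in stream.items())

# Illustrative screening strategy: costs fall early, health gains accrue late.
costs = {0: 150.0, 5: 150.0, 10: 150.0}        # screening rounds
benefits = {t: 0.01 for t in range(20, 50)}    # life-years gained, years 20-49

rate = 0.035  # a commonly used annual discount rate

undiscounted_icer = sum(costs.values()) / sum(benefits.values())
discounted_icer = present_value(costs, rate) / present_value(benefits, rate)
# Discounting shrinks the late benefits far more than the early costs,
# so the discounted cost per life-year gained is substantially higher.
```

This is why reporting both discounted and undiscounted results matters: the gap between the two cost-per-outcome figures is largest precisely for interventions, like screening, whose benefits arrive decades after the costs.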
Estimating the cost-effectiveness of alternative screening strategies is often based on decision-analytic models, synthesizing evidence from clinical trials, literature, guidelines, and registries. Decision-analytic modeling can include evidence from trials with intermediate or surrogate endpoints and extrapolate to long-term endpoints, such as incidence and mortality, by means of sophisticated calibration methods. Furthermore, decision-analytic models are unique in that a large number of screening alternatives can be evaluated simultaneously, which is not feasible in a randomized controlled trial (RCT). Still, evaluation of screening based on RCT data is valuable, as both costs and health benefits are measured for the same individuals, enabling more advanced analysis of the interaction of costs and health benefits.
Evaluation of screening involves multiple stakeholders, and considerations besides cost-effectiveness—such as distributional concerns, severity of the disease, and capacity—influence decision-making. Analysis of harm-benefit trade-offs is a useful tool to supplement cost-effectiveness analyses. Decision-analytic models are often based on 100% participation, which is rarely the case in practice. If those participating differ from those choosing not to participate—with regard to, for instance, risk of the disease or condition—the result is selection bias, and outcomes in practice could deviate from results based on 100% participation. The development of new diagnostics or preventive interventions requires re-evaluation of the cost-effectiveness of screening. For example, if treatment of a disease becomes more effective, screening becomes less cost-effective. Similarly, the introduction of vaccines (e.g., HPV vaccination for cervical cancer) may influence the cost-effectiveness of screening. With access to individual-level data from registries, there is an opportunity to better represent heterogeneity and the long-term consequences of screening on health behavior in the analysis.
Anthony J. Venables
Economic activity is unevenly distributed across space, both internationally and within countries. What determines this spatial distribution, and how is it shaped by trade? Classical trade theory gives the insights of comparative advantage and gains from trade but is firmly aspatial, modeling countries as points and trade (in goods and factors of production) as either perfectly frictionless or impossible. Modern theory places this in a spatial context in which geographical considerations influence the volume of trade between places. Gravity models tell us that distance is important, with each doubling of distance between places halving the volume of trade. Modeling the location decisions of firms gives a theory of location of activity based on factor costs (as in classical theory) and also on proximity to markets, proximity to suppliers, and the extent of competition in each market. It follows from this that—if there is a high degree of mobility—firms and economic activity as a whole may tend to cluster, providing an explanation of observed spatial unevenness. In some circumstances falling trade barriers may trigger the deindustrialization of some areas as activity clusters in fewer places. In other circumstances falling barriers may enable activity to spread out, reducing inequalities within and between countries. Research over the past several decades has established the mechanisms that cause these changes and placed them in full general equilibrium models of the economy. Empirical work has quantified many of the important relationships. However, geography and trade remains an area where progress is needed to develop robust tools that can be used to inform place-based policies (concerning trade, transport, infrastructure, and local economic development), particularly in view of the huge expenditures that such policies incur.
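The “doubling distance halves trade” regularity corresponds to a distance elasticity of roughly −1 in a log-linear gravity equation. A minimal sketch recovering that elasticity from simulated bilateral flows (all parameter values are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate bilateral trade: T_ij proportional to Y_i * Y_j / dist_ij, with noise.
n = 300
y_i = rng.lognormal(mean=3.0, size=n)    # origin GDP (illustrative)
y_j = rng.lognormal(mean=3.0, size=n)    # destination GDP
dist = rng.lognormal(mean=6.0, size=n)   # bilateral distance
log_T = (1.0 + np.log(y_i) + np.log(y_j) - 1.0 * np.log(dist)
         + rng.normal(scale=0.1, size=n))

# Log-linear gravity regression recovers the distance elasticity.
X = np.column_stack([np.ones(n), np.log(y_i), np.log(y_j), np.log(dist)])
coef, *_ = np.linalg.lstsq(X, log_T, rcond=None)
dist_elasticity = coef[3]              # close to -1

# An elasticity of -1 means doubling distance halves predicted trade:
trade_ratio = 2.0 ** dist_elasticity   # close to 0.5
```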
In the early 21st century, the U.S. economy stood at or very near the top of any ranking of the world’s economies, more obviously so in terms of gross domestic product (GDP), but also when measured by GDP per capita. The current standing of any country reflects three things: how well off it was when it began modern economic growth, how long it has been growing, and how rapidly productivity increased each year. Americans are inclined to think that it was the last of these items that accounted for their country’s success. And there is some truth to the notion that America’s lofty status was due to the continual increases in the efficiency of its factors of production—but that is not the whole story.
The rate at which the U.S. economy has grown over its long history—roughly 1.5% per year measured by output per capita—has been modest in comparison with most other advanced nations. The high value of GDP per capita in the United States is due in no small part to the fact that it was already among the world’s highest back in the early 19th century, when the new nation was poised to begin modern economic growth. The United States was also an early starter, so has experienced growth for a very long time—longer than almost every other nation in the world.
The sustained growth in real GDP per capita began sometime in the period 1790 to 1860, although the exact timing of the transition, and even its nature, are still uncertain. Continual efforts to improve the statistical record have narrowed down the time frame in which the transition took place and improved our understanding of the forces that facilitated the transition, but questions remain. In order to understand how the United States made the transition from a slow-growing British colony to a more rapidly advancing, free-standing economy, it is necessary to know more precisely when it made that transition.
The Ming Dynasty (1368–1644) marked, in the long history of China, a period of cultural, political, demographic, and economic renaissance after less than a century (1271–1368) of rule by alien Mongol conquerors from the steppes. The wealth of the Ming Empire attracted European traders and missionaries, with whom foreign silver, crops, and knowledge flowed into the country at unprecedented speed. Meanwhile, the Ming Empire reached out to the Indian Ocean with the largest armada in the world at the time.
The Ming rule was ended by a military takeover by Manchu mercenaries who did not return to Manchuria after helping the Ming authorities crack down on a rebellion, an important factor that ultimately dictated the behavior of the Qing state (1644–1911). The main institutions and policies of the Ming remained intact, and in 1712 the Qing state voluntarily capped its total tax revenue, a Confucian gesture to gain legitimacy, which marked a major step toward a withering state whereby the tax burden became lighter and consequently state control over the population and territory became weaker. At the beginning, the waning state produced some positive outcomes: both farmland and population multiplied, and domestic and foreign trade were prosperous. The Qing economy outperformed that of the Ming and became one of the largest in the world by 1800, with a decent standard of living.
Even so, a withering state was a time bomb. The unintended consequences of the weakening state loomed large. Externally, the empire did not have the ability to prevent the invasion of foreign bullies. From 1840 to 1900, China lost all five wars it fought with foreign forces. Internally, unrest swept the empire from 1860 to 1880. Imperial order and tranquility were replaced by anarchy, a rather logical outcome of a withering state. To a great extent, the benefits of growth during the Qing rule had been lost by the second half of the 19th century.
Meanwhile, fully aware of the root cause of the problem, the Qing elite sought solutions to save the empire from within. This led to a more open approach to foreign aid, loans, and technology, known as the “Westernization Movement” (c. 1860–1880). This movement marked the beginning of state-led modernization in China.
The path of modernization in China was, however, rugged. It began with the ideal of “Chinese knowledge as the foundation and Western learning for utility” (until 1949), then proceeded to “Russian (Soviet) ideology as the foundation and Russian (Soviet) learning for utility” (1949–1976), and then to “Russian (Soviet) ideology as the foundation and Western learning for utility” in the post-Mao era (1977–present day). With such a swing, the performance of China’s growth and development fluctuated, sometimes violently.
Sandra G. Sosa-Rubí and Omar Galárraga
Conditional economic incentives are a theoretically grounded approach for eliciting behavior change. The rationale stems from present-biased preferences, by which individuals attach greater value to benefits in the present and heavily discount long-term health. A growing literature documents the use of economic incentives in the HIV field. Small and frequent conditional economic incentives offered to vulnerable populations can contribute to behavior change. Economic incentives accompanied with other strategies can help overcome obstacles to access health services and in general seem to improve linkage to HIV care, prevention interventions, and adherence to HIV treatment. Future identification of promising combinations of intervention components, modalities, and strategies may yield maximum impact.
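Present-biased preferences are commonly formalized with quasi-hyperbolic (beta–delta) discounting. A minimal sketch of the resulting preference reversal, and of how a small immediate conditional incentive can tip the choice (all payoffs and parameter values are invented):

```python
def value(amount, delay, beta=0.7, delta=0.99):
    """Quasi-hyperbolic (beta-delta) discounted value of a reward
    received `delay` periods from now."""
    return amount if delay == 0 else beta * (delta ** delay) * amount

# A health benefit worth 15 in 10 periods vs. a smaller payoff of 10 now.
now_small = value(10, 0)    # 10.0
later_big = value(15, 10)   # ~9.5: the small immediate payoff wins today

# Viewed 30 periods in advance, both rewards lie in the future, so the
# present-bias factor beta applies to both and the larger reward wins.
plan_small = value(10, 30)  # ~5.2
plan_big = value(15, 40)    # ~7.0

# A small conditional incentive paid immediately can offset present bias:
incentive = 1.0
with_incentive = later_big + incentive  # now exceeds the immediate payoff
```

The reversal between the planning perspective and the moment of choice is the core rationale for small, frequent, immediately paid incentives.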
Hilario Casado Alonso and Teofilo F. Ruiz
The period between 1085 and 1815 witnessed important transformations in Spain’s economic history. The transition from a frontier society to one of the largest empires in the world was soon followed by decline. During Spain’s Middle Ages, two kinds of economies, societies, and political structures existed side by side: one represented by the various Muslim kingdoms and another by the Christians. Their frontiers shifted constantly between 1035 and 1212 to the detriment of Al-Andalus (Muslim Spain), concluding with the conquest of Granada in 1492. Christian expansion resulted in economic dynamism, reflected in demographic, agricultural, livestock, and commercial growth during the 11th, 12th, and 13th centuries and comparable to that of other medieval kingdoms. Under the stress of the mid-14th-century crisis (plagues, wars, and civil conflicts), economic growth came to a partial halt in the second half of the century. Yet, unlike other areas in Europe, the late medieval crisis had less of an impact in Spain, affecting some of the Iberian realms differently.
After the second third of the 15th century, as was the case in Portugal, the economy of the Crown of Castile began to grow once more. Castile became the demographic and economic hub of Spain to the detriment of other areas, such as Catalonia, Navarra, or Aragón, which had been more developed in earlier times. The Catholic Monarchs’ rule and their reforms made Spain one of the most prosperous economies in Europe and the center of a sprawling empire. The colonization of the Americas and the Philippines, with their untold wealth, further bolstered Spain’s economy. As a result, most researchers agree that Spain reached the height of its economic growth in the mid-16th century, although in a number of regions growth extended into the 1580s. Based mostly on agriculture, the economy also benefited from the development of crafts and, above all, trade, generating vast tax revenue for the Habsburg monarchy’s expansive policy of war.
After the late 16th century, however, the Spanish economy began to show signs of fatigue, leading to a severe crisis that lasted until at least the mid-17th century. This recession heralded a major shift in Spain’s history. Whereas the inland areas of Spain were the most populated and wealthy during the 12th and 13th centuries, these areas were also the most affected by the crisis, while the coastal regions were the first to emerge from the recession. Although Spain failed to reach the heights attained in other countries such as Britain, France, or the Netherlands, an economic revival occurred during the 18th century, moving the Spanish economy beyond what it had been during the final third of the 16th century. Nonetheless, as had occurred in the 17th century, coastal areas developed more intensely than inland areas, giving rise to the economic geography of modern-day Spain.
Antony W. Dnes
Economists increasingly connect legal changes to behavioral responses that many family law experts fail to see. Incentives matter in families, which respond to changes in legal regulation. Changing incentive structures linked to family law have largely affected marriage, cohabitation, and divorce. Economic analysis has been applied to assess the causes of falling marriage rates and delays in marriage. Much analysis has focused on increases in divorce rates, which appear to respond to legal changes making divorce easier, and to different settlement regimes. Less work has been done in relation to children, but some research does exist showing how children are affected by changes in the incentives facing adults.
Jason M. Fletcher
Two interrelated advances in genetics have ushered in the growing field of genoeconomics. The first is the rapid expansion of so-called big data featuring genetic information collected from large population-based samples. The second is enhanced computational and predictive power to aggregate small genetic effects across the genome into single summary measures called polygenic scores (PGSs). Together, these advances will be incorporated broadly into economic research, with strong possibilities for new insights and methodological techniques.
Despite the drop in transport and commuting costs since the mid-19th century, sizable and lasting differences across locations at very different spatial scales remain the most striking feature of the space-economy. The main challenges of the economics of agglomeration are therefore (a) to explain why people and economic activities are agglomerated in a few places and (b) to understand why some places fare better than others.
To meet these challenges, the usual route is to appeal to the fundamental trade-off between (internal and external) increasing returns and various mobility costs. This trade-off has a major implication for the organization of the space-economy: High transport and commuting costs foster the dispersion of economic activities, while increasing returns act as a strong agglomeration force.
The first issue is to explain the existence of large and persistent regional disparities within nations or continents. At that spatial scale, the mobility of commodities and production factors is critical. By combining new trade theories with the mobility of firms and workers, economic geography shows that a core–periphery structure can emerge as a stable market outcome.
Second, at the urban scale, cities stem from the interplay between agglomeration and dispersion forces: The former explain why firms and consumers want to be close to each other, whereas the latter put an upper limit on city sizes. Housing and commuting costs, which increase with population size, are the most natural candidates for the dispersion force. What generates agglomeration forces is less obvious. The urban economics literature has highlighted that urban size is the source of various benefits that increase firm productivity and consumer welfare.
Within cities, agglomeration also occurs in the form of shopping districts where firms selling differentiated products congregate. Strategic location considerations and product differentiation play a central role in the emergence of commercial districts because each firm competes with only a small number of nearby retailers.