
Article

In order to secure effective service access, coverage, and impact, it is increasingly recognized that the introduction of novel health technologies such as diagnostics, drugs, and vaccines may require additional investment to address the constraints under which many health systems operate. Health-system constraints include shortages of health workers, ineffective supply chains, and inadequate information systems, as well as organizational constraints such as weak incentives and poor service integration. Decision makers may be faced with the question of whether to invest in a new technology, including the specific health system strengthening needed to ensure effective implementation; or they may be seeking to optimize resource allocation across a range of interventions, including investment in broad health system functions or platforms. Investment in measures to address health-system constraints therefore increasingly needs to undergo economic evaluation, but this poses several methodological challenges for health economists, particularly in the context of low- and middle-income countries. Designing the appropriate analysis to inform investment decisions concerning new technologies that incorporate health systems investment can be broken down into several steps. First, the analysis needs to comprehensively outline the interface between the new intervention and the system through which it is to be delivered, in order to identify the relevant constraints and the measures needed to relax them. Second, the analysis needs to be rooted in a theoretical approach that appropriately characterizes constraints and considers joint investment in the health system and the technology. Third, the analysis needs to consider how the overarching priority-setting process influences the scope and output of the analysis, informing the way in which complex evidence is used to support the decision, including how to represent and manage system-wide trade-offs. Finally, there are several ways in which decision-analytical models can be structured and parameterized in a context of data scarcity around constraints. This article draws together current approaches to health system thinking with the emerging literature on analytical approaches to integrating health-system constraints into economic evaluation to guide economists through these four issues. It aims to contribute to a more health-system-informed approach to both appraising the cost-effectiveness of new technologies and setting priorities across a range of program activities.
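
As a hedged illustration of the appraisal problem described above (a minimal sketch, not the article's own formulation), the incremental cost-effectiveness ratio (ICER) of a new technology evaluated jointly with the health-system strengthening (HSS) needed to relax binding constraints could be written as

\[
\mathrm{ICER} \;=\; \frac{(C_{\text{tech}} + C_{\text{HSS}}) - C_{\text{comp}}}{E_{\text{tech}\mid\text{HSS}} - E_{\text{comp}}},
\]

where \(C\) denotes costs, \(E\) denotes health effects (e.g., DALYs averted), the comparator is the current standard of care, and \(E_{\text{tech}\mid\text{HSS}}\) is effectiveness conditional on the constraint actually being relaxed. All symbols are illustrative assumptions; the point is that omitting \(C_{\text{HSS}}\) while assuming unconstrained effectiveness would overstate value for money.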

Article

Florian Exler and Michèle Tertilt

Consumer debt is an important means for consumption smoothing. In the United States, 70% of households own a credit card, and 40% borrow on it. When borrowers cannot (or do not want to) repay their debts, they can declare bankruptcy, which provides additional insurance in tough times. Since the 2000s, up to 1.5% of households have declared bankruptcy per year. Clearly, the option to default affects borrowing interest rates in equilibrium. Consequently, when assessing (welfare) consequences of different bankruptcy regimes or providing policy recommendations, structural models with equilibrium default and endogenous interest rates are needed. At the same time, many questions are quantitative in nature: the benefits of a certain bankruptcy regime critically depend on the nature and amount of risk that households bear. Hence, models for normative or positive analysis should quantitatively match some important data moments. Four important empirical patterns are identified: First, since 1950, consumer debt has risen constantly, and it amounted to 25% of disposable income by 2016. Defaults have risen since the 1980s. Interestingly, interest rates remained roughly constant over the same time period. Second, borrowing and default clearly depend on age: both measures exhibit a distinct hump, peaking around 50 years of age. Third, ownership of credit cards and borrowing clearly depend on income: high-income households are more likely to own a credit card and to use it for borrowing. However, this pattern was stronger in the 1980s than in the 2010s. Finally, interest rates became more dispersed over time: the number of observed interest rates more than quadrupled between 1983 and 2016. These data have clear implications for theory: First, considering the importance of age, life cycle models seem most appropriate when modeling consumer debt and default. Second, bankruptcy must be costly to support any debt in equilibrium. While many types of costs are theoretically possible, only partial repayment requirements are able to quantitatively match the data on filings, debt levels, and interest rates simultaneously. Third, to account for the long-run trends in debts, defaults, and interest rates, several quantitative theory models identify a credit expansion along the intensive and extensive margins as the most likely source. This expansion is a consequence of technological advancements. Many of the quantitative macroeconomic models in this literature assess the welfare effects of proposed reforms or of granting bankruptcy at all. These welfare consequences critically hinge on the types of risk that households face: because households incur unforeseen expenditures, not-too-stringent bankruptcy laws are typically found to be welfare-superior both to banning bankruptcy (or making it extremely costly) and to extremely lax bankruptcy rules. There are very promising opportunities for future research related to consumer debt and default. Newly available data in the United States and internationally, more powerful computational resources allowing for more complex modeling of household balance sheets, and new loan products are just some of many promising avenues.
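
As a minimal sketch of how equilibrium default generates endogenous interest rates in the class of structural models described above (standard textbook notation, not the authors' specific model, and abstracting from partial recovery): if competitive, risk-neutral lenders fund one-period debt \(b'\) at the risk-free rate \(r_f\) and anticipate a default probability \(\pi(b')\), zero profits require the loan price \(q\) to satisfy

\[
q(b') \;=\; \frac{1-\pi(b')}{1+r_f},
\qquad
1 + r(b') \;=\; \frac{1}{q(b')} \;=\; \frac{1+r_f}{1-\pi(b')},
\]

so borrowers who are more likely to file face higher interest rates, and any policy that changes filing incentives feeds back into loan pricing.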

Article

A patent is a legal right to exclude granted by the state to the inventor of a novel and useful invention. Much legal ink has been spilled on the meaning of these terms. “Novel” means that the invention has not been anticipated in the art prior to its creation by the inventor. “Useful” means that the invention has a practical application. The words “inventor” and “invention” are also legal terms of art. An invention is a work that advances a particular field, moving practitioners forward not simply through accretions of knowledge but through concrete implementations. An inventor is someone who contributes to an invention either as an individual or as part of a team. The exclusive right, finally, is not granted gratuitously. The inventor must apply and go through a review process for the invention. Furthermore, the price of the patent grant is full, clear disclosure by the inventor of how to practice the invention. The public can use this disclosure once the patent expires or through a license during the duration of the patent. These institutional details are common features of all patent systems. What is interesting is the economic justification for patents. As a property right, a patent resolves certain externality problems that arise in markets for knowledge. The establishment of property rights allows for trade in the invention and the dissemination of knowledge. However, the economic case for property rights is complicated by the institutional need to apply for a patent. While patent grants could in theory be automatic, inventions must meet certain standards for the grant to be justified. These procedural hurdles create possibilities for gamesmanship in how property rights are allocated. Furthermore, even if granted correctly, property rights can become murky because of the problems of enforcement through litigation. Courts must determine when an invention has been used, made, or sold without permission by a third party in violation of the rights of the patent owner. This legal process can lead to gamesmanship as patent owners try to force settlements from alleged infringers. Meanwhile, third parties may act opportunistically to take advantage of the uncertain boundaries of patent rights and engage in undetectable infringement. Exacerbating these tendencies are the difficulties in determining damages and the possibility of injunctive relief. Some temper these criticisms with the observation that most patents are not enforced. In fact, most granted patents turn out to be worthless when gauged by commercial value. But worthless patents still have potential litigation value. While a patent owner might view a worthless patent as a sunk cost, there is an incentive to recoup investment through the sale of worthless patents to parties willing to assume the risk of litigation. Hence the phenomenon of “trolling,” or the rise of non-practicing entities, troubles the patent landscape. This phenomenon gives rise to concerns about the anticompetitive uses of patents, demonstrating the need for some limitations on patent enforcement. With all the policy concerns arising from patents, it is no surprise that patent law has been ripe for reform. Economic analysis can inform these reform efforts by identifying ways in which patents fail to create a vibrant market for inventions. Appreciation of the political economy of patents invites a rich academic and policy debate over the direction of patent law.

Article

Qiang Fu and Zenan Wu

Competitive situations resembling contests are ubiquitous in the modern economic landscape. In a contest, economic agents expend costly effort to vie for limited prizes, and they are rewarded for “getting ahead” of their opponents rather than for their absolute performance. Many social, economic, and business phenomena exemplify such competitive schemes, ranging from college admissions, political campaigns, advertising, and organizational hierarchies to warfare. The economics literature has long recognized the contest/tournament as a convenient and efficient incentive scheme to remedy the moral hazard problem, especially when the production process is subject to random perturbation or the measurement of input/output is imprecise or costly. An enormous amount of scholarly effort has been devoted to developing tractable theoretical models, unveiling the fundamentals of the strategic interactions that underlie such competitions, and exploring the optimal design of contest rules. This voluminous literature has enriched basic contest/tournament models by introducing variations such as dynamic structures, incomplete and asymmetric information, multi-battle confrontations, sorting and entry, endogenous prize allocation, competition in groups, and contestants with alternative risk attitudes.
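
One widely used benchmark in this literature is the Tullock contest; as a hedged sketch (standard textbook notation, not specific to this article), player \(i\) who exerts effort \(x_i \geq 0\) wins a prize \(V\) with probability

\[
p_i(x_1,\dots,x_n) \;=\; \frac{x_i^{\,r}}{\sum_{j=1}^{n} x_j^{\,r}},
\]

and, with linear effort costs, chooses \(x_i\) to maximize \(p_i V - x_i\). For \(r=1\) and \(n\) symmetric players, the symmetric equilibrium effort is \(x^{*} = (n-1)V/n^{2}\), which illustrates how rent dissipation depends on the number of contestants and on the discriminatory power \(r\) of the contest rule.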

Article

Helen Hayes and Matt Sutton

Contracts and working conditions are important influences on the medical workforce that must be carefully constructed and considered by policymakers. Contracts involve an enforceable agreement of the rights and responsibilities of both employer and employee. The principal–agent relationship and the presence of asymmetric information in healthcare mean that contracts must be incentive compatible and create sufficient incentive for doctors to act in the payer’s best interests. Within medicine, there are special characteristics that are believed to be particularly pertinent to doctors, who act as agents to both the patient and the payer. These include intrinsic motivation, professionalism, altruism, and multitasking, and they influence the success of these contracts. The three most popular methods of payment are fee-for-service, capitation, and salaries. In most contexts a blend of these three payment methods is used; however, guidance on the most appropriate blend is unclear and the evidence on the special nature of doctors is insubstantial. The role of skill mix and teamwork in a healthcare setting is an important consideration as it impacts the success of incentives and payment systems and the efficiency of workers. Additionally, with increasing demand for healthcare, changing the skill mix is one response to problems with recruitment and retention in health services. Health systems in many settings depend on a large proportion of foreign-born workers, and so migration is a key consideration in the retention and recruitment of health workers. Finally, forms of external regulation such as accreditation, inspection, and revalidation are widely used in healthcare systems; however, robust evidence of their effectiveness is lacking.

Article

Pascal Mossay and Pierre M. Picard

New Economic Geography (NEG) provides microeconomic foundations for explaining the spatial concentration of economic activities across regions, cities, and urban areas. The origins of the NEG literature trace back to trade, location, and urban economics theories. In NEG, agglomeration and dispersion forces explain the existence of spatial agglomerations. A NEG model usually incorporates a combination of such forces. In particular, firm proximity to large markets and the importance of linkages along a supply chain are typical agglomeration forces. The equilibrium properties of NEG models are distinctive: they involve multiple equilibria and are highly sensitive to changes in parameters. This has important implications for the emergence of nations, regions, and cities. In particular, high transport costs imply the dispersion of economic activities, while low transport costs lead to their spatial concentration. The same forces that shape inequalities and disparities between regions also shape the internal structure of cities. Firms concentrate in urban centers to gain greater access to larger demand. The empirical literature has developed several approaches that shed light on spatial agglomeration and estimate the role and impact of transport costs on market access. A key empirical research question is whether observed patterns can be explained by location amenities or by the agglomeration forces put forward by NEG. Quasi-experimental methodology is frequently used for this purpose. NEG theory is supported by empirical evidence demonstrating the role of market access.

Article

Despite the common view that innovation requires academically educated workers, some countries that strongly emphasize vocational education and training (VET) in their education systems—such as Switzerland and Germany—are highly competitive internationally in terms of innovation. These countries have dual VET programs, that is, upper-secondary-level apprenticeship programs, that combine about three quarters of workplace training with about one quarter of vocational schooling, and design them in such a way that their graduates (i.e., dual apprenticeship-graduates) play crucial roles in innovation processes. Regular updates of VET curricula incorporate the latest technological developments into these curricula, thereby ensuring that dual apprenticeship-graduates possess up-to-date, high-level skills in their chosen occupation. This process allows these graduates to contribute to innovation in firms. Moreover, these graduates acquire broad sets of technical and soft skills that enhance their job mobility and flexibility. Therefore, conventional wisdom notwithstanding, dual apprenticeship-graduates in such countries not only have broad skill sets that accelerate innovation in firms, but also willingly participate in innovation because of their high flexibility and employability. Moreover, Switzerland and Germany have tertiary-level VET institutions that foster innovation. These are universities of applied sciences (UASs), which teach and conduct applied research, thereby helping build a bridge between different types of knowledge (vocational and academic). UAS students have prior vocational knowledge through their dual apprenticeship and acquire applied research skills from UAS professors who usually have both work experience and a doctoral degree from an academic university. Thus UAS graduates combine sound occupational knowledge with applied research knowledge inspired by input from the academic research frontier and from practical research and development (R & D) in firms. Firms employ UAS graduates with their knowledge combination as an important input for R & D. Consequently, regions with a UAS have higher levels of innovation than regions without one. This effect is particularly strong for regions outside major innovation centers and for regions with larger percentages of smaller firms.

Article

George Batta and Fan Yu

Corporate credit derivatives are over-the-counter (OTC) contracts whose payoffs are determined by a single corporate credit event or a portfolio of such events. Credit derivatives became popular in the late 1990s and early 2000s as a way for financial institutions to reduce their regulatory capital requirements, and early research treated them as redundant securities whose pricing is tied to the underlying corporate bonds and equities, with liquidity and counterparty risk factors playing supplementary roles. Research in the 2010s and beyond, however, increasingly focused on the effects of market frictions on the pricing of credit default swaps (CDSs), how CDS trading has impacted corporate behaviors and outcomes as well as the price efficiency and liquidity of other related markets, and the microstructure of the CDS market itself. This shift was made possible by the availability of market statistics and more granular trade and quote data as a result of the broad movement of the OTC derivatives market toward central clearing.

Article

Corporate governance includes legal, contractual, and market mechanisms that structure decision-making within business corporations. Most attention has focused on corporate governance in large U.S. public corporations with dispersed shareholding. The separation of ownership from control in those corporations creates a unique problem, as shareholders typically have weak individual incentive to monitor managers. Mechanisms that have been developed to address this agency problem include independent directors, fiduciary duty, securities law disclosure, executive compensation, various professional gatekeepers, the market for corporate control, and shareholder activism. In most countries outside the United States, there are few companies with dispersed shareholding. Instead, most companies have a controlling shareholder or group. These companies face a different agency problem, the possibility that controlling shareholders may use their power to gain at the expense of minority shareholders. Enterprise governance refers to mechanisms aimed at related agency problems that occur in closely held companies without publicly traded equity interests. Here too the agency problem typically encountered is the potential conflict between controllers and minority investors, with the added twist that share illiquidity removes an important protection for the minority. Closely held companies have adopted a variety of contractual mechanisms to address these concerns. Other than the important but special cases of venture capital and private equity fund investments, there is less empirical evidence on governance in closely held companies because information is generally much harder to find.

Article

Corporate social responsibility (CSR) refers to the incorporation of environmental, social, and governance (ESG) considerations into corporate management, financial decision-making, and investors’ portfolio decisions. Socially responsible firms are expected to internalize the externalities they create (e.g., pollution) and be accountable to shareholders and other stakeholders (employees, customers, suppliers, local communities, etc.). Rating agencies have developed firm-level measures of ESG performance that are widely used in the literature. However, these ratings show inconsistencies that result from the rating agencies’ preferences, the weights of the constituting factors, and the rating methodology. CSR also deals with sustainable, responsible, and impact (SRI) investing. The return implications of investing in the stocks of socially responsible firms include the search for an ESG factor and the performance of SRI funds. SRI funds apply negative screening (exclusion of “sin” industries), positive screening, and activism through engagement or proxy voting. In this context, one wonders whether responsible investors are willing to trade off financial returns against a “moral” dividend (the return given up in exchange for an increase in utility driven by the knowledge that an investment is ethical). Related to the analysis of externalities and the ethical dimension of corporate decisions is the literature on green financing (the financing of environmentally friendly investment projects by means of green bonds) and on how to foster economic decarbonization as climate change affects financial markets and investor behavior.

Article

Daniel Greene, Omesh Kini, Mo Shen, and Jaideep Shenoy

A large body of work has examined the impact of corporate takeovers on the financial stakeholders (shareholders and bondholders) of the merging firms. Since the late 2000s, empirical research has increasingly highlighted the crucial role played by the non-financial stakeholders (labor, suppliers, customers, government, and communities) in these transactions. It is, therefore, important to understand the interplay between corporate takeovers and the non-financial stakeholders of the firm. Financial economists have long viewed the firm as a nexus of contracts between various stakeholders connected to the firm. Corporate takeovers not only play an important role in redefining the broad boundaries of the firm but also result in major changes to corporate ownership and structure. In the process, takeovers can significantly alter the contractual relationships with non-financial stakeholders. Because the firm’s relationships with these stakeholders are governed by implicit and explicit contracts, circumstances can arise that allow acquiring firms to fully or partially abrogate these contracts and extract rents from non-financial stakeholders after deal completion. In contrast, non-financial stakeholders can also potentially benefit from a takeover if they get to share in any efficiency gains that are generated in the deal. Given this framework, the ex-ante importance of these contractual relationships can have a bearing on the efficacy of takeovers. The ability to alter contractual relationships ex post can affect the propensity for a takeover and the gains to the merging firms’ shareholders and, in turn, impact non-financial stakeholders. Non-financial stakeholders will be more vested in post-takeover success if they can trust the acquiring firm not to take actions that are detrimental to them. The big picture that emerges from the surveyed literature is that non-financial stakeholder considerations affect takeover decisions and post-takeover outcomes. Moreover, takeovers also have an impact on non-financial stakeholders. The directions of all these effects, however, depend on the economic environment in which the merging firms operate.

Article

Alina Mungiu-Pippidi and Till Hartmann

Corruption and development are two mutually related concepts whose meanings have shifted in tandem across time. The predominant 21st-century view of government that regards corruption as unacceptable has its theoretical roots in ancient Western thought, as well as in Eastern thought. This condemning view of corruption has coexisted at all times with a more morally indifferent or neutral approach, which found its expression most notably among development scholars of the 1960s and 1970s who viewed corruption as an enabler of development rather than an obstacle. Research on the nexus between corruption and development has identified mechanisms that enable corruption and offered theories of change, which have informed practical development policies. Interventions adopting a principal–agent approach fit advanced economies, where corruption is the exception, better than emerging economies, where the opposite of corruption, the norm of ethical universalism, has yet to be built; in such contexts corruption is better approached from a collective action perspective. A review of cross-national data for the period 1996–2017 shows that the control of corruption stagnated in most countries, with only a few exceptions. For a lasting improvement in the control of corruption, societies need to reduce the resources available for corruption while simultaneously increasing constraints. The evolution of a governance regime requires a multi-stakeholder endeavor that reaches beyond the sphere of government to involve the press, business, and a strong and activist civil society.

Article

The origins of modern technological change provide the context necessary to understand present-day technological transformation, to investigate the impact of the new digital technologies, and to examine the phenomenon of digital disruption of established industries and occupations. How these contemporary technologies will transform industries and institutions, or serve to create new industries and institutions, will unfold in time. The implications of the relationships between these pervasive new forms of digital transformation and the accompanying new business models, business strategies, innovation, and capabilities are being worked through at the global, national, corporate, and local levels. Whatever the technological future holds, it will be defined by continual adaptation, perpetual innovation, and the search for new potential. Presently, the world is experiencing the impact of waves of innovation created by the rapid advance of digital networks, software, and information and communication technology systems that have transformed workplaces, cities, and whole economies. These digital technologies are converging and coalescing into intelligent technology systems that facilitate and structure our lives. Through creative destruction, digital technologies fundamentally challenge existing routines, capabilities, and structures by which organizations presently operate, adapt, and innovate. In turn, digital technologies stimulate a higher rate of both technological and business model innovation, moving from producer innovation toward more user-collaborative and open-collaborative innovation. However, as dominant global platform technologies emerge, some impending dilemmas associated with the concentration and monopolization of digital markets become salient. The extent of the contribution made by digital transformation to economic growth and environmental sustainability requires critical appraisal.

Article

Miles Livingston and Lei Zhou

Credit rating agencies have developed as an information intermediary in the credit market because there are very large numbers of bonds outstanding with many different features. The Securities Industry and Financial Markets Association reports over $20 trillion of corporate bonds, mortgage-backed securities, and asset-backed securities in the United States. The vast size of the bond markets, the number of different bond issues, and the complexity of these securities result in a massive amount of information for potential investors to evaluate. The magnitude of the information creates the need for independent companies to provide objective evaluations of the ability of bond issuers to pay their contractually binding obligations. The result is credit rating agencies (CRAs), private companies that monitor debt securities/issuers and provide information to investors about the potential default risk of individual bond issues and issuing firms. Rating agencies provide ratings for many types of debt instruments including corporate bonds, debt instruments backed by assets such as mortgages (mortgage-backed securities), short-term debt of corporations, municipal government debt, and debt issued by central governments (sovereign bonds). The three largest rating agencies are Moody’s, Standard & Poor’s, and Fitch. These agencies provide ratings that are indicators of the relative probability of default. Bonds with the highest rating of AAA have very low probabilities of default, and consequently the yields on these bonds are relatively low. As ratings decline, the probability of default increases and bond yields increase. Ratings are important to institutional investors such as insurance companies, pension funds, and mutual funds. These large investors are often restricted to purchasing only, or primarily, bonds in the highest rating categories. Consequently, the highest ratings are usually called investment grade. The lower ratings are usually designated as high-yield or “junk bonds.” There is controversy about the possibility of inflated ratings. Since issuers pay rating agencies for providing ratings, there may be an incentive for the rating agencies to provide inflated ratings in exchange for fees. In the U.S. corporate bond market, at least two and often three agencies provide ratings. Multiple ratings make it difficult for one rating agency to provide inflated ratings. Rating agencies are regulated by the Securities and Exchange Commission to ensure that agencies follow reasonable procedures.

Article

The global financial crisis of 2007–2009 helped usher in a stronger consensus about the central role that housing plays in shaping economic activity, particularly during large boom and bust episodes. The latest research examines the causes, consequences, and policy implications of housing crises with a broad focus that includes empirical and structural analysis, insights from the 2000s experience in the United States, and perspectives from around the globe. Even with the significant degree of heterogeneity in legal environments, institutions, and economic fundamentals over time and across countries, several common themes emerge. Research indicates that fundamentals such as productivity, income, and demographics play an important role in generating sustained movements in house prices. While these forces can also contribute to boom-bust episodes, periods of large house price swings often reflect an evolving housing premium caused by financial innovation and shifts in expectations, which are in turn amplified by changes to the liquidity of homes. Regarding credit, the latest evidence indicates that expansions in lending to marginal borrowers via the subprime market may not be entirely to blame for the run-up in mortgage debt and prices that preceded the 2007–2009 financial crisis. Instead, the expansion in credit manifested in lower mortgage rates was broad-based and caused borrowers across a wide range of incomes and credit scores to dramatically increase their mortgage debt. To whatever extent changing beliefs about future housing appreciation may have contributed to higher realized house price growth in the 2000s, it appears that neither borrowers nor lenders anticipated the subsequent collapse in house prices. However, expectations about future credit conditions—including the prospect of rising interest rates—may have contributed to the downturn. For macroeconomists and those otherwise interested in the broader economic implications of the housing market, a growing body of evidence combining micro data and structural modeling finds that large swings in house prices can produce large disruptions to consumption, the labor market, and output. Central to this transmission is the composition of household balance sheets—not just the amount of net worth, but also how that net worth is allocated between short-term liquid assets, illiquid housing wealth, and long-term defaultable mortgage debt. By shaping the incentive to default, foreclosure laws have a profound ex-ante effect on the supply of credit as well as on the ex-post economic response to large shocks that affect households’ degree of financial distress. On the policy front, research finds mixed results for some of the crisis-related interventions implemented in the United States while providing guidance for future measures should another housing bust of similar or greater magnitude recur. Lessons are also provided for the development of macroprudential policy aimed at preventing such a future crisis without unduly constraining economic performance in good times.

Article

Michael P. Clements and Ana Beatriz Galvão

At a given point in time, a forecaster will have access to data on macroeconomic variables that have been subject to different numbers of rounds of revisions, leading to varying degrees of data maturity. Observations referring to the very recent past will be first-release data, or data that has so far been revised only a few times. Observations referring to a decade ago will typically have been subject to many rounds of revisions. How should the forecaster use the data to generate forecasts of the future? The conventional approach would be to estimate the forecasting model using the latest vintage of data available at that time, implicitly ignoring the differences in data maturity across observations. This conventional approach to real-time forecasting treats the data as given, that is, it ignores the fact that the data will subsequently be revised. In some cases, the cost of this approach is point predictions and assessments of forecast uncertainty that are less accurate than those obtained from approaches that explicitly allow for data revisions. There are several ways to “allow for data revisions,” including modeling the data revisions explicitly, adopting an agnostic or reduced-form approach, and using only largely unrevised data. The choice of method partly depends on whether the aim is to forecast an earlier release or the fully revised values.
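
A minimal sketch (illustrative only, not the authors' method) of the contrast drawn above between the conventional latest-vintage approach and an approach that respects data maturity, using a toy AR(1) and made-up numbers:

```python
# Illustrative sketch: toy AR(1) forecasts from hypothetical first-release versus
# latest-vintage data; the series and the revision process are made up.
import numpy as np

def fit_ar1(y):
    """OLS estimates of y_t = c + phi * y_{t-1} + e_t."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    c, phi = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    return c, phi

rng = np.random.default_rng(0)

# Hypothetical quarterly growth rates as first released...
first_release = np.array([0.5, 0.7, 0.4, 0.6, 0.8, 0.3, 0.5, 0.9, 0.6, 0.4])
# ...and as they appear in the latest vintage after several rounds of revision.
latest_vintage = first_release + rng.normal(0.0, 0.1, size=first_release.size)

# Conventional approach: estimate on the latest vintage, ignoring the fact that the
# most recent observations used as forecast inputs are still barely revised.
c_l, phi_l = fit_ar1(latest_vintage)
print("latest-vintage forecast:", c_l + phi_l * latest_vintage[-1])

# One alternative: estimate on first-release data, so the model maps lightly revised
# inputs into forecasts of the values as they will first be published.
c_f, phi_f = fit_ar1(first_release)
print("first-release forecast:", c_f + phi_f * first_release[-1])
```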

Article

Deborah J. Street and Rosalie Viney

Discrete choice experiments (DCEs) are a popular stated preference tool in health economics and have been used to address policy questions, establish consumer preferences for health and healthcare, and value health states, among other applications. They are particularly useful when revealed preference data are not available. Most commonly in choice experiments, respondents are presented with a situation in which a choice must be made and with a set of possible options. The options are described by a number of attributes, each of which takes a particular level for each option. The set of possible options is called a “choice set,” and a set of choice sets comprises the choice experiment. The attributes and levels are chosen by the analyst to allow modeling of the underlying preferences of respondents. Respondents are assumed to make utility-maximizing decisions, and the goal of the choice experiment is to estimate how the attribute levels affect the utility of the individual. Utility is assumed to have a systematic component (related to the attributes and levels) and a random component (which may relate to unobserved determinants of utility, individual characteristics, or random variation in choices), and an assumption must be made about the distribution of the random component. The structure of the set of choice sets shown to respondents, drawn from the universe of possible choice sets represented by the attributes and levels, determines which models can be fitted to the observed choice data and how accurately the effects of the attribute levels can be estimated. Important structural issues include the number of options in each choice set and whether or not options in the same choice set have common attribute levels. Two broad approaches, theoretical and algorithmic, exist for constructing the set of choice sets that make up a DCE, and there is no consensus about which approach consistently delivers better designs, although simulation studies and in-field comparisons of designs constructed by both approaches are available.
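
A minimal sketch of the random utility setup described above (generic notation, not tied to any particular study): the utility respondent \(n\) derives from option \(j\) in choice set \(s\) is

\[
U_{nsj} \;=\; V_{nsj} + \varepsilon_{nsj} \;=\; \mathbf{x}_{nsj}'\boldsymbol{\beta} + \varepsilon_{nsj},
\]

where \(\mathbf{x}_{nsj}\) codes the attribute levels and \(\varepsilon_{nsj}\) is the random component. If the \(\varepsilon_{nsj}\) are assumed i.i.d. type-I extreme value, the probability of choosing option \(j\) takes the conditional logit form

\[
P_{nsj} \;=\; \frac{\exp(\mathbf{x}_{nsj}'\boldsymbol{\beta})}{\sum_{k \in s}\exp(\mathbf{x}_{nsk}'\boldsymbol{\beta})},
\]

and the design of the choice sets determines how precisely \(\boldsymbol{\beta}\) can be estimated from the observed choices.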

Article

Gabriella Conti, Giacomo Mason, and Stavros Poupakis

Building on early animal studies, 20th-century researchers increasingly explored the fact that early events—ranging from conception to childhood—affect a child’s health trajectory over the long term. By the 21st century, a wide body of research had emerged, incorporating the original fetal origins hypothesis into the developmental origins of health and disease. Evidence from Organization for Economic Cooperation and Development (OECD) countries suggests that health inequalities are strongly correlated with many dimensions of socioeconomic status, such as educational attainment, and that they tend to increase with age and carry stark intergenerational implications. Different economic theories have been developed to rationalize this evidence, with an overarching comprehensive framework still lacking. Existing models widely rely on human capital theory, which has given rise to separate dynamic models of adult and child health capital within a production function framework. A large body of empirical evidence has also found support for the developmental origins of inequalities in health. On the one hand, studies exploiting quasi-random exposure to adverse events have shown long-term physical and mental health impacts of early shocks, including pandemics or maternal illness, famine, malnutrition, stress, vitamin deficiencies, maltreatment, pollution, and economic recessions. On the other hand, studies from the 20th century have shown that early interventions of various content and delivery formats improve life course health. Further, given that the most socioeconomically disadvantaged groups show the greatest gains, such measures can potentially reduce health inequalities. However, studies of long-term impacts, as well as of the mechanisms via which shocks or policies affect health and the dynamic interactions among them, are still lacking. Mapping the complexities of those early event dynamics is an important avenue for future research.

Article

While definitional and measurement problems pose a challenge, there is no doubt that disability affects a noticeable share of the population, the vast majority of whom live in low- and middle-income countries (LMICs). The still comparatively scarce empirical data and evidence suggest that disability is closely associated with poverty and other indicators of economic deprivation at both the country level and, if with slightly greater nuance, the individual/household level. There is also a growing body of evidence documenting the sizeable additional costs incurred by persons with disabilities (PwDs) as a direct or indirect consequence of their disability, underlining the increased risk of PwDs (and the households they are part of) falling under the absolute poverty line in any given LMIC. Looking ahead, there remains considerable scope for more evidence on the causal nature of the link between disability and poverty, as well as on the (cost-)effectiveness of interventions and policies attempting to improve the well-being of PwDs.

Article

Frederick van der Ploeg

The social rate of discount is a crucial driver of the social cost of carbon (SCC), that is, the expected present discounted value of the marginal damages resulting from emitting one ton of carbon today. Policy makers should set the carbon price equal to the SCC, using a carbon tax or a competitive permits market. The social discount rate is lower, and the SCC higher, if policy makers are more patient and if future generations are less affluent and policy makers care about intergenerational inequality. Uncertainty about the future rate of growth of the economy and emissions, as well as the risk of macroeconomic disasters (tail risks), also depresses the social discount rate and boosts the SCC, provided intergenerational inequality aversion is high. Various reasons are discussed for why the social discount rate is likely to decline over time, including autocorrelation in the economic growth rate and the result that an uncertain discount rate drawn from a time-invariant distribution produces a certainty-equivalent discount rate that declines with the horizon. A declining social discount rate also emerges if account is taken of the relative price effects resulting from different growth rates of ecosystem services and of labor in efficiency units. The market-based asset pricing approach to carbon pricing is contrasted with a more ethical approach to policy making. Some suggestions for further research are offered.
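
As a minimal sketch of the quantities discussed above (a standard extended Ramsey-rule formulation, not necessarily the article's exact specification): with pure rate of time preference \(\rho\), intergenerational inequality aversion \(\eta\), expected per-capita consumption growth \(g\), and growth volatility \(\sigma\), the social discount rate and the SCC can be written as

\[
r \;=\; \rho + \eta g - \tfrac{1}{2}\eta^{2}\sigma^{2},
\qquad
\mathrm{SCC}_0 \;=\; \sum_{t=0}^{T} \frac{MD_t}{(1+r)^{t}},
\]

where \(MD_t\) is the marginal damage in year \(t\) from one ton of carbon emitted today. Greater patience (lower \(\rho\)), lower expected growth \(g\) combined with high inequality aversion \(\eta\), and greater uncertainty \(\sigma\) all reduce \(r\) and thereby raise the SCC, consistent with the mechanisms described above.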