Many nonlinear time series models have been around for a long time and originated outside of time series econometrics. The stochastic models presented are popular univariate, dynamic single-equation, and vector autoregressive models, and their properties are considered. Deterministic nonlinear models are not reviewed. The use of nonlinear vector autoregressive models in macroeconometrics seems to be increasing, and because this may be viewed as a rather recent development, they receive somewhat more attention than their univariate counterparts. Vector threshold autoregressive, smooth transition autoregressive, Markov-switching, and random coefficient autoregressive models are covered, along with nonlinear generalizations of vector autoregressive models with cointegrated variables. Two nonlinear panel models are also discussed; although they cannot be argued to be typically macroeconometric models, they have frequently been applied to macroeconomic data as well. The use of all these models in macroeconomics is highlighted with applications in which model selection, an often difficult issue in nonlinear modeling, has received due attention. Given the large number of nonlinear time series models, no unique best method of choosing between them seems to be available.
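A minimal sketch of the threshold mechanism underlying several of these models can be written as a two-regime self-exciting threshold AR(1); the parameter values below are illustrative assumptions, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_tar(n=500, threshold=0.0, phi_low=0.8, phi_high=-0.3, sigma=1.0):
    """Simulate a two-regime self-exciting threshold AR(1):
    y_t = phi_low  * y_{t-1} + e_t  if y_{t-1} <= threshold,
    y_t = phi_high * y_{t-1} + e_t  otherwise."""
    y = np.zeros(n)
    for t in range(1, n):
        phi = phi_low if y[t - 1] <= threshold else phi_high
        y[t] = phi * y[t - 1] + sigma * rng.standard_normal()
    return y

y = simulate_tar()
```

Markov-switching and smooth transition variants replace the hard indicator `y[t-1] <= threshold` with an unobserved regime process or a smooth logistic weight, respectively.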
The literature on optimum currency areas differs from that on other topics in economic theory in a number of notable respects. Most obviously, the theory is framed in verbal rather than mathematical terms. Mundell’s seminal article coining the term and setting out the theory’s basic propositions relied entirely on words rather than equations. The same was true of subsequent contributions focusing on the sectoral composition of activity and the role of fiscal flows. A handful of more recent articles specified and analyzed formal mathematical models of optimum currency areas. But it is safe to say that none of these has “taken off” in the sense of becoming the workhorse framework on which subsequent scholarship builds. The theoretical literature remains heavily qualitative and narrative compared to other areas of economic theory. While Mundell, McKinnon, Kenen, and the other founding fathers of optimum-currency-area theory provided powerful intuition, attempts to further formalize that intuition evidently contributed less to advances in economic understanding than has been the case for other theoretical literatures.
Second, recent contributions to the literature on optimum currency areas are motivated to an unusual extent by a particular case, namely Europe’s monetary union. This was true already in the 1990s, when the EU’s unprecedented decision to proceed with the creation of the euro highlighted the question of whether Europe was an optimum currency area and, if not, how it might become one. That tendency was reinforced when Europe then descended into crisis starting in 2009. With only slight exaggeration it can be said that the literature on optimum currency areas became almost entirely a literature on Europe and on that continent’s failure to satisfy the relevant criteria.
Third, the literature on optimum currency areas remains the product of its age. When the founders wrote, in the 1960s, banks were more strictly regulated, and financial markets were less internationalized than subsequently. Consequently, the connections between monetary integration and financial integration—whether monetary union requires banking union, as the point is now put—were neglected in the earlier literature. The role of cross-border financial flows as a destabilizing mechanism within a currency area did not receive the attention it deserved. Because much of that earlier literature was framed in a North American context—the question was whether the United States or Canada was an optimum currency area—and because the question was posed by a trio of scholars, two of whom hailed from Canada and one of whom hailed from the United States, the challenges of reconciling monetary integration with political nationalism and the question of whether monetary union requires political union were similarly underplayed. Given the euro area’s descent into crisis, a number of analysts have asked why economists didn’t sound louder warnings in advance. The answer is that their outlooks were shaped by a literature that developed in an earlier era when the risks and context were different.
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Economics and Finance. Please check back later for the full article.
Detection of outliers is an important explorative step in empirical analysis. Once outliers are detected, the investigator must decide how to model them, depending on the context. Indeed, the outliers may represent noisy observations that are best left out of the analysis, or they may be very informative observations that would have a particularly important role in the analysis. For regression analysis in time series, a number of outlier algorithms are available, including impulse indicator saturation and methods from robust statistics. The algorithms are complex, and their statistical properties are not fully understood. Extensive simulation studies have been conducted, but formal theory is lacking. Some progress has been made toward an asymptotic theory of the algorithms, and a number of asymptotic results building on empirical process theory are already available.
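The flavor of such algorithms can be sketched with a split-half variant of impulse indicator saturation; the critical value and the simple union rule below are illustrative assumptions, not the refined selection steps of the full procedure:

```python
import numpy as np

def iis_split_half(y, X, crit=2.57):
    """Split-half impulse indicator saturation (illustrative sketch):
    add a dummy for every observation in one half of the sample,
    keep dummies whose |t-ratio| exceeds `crit`, repeat for the
    other half, and return the union of flagged observations."""
    n = len(y)
    k = X.shape[1]
    flagged = set()
    for dummy_idx in (np.arange(n // 2), np.arange(n // 2, n)):
        D = np.zeros((n, len(dummy_idx)))
        D[dummy_idx, np.arange(len(dummy_idx))] = 1.0
        Z = np.column_stack([X, D])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        s2 = resid @ resid / (n - Z.shape[1])
        se = np.sqrt(np.diag(s2 * np.linalg.pinv(Z.T @ Z)))
        tstat = beta / se
        flagged.update(int(i) for j, i in enumerate(dummy_idx)
                       if abs(tstat[k + j]) > crit)
    return sorted(flagged)
```

An observation with a large shift relative to the regression fit receives a large dummy t-ratio in the half-sample where it is saturated and so ends up flagged.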
Jun Li and Edward C. Norton
Pay-for-performance programs have become a prominent supply-side intervention to improve quality and decrease spending in health care, touching upon long-term care, acute care, and outpatient care. Pay-for-performance directly targets long-term care, with programs in nursing homes and home health. Indirectly, pay-for-performance programs targeting acute care settings affect clinical practice for long-term care providers through incentives for collaboration across settings.
As a whole, pay-for-performance programs entail the identification of the problems they seek to solve, measurement of the dimensions they seek to incentivize, methods to combine and translate performance into incentives, and application of the incentives to reward performance. For the long-term care population, pay-for-performance programs must also heed the unique challenges of the sector, such as patients with complex health needs and distinct health trajectories, and be structured to recognize the difficulty of incentivizing performance improvement when multiple providers and payers are involved in care delivery.
Although empirical results indicate modest effectiveness of pay-for-performance in long-term care at improving targeted measures, some research has provided more clarity on the role of pay-for-performance design in the output of the programs, highlighting room for future research. Further, because health care is interconnected, the indirect effects of pay-for-performance programs on long-term care are an underexplored topic. As the scope of pay-for-performance in long-term care expands, both within the United States and internationally, pay-for-performance offers ample opportunities for future research.
Mark F. Grady
Tort law is part of the common law that originated in England after the Norman Conquest and spread throughout the world, including to the United States. It is judge-made law that allows people who have been injured by others to sue those who harmed them and collect damages in proper cases. Since its early origins, tort law has evolved considerably and has become a full-fledged “grown order,” like the economy, and can best be understood by positive theory, also like the economy. Economic theories of tort have developed since the early 1970s, and they too have evolved over time. Their objective is to generate fresh insight about the purposes and the workings of the tort system.
The basic thesis of the economic theory is that tort law creates incentives for people to minimize social cost, which comprises the harm produced by torts and the cost of the precautions necessary to prevent torts. This thesis, intentionally simple, generates many fresh insights about the workings and effects of the tort system and even about the actual legal rules that judges have developed. In an evolved grown order, legal rules are far less concrete than most people would expect, though often very clear in application. Beginning also in the 1970s, legal philosophers have objected to the economic theory of tort and have devised competing philosophical theories. The competition, moreover, has been productive because it has spurred both sides to revise and improve their theories and to seek a better understanding of the law. Tort law is diverse, applicable to many different activities and situations, so developing a positive theory about it is both challenging and rewarding.
James P. Ziliak
The interaction between poverty and social policy is an issue of longstanding interest in academic and policy circles. There are active debates on how to measure poverty, including where to draw the threshold determining whether a family is deemed to be living in poverty and how to measure resources available. These decisions have profound impacts on our understanding of the anti-poverty effectiveness of social welfare programs. In the context of the United States, focusing solely on cash income transfers shows little progress against poverty over the past 50 years, but substantive gains are obtained if the resource concept is expanded to include in-kind transfers and refundable tax credits. Beyond poverty, the research literature has examined the effects of social welfare policy on a host of outcomes such as labor supply, consumption, health, wealth, fertility, and marriage. Most of this work finds the disincentive effects of welfare programs on work, saving, and family structure to be small, but the income and consumption smoothing benefits to be sizable, and some recent work has found positive long-term effects of transfer programs on the health and education of children. More research is needed, however, on how to measure poverty, especially in the face of deteriorating quality of household surveys, on the long-term consequences of transfer programs, and on alternative designs of the welfare state.
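The sensitivity of measured poverty to the resource concept can be illustrated with a toy headcount calculation; all incomes and the threshold below are hypothetical, not official figures:

```python
# Hypothetical family resources (annual, in dollars).
families = [
    {"cash": 9000,  "in_kind": 2000, "tax_credit": 1000},
    {"cash": 12000, "in_kind": 1000, "tax_credit": 1500},
    {"cash": 11000, "in_kind": 3000, "tax_credit": 1500},
    {"cash": 25000, "in_kind": 0,    "tax_credit": 0},
]
THRESHOLD = 13000  # illustrative poverty line

def headcount(fams, expanded=False):
    """Share of families below the line; `expanded` adds
    in-kind transfers and refundable tax credits to cash income."""
    def resources(f):
        r = f["cash"]
        if expanded:
            r += f["in_kind"] + f["tax_credit"]
        return r
    return sum(resources(f) < THRESHOLD for f in fams) / len(fams)
```

Here the cash-only headcount is 0.75 while the expanded concept yields 0.25, mirroring the point that measured anti-poverty progress depends on which resources are counted.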
Jesús Gonzalo and Jean-Yves Pitarakis
Predictive regressions are models whose aim is to assess the predictability of a typically noisy time series, such as stock returns or currency returns, using past values of a highly persistent predictor such as valuation ratios, interest rates, or volatilities, among other variables. Obtaining reliable inferences through conventional methods can be challenging in such environments, mainly due to the joint interactions of predictor persistence, potential endogeneity, and other econometric complications. Numerous methods have been developed in the literature, ranging from adjustments to the test statistics used in significance testing to alternative instrumental-variable-based estimation methods specifically designed to make inferences robust to the stochastic properties of the predictor(s).
Early developments in this area were mainly confined to linear and single predictor settings, but recent developments have raised the issue of adaptability of existing estimation and inference methods to more general environments so as to extend the use of predictive regressions to a wider range of potential applications.
An important extension involves allowing predictability to enter nonlinearly, so as to capture time variation in the role of particular predictors. Economically interesting nonlinearities include, for instance, threshold effects that allow predictability to vanish or strengthen during particular episodes, creating pockets of predictability. Such effects may operate in the conditional mean, in the variance, or in both, and may help uncover important phenomena such as the countercyclical nature of stock return predictability recently documented in the literature.
Due to the frequent need to consider multiple rather than single predictors, it also becomes important to evaluate the validity and feasibility of inferences about linear and nonlinear predictability when multiple predictors of potentially different degrees of persistence are allowed to coexist in such settings.
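A small Monte Carlo sketch of the core inference problem: under the null of no predictability, with a persistent predictor whose innovations are correlated with returns (the classic endogeneity setup; all parameter values here are illustrative), the conventional t-test rejects too often at the nominal 5% level:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_predictive(n=100, rho=0.95, beta=0.0, corr=-0.9):
    """r_{t+1} = beta * x_t + u_{t+1};  x_{t+1} = rho * x_t + v_{t+1};
    corr(u_t, v_t) = corr captures predictor endogeneity."""
    shocks = rng.multivariate_normal([0.0, 0.0],
                                     [[1.0, corr], [corr, 1.0]], size=n)
    u, v = shocks[:, 0], shocks[:, 1]
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + v[t]
    return beta * x[:-1] + u[1:], x[:-1]

def ols_tstat(r, x):
    """Conventional OLS t-statistic on the predictive slope."""
    X = np.column_stack([np.ones(len(x)), x])
    b, *_ = np.linalg.lstsq(X, r, rcond=None)
    e = r - X @ b
    s2 = e @ e / (len(r) - 2)
    V = s2 * np.linalg.inv(X.T @ X)
    return b[1] / np.sqrt(V[1, 1])

# Empirical rejection rate of |t| > 1.96 under the null (beta = 0):
rej = np.mean([abs(ols_tstat(*simulate_predictive())) > 1.96
               for _ in range(500)])
```

The rejection rate comes out well above 5%, which is why the adjusted test statistics and instrumental-variable methods mentioned above were developed.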
Payment systems based on fixed prices have become the dominant model for financing hospitals across OECD countries. In the early 1980s, Medicare in the United States introduced the Diagnosis Related Groups (DRG) system. The idea was that hospitals should be paid a fixed price for treating a patient within a given diagnosis or treatment. The system then spread to European countries (e.g., France, Germany, Italy, Norway, Spain, the United Kingdom) and other high-income countries (e.g., Canada, Australia). The change in payment system was motivated by concerns over rapid health expenditure growth and replaced financing arrangements based on reimbursing costs (e.g., in the United States) or fixed annual budgets (e.g., in the United Kingdom).
A more recent policy development is the introduction of pay for performance (P4P) schemes, which, in most cases, pay directly for higher quality. This is also a form of regulated price payment, but the unit of payment is a (process or outcome) measure of quality, as opposed to activity, that is, admitting a patient with a given diagnosis or treatment.
Fixed price payment systems, either of the DRG type or the P4P type, affect hospital incentives to provide quality, contain costs, and treat the right patients (allocative efficiency). Quality and efficiency are ubiquitous policy goals across a range of countries.
Fixed price regulation induces providers to contain costs and, under certain conditions (e.g., excess demand), offers some incentive to sustain quality. But payment systems in the health sector are complex. Since their inception, DRG systems have been continuously refined. From around 500 initial tariffs, many DRG codes have been split into two or more finer ones to reflect heterogeneity in costs within each subgroup. In turn, this may give incentives to provide excessively intensive treatment or to code patients into more remunerative tariffs, a practice known as upcoding. Fixed prices also make it financially unprofitable to treat high-cost patients. This is particularly problematic when the patients with the highest costs have the largest benefits from treatment. Hospitals also differ systematically in costs and other dimensions, and some of these external differences are beyond their control (e.g., higher cost of living, land, or capital). Price regulation can be put in place to address such differences.
The development of information technology has allowed the construction of a plethora of quality indicators, mostly process measures of quality and, in some cases, health outcomes. These have been used both for public reporting, to help patients choose providers, and for incentive schemes that directly pay for quality. P4P schemes are attractive but raise new issues: they may divert provider attention toward rewarded tasks, and unincentivized dimensions of quality may suffer as a result.
Pharmaceutical expenditure accounts for approximately 20% of healthcare expenditure across the Organisation for Economic Co-operation and Development (OECD) countries. Pharmaceutical products are regulated in all major global markets, primarily to ensure product quality but also to regulate the prices reimbursed by the insurance companies and central purchasing authorities that dominate this sector. Price regulation is justified because patent protection, which acts as an incentive to invest in research and development (R&D) given the difficulties of appropriating the returns to such activity, creates monopoly rights for suppliers. Price regulation itself, however, reduces the ability of producers to recapture the substantial R&D investment costs incurred. Traditional price regulation through Ramsey pricing and yardstick competition is not efficient, given the distortionary impact of insurance holdings, which are extensive in this sector, and the inherent uncertainties that characterize R&D activity. A range of other pricing regulations aimed at establishing pharmaceutical reimbursement that covers both dynamic efficiency (tied to R&D incentives) and static efficiency (tied to reducing monopoly rents) have been suggested. These range from cost-plus pricing to internal and external reference pricing, rate-of-return pricing, and, most recently, value-based (essential health benefit maximization) pricing. Reimbursed prices reflecting value-based pricing are, in some countries, associated with clinical treatment guidelines and cost-effectiveness analysis. Some countries are also requiring or allowing post-launch price regulation through a range of patient access agreements based on predefined population health targets and/or financial incentives.
There is no simple, single solution to the determination of dynamic and static efficiency in this sector given the uncertainty associated with innovation, the large monopoly interests in the area, the distortionary impact of health insurance and the informational asymmetries that exist across providers and purchasers.
The concept of a soft budget constraint describes a situation in which a decision maker finds it impossible to hold an agent to a fixed budget. In healthcare it may refer to a (nonprofit) hospital that overspends, or to a lower level of government that does not balance its accounts. The existence of a soft budget constraint may represent an optimal policy from the regulator's point of view only in specific settings. In general, its presence may allow for strategic behavior that considerably changes its nature and its desirability. In this article, the soft budget constraint is analyzed along two lines: from a market perspective and from a fiscal federalism perspective.
The creation of an internal market for healthcare has made hospitals with different objectives and constraints compete with one another. The literature does not agree on the effects of competition on healthcare or on which types of organizations should compete. Public hospitals are often seen as less efficient providers, but they are also intrinsically motivated and/or altruistic. Competition for quality in a market where costs are sunk and competitors have asymmetric objectives may produce regulatory failures; for this reason, it might be optimal to apply soft budget constraint rules to public hospitals, even at the risk of perverse effects. Several authors have attempted to estimate the presence of soft budget constraints, showing that they derive from different strategic behaviors and lead to quite different outcomes.
The reforms that have reshaped public healthcare systems across Europe have often been accompanied by a process of devolution, and in some countries by widespread soft budget constraint policies. Medicaid expenditure in the United States is becoming a serious concern for the federal government, and the evidence from the states is not reassuring. Several explanations have been proposed: (a) local governments may use spillovers to induce neighbors to pay for their local public goods; (b) size matters: if the local authority is sufficiently big, the center will bail it out; (c) equalization grants and fiscal competition may be responsible for the rise of soft budget constraint policies. Soft budget policies may also derive from strategic agreements among lower tiers, or arise as a consequence of fiscal imbalances. In this context the use of the soft budget constraint as a policy instrument may not be desirable.
Menzie D. Chinn
The idea that prices and exchange rates adjust so as to equalize the common-currency price of identical bundles of goods—purchasing power parity (PPP)—is a topic of central importance in international finance. If PPP holds continuously, then nominal exchange rate changes do not influence trade flows. If PPP does not hold in the short run, but does in the long run, then monetary factors can affect the real exchange rate only temporarily. Substantial evidence has accumulated—with the advent of new statistical tests, alternative data sets, and longer spans of data—that purchasing power parity does not typically hold in the short run. One reason PPP fails to hold in the short run may be sticky prices, in combination with other factors such as trade barriers. The evidence is mixed for the longer run. Variations in the real exchange rate in the longer run can also be driven by shocks to demand, arising from changes in government spending, the terms of trade, and wealth and debt stocks. At time horizons of decades, trend movements in the real exchange rate—that is, systematically trending deviations from PPP—could be due to the presence of nontraded goods, combined with real factors such as differentials in productivity growth. The well-known positive association between price levels and income levels—also known as the “Penn Effect”—is consistent with this channel. Whether PPP holds thus depends on the time period, the time horizon, and the currencies examined.
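In logs, the real exchange rate is q_t = s_t + p*_t − p_t (nominal rate plus foreign minus home price level); absolute PPP corresponds to q_t being constant at zero, and long-run PPP to q_t mean-reverting. A minimal sketch, where the AR(1) half-life is the standard back-of-the-envelope persistence measure and the numbers are hypothetical:

```python
import numpy as np

def log_real_exchange_rate(s, p_foreign, p_home):
    """q_t = s_t + p*_t - p_t, all in logs.
    q_t == 0 when absolute PPP holds exactly."""
    return (np.asarray(s, float) + np.asarray(p_foreign, float)
            - np.asarray(p_home, float))

def ppp_half_life(rho):
    """If deviations follow q_t = rho * q_{t-1} + e_t, a shock's
    half-life is log(0.5) / log(rho) periods."""
    return np.log(0.5) / np.log(rho)
```

For example, an annual autoregressive coefficient of about 0.8 implies a half-life of roughly three years, the order of magnitude often cited for PPP deviations.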
Joanna Coast and Manuela De Allegri
Qualitative methods are being used increasingly by health economists, but most health economists are not trained in these methods and may need to develop expertise in this area. This article discusses important issues of ontology, epistemology, and research design before addressing the key issues of sampling, data collection, and data analysis in qualitative research. Understanding the differences in the purpose of sampling between qualitative and quantitative methods is important for health economists, and the key notion of purposeful sampling is described. The section on data collection covers in-depth and semistructured interviews, focus-group discussions, and observation. Methods for data analysis are then discussed, with a particular focus on the use of inductive methods that are appropriate for economic purposes. Presentation and publication are briefly considered, before three areas that have seen substantial use of qualitative methods are explored: attribute development for discrete choice experiments, priority-setting research, and health financing initiatives.
Matteo Lippi Bruni, Irene Mammi, and Rossella Verzulli
In developed countries, the role of public authorities as financing bodies and regulators of the long-term care sector is pervasive and calls for well-planned and informed policy actions. Poor quality in nursing homes has been a recurrent concern at least since the 1980s and has triggered a heated policy and scholarly debate. The economic literature on nursing home quality has thoroughly investigated the impact of regulatory interventions and of market characteristics on an array of input-, process-, and outcome-based quality measures. Most existing studies refer to the U.S. context, even though important insights can be drawn also from the smaller set of works that covers European countries.
The major contribution of health economics to the empirical analysis of the nursing home industry is the introduction of important methodological advances that apply rigorous policy evaluation techniques with the purpose of properly identifying the causal effects of interest. In addition, the increased availability of rich datasets covering either process or outcome measures has allowed researchers to investigate changes in nursing home quality while properly accounting for its multidimensional features.
The use of up-to-date econometric methods that, in most cases, exploit policy shocks and longitudinal data has enabled researchers to achieve causal identification and an accurate quantification of the impact of a wide range of policy initiatives, including the introduction of nurse staffing thresholds, price regulation, and public reporting of quality indicators. This has helped to counteract part of the contradictory evidence highlighted by the strand of work based on more descriptive methods. Possible lines for future research include further exploration of the consequences of policy interventions in terms of equity and accessibility to nursing home care.
Samuel Berlinski and Marcos Vera-Hernández
Socioeconomic gradients in health, cognitive, and socioemotional skills start at a very early age. Well-designed policy interventions in the early years can have a great impact on closing these gaps. Advancing this line of research requires a thorough understanding of how households make human capital investment decisions on behalf of their children, what their information set is, and how the market, the environment, and government policies affect them. A framework for this research should describe how children’s skills evolve and how parents make choices about the inputs that mold child development, as well as the rationale for government interventions, including both efficiency and equity considerations.
Ijeoma Peace Edoka
Low- and middle-income countries (LMICs) bear a disproportionately high burden of disease in comparison to high-income countries, partly due to inequalities in the distribution of resources for health. Recent increases in health spending in these countries demonstrate a commitment to tackling the high burden of disease. However, evidence on the extent to which increased spending on health translates into better population health outcomes has been inconclusive. Some studies have reported improvements in population health with an increase in health spending, whereas others have either found no effect or an effect too limited to justify increased financial allocations to health. Differences across studies may be explained by differences in the approaches adopted to estimate the returns to health spending in LMICs.
Ana Balsa and Carlos Díaz
Health behaviors are a major source of morbidity and mortality in the developed and much of the developing world. The social nature of many of these behaviors, such as eating or using alcohol, and the normative connotations that accompany others (e.g., sexual behavior, illegal drug use) make them quite susceptible to peer influence. This article assesses the role of social interactions in the determination of health behaviors. It highlights the methodological progress of the past two decades in addressing the multiple challenges inherent in the estimation of peer effects, and notes methodological issues that still need to be confronted. A comprehensive review of the empirical economics literature—mostly for developed countries—shows strong and robust peer effects across a wide set of health behaviors, including alcohol use, body weight, food intake, body fitness, teen pregnancy, and sexual behaviors. The evidence is mixed when assessing tobacco use, illicit drug use, and mental health. The article also explores the as yet incipient literature on the mechanisms behind peer influence and on new developments in the study of social networks that are shedding light on the dynamics of social influence. There is suggestive evidence that social norms and social conformism lie behind peer effects in substance use, obesity, and teen pregnancy, while social learning has been pointed out as a channel behind fertility decisions, mental health utilization, and uptake of medication. Future research needs to deepen the understanding of the mechanisms behind peer influence in health behaviors in order to design more targeted welfare-enhancing policies.
Anna Dreber and Magnus Johannesson
The recent “replication crisis” in the social sciences has led to increased attention to what statistically significant results entail. There are many reasons why false positive results may be published in the scientific literature, such as low statistical power and “researcher degrees of freedom” in the analysis (where researchers testing a hypothesis more or less actively seek to get results with p < .05). The results from three large replication projects in psychology, experimental economics, and the social sciences are discussed, with most of the focus on the last project, where the statistical power in the replications was substantially higher than in the other projects. The results suggest that a substantial share of published results in top journals do not replicate. While several replication indicators have been proposed, the main indicator for whether a result replicates is whether the replication study, using the same statistical test, finds a statistically significant effect (p < .05 in a two-sided test). For the project with very high statistical power, the various replication indicators agree to a larger extent than for the other replication projects, most likely due to the higher statistical power. While the replications discussed are mainly experiments, there is no reason to believe that replicability would be higher in other parts of economics and finance; if anything, the opposite, owing to more researcher degrees of freedom. There is also a discussion of solutions to the often-observed low replicability, including lowering the p-value threshold for statistical significance to .005 and increasing the use of preanalysis plans and registered reports for new studies as well as replications, followed by a discussion of measures of peer beliefs.
Recent attempts to understand to what extent the academic community is aware of the limited reproducibility and can predict replication outcomes using prediction markets and surveys suggest that peer beliefs may be viewed as an additional reproducibility indicator.
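The logic of why low power depresses replicability can be sketched in a toy model of the publication process; the prior share of true hypotheses, the power levels, and the publish-only-if-significant rule are all stylized assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def replication_rate(n_studies=10000, prior_true=0.3, power=0.35,
                     alpha=0.05, replication_power=0.9):
    """Share of published (significant) results that also come out
    significant in an independent high-powered replication."""
    true_effect = rng.random(n_studies) < prior_true
    # Original study is significant with prob = power if the effect
    # is real, and prob = alpha (a false positive) if it is not.
    significant = np.where(true_effect,
                           rng.random(n_studies) < power,
                           rng.random(n_studies) < alpha)
    replicated = np.where(true_effect,
                          rng.random(n_studies) < replication_power,
                          rng.random(n_studies) < alpha)
    # Only significant originals are "published" and then replicated.
    return replicated[significant].mean()

rate = replication_rate()
```

With these stylized numbers a sizable minority of published findings are false positives, so the replication rate lands well below 100% even though every replication is high-powered.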
Richard C. van Kleef, Thomas G. McGuire, Frederik T. Schut, and Wynand P. M. M. van de Ven
Many countries rely on social health insurance supplied by competing insurers to enhance fairness and efficiency in healthcare financing. Premiums in these settings are typically community rated per health plan. Though community rating can help achieve fairness objectives, it also leads to a variety of problems due to risk selection, that is, actions by consumers and insurers to exploit “unpriced risk” heterogeneity. From the viewpoint of a consumer, unpriced risk refers to the gap between her expected spending under a health plan and the net premium for that plan. Heterogeneity in unpriced risk can lead to selection by consumers in and out of insurance and between high- and low-value plans. These forms of risk selection can result in upward premium spirals, inefficient take-up of basic coverage, and inefficient sorting of consumers between high- and low-value plans.
From the viewpoint of an insurer, unpriced risk refers to the gap between his expected costs under a certain contract and the revenues he receives for that contract. Heterogeneity in unpriced risk incentivizes insurers to alter their plan offerings in order to attract profitable people, resulting in inefficient plan design and possibly in the unavailability of high-quality care. Moreover, insurers have incentives to target profitable people via marketing tools and customer service, which—from a societal perspective—can be considered a waste of resources.
Common tools to counteract selection problems are risk equalization, risk sharing, and risk rating of premiums. All three strategies reduce unpriced risk heterogeneity faced by insurers and thus diminish selection actions by insurers such as the altering of plan offerings. Risk rating of premiums also reduces unpriced risk heterogeneity faced by consumers and thus mitigates selection in and out of insurance and between high- and low-value plans. All three strategies, however, come with trade-offs. A smart blend takes advantage of the strengths, while reducing the weaknesses of each strategy. The optimal payment system configuration will depend on how a regulator weighs fairness and efficiency and on how the healthcare system is organized.
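A toy numerical sketch of unpriced risk and a risk-equalization transfer; the cost and premium figures are invented for illustration:

```python
# Two consumer types under a community-rated premium.
consumers = [
    {"name": "healthy",         "expected_cost": 600.0},
    {"name": "chronically_ill", "expected_cost": 5400.0},
]
PREMIUM = 3000.0  # community-rated: the same for everyone

def unpriced_risk(consumer):
    """Gap between expected spending and the premium; nonzero values
    create selection incentives for insurers and consumers."""
    return consumer["expected_cost"] - PREMIUM

def equalization_payment(consumer, pool):
    """Risk-equalization transfer: expected cost minus the pool
    average, paid to (or collected from) the insurer."""
    avg = sum(c["expected_cost"] for c in pool) / len(pool)
    return consumer["expected_cost"] - avg
```

After the transfer, the insurer's expected margin on each type equals PREMIUM + payment − expected_cost = PREMIUM − average cost, identical across types, which is how equalization removes the incentive to court the healthy and avoid the chronically ill.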
Alessandro Casini and Pierre Perron
This article covers methodological issues related to estimation, testing, and computation for models involving structural changes. Our aim is to review developments as they relate to econometric applications based on linear models. Substantial advances have been made to cover models at a level of generality that allows a host of interesting practical applications. These include models with general stationary regressors and errors that can exhibit temporal dependence and heteroskedasticity, models with trending variables and possible unit roots, and cointegrated models, among others. Advances have been made pertaining to computational aspects of constructing estimates, their limit distributions, tests for structural changes, and methods to determine the number of changes present. A variety of topics are covered, including recent developments: testing for common breaks, models with endogenous regressors (emphasizing that simply using least squares is preferable to instrumental variables methods), quantile regressions, methods based on Lasso, panel data models, testing for changes in forecast accuracy, factor models, and methods of inference based on a continuous-records asymptotic framework. Our focus is on so-called off-line methods, whereby one wants to retrospectively test for breaks in a given sample of data and form confidence intervals about the break dates. The aim is to provide readers with an overview of methods that are of direct use in practice, as opposed to issues mostly of theoretical interest.
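The core least-squares idea for the simplest case, a single break in the mean, can be sketched as follows; this is a didactic illustration with trimming, not the general multiple-break machinery reviewed in the article:

```python
import numpy as np

def estimate_break_date(y, trim=0.15):
    """Pick the break date minimizing the total sum of squared
    residuals of a two-segment mean fit, searching the interior
    [trim, 1 - trim] fraction of the sample."""
    n = len(y)
    best_k, best_ssr = None, np.inf
    for k in range(int(n * trim), int(n * (1 - trim))):
        # np.var (ddof=0) times segment length equals the segment SSR.
        ssr = np.var(y[:k]) * k + np.var(y[k:]) * (n - k)
        if ssr < best_ssr:
            best_k, best_ssr = k, ssr
    return best_k

rng = np.random.default_rng(7)
y = np.concatenate([rng.standard_normal(100),
                    2.0 + rng.standard_normal(100)])
k_hat = estimate_break_date(y)  # close to the true break at t = 100
```

Tests such as sup-F evaluate the improvement in fit over all admissible break dates against a nonstandard limiting distribution, and sequential application of this fit criterion underlies methods for determining the number of breaks.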
Most developed nations provide generous coverage of care services, using either a tax-financed healthcare system or social health insurance. Such systems pursue efficiency and equity in care provision. Efficiency means that expenditures are minimized for a given level of care services. Equity means that individuals with equal needs have equal access to the benefit package. In order to limit expenditures, social health insurance systems explicitly limit their benefit package. Moreover, most such systems have introduced cost sharing, so that beneficiaries bear some cost when using care services. These limits on coverage create room for private insurance that complements or supplements social health insurance. Everywhere, social health insurance coexists with voluntarily purchased supplementary private insurance. While the latter generally covers a small portion of health expenditures, it can interfere with the functioning of social health insurance. Supplementary health insurance can be detrimental to efficiency through several mechanisms. It limits competition in managed-competition settings. It favors excessive care consumption through coverage of cost sharing and of services that are complementary to those included in social insurance benefits. It can also hinder achievement of the equity goals inherent in social insurance. Supplementary insurance creates inequality in access to services included in the social benefits package. Individuals with high incomes are more likely to buy supplementary insurance, and the additional care consumption resulting from better coverage creates additional costs that are borne by social health insurance. In addition, there are other anti-redistributive mechanisms from high to low risks. Social health insurance should be designed, not as an isolated institution, but with an awareness of the existence—and the possible expansion—of supplementary health insurance.