Eduardo A. Cavallo
Sudden stops in capital flows are a form of financial whiplash that creates instability and crises in the affected economies. Sudden stops in net capital flows trigger current account reversals as countries that were borrowing on net from the rest of the world before the stop can no longer finance current account deficits. Sudden stops in gross capital flows are associated with financial instability, especially when the gross flows are dominated by volatile cross-border banking flows. Sudden stops in gross and net capital flows are episodes with an external trigger. This implies that the spark that ignites sudden stops originates outside the affected country: more specifically, in the supply of foreign financing that can halt for reasons that may be unrelated to the affected country’s domestic conditions. Yet a spark cannot generate a fire unless combustible materials are around. The literature has established that a set of domestic macroeconomic fundamentals are the combustible materials that make some countries more vulnerable than others. Higher fiscal deficits, larger current account deficits, and higher levels of foreign currency debt in the domestic financial system are manifestations of weak fundamentals that increase vulnerability. Those same factors increase the costs in terms of output losses when the crisis materializes. On the flip side, international reserves provide buffers that can help countries offset the risks. Holding foreign currency reserves hedges the fiscal position of the government, providing it with more resources to respond to the crisis. While it may be impossible for countries to completely insulate themselves from the volatility of capital inflows, the choice of antidotes to prevent that volatility from forcing potentially costly external adjustments is in their own hands. The global financial architecture can be improved to support those efforts if countries could agree on and fund a more powerful international lender of last resort that resembles, at the global scale, the role of the Federal Reserve in promoting financial stability in the United States.
The links of international reserves, exchange rates, and monetary policy can be understood through the lens of a modern incarnation of the “impossible trinity” (aka the “trilemma”), based on Mundell and Fleming’s hypothesis that a country may simultaneously choose any two, but not all, of the following three policy goals: monetary independence, exchange rate stability, and financial integration. The original economic trilemma was framed in the 1960s, during the Bretton Woods regime, as a binary choice of two out of the possible three policy goals. However, in the 1990s and 2000s, emerging markets and developing countries found that deeper financial integration comes with growing exposure to financial instability and the increased risk of “sudden stop” of capital inflows and capital flight crises. These crises have been characterized by exchange rate instability triggered by countries’ balance sheet exposure to external hard currency debt—exposures that have propagated banking instabilities and crises. Such events have frequently morphed into deep internal and external debt crises, ending with bailouts of systemic banks and powerful macro players. The resultant domestic debt overhang led to fiscal dominance and a reduction of the scope of monetary policy. With varying lags, these crises induced economic and political changes, in which a growing share of emerging markets and developing countries converged to “in-between” regimes in the trilemma middle range—that is, managed exchange rate flexibility, controlled financial integration, and limited but viable monetary autonomy. Emerging research has validated a modern version of the trilemma: that is, countries face a continuous trilemma trade-off in which a higher trilemma policy goal is “traded off” with a drop in the weighted average of the other two trilemma policy goals. The concerns associated with exposure to financial instability have been addressed by varying configurations of managing public buffers (international reserves, sovereign wealth funds), as well as growing application of macro-prudential measures aimed at inducing systemic players to internalize the impact of their balance sheet exposure on a country’s financial stability. Consequently, the original trilemma has morphed into a quadrilemma, wherein financial stability has been added to the trilemma’s original policy goals. Size does matter, and there is no way for smaller countries to insulate themselves fully from exposure to global cycles and shocks. Yet successful navigation of the open-economy quadrilemma helps in reducing the transmission of external shocks to the domestic economy, as well as the costs of domestic shocks. These observations explain the relative resilience of emerging markets—especially in countries with more mature institutions—as they have been buffered by deeper precautionary management of reserves, and greater fiscal and monetary space.
We close the discussion noting that the global financial crisis, and the subsequent Eurozone crisis, have shown that no country is immune from exposure to financial instability and from the modern quadrilemma. However, countries with mature institutions, deeper fiscal capabilities, and more fiscal space may substitute the reliance on costly precautionary buffers with bilateral swap lines coordinated among their central banks. While the benefits of such arrangements are clear, they may hinge on the presence and credibility of their fiscal backstop mechanisms, and on curbing the resultant moral hazard. Time will test this credibility, and the degree to which risk-pooling arrangements can be extended to cover the growing share of emerging markets and developing countries.
Daniel Eisenberg and Ramesh Raghavan
One of the most important unanswered questions for any society is how best to invest in children’s mental health. Childhood is a sensitive and opportune period in which to invest in programs and services that can mitigate a range of downstream risks for health and mental health conditions. Investing in such programs and services will require a shift from focusing only on reducing deficits to also enhancing the child’s skills and other assets. Economic evaluation is crucial for determining which programs and services represent optimal investments. Several registries curate lists of programs with high evidence of effectiveness; many of these programs also have evidence of positive benefit-cost differentials, although the economic evidence is typically limited and uncertain. Even the programs with the strongest evidence are currently reaching only a small fraction of young people who would potentially benefit. Thus, it is important to understand and address factors that impede or facilitate the implementation of best practices. One example of a program that represents a promising investment is home visiting, in which health workers visit the homes of new parents to advise on parenting skills, child needs, and the home environment. Another example is social emotional learning programs delivered in schools, where children are taught to regulate emotions, manage behaviors, and enhance relationships with peers. Investing in these and other programs with a strong evidence base, and assuring their faithful implementation in practice settings, can produce improvements on a range of mental health, academic, and social outcomes for children, extending into their lives as adults.
Joni Hersch and Blair Druhan Bullock
The labor market is governed by a panoply of laws, regulating virtually all aspects of the employment relation, including hiring, firing, information exchange, privacy, workplace safety, work hours, minimum wages, and access to courts for redress of violations of rights. Antidiscrimination laws, especially Title VII, notably prohibit employment discrimination on the basis of race, color, religion, sex, and national origin. Court decisions and legislation have led to the extension of protection to a far wider range of classes and types of workplace behavior than Title VII originally covered.
The workplace of the early 21st century is very different from the workplace when the major employment discrimination statutes were enacted, as these laws were conceived as regulating an employer–employee relationship in a predominantly white male labor market. Prior emphasis on employment discrimination on the basis of race and sex has been superseded by enhanced attention to sexual harassment and discrimination on the basis of disability, sexual orientation, gender identity, and religion. Concerns over the equity or efficiency of the employment-at-will doctrine recede in a workforce in which workers are increasingly categorized as independent contractors who are not covered by most equal employment laws. As the workplace has changed, the scholarship on the law and economics of employment law has been slow to follow.
While economists overwhelmingly favor free trade, even unilateral free trade, because of the gains realizable from specialization and the exploitation of comparative advantage, in fact international trading relations are structured by a complex body of multilateral and preferential trade agreements. The article outlines the case for multilateral trade agreements and the non-discrimination principle that they embody, in the form of both the Most Favored Nation principle and the National Treatment principle, where non-discrimination has been widely advocated as supporting both geopolitical goals (reducing economic factionalism) and economic goals (ensuring the full play of theories of comparative advantage undistorted by discriminatory trade treatment).
Despite the virtues of multilateral trade agreements, preferential trade agreements (PTAs), authorized from the outset under GATT, have proliferated in recent years, even though they are inherently discriminatory between members and non-members, provoking vigorous debates as to (a) whether PTAs are trade-creating or trade-diverting; (b) whether they increase transaction costs in international trade; and (c) whether they undermine the future course of multilateral trade liberalization.
A further and similarly contentious derogation from the principle of non-discrimination under the multilateral system is Special and Differential Treatment for developing countries, where since the mid-1950s developing countries have been given much greater latitude than developed countries to engage in trade protectionism on the import side in order to promote infant industries, and since the mid-1960s on the export side have benefited from non-reciprocal trade concessions by developed countries on products of actual or potential export interest to developing countries.
Beyond debates over the strengths and weaknesses of multilateral trade agreements and the two major derogations therefrom, further debates surround the appropriate scope of trade agreements, and in particular the expansion of their scope in recent decades to address divergences or incompatibilities across a wide range of domestic regulatory and related policies that arguably create frictions in cross-border trade and investment and hence constitute an impediment to it.
The article goes on to consider contemporary fair trade versus free trade debates, including concerns over trade deficits, currency manipulation, export subsidies, misappropriation of intellectual property rights, and lax labor or environmental standards. The article concludes with a consideration of the case for a larger scope for plurilateral trade agreements internationally, and for a larger scope for active labor market policies domestically to mitigate transition costs from trade.
Law and economics is an important, growing field of specialization for both legal scholars and economists. It applies efficiency analysis to property, contracts, torts, procedure, and many other areas of the law. The use of economics as a methodology for understanding law is not immune to criticism. The rationality assumption and the efficiency principle have been intensively debated. Overall, the field has advanced in recent years by incorporating insights from psychology and other social sciences. In that respect, many questions concerning the efficiency of legal rules and norms remain open and turn on a multifaceted balance among diverse costs and benefits. The role of courts in explaining economic performance is a more specific area of analysis that emerged in the late 1990s. The relationship between law and economic growth is complex and debatable. An important literature has pointed to significant differences at the macro-level between the Anglo-American common law family and the civil law families. Although these initial results have been heavily scrutinized, other important subjects have surfaced, such as convergence of legal systems, transplants, infrastructure of legal systems, rule of law and development, among others.
Life-cycle choices and outcomes over financial (e.g., savings, portfolio, work) and health-related variables (e.g., medical spending, habits, sickness, and mortality) are complex and intertwined. Indeed, labor/leisure choices can both affect and be conditioned by health outcomes, while precautionary saving is determined by exposure to sickness and longevity risks, both of which can be altered through preventive medical and leisure decisions. Moreover, inevitable aging induces changes in the incentives and in the constraints for investing in one’s own health and saving resources for old age. Understanding these pathways poses numerous challenges for economic models.
Life-cycle data indicate continuous declines in health status and associated increases in exposure to morbidity, medical expenses, and mortality risks, with accelerating post-retirement dynamics. Theory suggests that risk-averse and forward-looking agents should rely on available instruments to insure against these risks. Indeed, market- and state-provided health insurance (e.g., Medicare) covers curative medical expenses. High end-of-life home and nursing-home expenses can be hedged through privately or publicly provided (e.g., Medicaid) long-term care insurance. The risk of outliving one’s financial resources can be hedged through annuities. The risk of not living long enough can be insured through life insurance.
In practice, however, the recourse to these hedging instruments remains less than predicted by theory. The slow observed drawdown of wealth after retirement is not explained by bequest motives and suggests precautionary motives against health-related expenses. The excessive reliance on public pensions (e.g., Social Security) and the post-retirement drop in consumption not related to work or health are both indicative of insufficient financial preparedness and run counter to consumption smoothing objectives. Moreover, the capacity to self-insure through preventive care and healthy habits is limited when aging is factored in. In conclusion, the observed health and financial life-cycle dynamics remain challenging for economic theory.
Long memory models are statistical models that describe strong correlation or dependence across time series data. This kind of phenomenon is often referred to as “long memory” or “long-range dependence.” It refers to persisting correlation between distant observations in a time series. For scalar time series observed at equal intervals of time that are covariance stationary, so that the mean, variance, and autocovariances (between observations separated by a lag j) do not vary over time, it typically implies that the autocovariances decay so slowly, as j increases, as not to be absolutely summable. However, it can also refer to certain nonstationary time series, including ones with an autoregressive unit root, that exhibit even stronger correlation at long lags. Evidence of long memory has often been found in economic and financial time series, where the noted extension to possible nonstationarity can cover many macroeconomic time series, as well as in such fields as astronomy, agriculture, geophysics, and chemistry.
As long memory is now a technically well developed topic, formal definitions are needed. But by way of partial motivation, long memory models can be thought of as complementary to the very well known and widely applied stationary and invertible autoregressive and moving average (ARMA) models, whose autocovariances are not only summable but decay exponentially fast as a function of lag j. Such models are often referred to as “short memory” models, because there is negligible correlation across distant time intervals. These models are often combined with the most basic long memory ones, however, because together they offer the ability to describe both short and long memory features in many time series.
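To fix ideas, the contrast described above can be written compactly. In generic notation (ours, not necessarily the article’s), with γ(j) denoting the autocovariance at lag j of a covariance stationary series:

\[
\text{long memory:}\quad \sum_{j=-\infty}^{\infty} |\gamma(j)| = \infty, \qquad \text{e.g.}\ \gamma(j) \sim c\, j^{\,2d-1} \ \text{as } j \to \infty,\ 0 < d < \tfrac{1}{2},
\]
\[
\text{short memory (stationary, invertible ARMA):}\quad |\gamma(j)| \le C \rho^{\,j} \ \text{for some } 0 < \rho < 1,
\]

where d is the memory (fractional differencing) parameter. The hyperbolic decay in the first display is slow enough that the autocovariances are not absolutely summable, whereas the exponential bound in the second makes correlation across distant time intervals negligible.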
Noémi Kreif and Karla DiazOrdaz
While machine learning (ML) methods have received a lot of attention in recent years, these methods are primarily designed for prediction. Empirical researchers conducting policy evaluations are, on the other hand, preoccupied with causal problems, trying to answer counterfactual questions: what would have happened in the absence of a policy? Because these counterfactuals can never be directly observed (described as the “fundamental problem of causal inference”), prediction tools from the ML literature cannot be readily used for causal inference. In the last decade, major innovations have taken place incorporating supervised ML tools into estimators for causal parameters such as the average treatment effect (ATE). This holds the promise of attenuating model misspecification issues and increasing transparency in model selection. One particularly mature strand of the literature includes approaches that incorporate supervised ML into the estimation of the ATE of a binary treatment, under the unconfoundedness and positivity assumptions (also known as exchangeability and overlap assumptions).
This article begins by reviewing popular supervised machine learning algorithms, including tree-based methods and the lasso, as well as ensembles, with a focus on the Super Learner. Then, some specific uses of machine learning for treatment effect estimation are introduced and illustrated, namely (1) to create balance among treated and control groups, (2) to estimate so-called nuisance models (e.g., the propensity score, or conditional expectations of the outcome) in semi-parametric estimators that target causal parameters (e.g., targeted maximum likelihood estimation or the double ML estimator), and (3) to select variables in situations with a high number of covariates.
Since there is no universal best estimator, whether parametric or data-adaptive, it is best practice to incorporate a semi-automated approach that can select the models best supported by the observed data, thus attenuating the reliance on subjective choices.
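As an illustration of the second use described above (machine learning for the nuisance models inside a semi-parametric estimator), the following is a minimal sketch, not code from the article, of a cross-fitted doubly robust (AIPW-type) estimator of the ATE under unconfoundedness and overlap. The function name aipw_ate, the variable names, and the choice of random forests for the nuisance models are illustrative assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

def aipw_ate(X, a, y, n_splits=5, seed=0):
    """Cross-fitted AIPW estimate of the ATE of a binary treatment a on outcome y."""
    n = len(y)
    ps, mu1, mu0 = np.zeros(n), np.zeros(n), np.zeros(n)
    for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=seed).split(X):
        # Propensity score P(A=1 | X), fitted on the training folds only (cross-fitting)
        ps_fit = RandomForestClassifier(random_state=seed).fit(X[train], a[train])
        ps[test] = ps_fit.predict_proba(X[test])[:, 1]
        # Outcome regressions E[Y | A=1, X] and E[Y | A=0, X]
        m1 = RandomForestRegressor(random_state=seed).fit(X[train][a[train] == 1], y[train][a[train] == 1])
        m0 = RandomForestRegressor(random_state=seed).fit(X[train][a[train] == 0], y[train][a[train] == 0])
        mu1[test], mu0[test] = m1.predict(X[test]), m0.predict(X[test])
    # Doubly robust (AIPW) score; its sample mean estimates the ATE
    psi = mu1 - mu0 + a * (y - mu1) / ps - (1 - a) * (y - mu0) / (1 - ps)
    return psi.mean(), psi.std(ddof=1) / np.sqrt(n)

# Illustrative use with simulated data (all quantities here are made up for the example)
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 3))
a = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
y = 2 * a + X[:, 0] + rng.normal(size=2000)
ate_hat, se_hat = aipw_ate(X, a, y)

Targeted maximum likelihood estimation and the double ML estimator refine this basic recipe, but the structure is the same: flexible nuisance models combined in an estimator that remains consistent if either nuisance model is misspecified.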
José Luis Pinto-Prades, Arthur Attema, and Fernando Ignacio Sánchez-Martínez
Quality-adjusted life years (QALYs) are one of the main health outcome measures used to make health policy decisions. It is assumed that the objective of policymakers is to maximize QALYs. Since the QALY weighs life years according to their health-related quality of life, it is necessary to calculate those weights (also called utilities) in order to estimate the number of QALYs produced by a medical treatment. The methodology most commonly used to estimate utilities is to present standard gamble (SG) or time trade-off (TTO) questions to a representative sample of the general population. It is assumed that, in this way, utilities reflect public preferences. Two different assumptions should hold for utilities to be a valid representation of public preferences. One is that the standard (linear) QALY model has to be a good model of how subjects value health. The second is that subjects should have consistent preferences over health states. The evidence indicates that most of the main assumptions of the popular linear QALY model do not hold. A modification of the linear model can be a tractable improvement. This suggests that utilities elicited under the assumption that the linear QALY model holds may be biased. In addition, the second assumption, namely that subjects have consistent preferences that are estimated by asking SG or TTO questions, does not seem to hold. Subjects are sensitive to features of the elicitation process (like the order of questions or the type of task) that should not matter in order to estimate utilities. The evidence suggests that the questions (TTO, SG) that researchers ask members of the general population produce response patterns inconsistent with the assumption that subjects have well-defined preferences over the health states they are asked to value. Two approaches can deal with this problem. One is based on the assumption that subjects have true but biased preferences. True preferences can be recovered from biased ones. This approach is valid as long as the theory used to debias is correct. The second approach is based on the idea that preferences are imprecise. In practice, national bodies use utilities elicited using TTO or SG under the assumptions that the linear QALY model is a good enough representation of public preferences and that subjects’ responses to preference elicitation methods are coherent.
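For reference, the linear QALY model discussed above can be written, in generic notation rather than the authors’ own, as

\[
\text{QALYs} \;=\; \sum_{t=1}^{T} u(q_t),
\]

where q_t is the health state experienced in year t, T is the number of life years, and u(·) is the utility weight anchored at u(full health) = 1 and u(dead) = 0. For a profile spent entirely in a single state q this reduces to U(q, T) = u(q)·T, and it is this linearity in duration (together with risk neutrality over life years and, in the simplest version, no discounting) that the elicitation evidence described above calls into question.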
David A. Hyman and Charles Silver
Medical malpractice is the best studied aspect of the civil justice system. But the subject is complicated, and there are heated disputes about basic facts. For example, are premium spikes driven by factors that are internal (i.e., number of claims, payout per claim, and damage costs) or external to the system? How large (or small) is the impact of a damages cap? Do caps have a bigger impact on the number of cases that are brought or the payment in the cases that remain? Do blockbuster verdicts cause defendants to settle cases for more than they are worth? Do caps attract physicians? Do caps reduce healthcare spending—and by how much? How much does it cost to resolve the high percentage of cases in which no damages are recovered? What is the comparative impact of a cap on noneconomic damages versus a cap on total damages?
Other disputes involve normative questions. Is there too much med mal litigation or not enough? Are damage caps fair? Is the real problem bad doctors or predatory lawyers—or some combination of both?
This article summarizes the empirical research on the performance of the med mal system, and highlights some areas for future research.
Syed Abdul Hamid
Health microinsurance (HMI) has been used around the globe since the early 1990s for financial risk protection against health shocks in poverty-stricken rural populations in low-income countries. However, there is much debate in the literature on its impact on financial risk protection. There is also no clear answer to the critical policy question about whether HMI is a viable route to provide healthcare to the people of the informal economy, especially in the rural areas. Findings show that HMI schemes are widespread in low-income countries and concentrated especially in South Asia (about 43%) and East Africa (about 25.4%). India accounts for 30% of HMI schemes. Bangladesh and Kenya also have a sizable number of schemes. There is some evidence that HMI increases access to healthcare or utilization of healthcare. One strand of the literature shows that HMI provides financial protection against the costs of illness to its enrollees by reducing out-of-pocket payments and/or catastrophic spending. In contrast, a large body of methodologically rigorous literature shows that HMI fails to provide financial protection against health shocks to its clients. Some studies in the latter group even find that HMI contributes to a decline in financial risk protection. These findings seem logical, as most schemes involve high copayments and lack a continuum of care. The findings also show that scale and dependence on subsidy are the major concerns. Low enrollment and low renewal are common concerns of the voluntary HMI schemes in South Asian countries. In addition, the declining trend of donor subsidies makes the HMI schemes supported by external donors more vulnerable. These challenges and constraints restrict the scale and profitability of HMI initiatives, especially those that are voluntary. Consequently, the existing organizations may cease HMI activities.
Overall, although HMI can increase access to healthcare, it fails to provide financial risk protection against health shocks. The existing HMI practices in South Asia, especially in schemes owned by nongovernmental organizations and microfinance institutions, are not a viable route to provide healthcare to the rural population of the informal economy. However, HMI schemes may play some supportive role in the implementation of a nationalized scheme, if there is one. There is also concern about the institutional viability of the HMI organizations (e.g., ownership and management efficiency). Future research may address this issue.
Martin D. D. Evans and Dagfinn Rime
An overview of research on the microstructure of foreign exchange (FX) markets is presented. We begin by summarizing the institutional features of FX trading and describe how they have evolved since the 1980s. We then explain how these features are represented in microstructure models of FX trading. Next, we describe the links between microstructure and traditional macro exchange-rate models and summarize how these links have been explored in recent empirical research. Finally, we provide a microstructure perspective on two recent areas of interest in exchange-rate economics: the behavior of returns on currency portfolios, and questions of competition and regulation.
The majority of econometric models ignore the fact that many economic time series are sampled at different frequencies. A burgeoning literature pertains to econometric methods explicitly designed to handle data sampled at different frequencies. Broadly speaking, these methods fall into two categories: (a) parameter driven, typically involving a state space representation, and (b) data driven, usually based on a mixed-data sampling (MIDAS)-type regression setting or related methods. The realm of applications of the class of mixed frequency models includes nowcasting—which is defined as the prediction of the present—as well as forecasting—typically of the very near future—taking advantage of mixed frequency data structures. For multiple-horizon forecasting, the topic of MIDAS regressions also relates to research regarding direct versus iterated forecasting.
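For orientation, a basic single-regressor MIDAS regression, in the notation commonly used in this literature (details and parameterizations vary across studies), takes the form

\[
y_t \;=\; \beta_0 + \beta_1\, B\!\left(L^{1/m};\theta\right) x_t^{(m)} + \varepsilon_t^{(m)},
\qquad
B\!\left(L^{1/m};\theta\right) \;=\; \sum_{k=0}^{K} b(k;\theta)\, L^{k/m},
\]

where y_t is the low-frequency variable, x_t^{(m)} is observed m times per low-frequency period, L^{k/m} is a high-frequency lag operator, and the lag weights b(k;θ) are parameterized parsimoniously (for example with an exponential Almon or beta polynomial) so that long high-frequency lag distributions can be estimated with few parameters.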
Pieter van Baal and Hendriek Boshuizen
In most countries, non-communicable diseases have overtaken infectious diseases as the most important causes of death. Many non-communicable diseases that were previously lethal have become chronic, and this has changed the healthcare landscape in terms of treatment and prevention options. Currently, a large part of healthcare spending is targeted at curing and caring for the elderly, who have multiple chronic diseases. In this context, prevention plays an important role, as there are many risk factors amenable to prevention policies that are related to multiple chronic diseases.
This article discusses the use of simulation modeling to better understand the relations between chronic diseases and their risk factors, with the aim of informing health policy. Simulation modeling sheds light on important policy questions related to population aging and priority setting. The focus is on the modeling of multiple chronic diseases in the general population and how to consistently model the relations between chronic diseases and their risk factors by combining various data sources. Methodological issues in chronic disease modeling and how these relate to the availability of data are discussed. Here, a distinction is made between (a) issues related to the construction of the epidemiological simulation model and (b) issues related to linking outcomes of the epidemiological simulation model to economically relevant outcomes such as quality of life, healthcare spending, and labor market participation. Based on this distinction, several simulation models are discussed that link risk factors to multiple chronic diseases in order to explore how these issues are handled in practice. Recommendations for future research are provided.
Audrey Laporte and Brian S. Ferguson
One of the implications of the human capital literature of the 1960s was that a great many decisions individuals make that have consequences not just for the point in time when the decision is being made but also for the future can be thought of as involving investments in certain types of capital. In health economics, this led Michael Grossman to propose the concept of health capital, which refers not just to the individual’s illness status at any point in time, but to the more fundamental factors that affect the likelihood that she will be ill at any point in her life and also affect her life expectancy at each age. In Grossman’s model, an individual purchased health-related commodities that acted through a health production function to improve her health. These commodities could be medical care, which could be seen as repair expenditures, or factors such as diet and exercise, which could be seen as ongoing additions to her health—the counterparts of adding savings to her financial capital on a regular basis. The individual was assumed to make decisions about her level of consumption of these commodities as part of an intertemporal utility-maximizing process that incorporated, through a budget constraint, the need to make tradeoffs between health-related goods and goods that had no health consequences. Pauline Ippolito showed that the same analytical techniques could be used to consider goods that were bad for health in the long run—bad diet and smoking, for example—still within the context of lifetime utility maximization. This raised the possibility that an individual might rationally take actions that were bad for her health in the long run. The logical extension of considering smoking as a health “bad” was recognizing that smoking and other bad health habits were addictive. The notion of addictive commodities was already present in the literature on consumer behavior, but the consensus in that literature was that it was extremely difficult, if not impossible, to distinguish between a rational addict and a completely myopic consumer of addictive goods. Gary Becker and Kevin Murphy proposed an alternative approach to modeling a forward-looking, utility-maximizing consumer’s consumption of addictive commodities, based on the argument that an individual’s degree of addiction could be modeled as addiction capital, an approach that could be used to tackle the empirical problems that the consumer expenditure literature had experienced. That model has become the most widely used framework for empirical research by economists into the consumption of addictive goods, and, while the concept of rationality in addiction remains controversial, the Becker-Murphy framework also provides a basis for testing various alternative models of the consumption of addictive commodities, most notably those based on versions of time-inconsistent intertemporal decision making.
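A compact way to see the structure described above is the law of motion for the health stock in Grossman-type models, written here in generic notation as a sketch rather than the exact specification of any particular model:

\[
H_{t+1} \;=\; (1 - \delta_t)\, H_t + I_t,
\qquad
I_t \;=\; I(M_t, T^{H}_t; E),
\]

where H_t is the stock of health capital, δ_t is an age-dependent depreciation rate, and gross investment I_t is produced through a health production function from medical care M_t, the individual’s own time devoted to health T^H_t, and her education E. The Becker-Murphy framework applies the same capital-accumulation logic to a stock of “addiction capital” that past consumption builds up and that raises the marginal utility of current consumption of the addictive good.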
Paul Hansen and Nancy Devlin
Multi-criteria decision analysis (MCDA) is increasingly used to support healthcare decision-making. MCDA involves decision makers evaluating the alternatives under consideration based on the explicit weighting of criteria relevant to the overarching decision—in order to, depending on the application, rank (or prioritize) or choose between the alternatives. A prominent example of MCDA applied to healthcare decision-making, one that has received a lot of attention in recent years and is the main subject of this article, is choosing which health “technologies” (i.e., drugs, devices, procedures, etc.) to fund—a process known as health technology assessment (HTA). Other applications include prioritizing patients for surgery, prioritizing diseases for R&D, and decision-making about licensing treatments. Most applications are based on weighted-sum models. Such models involve explicitly weighting the criteria and rating the alternatives on the criteria, with each alternative’s “performance” on the criteria aggregated using a linear (i.e., additive) equation to produce the alternative’s “total score,” by which the alternatives are ranked. The steps involved in an MCDA process are explained, including an overview of methods for scoring alternatives on the criteria and weighting the criteria. The steps are: structuring the decision problem being addressed, specifying criteria, measuring alternatives’ performance, scoring alternatives on the criteria and weighting the criteria, applying the scores and weights to rank the alternatives, and presenting the MCDA results, including sensitivity analysis, to decision makers to support their decision-making. Arguments recently advanced against using MCDA for HTA and counterarguments are also considered. Finally, five questions associated with how MCDA for HTA is operationalized are discussed: Whose preferences are relevant for MCDA? Should criteria and weights be decision-specific or identical for repeated applications? How should cost or cost-effectiveness be included in MCDA? How can the opportunity cost of decisions be captured in MCDA? How can uncertainty be incorporated into MCDA?
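In the weighted-sum models referred to above, each alternative’s total score takes the familiar additive form (generic notation):

\[
S_i \;=\; \sum_{j=1}^{n} w_j\, s_{ij},
\qquad
\sum_{j=1}^{n} w_j = 1,
\]

where s_ij is alternative i’s score on criterion j (for example on a 0–100 scale) and w_j is the weight attached to criterion j; the alternatives are then ranked by S_i.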
Chao Gu, Han Han, and Randall Wright
This article provides an introduction to New Monetarist Economics. This branch of macro and monetary theory emphasizes imperfect commitment, information problems, and sometimes (endogenous) spatial separation as key frictions in the economy, and uses them to derive institutions like monetary exchange or financial intermediation endogenously. We present three generations of models in the development of New Monetarism. The first model studies an environment in which agents meet bilaterally and lack commitment, which allows money to be valued endogenously as a means of payment. In this setup both goods and money are indivisible to keep things tractable. Second-generation models relax the assumption of indivisible goods and use bargaining theory (or related mechanisms) to endogenize prices. Variations of these models are applied to financial asset markets and intermediation. Assets and goods are both divisible in third-generation models, which makes them better suited to policy analysis and empirical work. This framework can also be used to help understand financial markets and liquidity.
Vincenzo Atella and Joanna Kopinska
New sanitation and health technology applied to treatments, procedures, and devices is constantly revolutionizing epidemiological patterns. Since the early 1900s it has been responsible for significant improvements in population health by turning once-deadly diseases into curable or preventable conditions, by expanding the existing cures to more patients and diseases, and by simplifying procedures for both medical and organizational practices. Notwithstanding the benefits of technological progress for population health, the innovation process is also an important driver of health expenditure growth across all countries. Technological progress generates an additional financial burden and expands the volume of services provided, which constitutes a concern from an economic point of view. Moreover, the evolution of technology costs and their impact on healthcare spending is difficult to predict due to the revolutionary nature of many innovations and their adoption. In this respect, the challenge for policymakers is to discourage overadoption of ineffective, unnecessary, and inappropriate technologies. This task has long been carried out through regulation, which according to standard economic theory is the only response to market failures and socially undesirable outcomes of healthcare markets left on their own. The potential welfare loss of a market failure must be weighed against the costs of regulatory activities. While health technology evolution delivers important value for patients and societies, it will continue to pose important challenges for already overextended public finances.
Karla DiazOrdaz and Richard Grieve
Health economic evaluations face the issues of noncompliance and missing data. Here, noncompliance is defined as non-adherence to a specific treatment, and occurs within randomized controlled trials (RCTs) when participants depart from their random assignment. Missing data arise if, for example, there is loss to follow-up, survey non-response, or the information available from routine data sources is incomplete. Appropriate statistical methods for handling noncompliance and missing data have been developed, but they have rarely been applied in health economics studies. Here, we illustrate the issues and outline some of the appropriate methods for handling them, with an application to a health economic evaluation that uses data from an RCT.
In an RCT the random assignment can be used as an instrument for treatment receipt, to obtain consistent estimates of the complier average causal effect, provided the underlying assumptions are met. Instrumental variable methods can accommodate essential features of the health economic context such as the correlation between individuals’ costs and outcomes in cost-effectiveness studies. Methodological guidance for handling missing data encourages approaches such as multiple imputation or inverse probability weighting, which assume the data are Missing At Random, but also sensitivity analyses that recognize the data may be missing according to the true, unobserved values, that is, Missing Not At Random.
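Under the usual instrumental variable assumptions (random assignment Z affects the outcome Y only through treatment received D, and there are no “defiers”), the complier average causal effect mentioned above can be estimated by the Wald ratio, written here in generic notation:

\[
\widehat{\mathrm{CACE}} \;=\; \frac{\hat{E}[\,Y \mid Z=1\,] - \hat{E}[\,Y \mid Z=0\,]}{\hat{E}[\,D \mid Z=1\,] - \hat{E}[\,D \mid Z=0\,]},
\]

that is, the intention-to-treat effect on the outcome divided by the effect of assignment on treatment receipt; two-stage least squares generalizes this ratio when baseline covariates are included.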
Future studies should subject the assumptions behind methods for handling noncompliance and missing data to thorough sensitivity analyses. Modern machine-learning methods can help reduce reliance on correct model specification. Further research is required to develop flexible methods for handling more complex forms of noncompliance and missing data.