1-5 of 5 Results

  • Keywords: cost-effectiveness analysis

Article

Economic evaluation provides a framework to help inform decisions on which technologies represent the best use of healthcare resources (i.e., are cost-effective) by bringing together the available evidence about the benefits and costs of the alternative options. Critical to the economic evaluation framework is the need to accurately characterize the decision problem: this is the problem-structuring stage. Problem structuring encompasses the characterization of the target population; identification of the decision options to compare in the model (e.g., use of the technology in different ways, standard of care, etc.); and the development of the conceptual model, which maps out how the decision options relate to the costs and benefits in the target population. Problem structuring is central to the application of the economic evaluation framework and to the development of the analysis, as it determines the specific questions that can be addressed and affects the relevance and credibility of the results. The methodological guidelines discuss problem structuring to some extent, although the practical implications warrant further consideration. With respect to the target population, questions emerge about how to define it, whether and which sources of heterogeneity to consider, and when and in whom to consider spillovers. Relating to the specification of decision options are questions about how to identify and select them, including whether to restrict the comparison to standard of care and how to handle sequences of tests and/or treatments and “do-nothing” approaches. There are also issues relating to the role of the conceptual model and the process by which it is developed. Based on a review of methodological guidelines and reflections on their implications, various recommendations for practice emerge. The process of developing the conceptual model and how to use it to inform choices and assumptions in the economic evaluation are two areas where further research is warranted.
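As a purely illustrative sketch of what problem structuring might produce before any modeling begins, the snippet below records the elements named above (target population, sources of heterogeneity, decision options, and the conceptual links from options to costs and benefits) as an explicit data structure; every field value is a hypothetical placeholder rather than content from the article.

    # Illustrative sketch only: writing down the elements of problem structuring
    # explicitly before any modeling starts. All field values are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class DecisionProblem:
        target_population: str                   # who the decision applies to
        heterogeneity: list[str]                 # subgroup dimensions to reflect, if any
        decision_options: list[str]              # interventions and comparators to model
        conceptual_model: dict[str, list[str]]   # how each option links to costs and benefits

    problem = DecisionProblem(
        target_population="adults with a hypothetical chronic condition",
        heterogeneity=["baseline risk", "age group"],
        decision_options=["new technology", "standard of care", "do nothing"],
        conceptual_model={
            "new technology": ["acquisition cost", "adverse events", "QALYs gained"],
            "standard of care": ["ongoing treatment cost", "QALYs gained"],
            "do nothing": ["downstream disease costs", "QALYs lost"],
        },
    )
    print(problem.decision_options)

Making these choices explicit at the outset is what allows the later model to be checked against the decision problem it is meant to represent.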

Article

To guide climate change policymaking, we need to understand how technologies and behaviors should be transformed to avoid dangerous levels of global warming and what the implications of failing to bring forward such transformation might be. Integrated assessment models (IAMs) are computational tools developed by engineers, earth and natural scientists, and economists to provide projections of interconnected human and natural systems under various conditions. These models help researchers to understand the possible implications of climate inaction. They evaluate the effects of national and international policies on global emissions, and they devise optimal emissions trajectories in line with long-term temperature targets, together with their implications for infrastructure, investment, and behavior. This research highlights the deep interconnection between climate policies and other sustainable development objectives. Having evolved to focus on one or more of these key policy questions, the large family of IAMs includes a wide array of tools that incorporate multiple dimensions and advances from a range of scientific fields.
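As a purely stylized illustration of the kind of feedback loop such models capture (output drives emissions, emissions accumulate as atmospheric carbon, carbon drives warming, and warming damages output), the sketch below simulates two toy scenarios; every number and functional form is invented for exposition and does not correspond to any actual IAM.

    # Toy, purely illustrative integrated-assessment-style loop. All parameter
    # values and functional forms are invented and bear no relation to any real IAM.

    def run_scenario(abatement: float, years: int = 80):
        output = 100.0        # index of gross economic output
        carbon = 850.0        # atmospheric carbon stock (arbitrary units)
        temperature = 1.0     # warming above pre-industrial (deg C, assumed start)
        for _ in range(years):
            emissions = 0.04 * output * (1.0 - abatement)    # abatement cuts emissions
            carbon += emissions - 0.005 * (carbon - 600.0)   # accumulation minus slow decay
            temperature = 1.0 + 0.003 * (carbon - 850.0)     # warming tracks the carbon stock
            damages = 0.001 * temperature ** 2               # climate damages, share of output
            abatement_cost = 0.005 * abatement ** 2          # mitigation is costly too
            output *= 1.02 * (1.0 - damages - abatement_cost)  # growth net of both drags
        return output, temperature

    for label, policy in [("no policy", 0.0), ("strong abatement", 0.7)]:
        out, temp = run_scenario(policy)
        print(f"{label}: output index {out:.0f}, warming {temp:.2f} C after 80 years")

Real IAMs replace each of these toy relationships with detailed representations of energy systems, land use, the carbon cycle, and economic behavior, and typically solve for optimal policies rather than merely simulating fixed ones.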

Article

The assessment of health-related quality of life is crucially important in the evaluation of healthcare technologies and services. In many countries, economic evaluation plays a prominent role in informing decision making, often requiring preference-based measures (PBMs) to assess quality of life. These measures comprise two aspects: a descriptive system where patients can indicate the impact of ill health, and a value set based on the preferences of individuals for each of the health states that can be described. These values are required for the calculation of quality-adjusted life years (QALYs), the measure for health benefit used in the vast majority of economic evaluations. The National Institute for Health and Care Excellence (NICE) has used cost per QALY as its preferred framework for economic evaluation of healthcare technologies since its inception in 1999. However, there is often an evidence gap between the clinical measures that are available from clinical studies on the effect of a specific health technology and the PBMs needed to construct QALY measures. Instruments such as the EQ-5D have preference-based scoring systems and are favored by organizations such as NICE but are frequently absent from clinical studies of treatment effect. Even where a PBM is included, this may still be insufficient for the needs of the economic evaluation. Trials may have insufficient follow-up, be underpowered to detect relevant events, or include the wrong PBM for the decision-making body. Often this gap is bridged by “mapping”: estimating a relationship between observed clinical outcomes and PBMs, using data from a reference dataset containing both types of information. The estimated statistical model can then be used to predict what the PBM would have been in the clinical study given the available information. There are two approaches to mapping, linked to the structure of a PBM. The indirect approach (or response mapping) models the responses to the descriptive system using discrete data models. The expected health utility is calculated as a subsequent step using the estimated probability distribution of health states. The second approach (the direct approach) models the health state utility values directly. Statistical models routinely used in the past for mapping are unable to capture the idiosyncrasies of health utility data. Often they do not work well in practice and can give seriously biased estimates of the value of treatments. Although the bias could, in principle, go in any direction, in practice it tends to result in underestimation of cost-effectiveness and consequently distorted funding decisions. This has real effects on patients, clinicians, industry, and the general public. These problems have led some analysts to mistakenly conclude that mapping always induces biases and should be avoided. However, the development and use of more appropriate models has refuted this claim. The need to improve the quality of mapping studies led to the formation of the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) Mapping to Estimate Health State Utility Values from Non-Preference-Based Outcome Measures Task Force to develop good practice guidance in mapping.
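As a deliberately simplified sketch of the direct approach, the snippet below fits a linear mapping from a hypothetical clinical score to utility values in a simulated reference dataset, then uses it to predict utilities for trial patients who only have the clinical score; the data are simulated and the naive linear fit is used purely for exposition, since, as noted above, such models ignore the bounded, skewed nature of utility data.

    # Illustrative sketch of "direct" mapping: predict a preference-based utility
    # from a clinical outcome measure. All data here are simulated.
    import numpy as np

    rng = np.random.default_rng(0)

    # Reference dataset containing both the clinical score and the observed utility.
    clinical_score = rng.uniform(0, 100, size=500)              # hypothetical 0-100 scale
    utility = np.clip(
        0.2 + 0.008 * clinical_score + rng.normal(0, 0.08, size=500),  # assumed relationship
        None, 1.0,                                               # utilities capped at 1 (full health)
    )

    # Fit utility = a + b * score by ordinary least squares.
    X = np.column_stack([np.ones_like(clinical_score), clinical_score])
    coef, *_ = np.linalg.lstsq(X, utility, rcond=None)

    # Predict utilities in a trial that collected only the clinical score.
    trial_scores = np.array([35.0, 60.0, 85.0])
    predicted_utility = coef[0] + coef[1] * trial_scores
    print(predicted_utility)

A response-mapping (indirect) version would instead model the probability of each response level of the descriptive system with discrete data models and compute the expected utility from those predicted probabilities.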

Article

Ciaran N. Kohli-Lynch and Andrew H. Briggs

Cost-effectiveness analysis is conducted with the aim of maximizing population-level health outcomes given an exogenously determined budget constraint. Considerable health economic benefits can be achieved by reflecting heterogeneity in cost-effectiveness studies and implementing interventions based on this analysis. The following article describes forms of subgroups and heterogeneity in patient populations. It further discusses traditional decision rules employed in cost-effectiveness analysis and shows how these can be adapted to account for heterogeneity. It also discusses the theoretical basis for reflecting heterogeneity in cost-effectiveness analysis and the methodology that can be employed to conduct such analysis. Reflecting heterogeneity in cost-effectiveness analysis allows decision-makers to define limited-use criteria for treatments with a fixed price. This ensures that only those patients who are cost-effective to treat receive an intervention. Moreover, when price is not fixed, reflecting heterogeneity in cost-effectiveness analysis allows decision-makers to signal demand for healthcare interventions and ensure that payers achieve welfare gains when investing in health.
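As a worked toy example of adapting the standard decision rule to subgroups, the sketch below compares incremental net monetary benefit (threshold times incremental QALYs minus incremental cost) for a pooled population and for two hypothetical risk subgroups; all figures, subgroup labels, and the threshold are invented for illustration and are not taken from the article.

    # Illustrative sketch: applying the net-benefit decision rule to subgroups.
    # All figures (costs, QALYs, threshold, subgroup shares) are hypothetical.

    THRESHOLD = 20_000  # assumed willingness-to-pay per QALY

    # Hypothetical subgroups: population share, incremental cost, incremental QALYs.
    subgroups = {
        "low risk":  {"share": 0.6, "d_cost": 5_000, "d_qaly": 0.10},
        "high risk": {"share": 0.4, "d_cost": 6_000, "d_qaly": 0.55},
    }

    def inmb(d_cost, d_qaly, threshold=THRESHOLD):
        """Incremental net monetary benefit: positive means cost-effective to treat."""
        return threshold * d_qaly - d_cost

    # Decision rule applied to the pooled population (share-weighted averages).
    pooled_cost = sum(g["share"] * g["d_cost"] for g in subgroups.values())
    pooled_qaly = sum(g["share"] * g["d_qaly"] for g in subgroups.values())
    print("Pooled population INMB:", inmb(pooled_cost, pooled_qaly))

    # Decision rule applied separately to each subgroup (limited-use criteria).
    for name, g in subgroups.items():
        decision = "treat" if inmb(g["d_cost"], g["d_qaly"]) > 0 else "do not treat"
        print(f"{name}: INMB = {inmb(g['d_cost'], g['d_qaly']):.0f} -> {decision}")

In this toy example the pooled analysis would fund treatment for the whole population, whereas applying the rule by subgroup restricts use to the high-risk group, where treatment actually yields a positive net benefit.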

Article

The evidence produced by healthcare economic evaluation studies is a key component of any Health Technology Assessment (HTA) process designed to inform resource allocation decisions in a budget-limited context. To improve the quality (and harmonize the generation process) of such evidence, many HTA agencies have established methodological guidelines describing the normative framework inspiring their decision-making process. The information requirements that economic evaluation analyses for HTA must satisfy typically involve the use of complex quantitative syntheses of multiple available datasets, handling mixtures of aggregate and patient-level information, and the use of sophisticated statistical models for the analysis of non-Normal data (e.g., time-to-event, quality-of-life, and cost data). Much of the recent methodological research in economic evaluation for healthcare has developed in response to these needs, in terms of sound statistical decision-theoretic foundations, and is increasingly being formulated within a Bayesian paradigm. The rationale for this preference lies in the fact that by taking a probabilistic approach, based on decision rules and available information, a Bayesian economic evaluation study can explicitly account for relevant sources of uncertainty in the decision process and produce information to identify an “optimal” course of action. Moreover, the Bayesian approach naturally allows the incorporation of an element of judgment or evidence from different sources (e.g., expert opinion or multiple studies) into the analysis. This is particularly important when, as often occurs in economic evaluation for HTA, the evidence base is sparse and requires some inevitable mathematical modeling to bridge the gaps in the available data. The availability of free and open-source software over the last two decades has greatly reduced the computational costs and facilitated the application of Bayesian methods; it has the potential to improve the work of modelers and regulators alike, thus advancing the field of economic evaluation of healthcare interventions. This article provides an overview of the areas where Bayesian methods have contributed to addressing the methodological needs that stem from the normative framework adopted by a number of HTA agencies.
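A minimal sketch of the probabilistic logic described above is given below, using plain Monte Carlo simulation in Python rather than any particular Bayesian package; the two distributions stand in for posterior distributions that would come from an evidence synthesis, and their parameters and the threshold are invented for illustration.

    # Minimal sketch of a probabilistic (Bayesian-style) economic evaluation:
    # sample uncertain parameters, propagate them to incremental costs and effects,
    # and summarise decision uncertainty. All distributions and numbers are assumed.
    import numpy as np

    rng = np.random.default_rng(42)
    n_sim = 10_000
    threshold = 25_000  # assumed willingness-to-pay per QALY

    # Stand-ins for posterior distributions of the incremental effect and cost.
    d_qaly = rng.normal(loc=0.12, scale=0.05, size=n_sim)    # incremental QALYs
    d_cost = rng.gamma(shape=4.0, scale=500.0, size=n_sim)   # incremental costs

    # Incremental net monetary benefit for each simulated parameter set.
    inb = threshold * d_qaly - d_cost

    print("Expected incremental net benefit:", inb.mean())
    print("Probability cost-effective at threshold:", (inb > 0).mean())

Repeating the final calculation over a range of thresholds traces out a cost-effectiveness acceptability curve, one of the standard summaries of decision uncertainty in this literature.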