1-8 of 8 Results

  • Keywords: methods

Article

Multi-criteria decision analysis (MCDA) is increasingly used to support healthcare decision-making. MCDA involves decision makers evaluating the alternatives under consideration based on the explicit weighting of criteria relevant to the overarching decision—in order to, depending on the application, rank (or prioritize) or choose between the alternatives. A prominent example of MCDA applied to healthcare decision-making, which has received considerable attention in recent years and is the main subject of this article, is choosing which health “technologies” (i.e., drugs, devices, procedures, etc.) to fund—a process known as health technology assessment (HTA). Other applications include prioritizing patients for surgery, prioritizing diseases for R&D, and decision-making about licensing treatments. Most applications are based on weighted-sum models. Such models involve explicitly weighting the criteria and rating the alternatives on the criteria, with each alternative’s “performance” on the criteria aggregated using a linear (i.e., additive) equation to produce the alternative’s “total score,” by which the alternatives are ranked. The steps involved in an MCDA process are explained, including an overview of methods for scoring alternatives on the criteria and weighting the criteria. The steps are: structuring the decision problem being addressed, specifying criteria, measuring alternatives’ performance, scoring alternatives on the criteria and weighting the criteria, applying the scores and weights to rank the alternatives, and presenting the MCDA results, including sensitivity analysis, to decision makers to support their decision-making. Arguments recently advanced against using MCDA for HTA, and counterarguments, are also considered. Finally, five questions associated with how MCDA for HTA is operationalized are discussed: Whose preferences are relevant for MCDA? Should criteria and weights be decision-specific or identical for repeated applications? How should cost or cost-effectiveness be included in MCDA? How can the opportunity cost of decisions be captured in MCDA? How can uncertainty be incorporated into MCDA?
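A minimal sketch of the weighted-sum aggregation described above may help fix ideas; the criteria, weights, and scores below are hypothetical and are not taken from the article.

```python
# Weighted-sum MCDA sketch: each alternative's total score is the sum of
# criterion weights times its scores, and alternatives are ranked by total score.
# Criteria, weights, and scores are hypothetical.

criteria_weights = {"effectiveness": 0.5, "safety": 0.3, "equity": 0.2}  # weights sum to 1

# Each alternative's score on each criterion, on a common 0-100 scale.
alternatives = {
    "Technology A": {"effectiveness": 80, "safety": 60, "equity": 40},
    "Technology B": {"effectiveness": 60, "safety": 90, "equity": 70},
}

def total_score(scores, weights):
    """Linear (additive) aggregation: sum of weight * score over criteria."""
    return sum(weights[c] * scores[c] for c in weights)

ranked = sorted(alternatives,
                key=lambda a: total_score(alternatives[a], criteria_weights),
                reverse=True)
for alt in ranked:
    print(alt, round(total_score(alternatives[alt], criteria_weights), 1))
```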

Article

Marjon van der Pol and Alastair Irvine

The interest in eliciting time preferences for health has increased rapidly since the early 1990s. It has two main sources: a concern over the appropriate methods for taking timing into account in economic evaluations, and a desire to obtain a better understanding of individual health and healthcare behaviors. The literature on empirical time preferences for health has developed innovative elicitation methods in response to specific challenges arising from the special nature of health. The health domain has also shown a willingness to explore a wider range of underlying models compared to the monetary domain. Consideration of time preferences for health raises a number of questions. Are time preferences for health similar to those for money? What are the additional challenges when measuring time preferences for health? How do individuals make decisions in time-preference-for-health experiments? Is it possible or necessary to incentivize such experiments?
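As a rough illustration of the kind of underlying models the abstract alludes to, the sketch below contrasts exponential and hyperbolic discounting of a future health gain; the functional forms are standard in the time-preference literature, but the parameter values are hypothetical.

```python
# Sketch: exponential vs. hyperbolic discount weights applied to a health
# outcome received t years from now. Parameter values are hypothetical.

def exponential_discount(t, r=0.03):
    """Constant-rate discounting: weight = 1 / (1 + r)**t."""
    return 1.0 / (1.0 + r) ** t

def hyperbolic_discount(t, k=0.10):
    """Hyperbolic discounting: weight = 1 / (1 + k*t), declining rate over time."""
    return 1.0 / (1.0 + k * t)

for t in (1, 5, 10, 20):
    print(t, round(exponential_discount(t), 3), round(hyperbolic_discount(t), 3))
```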

Article

While machine learning (ML) methods have received a lot of attention in recent years, these methods are primarily designed for prediction. Empirical researchers conducting policy evaluations are, on the other hand, preoccupied with causal problems, trying to answer counterfactual questions: what would have happened in the absence of a policy? Because these counterfactuals can never be directly observed (the “fundamental problem of causal inference”), prediction tools from the ML literature cannot be readily used for causal inference. In the last decade, major innovations have taken place incorporating supervised ML tools into estimators for causal parameters such as the average treatment effect (ATE). This holds the promise of attenuating model misspecification issues and increasing transparency in model selection. One particularly mature strand of the literature includes approaches that incorporate supervised ML in the estimation of the ATE of a binary treatment, under the unconfoundedness and positivity assumptions (also known as the exchangeability and overlap assumptions). This article begins by reviewing popular supervised machine learning algorithms, including tree-based methods and the lasso, as well as ensembles, with a focus on the Super Learner. Then, some specific uses of machine learning for treatment effect estimation are introduced and illustrated, namely (1) creating balance between treated and control groups, (2) estimating so-called nuisance models (e.g., the propensity score, or conditional expectations of the outcome) in semi-parametric estimators that target causal parameters (e.g., targeted maximum likelihood estimation or the double ML estimator), and (3) using machine learning for variable selection in situations with a high number of covariates. Since there is no universal best estimator, whether parametric or data-adaptive, it is best practice to incorporate a semi-automated approach that can select the models best supported by the observed data, thus attenuating the reliance on subjective choices.
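The sketch below illustrates the general flavor of use (2): a doubly robust (AIPW) estimate of the ATE with cross-fitted ML nuisance models, in the spirit of the double ML and TMLE-style estimators mentioned above. It is not the article's implementation; the data are simulated and the choice of random forests is an assumption made for illustration.

```python
# Doubly robust (AIPW) ATE estimate with cross-fitted ML nuisance models.
# Simulated data; illustrative only, not the article's implementation.

import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, p = 2000, 5
X = rng.normal(size=(n, p))
propensity = 1 / (1 + np.exp(-X[:, 0]))               # true propensity score
A = rng.binomial(1, propensity)                        # binary treatment
Y = 2 * A + X[:, 0] + X[:, 1] + rng.normal(size=n)     # true ATE = 2

mu_hat = np.zeros((n, 2))   # cross-fitted outcome predictions under A=0 and A=1
e_hat = np.zeros(n)         # cross-fitted propensity scores

for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    ps_model = RandomForestClassifier(n_estimators=200, random_state=0)
    ps_model.fit(X[train], A[train])
    e_hat[test] = ps_model.predict_proba(X[test])[:, 1]

    out_model = RandomForestRegressor(n_estimators=200, random_state=0)
    out_model.fit(np.column_stack([A[train], X[train]]), Y[train])
    mu_hat[test, 0] = out_model.predict(np.column_stack([np.zeros(len(test)), X[test]]))
    mu_hat[test, 1] = out_model.predict(np.column_stack([np.ones(len(test)), X[test]]))

e_hat = np.clip(e_hat, 0.01, 0.99)  # guard the positivity/overlap assumption numerically
aipw = (mu_hat[:, 1] - mu_hat[:, 0]
        + A * (Y - mu_hat[:, 1]) / e_hat
        - (1 - A) * (Y - mu_hat[:, 0]) / (1 - e_hat))
print("AIPW ATE estimate:", aipw.mean())
```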

Article

David Wolf and H. Allen Klaiber

The value of a differentiated product is simply the sum of its parts. This concept is easily observed in housing markets, where the price of a home is determined by the underlying bundle of attributes that define it and by the price households are willing to pay for each attribute. These prices are referred to as implicit prices because their value is indirectly revealed through the price of another product (typically a home), and they are of interest because they reveal the value of goods, such as nearby public amenities, that would otherwise remain unknown. This concept was first formalized into a tractable theoretical framework by Rosen and is known as the hedonic pricing method. The two-stage hedonic method requires the researcher to map housing attributes into housing price using an equilibrium price function. Information recovered from the first stage is then used to recover inverse demand functions for nonmarket goods in the second stage, which are required for nonmarginal welfare evaluation. Researchers have rarely implemented the second stage, however, due to limited data availability, specification concerns, and the inability to correct for simultaneity bias between price and quality. As policies increasingly seek to deliver large, nonmarginal changes in public goods, the need to estimate the hedonic second stage is becoming more pressing. Greater effort therefore needs to be made to establish a set of best practices within the second stage, many of which can be developed using methods established in the extensive first-stage literature.
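A minimal sketch of a first-stage hedonic regression may clarify what an implicit price is: regress (log) home price on attributes and read the marginal implicit price of a nonmarket good off the fitted coefficient. The data, variable names, and functional form below are hypothetical assumptions for illustration, not the article's specification.

```python
# First-stage hedonic sketch: log price regressed on housing attributes; the
# coefficient on distance-to-park is converted into a marginal implicit price.
# Simulated data; variable names and functional form are hypothetical.

import numpy as np

rng = np.random.default_rng(1)
n = 1000
sqft = rng.normal(2000, 400, n)               # square footage
dist_park = rng.uniform(0.1, 5.0, n)          # miles to nearest park (the nonmarket good)
log_price = 11 + 0.0004 * sqft - 0.05 * dist_park + rng.normal(0, 0.1, n)

X = np.column_stack([np.ones(n), sqft, dist_park])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)

# In a log-linear hedonic, dP/dz = beta * P, so the coefficient times the mean
# price approximates the marginal implicit price of the amenity.
mean_price = np.exp(log_price).mean()
print("implicit price of an extra mile from a park:", round(beta[2] * mean_price, 0))
```

The second stage, rarely implemented as the abstract notes, would use such implicit prices across markets or households to trace out inverse demand functions for the amenity.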

Article

The current discontent with the dominant macroeconomic theory paradigm, known as Dynamic Stochastic General Equilibrium (DSGE) models, calls for an appraisal of the methods and strategies employed in studying and modeling macroeconomic phenomena using aggregate time series data. The appraisal pertains to the effectiveness of these methods and strategies in accomplishing the primary objective of empirical modeling: to learn from data about phenomena of interest. The co-occurring developments in macroeconomics and econometrics since the 1930s provide the backdrop for the appraisal, with the Keynes vs. Tinbergen controversy at center stage. The overall appraisal is that the DSGE paradigm gives rise to estimated structural models that are both statistically and substantively misspecified, yielding untrustworthy evidence that contributes very little, if anything, to real learning from data about macroeconomic phenomena. A primary contributor to the untrustworthiness of the evidence is the traditional econometric perspective of viewing empirical modeling as curve-fitting (structural models), guided by impromptu error-term assumptions and evaluated on goodness-of-fit grounds. Regrettably, excellent fit is neither necessary nor sufficient for the reliability of inference and the trustworthiness of the ensuing evidence. Recommendations on how to improve the trustworthiness of empirical evidence revolve around a broader, model-based (non-curve-fitting) modeling framework that attributes cardinal roles to both theory and data without undermining the credibility of either source of information. Two crucial distinctions hold the key to securing the trustworthiness of evidence. The first distinguishes between modeling (specification, misspecification testing, and respecification) and inference, and the second between a substantive (structural) model and a statistical model (the probabilistic assumptions imposed on the particular data). This enables one to establish statistical adequacy (the validity of these assumptions) before relating the statistical model to the structural model and posing questions of interest to the data. The greatest enemy of learning from data about macroeconomic phenomena is not the absence of an alternative and more coherent empirical modeling framework, but the illusion that foisting highly formal structural models on the data can give rise to such learning just because their construction and curve-fitting rely on seemingly sophisticated tools. Regrettably, applying sophisticated tools to a statistically and substantively misspecified DSGE model does nothing to restore the trustworthiness of the evidence stemming from it.
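To make the statistical-adequacy step concrete, the sketch below checks whether the probabilistic assumptions of a simple fitted statistical model are consistent with the data before any structural questions are posed. The AR(1) model, simulated series, and choice of diagnostic tests are illustrative assumptions, not the article's procedure.

```python
# Statistical adequacy sketch: fit a simple time-series model, then run
# misspecification tests on its residuals (autocorrelation, normality,
# conditional heteroskedasticity). Simulated data; illustrative only.

import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.stats.diagnostic import acorr_ljungbox, het_arch
from statsmodels.stats.stattools import jarque_bera

rng = np.random.default_rng(2)
y = np.zeros(300)
for t in range(1, 300):                      # simulate an AR(1) process
    y[t] = 0.7 * y[t - 1] + rng.normal()

res = AutoReg(y, lags=1).fit()
resid = res.resid

# Small p-values flag departures from the model's probabilistic assumptions,
# calling for respecification before the model is related to any structural claims.
print(acorr_ljungbox(resid, lags=[10]))      # residual autocorrelation
print(jarque_bera(resid))                    # normality: JB stat, p-value, skew, kurtosis
print(het_arch(resid)[1])                    # ARCH LM test p-value
```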

Article

In order to secure effective service access, coverage, and impact, it is increasingly recognized that the introduction of novel health technologies such as diagnostics, drugs, and vaccines may require additional investment to address the constraints under which many health systems operate. Health-system constraints include shortages of health workers, ineffective supply chains, and inadequate information systems, as well as organizational constraints such as weak incentives and poor service integration. Decision makers may be faced with the question of whether to invest in a new technology, including the specific health system strengthening needed to ensure effective implementation; or they may be seeking to optimize resource allocation across a range of interventions, including investment in broad health system functions or platforms. Investment in measures to address health-system constraints therefore increasingly needs to undergo economic evaluation, but this poses several methodological challenges for health economists, particularly in the context of low- and middle-income countries. Designing the appropriate analysis to inform investment decisions that combine new technologies with health-system investment can be broken down into several steps. First, the analysis needs to comprehensively outline the interface between the new intervention and the system through which it is to be delivered, in order to identify the relevant constraints and the measures needed to relax them. Second, the analysis needs to be rooted in a theoretical approach that appropriately characterizes constraints and considers joint investment in the health system and technology. Third, the analysis needs to consider how the overarching priority-setting process influences the scope and output of the analysis, informing the way in which complex evidence is used to support the decision, including how to represent and manage system-wide trade-offs. Finally, there are several ways in which decision-analytical models can be structured and parameterized in a context of data scarcity around constraints. This article draws together current approaches to health-system thinking with the emerging literature on analytical approaches to integrating health-system constraints into economic evaluation, guiding economists through these four issues. It aims to contribute to a more health-system-informed approach to both appraising the cost-effectiveness of new technologies and setting priorities across a range of program activities.
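A stylized sketch of how a health-system constraint might be parameterized in a simple decision analysis is given below: a coverage constraint caps the share of the target population a new technology reaches unless a complementary system investment relaxes it. All quantities are hypothetical and the structure is only one of many the article might consider.

```python
# Stylized decision-analysis sketch: cost-effectiveness of jointly investing in
# a new technology and in relaxing a health-system (coverage) constraint.
# All numbers are hypothetical.

population = 100_000
dalys_averted_per_person = 0.05
cost_per_person = 30.0

def evaluate(coverage, system_investment=0.0):
    """Return (total cost, DALYs averted) at a given achievable coverage."""
    treated = population * coverage
    cost = treated * cost_per_person + system_investment
    effect = treated * dalys_averted_per_person
    return cost, effect

# Technology alone: a weak supply chain limits achievable coverage to 40%.
c0, e0 = evaluate(coverage=0.40)
# Technology plus supply-chain strengthening: coverage rises to 80%.
c1, e1 = evaluate(coverage=0.80, system_investment=500_000)

icer = (c1 - c0) / (e1 - e0)   # incremental cost per DALY averted of the joint investment
print("ICER of relaxing the constraint:", round(icer, 1))
```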

Article

International transactions are riskier than domestic transactions for several reasons, including, but not limited to, geographical distance, longer shipping times, greater informational frictions, and contract enforcement and dispute resolution problems. Such risks stem, fundamentally, from a timing mismatch between payment and delivery in business transactions. Trade finance plays a critical role in bridging this gap, thereby overcoming the greater risks inherent in international trade. It is thus even described as the lifeline of international trade, because more than 90% of international transactions involve some form of credit, insurance, or guarantee. Despite its importance in international trade, however, it was not until the great trade collapse of 2008–2009 that trade finance came to the attention of academic researchers. An emerging literature on trade finance has contributed to providing answers to questions such as: Who is responsible for financing transactions and, hence, who would most need liquidity support to sustain international trade? This is particularly relevant in developing countries, where the lack of trade finance is often identified as the main hindrance to trade, and in times of financial crisis, when the overall drying up of trade finance could lead to a global collapse in trade.

Article

Denzil G. Fiebig and Hong Il Yoo

Stated preference methods are used to collect individual-level data on what respondents say they would do when faced with a hypothetical but realistic situation. The hypothetical nature of the data has long been a source of concern among researchers, as such data stand in contrast to revealed preference data, which record the choices made by individuals in actual market situations. But there is considerable support for stated preference methods: they are a cost-effective means of generating data that can be specifically tailored to a research question, and in some cases, such as gauging preferences for a new product or non-market good, there may be no practical alternative source of data. While stated preference data come in many forms, the primary focus in this article is on data generated by discrete choice experiments, and thus the econometric methods are those associated with modeling binary and multinomial choices with panel data.
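As a small illustration of the kind of econometric model typically applied to discrete choice experiment data, the sketch below estimates a conditional (multinomial) logit by maximum likelihood. The attributes, choice sets, and coefficients are simulated and hypothetical; this is a generic textbook-style estimator, not the article's specific method.

```python
# Conditional (multinomial) logit sketch for discrete choice experiment data,
# estimated by maximum likelihood. Simulated choice tasks; illustrative only.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n_choices, n_alts, n_attrs = 500, 3, 2
X = rng.normal(size=(n_choices, n_alts, n_attrs))        # alternative attributes
true_beta = np.array([1.0, -0.5])
utility = X @ true_beta + rng.gumbel(size=(n_choices, n_alts))
choice = utility.argmax(axis=1)                          # chosen alternative per task

def neg_log_likelihood(beta):
    v = X @ beta                                         # systematic utilities
    v -= v.max(axis=1, keepdims=True)                    # numerical stability
    log_prob = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n_choices), choice].sum()

result = minimize(neg_log_likelihood, x0=np.zeros(n_attrs), method="BFGS")
print("estimated attribute coefficients:", result.x)
```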