Health status measurement issues arise across a wide spectrum of applications in empirical health economics research as well as in public policy, clinical, and regulatory contexts. It is fitting that economists and other researchers working in these domains devote scientific attention to the measurement of those phenomena most central to their investigations. While often accepted and used uncritically, the particular measures of health status used in empirical investigations can have sometimes subtle but nonetheless important implications for research findings and policy action. How health is characterized and measured at the individual level and how such individual-level measures are summarized to characterize the health of groups and populations are entwined considerations. Such measurement issues have become increasingly salient given the wealth of health data available from population surveys, administrative sources, and clinical records in which researchers may be confronted with competing options for how they go about characterizing and measuring health. While recent work in health economics has seen significant advances in the econometric methods used to estimate and interpret quantities like treatment effects, the literature has seen less focus on some of the central measurement issues necessarily involved in such exercises. As such, increased attention ought to be devoted to measuring and understanding health status concepts that are relevant to decision makers’ objectives as opposed to those that are merely statistically convenient.
Economists have long regarded health care as a unique and challenging area of economic activity on account of the specialized knowledge of health care professionals (HCPs) and the relatively weak market mechanisms that operate. This places a consideration of how motivation and incentives might influence performance at the center of research. As in other domains economists have tended to focus on financial mechanisms and when considering HCPs have therefore examined how existing payment systems and potential alternatives might affect behavior. There has long been a concern that simple arrangements such as fee-for-service, capitation, and salary payments might induce poor performance, and that has led to extensive investigation, both theoretical and empirical, on the linkage between payment and performance. An extensive and rapidly expanding field in economics, contract theory and mechanism design, has been applied to study these issues. The theory has highlighted both the potential benefits and the risks of incentive schemes to deal with the information asymmetries that abound in health care. There has been some expansion of such schemes in practice but these are often limited in application and the evidence for their effectiveness is mixed. Understanding why there is this relatively large gap between concept and application gives a guide to where future research can most productively be focused.
Hendrik Schmitz and Svenja Winkler
The terms information and risk aversion play central roles in healthcare economics. While risk aversion is among the main reasons for the existence of health insurance, information asymmetries between the insured individual and the insurance company potentially lead to moral hazard or adverse selection. This has implications for the optimal design of health insurance contracts, but whether there is indeed moral hazard or adverse selection are ultimately empirical questions. Recently, there has even been a debate over whether the opposite of adverse selection—advantageous selection—prevails. Private information on risk aversion might outweigh information asymmetries regarding risk type and lead to more insurance coverage of healthy individuals (instead of less insurance coverage in adverse selection).
Information and risk preferences are important not only in health insurance but more generally in health economics. For instance, they affect health behavior and, consequently, health outcomes. The degree of risk aversion, the ability to perceive risks, and the availability of information about risks partly explain why some individuals engage in unhealthy behavior while others refrain from smoking, drinking, or the like.
Information has several dimensions. Apart from information on one’s personal health status, risk preferences, or health risks, consumer information on provider quality or health insurance supply is central in the economics of healthcare. Even though healthcare systems are necessarily highly regulated throughout the world, all systems at least allow for some market elements. These typically include the possibility of consumer choice, for instance, regarding health insurance coverage or choice of medical provider. An important question is whether consumer choice elements work in the healthcare sector—that is, whether consumers actually make rational or optimal decisions—and whether more information can improve decision quality.
Joni Hersch and Blair Druhan Bullock
The labor market is governed by a panoply of laws, regulating virtually all aspects of the employment relation, including hiring, firing, information exchange, privacy, workplace safety, work hours, minimum wages, and access to courts for redress of violations of rights. Antidiscrimination laws, especially Title VII, notably prohibit employment discrimination on the basis of race, color, religion, sex, and national origin. Court decisions and legislation have led to the extension of protection to a far wider range of classes and types of workplace behavior than Title VII originally covered.
The workplace of the early 21st century is very different from the workplace when the major employment discrimination statutes were enacted, as these laws were conceived as regulating an employer–employee relationship in a predominantly white male labor market. Prior emphasis on employment discrimination on the basis of race and sex has been superseded by enhanced attention to sexual harassment and discrimination on the basis of disability, sexual orientation, gender identity, and religion. Concerns over the equity or efficiency of the employment-at-will doctrine recede in a workforce in which workers are increasingly categorized as independent contractors who are not covered by most equal employment laws. As the workplace has changed, the scholarship on the law and economics of employment law has been slow to follow.
Law and economics is an important, growing field of specialization for both legal scholars and economists. It applies efficiency analysis to property, contracts, torts, procedure, and many other areas of the law. The use of economics as a methodology for understanding law is not immune to criticism. The rationality assumption and the efficiency principle have been intensively debated. Overall, the field has advanced in recent years by incorporating insights from psychology and other social sciences. In that respect, many questions concerning the efficiency of legal rules and norms remain open and turn on a multifaceted balance among diverse costs and benefits. The role of courts in explaining economic performance is a more specific area of analysis that emerged in the late 1990s. The relationship between law and economic growth is complex and debatable. An important literature has pointed to significant differences at the macro-level between the Anglo-American common law family and the civil law families. Although these initial results have been heavily scrutinized, other important subjects have surfaced such as convergence of legal systems, transplants, infrastructure of legal systems, rule of law and development, among others.
Life-cycle choices and outcomes over financial (e.g., savings, portfolio, work) and health-related variables (e.g., medical spending, habits, sickness, and mortality) are complex and intertwined. Indeed, labor/leisure choices can both affect and be conditioned by health outcomes; precautionary saving is determined by exposure to sickness and longevity risks, both of which can be altered through preventive medical and leisure decisions. Moreover, inevitable aging induces changes in the incentives and in the constraints for investing in one’s own health and saving resources for old age. Understanding these pathways poses numerous challenges for economic models.
Life-cycle data indicate continuous declines in health status and associated increases in exposure to morbidity, medical expenses, and mortality risks, with accelerating post-retirement dynamics. Theory suggests that risk-averse and forward-looking agents should rely on available instruments to insure against these risks. Indeed, market- and state-provided health insurance (e.g., Medicare) cover curative medical expenses. High end-of-life home and nursing-home expenses can be hedged through privately or publicly provided (e.g., Medicaid) long-term care insurance. The risk of outliving one’s financial resources can be hedged through annuities. The risk of not living long enough can be insured through life insurance.
In practice, however, the recourse to these hedging instruments remains less than predicted by theory. The slow wealth drawdown observed after retirement is not explained by bequest motives and suggests precautionary motives against health-related expenses. The excessive reliance on public pensions (e.g., Social Security) and the post-retirement drop in consumption not related to work or health are both indicative of insufficient financial preparedness and run counter to consumption smoothing objectives. Moreover, the capacity to self-insure through preventive care and healthy habits is limited when aging is factored in. In conclusion, the observed health and financial life-cycle dynamics remain challenging for economic theory.
Long memory models are statistical models that describe strong correlation or dependence across time series data. This kind of phenomenon is often referred to as “long memory” or “long-range dependence.” It refers to persisting correlation between distant observations in a time series. For scalar time series observed at equal intervals of time that are covariance stationary, so that the mean, variance, and autocovariances (between observations separated by a lag j) do not vary over time, it typically implies that the autocovariances decay so slowly, as j increases, as not to be absolutely summable. However, it can also refer to certain nonstationary time series, including ones with an autoregressive unit root, that exhibit even stronger correlation at long lags. Evidence of long memory has often been found in economic and financial time series, where the noted extension to possible nonstationarity can cover many macroeconomic time series, as well as in such fields as astronomy, agriculture, geophysics, and chemistry.
As long memory is now a technically well developed topic, formal definitions are needed. But by way of partial motivation, long memory models can be thought of as complementary to the very well known and widely applied stationary and invertible autoregressive and moving average (ARMA) models, whose autocovariances are not only summable but decay exponentially fast as a function of lag j. Such models are often referred to as “short memory” models, because there is negligible correlation across distant time intervals. These models are often combined with the most basic long memory ones, however, because together they offer the ability to describe both short and long memory features in many time series.
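To make the contrast concrete, the following Python sketch simulates a fractionally integrated ARFIMA(0, d, 0) series via a truncated MA(∞) representation and compares its sample autocorrelations with those of a short memory AR(1). All parameter values (d = 0.4, AR coefficient 0.5, series length) are illustrative choices, not taken from any particular study.

```python
import numpy as np

def simulate_arfima0d0(n, d, burn=1000, seed=0):
    """Simulate a fractionally integrated ARFIMA(0, d, 0) series
    via a truncated MA(infinity) representation."""
    rng = np.random.default_rng(seed)
    m = n + burn
    # MA coefficients psi_k = Gamma(k + d) / (Gamma(d) Gamma(k + 1)),
    # computed recursively: psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k
    psi = np.ones(m)
    for k in range(1, m):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    e = rng.standard_normal(m)
    x = np.convolve(e, psi)[:m]   # x_t = sum_k psi_k * e_{t-k}
    return x[burn:]

def sample_acf(x, nlags):
    """Sample autocorrelations at lags 1..nlags."""
    x = x - x.mean()
    denom = x @ x
    return np.array([(x[:-k] @ x[k:]) / denom for k in range(1, nlags + 1)])

x_long = simulate_arfima0d0(5000, d=0.4)   # long memory: ACF decays like j**(2d - 1)
rng = np.random.default_rng(1)
e = rng.standard_normal(5000)
x_short = np.empty(5000)                   # short memory AR(1): ACF decays like 0.5**j
x_short[0] = e[0]
for t in range(1, 5000):
    x_short[t] = 0.5 * x_short[t - 1] + e[t]

acf_long = sample_acf(x_long, 50)
acf_short = sample_acf(x_short, 50)
```

At lag 50 the AR(1) autocorrelation is numerically indistinguishable from zero (0.5 to the 50th power), while the fractionally integrated series retains clearly visible correlation: the hallmark of autocovariances that are not absolutely summable.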
David A. Hyman and Charles Silver
Medical malpractice is the best studied aspect of the civil justice system. But the subject is complicated, and there are heated disputes about basic facts. For example, are premium spikes driven by factors that are internal (i.e., number of claims, payout per claim, and damage costs) or external to the system? How large (or small) is the impact of a damages cap? Do caps have a bigger impact on the number of cases that are brought or the payment in the cases that remain? Do blockbuster verdicts cause defendants to settle cases for more than they are worth? Do caps attract physicians? Do caps reduce healthcare spending—and by how much? How much does it cost to resolve the high percentage of cases in which no damages are recovered? What is the comparative impact of a cap on noneconomic damages versus a cap on total damages?
Other disputes involve normative questions. Is there too much med mal litigation or not enough? Are damage caps fair? Is the real problem bad doctors or predatory lawyers—or some combination of both?
This article summarizes the empirical research on the performance of the med mal system, and highlights some areas for future research.
The majority of econometric models ignore the fact that many economic time series are sampled at different frequencies. A burgeoning literature pertains to econometric methods explicitly designed to handle data sampled at different frequencies. Broadly speaking, these methods fall into two categories: (a) parameter driven, typically involving a state space representation, and (b) data driven, usually based on a mixed-data sampling (MIDAS)-type regression setting or related methods. The realm of applications of the class of mixed frequency models includes nowcasting—which is defined as the prediction of the present—as well as forecasting—typically the very near future—taking advantage of mixed frequency data structures. For multiple horizon forecasting, the topic of MIDAS regressions also relates to research regarding direct versus iterated forecasting.
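As a rough sketch of the data-driven approach, the toy MIDAS regression below lets a quarterly variable depend on the 12 most recent monthly values of a regressor, aggregated with an exponential Almon lag polynomial, and recovers the parameters by nonlinear least squares. All variable names and parameter values here are illustrative, not drawn from any particular application.

```python
import numpy as np
from scipy.optimize import minimize

def exp_almon_weights(theta1, theta2, nlags):
    """Exponential Almon lag polynomial, normalized to sum to one."""
    k = np.arange(nlags)
    w = np.exp(theta1 * k + theta2 * k**2)
    return w / w.sum()

rng = np.random.default_rng(0)
n_q = 200                                 # number of quarterly observations
x_m = rng.standard_normal(3 * n_q + 12)   # monthly regressor

def midas_term(theta1, theta2):
    """For each quarter t, the weighted sum of the 12 most recent monthly values."""
    w = exp_almon_weights(theta1, theta2, 12)
    return np.array([w @ x_m[3 * t + 12 : 3 * t : -1] for t in range(n_q)])

# Simulated quarterly target: intercept 0.5, slope 2.0, decaying weights.
y = 0.5 + 2.0 * midas_term(-0.1, -0.05) + 0.3 * rng.standard_normal(n_q)

def ssr(params):
    """Sum of squared residuals of the MIDAS regression."""
    a, b, t1, t2 = params
    resid = y - a - b * midas_term(t1, t2)
    return resid @ resid

res = minimize(ssr, x0=[0.0, 1.0, 0.0, 0.0], method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-8})
a_hat, b_hat, t1_hat, t2_hat = res.x
```

The two-parameter Almon polynomial is what keeps the regression parsimonious: instead of estimating 12 unrestricted lag coefficients, only the slope and two shape parameters are estimated, whatever the frequency mismatch.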
Audrey Laporte and Brian S. Ferguson
One of the implications of the human capital literature of the 1960s was that a great many decisions individuals make that have consequences not just for the point in time when the decision is being made but also for the future can be thought of as involving investments in certain types of capital. In health economics, this led Michael Grossman to propose the concept of health capital, which refers not just to the individual’s illness status at any point in time, but to the more fundamental factors that affect the likelihood that she will be ill at any point in her life and also affect her life expectancy at each age. In Grossman’s model, an individual purchased health-related commodities that act through a health production function to improve her health. These commodities could be medical care, which could be seen as repair expenditures, or factors such as diet and exercise, which could be seen as ongoing additions to her health—the counterparts of adding savings to her financial capital on a regular basis. The individual was assumed to make decisions about her level of consumption of these commodities as part of an intertemporal utility-maximizing process that incorporated, through a budget constraint, the need to make tradeoffs between health-related goods and goods that had no health consequences. Pauline Ippolito showed that the same analytical techniques could be used to consider goods that were bad for health in the long run—bad diet and smoking, for example—still within the context of lifetime utility maximization. This raised the possibility that an individual might rationally take actions that were bad for her health in the long run. The logical extension of considering smoking as bad was adding recognition that smoking and other bad health habits were addictive. 
The notion of addictive commodities was already present in the literature on consumer behavior, but the consensus in that literature was that it was extremely difficult, if not impossible, to distinguish between a rational addict and a completely myopic consumer of addictive goods. Gary Becker and Kevin Murphy proposed an alternative approach to modeling a forward-looking, utility-maximizing consumer’s consumption of addictive commodities, based on the argument that an individual’s degree of addiction could be modeled as addiction capital, which could be used to tackle the empirical problems that the consumer expenditure literature had experienced. That model has become the most widely used framework for empirical research by economists into the consumption of addictive goods, and, while the concept of rationality in addiction remains controversial, the Becker-Murphy framework also provides a basis for testing various alternative models of the consumption of addictive commodities, most notably those based on versions of time-inconsistent intertemporal decision making.
Vincenzo Atella and Joanna Kopinska
New sanitation and health technology applied to treatments, procedures, and devices is constantly revolutionizing epidemiological patterns. Since the early 1900s it has been responsible for significant improvements in population health by turning once-deadly diseases into curable or preventable conditions, by expanding the existing cures to more patients and diseases, and by simplifying procedures for both medical and organizational practices. Notwithstanding the benefits of technological progress for population health, the innovation process is also an important driver of health expenditure growth across all countries. Technological progress generates additional financial burden and expands the volume of services provided, which constitutes a concern from an economic point of view. Moreover, the evolution of technology costs and their impact on healthcare spending is difficult to predict due to the revolutionary nature of many innovations and their adoption. In this respect, the challenge for policymakers is to discourage overadoption of ineffective, unnecessary, and inappropriate technologies. This task has long been carried out through regulation, which according to standard economic theory is the only response to market failures and socially undesirable outcomes of healthcare markets left on their own. The potential welfare loss of a market failure must be confronted with the costs of regulatory activities. While health technology evolution delivers important value for patients and societies, it will continue to pose important challenges for already overextended public finances.
Many nonlinear time series models have been around for a long time and have originated outside of time series econometrics. The most popular stochastic models—univariate, dynamic single-equation, and vector autoregressive—are presented and their properties considered. Deterministic nonlinear models are not reviewed. The use of nonlinear vector autoregressive models in macroeconometrics seems to be increasing, and because this may be viewed as a rather recent development, they receive somewhat more attention than their univariate counterparts. Vector threshold autoregressive, smooth transition autoregressive, Markov-switching, and random coefficient autoregressive models are covered along with nonlinear generalizations of vector autoregressive models with cointegrated variables. Two nonlinear panel models, although not typically macroeconometric models, have also been frequently applied to macroeconomic data. The use of all these models in macroeconomics is highlighted with applications in which model selection, an often difficult issue in nonlinear models, has received due attention. Given the large number of nonlinear time series models, no unique best method of choosing between them seems to be available.
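As a minimal univariate illustration of the threshold idea, the Python sketch below simulates a two-regime self-exciting threshold autoregressive (SETAR) model and recovers the threshold by least-squares grid search over sample quantiles, the standard approach. The regime coefficients, threshold location, and trimming fractions are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
y = np.zeros(n)
# Two-regime SETAR(1): persistence 0.8 below the threshold 0, -0.3 above it.
for t in range(1, n):
    phi = 0.8 if y[t - 1] <= 0.0 else -0.3
    y[t] = phi * y[t - 1] + rng.standard_normal()

def setar_ssr(c):
    """Sum of squared residuals from fitting a separate AR(1) slope
    in each regime defined by threshold c on y_{t-1}."""
    ylag, ycur = y[:-1], y[1:]
    total = 0.0
    for mask in (ylag <= c, ylag > c):
        x, z = ylag[mask], ycur[mask]
        phi_hat = (x @ z) / (x @ x)
        resid = z - phi_hat * x
        total += resid @ resid
    return total

# Grid search over interior sample quantiles; trimming keeps enough
# observations in each regime for estimation.
grid = np.quantile(y, np.linspace(0.15, 0.85, 71))
c_hat = min(grid, key=setar_ssr)

# Regime-specific slope estimates at the selected threshold.
ylag, ycur = y[:-1], y[1:]
lo = ylag <= c_hat
phi_lo = (ylag[lo] @ ycur[lo]) / (ylag[lo] @ ylag[lo])
phi_hi = (ylag[~lo] @ ycur[~lo]) / (ylag[~lo] @ ylag[~lo])
```

The same split-and-search logic extends, with more bookkeeping, to the vector threshold autoregressive case; the smooth transition variant replaces the hard regime indicator with a continuous transition function.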
The literature on optimum currency areas differs from that on other topics in economic theory in a number of notable respects. Most obviously, the theory is framed in verbal rather than mathematical terms. Mundell’s seminal article coining the term and setting out the theory’s basic propositions relied entirely on words rather than equations. The same was true of subsequent contributions focusing on the sectoral composition of activity and the role of fiscal flows. A handful of more recent articles specified and analyzed formal mathematical models of optimum currency areas. But it is safe to say that none of these has “taken off” in the sense of becoming the workhorse framework on which subsequent scholarship builds. The theoretical literature remains heavily qualitative and narrative compared to other areas of economic theory. While Mundell, McKinnon, Kenen, and the other founding fathers of optimum-currency-area theory provided powerful intuition, attempts to further formalize that intuition evidently contributed less to advances in economic understanding than has been the case for other theoretical literatures.
Second, recent contributions to the literature on optimum currency areas are motivated to an unusual extent by a particular case, namely Europe’s monetary union. This was true already in the 1990s, when the EU’s unprecedented decision to proceed with the creation of the euro highlighted the question of whether Europe was an optimum currency area and, if not, how it might become one. That tendency was reinforced when Europe then descended into crisis starting in 2009. With only slight exaggeration it can be said that the literature on optimum currency areas became almost entirely a literature on Europe and on that continent’s failure to satisfy the relevant criteria.
Third, the literature on optimum currency areas remains the product of its age. When the founders wrote, in the 1960s, banks were more strictly regulated, and financial markets were less internationalized than subsequently. Consequently, the connections between monetary integration and financial integration—whether monetary union requires banking union, as the point is now put—were neglected in the earlier literature. The role of cross-border financial flows as a destabilizing mechanism within a currency area did not receive the attention it deserved. Because much of that earlier literature was framed in a North American context—the question was whether the United States or Canada was an optimum currency area—and because it was asked by a trio of scholars, two of whom hailed from Canada and one of whom hailed from the United States, the challenges of reconciling monetary integration with political nationalism and the question of whether monetary union requires political union were similarly underplayed. Given the euro area’s descent into crisis, a number of analysts have asked why economists didn’t sound louder warnings in advance. The answer is that their outlooks were shaped by a literature that developed in an earlier era when the risks and context were different.
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Economics and Finance. Please check back later for the full article.
Detection of outliers is an important explorative step in empirical analysis. Once detected, the investigator will have to decide how to model the outliers depending on the context. Indeed, the outliers may represent noisy observations that are best left out of the analysis or they may be very informative observations that would have a particularly important role in the analysis. For regression analysis in time series a number of outlier algorithms are available, including impulse indicator saturation and methods from robust statistics. The algorithms are complex and their statistical properties are not fully understood. Extensive simulation studies have been made, but the formal theory is lacking. Some progress has been made toward an asymptotic theory of the algorithms. A number of asymptotic results are already available building on empirical process theory.
Jun Li and Edward C. Norton
Pay-for-performance programs have become a prominent supply-side intervention to improve quality and decrease spending in health care, touching upon long-term care, acute care, and outpatient care. Pay-for-performance directly targets long-term care, with programs in nursing homes and home health. Indirectly, pay-for-performance programs targeting acute care settings affect clinical practice for long-term care providers through incentives for collaboration across settings.
As a whole, pay-for-performance programs entail the identification of the problems they seek to solve, measurement of the dimensions they seek to incentivize, methods to combine and translate performance into incentives, and application of the incentives to reward performance. For the long-term care population, pay-for-performance programs must also heed the unique challenges specific to the sector, such as patients with complex health needs and distinct health trajectories, and be structured to recognize the challenges of incentivizing performance improvement when there are multiple providers and payers involved in the care delivery.
Although empirical results indicate modest effectiveness of pay-for-performance in long-term care on improving targeted measures, some research has provided more clarity on the role of pay-for-performance design in shaping the output of the programs, highlighting room for future research. Further, because health care is interconnected, the indirect effects of pay-for-performance programs on long-term care are an underexplored topic. As the scope of pay-for-performance in long-term care expands, both within the United States and internationally, pay-for-performance offers ample opportunities for future research.
Jesús Gonzalo and Jean-Yves Pitarakis
Predictive regressions refer to models whose aim is to assess the predictability of a typically noisy time series, such as stock returns or currency returns, with past values of a highly persistent predictor such as valuation ratios, interest rates, or volatilities, among other variables. Obtaining reliable inferences through conventional methods can be challenging in such environments mainly due to the joint interactions of predictor persistence, potential endogeneity, and other econometric complications. Numerous methods have been developed in the literature, ranging from adjustments to test statistics used in significance testing to alternative instrumental variable based estimation methods specifically designed to make inferences robust to the stochastic properties of the predictor(s).
Early developments in this area were mainly confined to linear and single predictor settings, but recent developments have raised the issue of adaptability of existing estimation and inference methods to more general environments so as to extend the use of predictive regressions to a wider range of potential applications.
An important extension involves allowing predictability to enter nonlinearly so as to capture time variation in the role of particular predictors. Economically interesting nonlinearities include, for instance, the use of threshold effects that allow predictability to vanish or strengthen during particular episodes, creating pockets of predictability. Such effects may operate through the conditional means, the variances, or both, and may help uncover important phenomena such as the countercyclical nature of stock return predictability recently documented in the literature.
Due to the frequent need to consider multiple as opposed to single predictors, it also becomes important to evaluate the validity and feasibility of inferences about linear and nonlinear predictability when multiple predictors of potentially different degrees of persistence are allowed to coexist in such settings.
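A small Monte Carlo sketch illustrates the core inference problem in the simplest linear, single-predictor setting: under the null of no predictability, the OLS slope is biased away from zero when the predictor is persistent and its innovations are correlated with returns (the Stambaugh bias). All parameter values below are illustrative.

```python
import numpy as np

def simulate_predictive(n, beta=0.0, rho=0.98, corr=-0.9, rng=None):
    """Predictive system: r_{t+1} = beta * x_t + u_{t+1},
    x_{t+1} = rho * x_t + v_{t+1}, with corr(u, v) = corr.
    Strongly negative innovation correlation mimics returns regressed
    on a persistent valuation ratio."""
    if rng is None:
        rng = np.random.default_rng()
    cov = np.array([[1.0, corr], [corr, 1.0]])
    shocks = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    u, v = shocks[:, 0], shocks[:, 1]
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + v[t]
    return beta * x[:-1] + u[1:], x[:-1]   # (r_{t+1}, x_t) pairs

def ols_slope(r, x):
    """OLS slope of r on x, with intercept."""
    xc = x - x.mean()
    return (xc @ (r - r.mean())) / (xc @ xc)

# Monte Carlo under the null of no predictability (beta = 0): the average
# OLS slope is pushed away from zero (here upward, since corr < 0 and the
# predictor's AR coefficient is biased downward in small samples).
rng = np.random.default_rng(0)
slopes = [ols_slope(*simulate_predictive(100, rng=rng)) for _ in range(2000)]
mean_bias = float(np.mean(slopes))
```

This finite-sample distortion is exactly what the corrected test statistics and instrumental variable methods (such as the IVX approach) are designed to neutralize.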
Payment systems based on fixed prices have become the dominant model to finance hospitals across OECD countries. In the early 1980s, Medicare in the United States introduced the Diagnosis Related Groups (DRG) system. The idea was that hospitals should be paid a fixed price for treating a patient within a given diagnosis or treatment. The system then spread to other European countries (e.g., France, Germany, Italy, Norway, Spain, the United Kingdom) and high-income countries (e.g., Canada, Australia). The change in payment system was motivated by concerns over rapid health expenditure growth, and replaced financing arrangements based on reimbursing costs (e.g., in the United States) or fixed annual budgets (e.g., in the United Kingdom).
A more recent policy development is the introduction of pay for performance (P4P) schemes, which, in most cases, pay directly for higher quality. This is also a form of regulated price payment, but the unit of payment is a (process or outcome) measure of quality, as opposed to activity, that is, admitting a patient with a given diagnosis or treatment.
Fixed price payment systems, either of the DRG type or the P4P type, affect hospital incentives to provide quality, contain costs, and treat the right patients (allocative efficiency). Quality and efficiency are ubiquitous policy goals across a range of countries.
Fixed price regulation induces providers to contain costs and, under certain conditions (e.g., excess demand), offers some incentive to sustain quality. But payment systems in the health sector are complex. Since their inception, DRG systems have been continuously refined. From the roughly 500 initial tariffs, many DRG codes have been split into two or more finer ones to reflect heterogeneity in costs within each subgroup. In turn, this may give incentives to provide excessively intensive treatments or to code patients in more remunerative tariffs, a practice known as upcoding. Fixed prices also make it financially unprofitable to treat high cost patients. This is particularly problematic when patients with the highest costs have the largest benefits from treatment. Hospitals also differ systematically in costs and other dimensions, and some of these external differences are beyond their control (e.g., higher cost of living, land, or capital). Price regulation can be put in place to address such differences.
The development of information technology has allowed the construction of a plethora of quality indicators, mostly process measures of quality and in some cases health outcomes. These have been used both for public reporting, to help patients choose providers, and for incentive schemes that directly pay for quality. P4P schemes are attractive but raise new issues: for example, they might divert provider attention, and unincentivized dimensions of quality might suffer as a result.
Pharmaceutical expenditure accounts for approximately 20% of healthcare expenditure across the Organisation for Economic Cooperation and Development (OECD) countries. Pharmaceutical products are regulated in all major global markets primarily to ensure product quality but also to regulate the reimbursed prices of insurance companies and central purchasing authorities that dominate this sector. Price regulation is justified because patent protection, which acts as an incentive to invest in research and development (R&D) given the difficulties in appropriating the returns to such activity, creates monopoly rights for suppliers. Price regulation does, however, itself reduce producers’ ability to recapture the substantial R&D investment costs incurred. Traditional price regulation through Ramsey pricing and yardstick competition is not efficient given the distortionary impact of insurance holdings, which are extensive in this sector, and the inherent uncertainties that characterize R&D activity. A range of other pricing regulations aimed at establishing pharmaceutical reimbursement that covers both dynamic efficiency (tied to R&D incentives) and static efficiency (tied to reducing monopoly rents) have been suggested. These range from cost-plus pricing to internal and external reference pricing, rate-of-return pricing, and, most recently, value-based (essential health benefit maximization) pricing. Reimbursed prices reflecting value-based pricing are, in some countries, associated with clinical treatment guidelines and cost-effectiveness analysis. Some countries are also requiring or allowing post-launch price regulation through a range of patient access agreements based on predefined population health targets and/or financial incentives.
There is no simple, single solution to the determination of dynamic and static efficiency in this sector, given the uncertainty associated with innovation, the large monopoly interests in the area, the distortionary impact of health insurance, and the informational asymmetries that exist between providers and purchasers.
The concept of a soft budget constraint describes a situation in which a decision-maker finds it impossible to hold an agent to a fixed budget. In healthcare it may refer to a (nonprofit) hospital that overspends, or to a lower level of government that does not balance its accounts. The existence of a soft budget constraint may represent an optimal policy from the regulator's point of view only in specific settings. In general, its presence may allow for strategic behavior that considerably changes its nature and its desirability. In this article, the soft budget constraint is analyzed along two lines: from a market perspective and from a fiscal federalism perspective.
The creation of an internal market for healthcare has led hospitals with different objectives and constraints to compete with one another. The literature does not agree on the effects of competition on healthcare or on which types of organization should compete. Public hospitals are often seen as less efficient providers, but they are also intrinsically motivated and/or altruistic. Competition on quality in a market where costs are sunk and competitors have asymmetric objectives may produce regulatory failures; for this reason, it might be optimal to apply soft budget constraint rules to public hospitals even at the risk of perverse effects. Several authors have attempted to estimate the presence of soft budget constraints, showing that they derive from different strategic behaviors and lead to quite different outcomes.
The reforms that have reshaped public healthcare systems across Europe have often been accompanied by a process of devolution; in some countries this has been coupled with widespread soft budget constraint policies. Medicaid expenditure in the United States is becoming a serious concern for the Federal Government, and the evidence from the states is not reassuring. Several explanations have been proposed: (a) local governments may use spillovers to induce neighbors to pay for their local public goods; (b) size matters: if the local authority is sufficiently big, the center will bail it out; (c) equalization grants and fiscal competition may be responsible for the rise of soft budget constraint policies. Soft budget policies may also derive from strategic agreements among lower tiers, or arise as a consequence of fiscal imbalances. In this context the use of soft budget constraints as a policy instrument may not be desirable.
Joanna Coast and Manuela De Allegri
Qualitative methods are being used increasingly by health economists, but most health economists are not trained in these methods and may need to develop expertise in this area. This article discusses important issues of ontology, epistemology, and research design before addressing the key issues of sampling, data collection, and data analysis in qualitative research. Understanding the differences in the purpose of sampling between qualitative and quantitative methods is important for health economists, and the key notion of purposeful sampling is described. The section on data collection covers in-depth and semistructured interviews, focus-group discussions, and observation. Methods for data analysis are then discussed, with a particular focus on the use of inductive methods that are appropriate for economic purposes. Presentation and publication are briefly considered, before three areas that have seen substantial use of qualitative methods are explored: attribute development for discrete choice experiments, priority-setting research, and health financing initiatives.