Martin Karlsson, Tor Iversen, and Henning Øien
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Economics and Finance. Please check back later for the full article.
An open issue in the economics literature is whether healthcare expenditure (HCE) is so concentrated in the last years before death that age profiles in spending will change as longevity increases. The seminal article "Ageing of Population and Health Care Expenditure: A Red Herring?" by Zweifel and colleagues argued that age is a distraction in explaining growth in HCE. The argument was based on the observation that age did not predict HCE after controlling for time to death (TTD). The authors were soon criticized for their use of a Heckman selection model in this context. Most of the recent literature uses variants of a two-part model and seems to give some role to age as well in the explanation. Age seems to matter more for long-term care expenditures (LTCE) than for acute hospital care. When disability is accounted for, the effects of age and TTD diminish. Few articles validate their approach by comparing the properties of different estimation models. In order to evaluate popular models used in the literature and to gain an understanding of the divergent results of previous studies, an empirical analysis based on a claims data set from Germany is conducted. This analysis generates a number of useful insights. There is a significant age gradient in HCE, most pronounced for LTCE, and the costs of dying are substantial. These "costs of dying" have, however, a limited impact on the age gradient in HCE. These findings are interpreted as evidence against the "red herring" hypothesis as initially stated. The results indicate that the choice of estimation method makes little difference, and when results do differ, ordinary least squares regression tends to perform better than the alternatives. When validating the methods out of sample and out of period, there is no evidence that including TTD leads to better predictions of aggregate future HCE. It appears that the literature might benefit from focusing on the predictive power of the estimators instead of their fit to the data within the sample.
Outcomes for individuals often depend on their age, period, and cohort, where cohort + age = period. An example is consumption, where consumption patterns change with age, but the availability of products changes over time, the period, and this affects individuals of different birth years, the cohort, differently. Age-period-cohort models are linear models allowing different parameter values for each level of age, period, and cohort. Variations of the models are available for data aggregated over age, period, and cohort and for data stemming from repeated cross-sections, where the time effects can be combined with individual covariates. The models could potentially be extended to panel data. It is common to plot the estimated age, period, and cohort effects and analyze them as time series. Further, it is also common to conduct inference on the inclusion of the different time effects, and to use the models for forecasting, which involves extrapolation of the time effects.
The age, period, and cohort time effects are intertwined. Specifically, inclusion of an indicator variable for each level of age, period, and cohort will result in a collinearity, which is referred to as the age-period-cohort identification problem. A first approach to addressing the collinearity is to leave out a suitable number of indicator variables. This gives some difficulties in the interpretation, inference, and forecasting in relation to the time effects. A second approach is the canonical parametrization, a freely varying parametrization that is invariant to the identification problem and therefore more amenable to interpretation, inference, and forecasting.
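The identification problem can be seen in a minimal numerical sketch. This illustration uses linear (rather than dummy-coded) time effects for brevity, and all variable names are hypothetical; with full dummy coding the same deficiency appears as an exact linear dependence among the indicator columns.

```python
import numpy as np

# Because period = age + cohort by construction, the three linear time
# effects cannot be separately identified: the design matrix loses rank.
ages = np.arange(5)
cohorts = np.arange(5)
age, cohort = np.meshgrid(ages, cohorts, indexing="ij")
age, cohort = age.ravel(), cohort.ravel()
period = age + cohort  # the accounting identity linking the three time scales

# Design matrix: intercept plus the three linear time effects
X = np.column_stack([np.ones_like(age), age, period, cohort])
print(np.linalg.matrix_rank(X))  # prints 3, not 4: one direction is unidentified
```

Leaving out one of the three time effects (the first approach above) restores full column rank, at the cost of the interpretation difficulties the summary describes.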
“Antitrust” or “competition law,” a set of policies now existing in most market economies, largely consists of two or three specific rules applied in more or less the same way in most nations. It prohibits (1) multilateral agreements, (2) unilateral conduct, and (3) mergers or acquisitions, whenever any of them are judged to interfere unduly with the functioning of healthy markets. Most jurisdictions now apply or purport to apply these rules in the service of some notion of economic “efficiency,” more or less as defined in contemporary microeconomic theory.
The law has ancient roots, however, and over time it has varied a great deal in its details. Moreover, even as to its modern form, the policy and its goals remain controversial. In some sense most modern controversy arises from or is in reaction to the major intellectual reconceptualization of the law and its purposes that began in the 1960s. Specifically, academic critics in the United States urged revision of the law’s goals, such that it should serve only a narrowly defined microeconomic goal of allocational efficiency, whereas it had traditionally also sought to prevent accumulation of political power and to protect small firms, entrepreneurs, and individual liberty. While those critics enjoyed significant success in the United States, and to a somewhat lesser degree in Europe and elsewhere, the results remain contested. Specific disputes continue over the law’s general purpose, whether it poses net benefits, how a series of specific doctrines should be fashioned, how it should be enforced, and whether it really is appropriate for developing and small-market economies.
Andrea Gabrio, Gianluca Baio, and Andrea Manca
The evidence produced by healthcare economic evaluation studies is a key component of any Health Technology Assessment (HTA) process designed to inform resource allocation decisions in a budget-limited context. To improve the quality (and harmonize the generation process) of such evidence, many HTA agencies have established methodological guidelines describing the normative framework inspiring their decision-making process. The information requirements that economic evaluation analyses for HTA must satisfy typically involve the use of complex quantitative syntheses of multiple available datasets, handling mixtures of aggregate and patient-level information, and the use of sophisticated statistical models for the analysis of non-Normal data (e.g., time-to-event, quality of life and costs). Much of the recent methodological research in economic evaluation for healthcare has developed in response to these needs, in terms of sound statistical decision-theoretic foundations, and is increasingly being formulated within a Bayesian paradigm. The rationale for this preference lies in the fact that by taking a probabilistic approach, based on decision rules and available information, a Bayesian economic evaluation study can explicitly account for relevant sources of uncertainty in the decision process and produce information to identify an “optimal” course of actions. Moreover, the Bayesian approach naturally allows the incorporation of an element of judgment or evidence from different sources (e.g., expert opinion or multiple studies) into the analysis. This is particularly important when, as often occurs in economic evaluation for HTA, the evidence base is sparse and requires some inevitable mathematical modeling to bridge the gaps in the available data. 
The availability of free and open source software in the last two decades has greatly reduced the computational costs and facilitated the application of Bayesian methods, and it has the potential to improve the work of modelers and regulators alike, thus advancing the field of economic evaluation of healthcare interventions. This chapter provides an overview of the areas where Bayesian methods have contributed to addressing the methodological needs that stem from the normative framework adopted by a number of HTA agencies.
Silvia Miranda-Agrippino and Giovanni Ricco
Bayesian vector autoregressions (BVARs) are standard multivariate autoregressive models routinely used in empirical macroeconomics and finance for structural analysis, forecasting, and scenario analysis in an ever-growing number of applications.
A preeminent field of application of BVARs is forecasting. BVARs with informative priors have often proved to be superior tools compared to standard frequentist/flat-prior VARs. In fact, VARs are highly parametrized autoregressive models, whose number of parameters grows with the square of the number of variables times the number of lags included. Prior information, in the form of prior distributions on the model parameters, helps in forming sharper posterior distributions of parameters, conditional on an observed sample. Hence, BVARs can be effective in reducing parameter uncertainty and improving forecast accuracy compared to standard frequentist/flat-prior VARs.
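The parameter-count argument is easy to make concrete. The following back-of-the-envelope sketch (a hypothetical helper, not drawn from the article) counts the autoregressive coefficients of an n-variable VAR with p lags: each of the n equations has n·p lag coefficients plus an intercept, for n(np + 1) in total, leaving error-covariance terms aside.

```python
# Coefficient count for an n-variable VAR with p lags (intercepts included,
# error-covariance parameters excluded): n equations, each with n*p lag
# coefficients plus an intercept.
def var_param_count(n_vars: int, n_lags: int) -> int:
    return n_vars * (n_vars * n_lags + 1)

print(var_param_count(3, 4))    # 39    -- a small quarterly VAR
print(var_param_count(20, 4))   # 1620  -- a medium-sized system
print(var_param_count(100, 4))  # 40100 -- a "big data" VAR
```

The quadratic growth in the number of variables is what makes shrinkage through informative priors so valuable in data-rich settings.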
This feature in particular has favored the use of Bayesian techniques to address “big data” problems, in what is arguably one of the most active frontiers in the BVAR literature. Large-information BVARs have in fact proven to be valuable tools to handle empirical analysis in data-rich environments.
BVARs are also routinely employed to produce conditional forecasts and scenario analysis. Of particular interest for policy institutions, these applications permit evaluating "counterfactual" time evolution of the variables of interest conditional on a pre-determined path for some other variables, such as the path of interest rates over a certain horizon.
The “structural interpretation” of estimated VARs as the data generating process of the observed data requires the adoption of strict “identifying restrictions.” From a Bayesian perspective, such restrictions can be seen as dogmatic prior beliefs about some regions of the parameter space that determine the contemporaneous interactions among variables and for which the data are uninformative. More generally, Bayesian techniques offer a framework for structural analysis through priors that incorporate uncertainty about the identifying assumptions themselves.
Silvia Miranda-Agrippino and Giovanni Ricco
Vector autoregressions (VARs) are linear multivariate time-series models able to capture the joint dynamics of multiple time series. Bayesian inference treats the VAR parameters as random variables, and it provides a framework to estimate the "posterior" probability distribution of the model parameters by combining information provided by a sample of observed data with prior information derived from a variety of sources, such as other macro or micro datasets, theoretical models, other macroeconomic phenomena, or introspection.
In empirical work in economics and finance, informative prior probability distributions are often adopted. These are intended to summarize stylized representations of the data generating process. For example, “Minnesota” priors, one of the most commonly adopted macroeconomic priors for the VAR coefficients, express the belief that an independent random-walk model for each variable in the system is a reasonable “center” for the beliefs about their time-series behavior. Other commonly adopted priors, the “single-unit-root” and the “sum-of-coefficients” priors are used to enforce beliefs about relations among the VAR coefficients, such as for example the existence of co-integrating relationships among variables, or of independent unit-roots.
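A minimal sketch of Minnesota-style prior moments can make the random-walk centering concrete. The function below is illustrative only: the hyperparameter name `lam` (overall tightness) and the exact lag-decay and volatility scaling are assumptions for exposition, not taken from the article.

```python
import numpy as np

# Illustrative Minnesota-style prior moments for the lag-l coefficient
# matrix of an n-variable VAR. Assumed, simplified scaling rules.
def minnesota_prior_lag(n_vars, lag, lam=0.2, sigmas=None):
    # Prior mean: each variable follows an independent random walk, so the
    # first own-lag coefficient is centered at 1 and everything else at 0.
    mean = np.eye(n_vars) if lag == 1 else np.zeros((n_vars, n_vars))
    # Prior standard deviations tighten at more distant lags and are scaled
    # by the relative residual volatilities of the variables.
    sigmas = np.ones(n_vars) if sigmas is None else np.asarray(sigmas, float)
    sd = (lam / lag) * np.outer(sigmas, 1.0 / sigmas)
    return mean, sd

mean1, sd1 = minnesota_prior_lag(3, lag=1)
print(mean1)  # identity matrix: random-walk centering for each variable
```

The sum-of-coefficients and single-unit-root priors mentioned above are typically implemented instead as artificial ("dummy") observations appended to the data, which is not shown here.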
Priors for macroeconomic variables are often adopted as "conjugate prior distributions"—that is, distributions that yield a posterior distribution in the same family as the prior—in the form of Normal-Inverse-Wishart distributions, which are the conjugate prior for the likelihood of a VAR with normally distributed disturbances. Conjugate priors allow direct sampling from the posterior distribution and fast estimation. When this is not possible, numerical techniques such as Gibbs and Metropolis-Hastings sampling algorithms are adopted.
Bayesian techniques allow for the estimation of an ever-expanding class of sophisticated autoregressive models, which includes conventional fixed-parameter VAR models; large VARs incorporating hundreds of variables; panel VARs, which permit analyzing the joint dynamics of multiple time series of heterogeneous and interacting units; and VAR models that relax the assumption of fixed coefficients, such as time-varying parameter, threshold, and Markov-switching VARs.
Matteo M. Galizzi and Daniel Wiesen
The state-of-the-art literature at the interface between experimental and behavioral economics and health economics is reviewed by identifying and discussing 10 areas of potential debate about behavioral experiments in health. In doing so, the review covers the different streams and areas of application of this growing field, discusses which significant questions remain open, and highlights the rationale and scope for the further development of behavioral experiments in health in the years to come.
Nikolaus Robalino and Arthur Robson
Modern economic theory rests on the basic assumption that agents' choices are guided by preferences. The question of where such preferences might have come from has traditionally been ignored or viewed agnostically. The biological approach to economic behavior addresses the issue of the origins of economic preferences explicitly. This approach assumes that economic preferences are shaped by the forces of natural selection. For example, an important theoretical insight delivered thus far by this approach is that individuals ought to be more risk averse to aggregate than to idiosyncratic risk. Additionally, the approach has delivered an evolutionary basis for hedonic and adaptive utility and an evolutionary rationale for "theory of mind." Related empirical work has studied the evolution of time preferences and loss aversion, and explored the deep evolutionary determinants of long-run economic development.
Graciela Laura Kaminsky
This article examines the new trends in research on capital flows fueled by the 2007–2009 Global Crisis. Previous studies on capital flows focused on current account imbalances and net capital flows. The Global Crisis changed that. The onset of this crisis was preceded by a dramatic increase in gross financial flows while net capital flows remained mostly subdued. The attention in academia zoomed in on gross inflows and outflows, with special attention to cross-border banking flows before the crisis erupted and the shift towards corporate bond issuance in its aftermath. The boom and bust in capital flows around the Global Crisis also stimulated a new area of research: capturing the "global factor." This research adopts two different approaches. The traditional literature on push–pull factors, which before the crisis was mostly focused on monetary policy in the financial center as the "push factor," started to explore what other factors contribute to the co-movement of capital flows and amplify the effect of monetary policy in the financial center on capital flows. This new research focuses on global banks' leverage, risk appetite, and global uncertainty. Since the "global factor" is not directly observed, a second branch of the literature has captured this factor indirectly using dynamic common factors extracted from actual capital flows or movements in asset prices.
Cristina Bellés-Obrero and Judit Vall Castello
The impact of macroeconomic fluctuations on health and mortality rates has been a highly studied topic in the field of economics. Many studies, using fixed-effects models, find that mortality is procyclical in many countries, such as the United States, Germany, Spain, France, Pacific-Asian nations, Mexico, and Canada. On the other hand, a small number of studies find that mortality decreases during economic expansion. Differences in the social insurance systems and labor market institutions across countries may explain some of the disparities found in the literature. Studies examining the effects of more recent recessions are less conclusive, finding mortality to be less procyclical, or even countercyclical. This new finding could be explained by changes over time in the mechanisms behind the association between business cycle conditions and mortality.
A related strand of the literature has focused on understanding the effect of economic fluctuations on infant health at birth and/or child mortality. While infant mortality is found to be procyclical in countries like the United States and Spain, the opposite is found in developing countries.
Even though the association between business cycle conditions and mortality has been extensively documented, a much stronger effort is needed to understand the mechanisms behind the relationship between business cycle conditions and health. Many studies have examined the association between macroeconomic fluctuations and smoking, drinking, weight disorders, eating habits, and physical activity, although results are rather mixed. The only well-established finding is that mental health deteriorates during economic slowdowns.
An important challenge is the fact that the comparison of the main results across studies proves to be complicated due to the variety of empirical methods and time spans used. Furthermore, estimates have been found to be sensitive to the use of different levels of geographic aggregation, model specifications, and proxies of macroeconomic fluctuations.
Diane McIntyre, Amarech G. Obse, Edwine W. Barasa, and John E. Ataguba
Within the context of the Sustainable Development Goals, it is important to critically review research on healthcare financing in sub-Saharan Africa (SSA) from the perspective of the universal health coverage (UHC) goals of financial protection and access to quality health services for all. There is a concerning reliance on direct out-of-pocket payments in many SSA countries, accounting for an average of 36% of current health expenditure compared to only 22% in the rest of the world. Contributions to health insurance schemes, whether voluntary or mandatory, make up a small share of current health expenditure. While domestic mandatory prepayment mechanisms (tax and mandatory insurance) are the next largest category of healthcare financing in SSA (35%), a relatively large share of funding in SSA (14% compared to <1% in the rest of the world) is attributable to, sometimes unstable, external funding sources. There is a growing recognition of the need to reduce out-of-pocket payments and increase domestic mandatory prepayment financing to move towards UHC. Many SSA countries have declared a preference for achieving this through contributory health insurance schemes, particularly for formal sector workers, with service entitlements tied to contributions. Policy debates about whether a contributory approach is the most efficient, equitable, and sustainable means of financing progress to UHC are emotive and infused with "conventional wisdom." A range of research questions must be addressed to provide a more comprehensive empirical evidence base for these debates and to support progress to UHC.
Since the 1980s policymakers have identified a wide range of policy interventions to improve hospital performance. Some of these have been initiated at the level of government, whereas others have taken the form of decisions made by individual hospitals but have been guided by regulatory or financial incentives. Studies investigating the impact that some of the most important of these interventions have had on hospital performance can be grouped into five different research streams. Among these, the strongest evidence exists for the effects of privatization. Studies on this topic use longitudinal designs with control groups and have found robust increases in efficiency and financial performance. Evidence on the entry of hospitals into health systems and the effects of this on efficiency is similarly strong. Although the other three streams of research also contain well-conducted studies with valuable findings, they are predominantly cross-sectional in design and therefore cannot establish causation. While the effects of introducing DRG-based hospital payments and of specialization are largely unclear, vertical and horizontal cooperation probably have a positive effect on efficiency and financial performance. Lastly, the drivers of improved efficiency or financial performance are very different depending on the reform or intervention being investigated; however, reductions in the number of staff and improved bargaining power in purchasing stand out as being of particular importance.
Several promising avenues for future investigation are identified. One of these is situated within a new area of research examining the link between changes in the prices of treatments and hospitals’ responses. As there is evidence of unintended effects, future studies should attempt to distinguish between changes in hospitals’ responses at the intensive margin (e.g., upcoding) versus the extensive margin (e.g., increase in admissions). When looking at the effects of entering into a health system and of privatizations, there is still considerable need for research. With privatizations, in particular, the underlying processes are not yet fully understood, and the potential trade-offs between increases in performance and changes in the quality of care have not been sufficiently examined. Lastly, there is substantial need for further papers in the areas of multi-institutional arrangements and cooperation, as well as specialization. In both research streams, natural experiments carried out using program evaluation design are lacking. One of the main challenges here, however, is that cooperation and specialization cannot be directly observed but rather must be constructed based on survey or administrative data.
In many countries of the world, consumers choose their health insurance coverage from a large menu of often complex options supplied by private insurance companies. Economic benefits of the wide choice of health insurance options depend on the extent to which the consumers are active, well informed, and sophisticated decision makers capable of choosing plans that are well-suited to their individual circumstances.
There are many ways in which consumers' actual decision making in the health insurance domain can depart from the standard model of health insurance demand of a rational risk-averse consumer. For example, consumers can have inaccurate subjective beliefs about characteristics of alternative plans in their choice set or about the distribution of health expenditure risk because of cognitive or informational constraints; or they can prefer to rely on heuristics when the plan choice problem features a large number of options with complex cost-sharing designs.
The second decade of the 21st century has seen a burgeoning number of studies assessing the quality of consumer choices of health insurance, both in the lab and in the field, and financial and welfare consequences of poor choices in this context. These studies demonstrate that consumers often find it difficult to make efficient choices of private health insurance due to reasons such as inertia, misinformation, and the lack of basic insurance literacy. These findings challenge the conventional rationality assumptions of the standard economic model of insurance choice and call for policies that can enhance the quality of consumer choices in the health insurance domain.
In the wake of the 2008 financial collapse, clearinghouses have emerged as critical players in the implementation of the post-crisis regulatory reform agenda. Recognizing serious shortcomings in the design of the over-the-counter derivatives market for swaps, regulators are now relying on clearinghouses to cure these deficiencies by taking on a central role in mitigating the risks of these instruments. Rather than leave trading firms to manage the risks of transacting in swaps privately, as was largely the case prior to 2008, post-crisis regulation requires that clearinghouses assume responsibility for ensuring that trades are properly settled, reported to authorities, and supported by strong cushions of protective collateral. With clearinghouses effectively guaranteeing that the terms of a trade will be honored—even if one of the trading parties cannot perform—the market can operate with reduced levels of counterparty risk, opacity, and the threat of systemic collapse brought on by recklessness and over-complexity.
But despite their obvious benefit for regulators, clearinghouses also pose risks of their own. First, given their deepening significance for market stability, ensuring that clearinghouses themselves operate safely represents a matter of the highest policy priority. Yet overseeing clearinghouses is far from easy, and understanding what works best to undergird their safe operation can be a contentious and uncertain matter. U.S. and EU authorities, for example, have diverged in important ways on what rules should apply to the workings of international clearinghouses. Second, clearinghouse oversight is critical because these institutions now warehouse enormous levels of counterparty risk. By promising counterparties across the market that their trades will settle as agreed, even if one or the other firm goes bust, clearinghouses assume almost inconceivably large and complicated risks within their institutions. For swaps in particular—whose obligations can last for months, or even years—the scale of these risks can be far more extensive than that entailed in a one-off sale of a stock or bond. In this way, commentators note that by becoming the go-to bulwark against risk-taking and its spread in the financial system, clearinghouses have themselves become the too-big-to-fail institution par excellence.
The cointegrated VAR (CVAR) approach combines differences of variables with cointegration among them, and by doing so allows the user to study both long-run and short-run effects in the same model. The CVAR describes an economic system where variables have been pushed away from long-run equilibria by exogenous shocks (the pushing forces) and where short-run adjustment forces pull them back toward long-run equilibria (the pulling forces). In this model framework, basic assumptions underlying a theory model can be translated into testable hypotheses on the order of integration and cointegration of key variables and their relationships. The set of hypotheses describes the empirical regularities we would expect to see in the data if the long-run properties of a theory model are empirically relevant.
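The pushing and pulling forces map directly onto the standard vector equilibrium-correction representation of the CVAR (a textbook formulation, not specific to this summary):

```latex
\Delta y_t = \alpha \beta' y_{t-1} + \sum_{i=1}^{k-1} \Gamma_i \Delta y_{t-i} + \mu + \varepsilon_t
```

Here $\beta' y_{t-1}$ are the cointegrating (long-run equilibrium) relations, $\alpha$ contains the adjustment coefficients (the pulling forces), the $\Gamma_i$ capture short-run dynamics, and the shocks $\varepsilon_t$ cumulate into the stochastic trends that push the system away from equilibrium (the pushing forces).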
Peter Sivey and Yijuan Chen
Quality competition between alternative providers is an increasingly important topic in the health economics literature. This literature includes theoretical and empirical studies that have been developed in parallel to 21st-century policies to increase competition between doctors or hospitals. Theoretical studies have clarified how competitive markets can give healthcare providers the incentive to improve quality. Broadly speaking, if providers have an incentive to attract more patients and patients value quality, providers will raise quality until the costs of raising quality are equal to the additional revenue from patients attracted by the rise in quality. The theoretical literature has also investigated how institutional and policy parameters determine quality levels in equilibrium. Important parameters in models of quality competition include the degree of horizontal differentiation, the level of information about provider quality, the costs of switching between providers, and the time-horizon of quality investment decisions.
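The marginal condition described above can be written out. Under illustrative assumptions — a regulated price $p$ per patient, marginal treatment cost $c$, demand $D_i$ increasing in own quality $q_i$, and a convex cost of quality $C$; the symbols are assumed for exposition, not taken from the article — provider $i$ solves:

```latex
\max_{q_i}\; \pi_i = (p - c)\, D_i(q_i, q_{-i}) - C(q_i),
\qquad\text{with first-order condition}\qquad
(p - c)\,\frac{\partial D_i}{\partial q_i} = C'(q_i).
```

Quality rises until the marginal cost of quality equals the marginal revenue from the additional patients it attracts; parameters such as horizontal differentiation, information about quality, and switching costs enter through the demand sensitivity $\partial D_i / \partial q_i$.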
Empirical studies have focused on the prerequisites of quality competition (e.g., do patients choose higher quality providers?) and the impact of pro-competition policies on quality levels. The most influential studies have used modern econometric approaches, including difference-in-differences and instrumental variables, to identify plausibly causal effects. The evidence suggests that in most contexts, quality is a determinant of patient choice of provider, especially after greater patient choice is made available or information is published about provider quality.
The evidence that increases in competition improve quality in healthcare is less clear cut. Perhaps reflecting the economic theory of quality competition, which shows that different parameter combinations or assumptions can produce different outcomes, empirical results are also mixed. While a series of high-quality studies in the United Kingdom appear to show strong improvements in quality in more competitive areas following pro-competition reforms introducing more choice and competition, other studies showed that these quality improvements do not extend to all types of healthcare or alternative measures of quality.
The most promising areas for future research include investigating the “black box” of quality improvement under competition, and behavioral studies investigating financial and nonfinancial motivations for quality improvements in competitive markets.
Anna Vassall, Fiammetta Bozzani, and Kara Hanson
In order to secure effective service access, coverage, and impact, it is increasingly recognized that the introduction of novel health technologies such as diagnostics, drugs, and vaccines may require additional investment to address the constraints under which many health systems operate. Health-system constraints include a shortage of health workers, ineffective supply chains, and inadequate information systems, as well as organizational constraints such as weak incentives and poor service integration. Decision makers may be faced with the question of whether to invest in a new technology, including the specific health system strengthening needed to ensure effective implementation; or they may be seeking to optimize resource allocation across a range of interventions, including investment in broad health system functions or platforms. Investment in measures to address health-system constraints therefore increasingly needs to undergo economic evaluation, but this poses several methodological challenges for health economists, particularly in the context of low- and middle-income countries.
Designing the appropriate analysis to inform investment decisions concerning new technologies incorporating health systems investment can be broken down into several steps. First, the analysis needs to comprehensively outline the interface between the new intervention and the system through which it is to be delivered, in order to identify the relevant constraints and the measures needed to relax them. Second, the analysis needs to be rooted in a theoretical approach to appropriately characterize constraints and consider joint investment in the health system and technology. Third, the analysis needs to consider how the overarching priority-setting process influences the scope and output of the analysis, informing the way in which complex evidence is used to support the decision, including how to represent and manage system-wide trade-offs. Finally, there are several ways in which decision analytical models can be structured and parameterized in a context of data scarcity around constraints. This article draws together current approaches to health system thinking with the emerging literature on analytical approaches to integrating health-system constraints into economic evaluation to guide economists through these four issues. It aims to contribute to a more health-system-informed approach to both appraising the cost-effectiveness of new technologies and setting priorities across a range of program activities.
A patent is a legal right to exclude, granted by the state to the inventor of a novel and useful invention. Much legal ink has been spilled on the meaning of these terms. “Novel” means that the invention was not anticipated in the prior art before its creation by the inventor. “Useful” means that the invention has a practical application. The words “inventor” and “invention” are also legal terms of art. An invention is a work that advances a particular field, moving practitioners forward not simply through accretions of knowledge but through concrete implementations. An inventor is someone who contributes to an invention, either as an individual or as part of a team. The exclusive right, finally, is not granted gratuitously. The inventor must apply and go through a review process for the invention. Furthermore, the price of the patent grant is full and clear disclosure by the inventor of how to practice the invention. The public can use this disclosure once the patent expires, or during the patent's term through a license.
These institutional details are common features of all patent systems. What is interesting is the economic justification for patents. As a property right, a patent resolves certain externality problems that arise in markets for knowledge. The establishment of property rights allows for trade in the invention and the dissemination of knowledge. However, the economic case for property rights is made complex because of the institutional need to apply for a patent. While in theory, patent grants could be automatic, inventions must meet certain standards for the grant to be justified. These procedural hurdles create possibilities for gamesmanship in how property rights are allocated.
Furthermore, even if granted correctly, property rights can become murky because of the problems of enforcement through litigation. Courts must determine when an invention has been used, made, or sold without permission by a third party in violation of the rights of the patent owner. This legal process can lead to gamesmanship as patent owners try to force settlements from alleged infringers. Meanwhile, third parties may act opportunistically to take advantage of the uncertain boundaries of patent rights and engage in undetectable infringement. Exacerbating these tendencies are the difficulties in determining damages and the possibility of injunctive relief.
Some caution against these criticisms with the observation that most patents are not enforced. In fact, most granted patents turn out to be worthless when gauged by commercial value. But worthless patents still have potential litigation value. While a patent owner might view a worthless patent as a sunk cost, there is an incentive to recoup the investment through the sale of worthless patents to parties willing to assume the risk of litigation. Hence the phenomenon of “trolling,” or the rise of non-practicing entities, troubles the patent landscape. This phenomenon gives rise to concerns about the anticompetitive uses of patents, demonstrating the need for some limitations on patent enforcement.
With all the policy concerns arising from patents, it is no surprise that patent law has been ripe for reform. Economic analysis can inform these reform efforts by identifying ways in which patents fail to create a vibrant market for inventions. Appreciation of the political economy of patents invites a rich academic and policy debate over the direction of patent law.
Qiang Fu and Zenan Wu
Competitive situations resembling contests are ubiquitous in the modern economic landscape. In a contest, economic agents expend costly effort to vie for limited prizes, and they are rewarded for “getting ahead” of their opponents rather than for their absolute performance. Many social, economic, and business phenomena exemplify such competitive schemes, ranging from college admissions, political campaigns, advertising, and organizational hierarchies to warfare. The economics literature has long recognized contests and tournaments as convenient and efficient incentive schemes to remedy the moral hazard problem, especially when the production process is subject to random perturbation or the measurement of input or output is imprecise or costly. An enormous amount of scholarly effort has been devoted to developing tractable theoretical models, unveiling the fundamentals of the strategic interactions that underlie such competitions, and exploring the optimal design of contest rules. This voluminous literature has enriched basic contest/tournament models by introducing variations such as dynamic structures, incomplete and asymmetric information, multi-battle confrontations, sorting and entry, endogenous prize allocation, competition in groups, and contestants with alternative risk attitudes, among other things.
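The strategic structure this literature builds on can be sketched with the canonical Tullock lottery contest, a standard textbook formulation rather than a model specific to this article. With $n$ contestants, prize value $V$, and effort $x_i \geq 0$, contestant $i$ wins with probability

$$p_i = \frac{x_i^r}{\sum_{j=1}^{n} x_j^r}, \qquad \pi_i = p_i V - x_i,$$

where $r > 0$ governs how sensitively the win probability responds to effort. In the symmetric case with $r = 1$, each contestant's equilibrium effort is $x^* = (n-1)V/n^2$, so aggregate effort equals $(n-1)V/n$ and approaches full dissipation of the prize as $n$ grows. The design questions the abstract surveys, such as prize allocation, entry, and information disclosure, amount to varying the elements of this basic setup.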
Helen Hayes and Matt Sutton
Contracts and working conditions are important influences on the medical workforce that must be carefully constructed and considered by policymakers. Contracts involve an enforceable agreement on the rights and responsibilities of both employer and employee. The principal-agent relationship and the presence of asymmetric information in healthcare mean that contracts must be incentive compatible, creating sufficient incentive for doctors to act in the payer's best interests. Within medicine, there are special characteristics believed to be particularly pertinent to doctors, who act as agents to both the patient and the payer. These include intrinsic motivation, professionalism, altruism, and multitasking, and they influence the success of these contracts. The three most popular methods of payment are fee-for-service, capitation, and salary. In most contexts a blend of these three payment methods is used; however, guidance on the most appropriate blend is unclear, and the evidence on the special nature of doctors is insubstantial. The role of skill mix and teamwork in a healthcare setting is an important consideration, as it affects the success of incentives and payment systems and the efficiency of workers. Additionally, with increasing demand for healthcare, changing the skill mix is one response to problems with recruitment and retention in health services. Health systems in many settings depend on a large proportion of foreign-born workers, so migration is a key consideration in the retention and recruitment of health workers. Finally, forms of external regulation such as accreditation, inspection, and revalidation are widely used in healthcare systems; however, robust evidence of their effectiveness is lacking.