This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Economics and Finance.
Outcomes for individuals often depend on their age, period, and cohort, where cohort + age = period. An example is consumption: consumption patterns change with age, while the availability of products changes over time (the period), and this affects individuals of different birth years (the cohort) differently. Age-period-cohort models are linear models allowing different parameter values for each level of age, period, and cohort. Variations of the models are available for data aggregated over age, period, and cohort, and for data stemming from repeated cross-sections, where the time effects can be combined with individual covariates. The models could potentially be extended to panel data. It is common to plot the estimated age, period, and cohort effects and to analyze them as time series. It is also common to conduct inference on the inclusion of the different time effects, and to use the models for forecasting, which involves extrapolation of the time effects.
The age, period, and cohort time effects are intertwined. Specifically, inclusion of an indicator variable for each level of age, period, and cohort will result in a collinearity, which is referred to as the age-period-cohort identification problem. A first approach to addressing the collinearity is to leave out a suitable number of indicator variables. This creates difficulties for interpretation, inference, and forecasting in relation to the time effects. A second approach is the canonical parametrization, a freely varying parametrization that is invariant to the identification problem and therefore more amenable to interpretation, inference, and forecasting.
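The identification problem can be made concrete with a small design matrix. The following sketch (an illustrative 3×3 age-period grid; all sizes and indexing choices are assumptions for the demo) builds the full set of indicator variables and shows that the resulting matrix is rank deficient:

```python
import numpy as np

# Illustrative 3x3 age-period grid; the cohort index is c = period - age,
# shifted so it is non-negative. These sizes are chosen only for the demo.
n_age, n_per = 3, 3
n_coh = n_age + n_per - 1

rows = []
for a in range(n_age):
    for p in range(n_per):
        c = p - a + (n_age - 1)  # cohort dummy index
        row = np.concatenate((
            [1.0],                 # intercept
            np.eye(n_age)[a],      # age indicators
            np.eye(n_per)[p],      # period indicators
            np.eye(n_coh)[c],      # cohort indicators
        ))
        rows.append(row)

X = np.array(rows)
# 12 columns, but each dummy block sums to the intercept (3 dependencies),
# and the linear relation cohort + age = period adds a fourth, so rank is 8.
print(X.shape, np.linalg.matrix_rank(X))
```

The rank deficiency of 4 is exactly why dropping a "suitable number" of indicators (or moving to an invariant parametrization) is needed before the model can be estimated.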
Martin Karlsson, Tor Iversen, and Henning Øien
An open issue in the economics literature is whether health care expenditure (HCE) is so concentrated in the last years before death that the age profiles in spending will change when longevity increases. The seminal article “Ageing of Population and Health Care Expenditure: A Red Herring?” by Zweifel and colleagues argued that age is a distraction in explaining growth in HCE. The argument was based on the observation that age did not predict HCE after controlling for time to death (TTD). The authors were soon criticized for their use of a Heckman selection model in this context. Most of the recent literature makes use of variants of a two-part model and seems to give some role to age as well in the explanation. Age seems to matter more for long-term care expenditures (LTCE) than for acute hospital care. When disability is accounted for, the effects of age and TTD diminish. Few articles validate their approach by comparing the properties of different estimation models. In order to evaluate popular models used in the literature, and to gain an understanding of the divergent results of previous studies, an empirical analysis based on a claims data set from Germany is conducted. This analysis generates a number of useful insights. There is a significant age gradient in HCE, strongest for LTCE, and costs of dying are substantial. These “costs of dying” have, however, a limited impact on the age gradient in HCE. These findings are interpreted as evidence against the “red herring” hypothesis as initially stated. The results indicate that the choice of estimation method makes little difference, and where results do differ, ordinary least squares regression tends to perform better than the alternatives. When the methods are validated out of sample and out of period, there is no evidence that including TTD leads to better predictions of aggregate future HCE.
It appears that the literature might benefit from focusing on the predictive power of estimators rather than on their in-sample fit to the data.
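The two-part model mentioned above splits expenditure into a participation decision and a conditional cost level. The following is a minimal sketch, not the authors' actual specification: all data are simulated, the age effect sizes are invented, and the combination step assumes lognormal errors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated claims data (purely illustrative): standardized age drives both
# the probability of any spending and the level of spending among users.
n = 5000
age = rng.uniform(20, 90, n)
X = np.column_stack([np.ones(n), (age - 55) / 10])

# Assumed data-generating process for the demo.
p_any = 1 / (1 + np.exp(-(-0.5 + 0.4 * X[:, 1])))
any_spend = rng.random(n) < p_any
log_cost = 6.0 + 0.3 * X[:, 1] + rng.normal(0, 1, n)
cost = np.where(any_spend, np.exp(log_cost), 0.0)

# Part 1: logistic regression for P(cost > 0), fitted by Newton's method.
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (any_spend - p)
    hess = X.T @ (X * (p * (1 - p))[:, None])
    beta += np.linalg.solve(hess, grad)

# Part 2: OLS on log costs among users only.
pos = cost > 0
gamma, *_ = np.linalg.lstsq(X[pos], np.log(cost[pos]), rcond=None)

# Combine: E[cost] = P(cost > 0) * exp(X @ gamma + sigma^2 / 2) under
# lognormal errors (the usual retransformation step).
resid = np.log(cost[pos]) - X[pos] @ gamma
sigma2 = resid.var()
expected = (1 / (1 + np.exp(-X @ beta))) * np.exp(X @ gamma + sigma2 / 2)
print(beta.round(2), gamma.round(2))
```

The appeal of the two-part structure for HCE is visible here: the mass of zeros is handled by the first part, so the second part only has to fit the skewed positive costs.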
“Antitrust” or “competition law,” a set of policies now existing in most market economies, largely consists of two or three specific rules applied in more or less the same way in most nations. It prohibits (1) multilateral agreements, (2) unilateral conduct, and (3) mergers or acquisitions, whenever any of them are judged to interfere unduly with the functioning of healthy markets. Most jurisdictions now apply or purport to apply these rules in the service of some notion of economic “efficiency,” more or less as defined in contemporary microeconomic theory.
The law has ancient roots, however, and over time it has varied a great deal in its details. Moreover, even as to its modern form, the policy and its goals remain controversial. In some sense most modern controversy arises from or is in reaction to the major intellectual reconceptualization of the law and its purposes that began in the 1960s. Specifically, academic critics in the United States urged revision of the law’s goals, such that it should serve only a narrowly defined microeconomic goal of allocational efficiency, whereas it had traditionally also sought to prevent accumulation of political power and to protect small firms, entrepreneurs, and individual liberty. While those critics enjoyed significant success in the United States, and to a somewhat lesser degree in Europe and elsewhere, the results remain contested. Specific disputes continue over the law’s general purpose, whether it poses net benefits, how a series of specific doctrines should be fashioned, how it should be enforced, and whether it really is appropriate for developing and small-market economies.
Matteo M. Galizzi and Daniel Wiesen
The state-of-the-art literature at the interface between experimental and behavioral economics and health economics is reviewed by identifying and discussing 10 areas of potential debate about behavioral experiments in health. In doing so, the review covers the different streams and areas of application of this growing field, highlights the significant questions that remain open, and outlines the rationale and scope for the further development of behavioral experiments in health in the years to come.
Cristina Bellés-Obrero and Judit Vall Castello
The impact of macroeconomic fluctuations on health and mortality rates has been a much-studied topic in economics. Many studies, using fixed-effects models, find that mortality is procyclical in many countries, such as the United States, Germany, Spain, France, Pacific-Asian nations, Mexico, and Canada. On the other hand, a small number of studies find that mortality decreases during economic expansions. Differences in the social insurance systems and labor market institutions across countries may explain some of the disparities found in the literature. Studies examining the effects of more recent recessions are less conclusive, finding mortality to be less procyclical, or even countercyclical. This new finding could be explained by changes over time in the mechanisms behind the association between business cycle conditions and mortality.
A related strand of the literature has focused on understanding the effect of economic fluctuations on infant health at birth and/or child mortality. While infant mortality is found to be procyclical in countries like the United States and Spain, the opposite is found in developing countries.
Even though the association between business cycle conditions and mortality has been extensively documented, a much stronger effort is needed to understand the mechanisms behind the relationship between business cycle conditions and health. Many studies have examined the association between macroeconomic fluctuations and smoking, drinking, weight disorders, eating habits, and physical activity, although results are rather mixed. The only well-established finding is that mental health deteriorates during economic slowdowns.
An important challenge is the fact that the comparison of the main results across studies proves to be complicated due to the variety of empirical methods and time spans used. Furthermore, estimates have been found to be sensitive to the use of different levels of geographic aggregation, model specifications, and proxies of macroeconomic fluctuations.
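The fixed-effects design behind much of this literature can be sketched in a few lines. This is a stylized illustration, not any particular study's specification: the state-year panel, the unemployment proxy, and the coefficient magnitudes are all simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical balanced state-year panel (sizes and magnitudes are illustrative).
n_states, n_years = 40, 30
state = np.repeat(np.arange(n_states), n_years)
year = np.tile(np.arange(n_years), n_states)

state_fe = rng.normal(0, 1, n_states)[state]
year_fe = rng.normal(0, 1, n_years)[year]
unemp = rng.normal(6, 2, n_states * n_years)

# Procyclical mortality: higher unemployment (a slump) lowers mortality,
# so the true coefficient on unemployment is negative.
beta_true = -0.05
mortality = (8.0 + state_fe + year_fe
             + beta_true * unemp + rng.normal(0, 0.3, n_states * n_years))

def demean(x, groups, n_groups):
    """Subtract group means (the 'within' transformation)."""
    counts = np.bincount(groups, minlength=n_groups)
    means = np.bincount(groups, weights=x, minlength=n_groups) / counts
    return x - means[groups]

# Two-way fixed effects via sequential demeaning (exact on a balanced panel).
y = demean(demean(mortality, state, n_states), year, n_years)
x = demean(demean(unemp, state, n_states), year, n_years)
beta_hat = (x @ y) / (x @ x)
print(round(beta_hat, 3))
```

The within transformation absorbs permanent state differences and common shocks in each year, which is why estimates in this literature are sensitive to the level of geographic aggregation chosen for the fixed effects.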
Diane McIntyre, Amarech G. Obse, Edwine W. Barasa, and John E. Ataguba
Within the context of the Sustainable Development Goals, it is important to critically review research on healthcare financing in sub-Saharan Africa (SSA) from the perspective of the universal health coverage (UHC) goals of financial protection and access to quality health services for all. There is a concerning reliance on direct out-of-pocket payments in many SSA countries, accounting for an average of 36% of current health expenditure compared to only 22% in the rest of the world. Contributions to health insurance schemes, whether voluntary or mandatory, contribute a small share of current health expenditure. While domestic mandatory prepayment mechanisms (tax and mandatory insurance) are the next largest category of healthcare financing in SSA (35%), a relatively large share of funding in SSA (14% compared to <1% in the rest of the world) is attributable to, sometimes unstable, external funding sources. There is a growing recognition of the need to reduce out-of-pocket payments and increase domestic mandatory prepayment financing to move towards UHC. Many SSA countries have declared a preference for achieving this through contributory health insurance schemes, particularly for formal sector workers, with service entitlements tied to contributions. Policy debates about whether a contributory approach is the most efficient, equitable, and sustainable means of financing progress to UHC are emotive and infused with “conventional wisdom.” A range of research questions must be addressed to provide a more comprehensive empirical evidence base for these debates and to support progress to UHC.
The cointegrated VAR (CVAR) approach combines differences of variables with cointegration among them, and by doing so allows the user to study both long-run and short-run effects in the same model. The CVAR describes an economic system where variables have been pushed away from long-run equilibria by exogenous shocks (the pushing forces) and where short-run adjustment forces pull them back toward long-run equilibria (the pulling forces). In this model framework, basic assumptions underlying a theory model can be translated into testable hypotheses on the order of integration and cointegration of key variables and their relationships. The set of hypotheses describes the empirical regularities we would expect to see in the data if the long-run properties of a theory model are empirically relevant.
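The pushing and pulling forces can be illustrated with a simulated bivariate system. The sketch below uses the simpler Engle-Granger two-step procedure rather than a full Johansen-style CVAR, and all series and parameter values are invented for the demo: a random walk x pushes the system, while y error-corrects toward the long-run relation y = x.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a cointegrated pair: x is a random walk (the pushing force),
# and y adjusts toward y = x at speed alpha_true (the pulling force).
T = 2000
x = np.cumsum(rng.normal(0, 1, T))
y = np.zeros(T)
alpha_true = -0.2
for t in range(1, T):
    y[t] = y[t - 1] + alpha_true * (y[t - 1] - x[t - 1]) + rng.normal(0, 1)

# Engle-Granger two-step sketch:
# (1) estimate the cointegrating relation by OLS (no intercept here,
#     since the assumed true relation is y = x);
b = (x @ y) / (x @ x)
ecm = y - b * x                  # equilibrium error
# (2) regress the differences on the lagged equilibrium error to
#     recover the speed of adjustment.
dy = np.diff(y)
z = ecm[:-1]
alpha_hat = (z @ dy) / (z @ z)
print(round(b, 2), round(alpha_hat, 2))
```

The estimated cointegrating coefficient is close to 1 and the adjustment coefficient close to -0.2, separating the long-run relation from the short-run correction, which is exactly the decomposition the CVAR formalizes for full systems.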
A patent is a legal right to exclude granted by the state to the inventor of a novel and useful invention. Much legal ink has been spilled on the meaning of these terms. “Novel” means that the invention has not been anticipated in the art prior to its creation by the inventor. “Useful” means that the invention has a practical application. The words “inventor” and “invention” are also legal terms of art. An invention is a work that advances a particular field, moving practitioners forward not simply through accretions of knowledge but through concrete implementations. An inventor is someone who contributes to an invention either as an individual or as part of a team. The exclusive right, finally, is not granted gratuitously. The inventor must apply and go through a review process for the invention. Furthermore, a price for the patent being granted is full, clear disclosure by the inventor of how to practice the invention. The public can use this disclosure once the patent expires or through a license during the duration of the patent.
These institutional details are common features of all patent systems. What is interesting is the economic justification for patents. As a property right, a patent resolves certain externality problems that arise in markets for knowledge. The establishment of property rights allows for trade in the invention and the dissemination of knowledge. However, the economic case for property rights is made complex because of the institutional need to apply for a patent. While in theory, patent grants could be automatic, inventions must meet certain standards for the grant to be justified. These procedural hurdles create possibilities for gamesmanship in how property rights are allocated.
Furthermore, even if granted correctly, property rights can become murky because of the problems of enforcement through litigation. Courts must determine when an invention has been used, made, or sold without permission by a third party in violation of the rights of the patent owner. This legal process can lead to gamesmanship as patent owners try to force settlements from alleged infringers. Meanwhile, third parties may act opportunistically to take advantage of the uncertain boundaries of patent rights and engage in undetectable infringement. Exacerbating these tendencies are the difficulties in determining damages and the possibility of injunctive relief.
Some caution against these criticisms through the observation that most patents are not enforced. In fact, most granted patents turn out to be worthless, when gauged in commercial value. But worthless patents still have potential litigation value. While a patent owner might view a worthless patent as a sunk cost, there is incentive to recoup investment through the sale of worthless patents to parties willing to assume the risk of litigation. Hence the phenomenon of “trolling,” or the rise of non-practicing entities, troubles the patent landscape. This phenomenon gives rise to concerns with the anticompetitive uses of patents, demonstrating the need for some limitations on patent enforcement.
With all the policy concerns arising from patents, it is no surprise that patent law has been ripe for reform. Economic analysis can inform these reform efforts by identifying ways in which patents fail to create a vibrant market for inventions. Appreciation of the political economy of patents invites a rich academic and policy debate over the direction of patent law.
Michael Drummond, Rosanna Tarricone, and Aleksandra Torbica
There are a number of challenges in the economic evaluation of medical devices (MDs). They are typically less regulated than pharmaceuticals, and the clinical evidence requirements for market authorization are generally lower. There are also specific characteristics of MDs, such as the device–user interaction (learning curve), the incremental nature of innovation, the dynamic nature of pricing, and the broader organizational impact. Therefore, a number of initiatives need to be taken in order to facilitate the economic evaluation of MDs. First, the regulatory processes for MDs need to be strengthened and more closely aligned to the needs of economic evaluation. Second, the methods of economic evaluation need to be enhanced by improving the analysis of the available clinical data, establishing high-quality clinical registries, and better recognizing MDs’ specific characteristics. Third, the market entry and diffusion of MDs need to be better managed by understanding the key influences on MD diffusion and linking diffusion with cost-effectiveness evidence through the use of performance-based risk-sharing arrangements.
Jason M. Fletcher
Two interrelated advances in genetics have occurred which have ushered in the growing field of genoeconomics. The first is a rapid expansion of so-called big data featuring genetic information collected from large population-based samples. The second is enhancements to computational and predictive power to aggregate small genetic effects across the genome into single summary measures called polygenic scores (PGSs). Together, these advances will be incorporated broadly into economic research, with strong possibilities for new insights and methodological techniques.
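The aggregation step behind a polygenic score is simply a weighted sum of allele counts. The sketch below is a toy illustration: genotypes and effect-size weights are simulated, and real scores use GWAS-estimated weights over thousands to millions of variants rather than 500.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative polygenic score: per-person allele counts (0, 1, or 2 copies
# of the effect allele) weighted by per-variant effect sizes. All numbers
# here are simulated assumptions for the demo.
n_people, n_snps = 1000, 500
genotypes = rng.integers(0, 3, size=(n_people, n_snps))
weights = rng.normal(0, 0.01, n_snps)   # stand-in for GWAS effect estimates

pgs = genotypes @ weights                      # one summary score per person
pgs_std = (pgs - pgs.mean()) / pgs.std()       # standardized, as typically used
print(pgs_std.shape)
```

Collapsing the genome-wide signal into a single standardized regressor is what makes PGSs convenient for economic research: they enter a regression like any other individual covariate.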