Outcomes for individuals often depend on their age, period, and cohort, where cohort + age = period. An example is consumption: consumption patterns change with age, but the availability of products changes over time (the period), and this affects individuals of different birth years (the cohort) differently. Age-period-cohort models are linear models allowing different parameter values for each level of age, period, and cohort. Variants of the models are available for data aggregated over age, period, and cohort and for data from repeated cross-sections, where the time effects can be combined with individual covariates. The models could potentially be extended to panel data. It is common to plot the estimated age, period, and cohort effects and analyze them as time series. Further, it is also common to conduct inference on the inclusion of the different time effects and to use the models for forecasting, which involves extrapolating the time effects.
The age, period, and cohort time effects are intertwined. Specifically, including an indicator variable for each level of age, period, and cohort results in collinearity, which is referred to as the age-period-cohort identification problem. A first approach to addressing the collinearity is to leave out a suitable number of indicator variables; this creates difficulties for interpretation, inference, and forecasting in relation to the time effects. A second approach is the canonical parametrization, a freely varying parametrization that is invariant to the identification problem and therefore more amenable to interpretation, inference, and forecasting.
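As a minimal illustration of the identification problem (notation assumed here for concreteness, not taken from the article), write the linear predictor for age $a$, period $p$, and cohort $c$ as

$$ \mu_{a,c} = \alpha_a + \beta_p + \gamma_c, \qquad p = a + c. $$

Because $a - p + c = 0$, for any constant $s$ the transformed effects $\alpha_a + sa$, $\beta_p - sp$, $\gamma_c + sc$ produce exactly the same $\mu_{a,c}$: linear trends can be shifted freely among the three time effects, so only trend-free combinations of them, such as the second differences $\Delta^2 \alpha_a = \alpha_a - 2\alpha_{a-1} + \alpha_{a-2}$, are identified.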
Martin Karlsson, Tor Iversen, and Henning Øien
An open issue in the economics literature is whether health care expenditure (HCE) is so concentrated in the last years before death that the age profiles in spending will change when longevity increases. The seminal article “Ageing of Population and Health Care Expenditure: A Red Herring?” by Zweifel and colleagues argued that age is a distraction in explaining growth in HCE. The argument was based on the observation that age did not predict HCE after controlling for time to death (TTD). The authors were soon criticized for their use of a Heckman selection model in this context. Most of the recent literature uses variants of a two-part model and seems to give some role to age as well. Age seems to matter more for long-term care expenditures (LTCE) than for acute hospital care, and when disability is accounted for, the effects of age and TTD diminish. Few articles validate their approach by comparing the properties of different estimation models. To evaluate the models popular in the literature and to understand the divergent results of previous studies, an empirical analysis based on a claims data set from Germany is conducted. This analysis generates a number of useful insights. There is a significant age gradient in HCE, strongest for LTCE, and the costs of dying are substantial. These “costs of dying” have, however, a limited impact on the age gradient in HCE. These findings are interpreted as evidence against the “red herring” hypothesis as initially stated. The results indicate that the choice of estimation method makes little difference, and where results do differ, ordinary least squares regression tends to perform better than the alternatives. When the methods are validated out of sample and out of period, there is no evidence that including TTD leads to better predictions of aggregate future HCE. The literature might therefore benefit from focusing on the predictive power of the estimators rather than their in-sample fit.
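As background for the modeling debate above, here is a minimal sketch of a two-part model (not the authors' specification; the simulated data and covariate names are illustrative): a logit for whether any spending occurs, combined with a Gamma GLM for spending among users, so that predicted HCE is the product of the two parts.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Illustrative simulated data: age and a time-to-death (TTD) proxy.
n = 5000
age = rng.uniform(50, 95, n)
ttd = rng.uniform(0, 10, n)          # years until death
X = sm.add_constant(np.column_stack([age, ttd]))

# Simulate spending: many zeros, right-skewed positive amounts.
p_any = 1 / (1 + np.exp(-(-4 + 0.05 * age - 0.1 * ttd)))
any_spend = rng.uniform(size=n) < p_any
mu = np.exp(2 + 0.03 * age - 0.2 * ttd)
y = np.where(any_spend, rng.gamma(2.0, mu / 2.0), 0.0)

# Part 1: probability of any expenditure (logit).
part1 = sm.Logit((y > 0).astype(int), X).fit(disp=False)

# Part 2: level of expenditure among users (Gamma GLM, log link).
pos = y > 0
part2 = sm.GLM(y[pos], X[pos],
               family=sm.families.Gamma(link=sm.families.links.Log())).fit()

# Unconditional prediction: E[y] = Pr(y > 0) * E[y | y > 0].
y_hat = part1.predict(X) * part2.predict(X)
print("mean predicted HCE:", y_hat.mean(), "mean actual:", y.mean())
```

OLS on total spending, the comparison estimator mentioned above, would simply regress y on X directly.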
“Antitrust” or “competition law,” a set of policies now existing in most market economies, largely consists of two or three specific rules applied in more or less the same way in most nations. The law prohibits (1) multilateral agreements, (2) unilateral conduct, and (3) mergers or acquisitions, whenever any of them is judged to interfere unduly with the functioning of healthy markets. Most jurisdictions now apply, or purport to apply, these rules in the service of some notion of economic “efficiency,” more or less as defined in contemporary microeconomic theory.
The law has ancient roots, however, and over time it has varied a great deal in its details. Moreover, even as to its modern form, the policy and its goals remain controversial. In some sense most modern controversy arises from or is in reaction to the major intellectual reconceptualization of the law and its purposes that began in the 1960s. Specifically, academic critics in the United States urged revision of the law’s goals, such that it should serve only a narrowly defined microeconomic goal of allocational efficiency, whereas it had traditionally also sought to prevent accumulation of political power and to protect small firms, entrepreneurs, and individual liberty. While those critics enjoyed significant success in the United States, and to a somewhat lesser degree in Europe and elsewhere, the results remain contested. Specific disputes continue over the law’s general purpose, whether it poses net benefits, how a series of specific doctrines should be fashioned, how it should be enforced, and whether it really is appropriate for developing and small-market economies.
Silvia Miranda-Agrippino and Giovanni Ricco
Bayesian vector autoregressions (BVARs) are standard multivariate autoregressive models routinely used in empirical macroeconomics and finance for structural analysis, forecasting, and scenario analysis in an ever-growing number of applications.
A preeminent field of application of BVARs is forecasting. BVARs with informative priors have often proved superior to standard frequentist/flat-prior VARs. In fact, VARs are highly parametrized autoregressive models, whose number of parameters grows with the square of the number of variables times the number of lags included. Prior information, in the form of prior distributions on the model parameters, helps form sharper posterior distributions of the parameters, conditional on an observed sample. Hence, BVARs can be effective in reducing parameter uncertainty and improving forecast accuracy compared to standard frequentist/flat-prior VARs.
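To see the dimensionality problem concretely (a worked example, not figures from the article): an $n$-variable VAR with $p$ lags and an intercept has

$$ k = n\,(np + 1) $$

conditional-mean coefficients, plus $n(n+1)/2$ free elements in the error covariance matrix. A monthly model with $n = 20$ variables and $p = 12$ lags already has $k = 20 \times 241 = 4{,}820$ coefficients, far more than the length of typical macroeconomic samples.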
This feature in particular has favored the use of Bayesian techniques to address “big data” problems, in what is arguably one of the most active frontiers in the BVAR literature. Large-information BVARs have in fact proven to be valuable tools to handle empirical analysis in data-rich environments.
BVARs are also routinely employed to produce conditional forecasts and scenario analysis. Of particular interest to policy institutions, these applications permit evaluating the “counterfactual” time evolution of the variables of interest conditional on a predetermined path for some other variables, such as the path of interest rates over a certain horizon.
The “structural interpretation” of estimated VARs as the data generating process of the observed data requires the adoption of strict “identifying restrictions.” From a Bayesian perspective, such restrictions can be seen as dogmatic prior beliefs about some regions of the parameter space that determine the contemporaneous interactions among variables and for which the data are uninformative. More generally, Bayesian techniques offer a framework for structural analysis through priors that incorporate uncertainty about the identifying assumptions themselves.
Silvia Miranda-Agrippino and Giovanni Ricco
Vector autoregressions (VARs) are linear multivariate time-series models able to capture the joint dynamics of multiple time series. Bayesian inference treats the VAR parameters as random variables and provides a framework to estimate the “posterior” probability distribution of the model parameters by combining the information in a sample of observed data with prior information derived from a variety of sources, such as other macro or micro datasets, theoretical models, other macroeconomic phenomena, or introspection.
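In symbols (standard notation, assumed here for concreteness), an $n$-variable VAR with $p$ lags is

$$ y_t = c + A_1 y_{t-1} + \cdots + A_p y_{t-p} + \varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}(0, \Sigma), $$

and Bayesian inference combines the likelihood of the observed sample with the prior via Bayes' rule,

$$ p(A, \Sigma \mid y) \propto p(y \mid A, \Sigma)\, p(A, \Sigma). $$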
In empirical work in economics and finance, informative prior probability distributions are often adopted. These are intended to summarize stylized representations of the data generating process. For example, “Minnesota” priors, one of the most commonly adopted macroeconomic priors for the VAR coefficients, express the belief that an independent random-walk model for each variable in the system is a reasonable “center” for the beliefs about their time-series behavior. Other commonly adopted priors, the “single-unit-root” and the “sum-of-coefficients” priors, are used to enforce beliefs about relations among the VAR coefficients, such as the existence of co-integrating relationships among variables or of independent unit roots.
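A common textbook formulation of the Minnesota prior (the hyperparameter notation is assumed here, not taken from the article) centers each equation on a random walk and shrinks longer lags more tightly: for the coefficient on lag $\ell$ of variable $j$ in the equation for variable $i$,

$$ \mathbb{E}\big[(A_\ell)_{ij}\big] = \begin{cases} 1 & \text{if } i = j \text{ and } \ell = 1,\\ 0 & \text{otherwise,} \end{cases} \qquad \operatorname{Var}\big[(A_\ell)_{ij}\big] = \frac{\lambda^2}{\ell^2} \cdot \frac{\sigma_i^2}{\sigma_j^2}, $$

where $\lambda$ controls the overall tightness of the prior and the variance ratio $\sigma_i^2/\sigma_j^2$ adjusts for differences in the units and volatility of the variables.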
Priors for macroeconomic variables are often adopted as “conjugate prior distributions”—that is, distributions that yield a posterior distribution in the same family as the prior—in the form of Normal-Inverse-Wishart distributions, which are conjugate priors for the likelihood of a VAR with normally distributed disturbances. Conjugate priors allow direct sampling from the posterior distribution and fast estimation. When this is not possible, numerical techniques such as Gibbs and Metropolis-Hastings sampling algorithms are adopted.
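To make the direct-sampling step concrete, here is a minimal sketch of posterior simulation under Normal-Inverse-Wishart conjugacy, using the standard closed-form posterior for a VAR written as the multivariate regression Y = XB + E. The simulated data and loose prior settings are illustrative assumptions, not the article's:

```python
import numpy as np
from scipy.stats import invwishart, matrix_normal

rng = np.random.default_rng(0)

# Illustrative data: T observations of an n-variable VAR(1), Y = X B + E.
T, n = 200, 3
Y_raw = rng.standard_normal((T + 1, n)).cumsum(axis=0)   # toy persistent series
Y, X = Y_raw[1:], np.column_stack([np.ones(T), Y_raw[:-1]])
k = X.shape[1]                                           # intercept + n lags

# Normal-Inverse-Wishart prior: B | Sigma is matrix normal, Sigma ~ IW(S0, nu0).
B0 = np.zeros((k, n))              # prior mean of coefficients
Omega0 = 10.0 * np.eye(k)          # prior row covariance (loose)
S0, nu0 = np.eye(n), n + 2         # inverse-Wishart scale and degrees of freedom

# Conjugate posterior (standard closed forms).
Omega0_inv = np.linalg.inv(Omega0)
Omega_bar = np.linalg.inv(Omega0_inv + X.T @ X)
B_bar = Omega_bar @ (Omega0_inv @ B0 + X.T @ Y)
nu_bar = nu0 + T
S_bar = (S0 + Y.T @ Y + B0.T @ Omega0_inv @ B0
         - B_bar.T @ (Omega0_inv + X.T @ X) @ B_bar)

# Direct Monte Carlo sampling: draw Sigma, then B given Sigma.
draws = []
for _ in range(1000):
    Sigma = invwishart.rvs(df=nu_bar, scale=S_bar, random_state=rng)
    B = matrix_normal.rvs(mean=B_bar, rowcov=Omega_bar, colcov=Sigma,
                          random_state=rng)
    draws.append(B)
print("posterior mean of B:\n", np.mean(draws, axis=0))
```

With a conjugate prior each draw is exact and independent; Gibbs or Metropolis-Hastings steps are needed only when the prior breaks this conjugacy.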
Bayesian techniques allow for the estimation of an ever-expanding class of sophisticated autoregressive models: conventional fixed-parameter VAR models; large VARs incorporating hundreds of variables; panel VARs, which permit analyzing the joint dynamics of multiple time series of heterogeneous and interacting units; and VAR models that relax the assumption of fixed coefficients, such as time-varying-parameter, threshold, and Markov-switching VARs.
Matteo M. Galizzi and Daniel Wiesen
The state-of-the-art literature at the interface between experimental and behavioral economics and health economics is reviewed by identifying and discussing 10 areas of potential debate about behavioral experiments in health. In doing so, the review covers the different streams and areas of application of this growing field, discusses the significant questions that remain open, and highlights the rationale and scope for the further development of behavioral experiments in health in the years to come.
Cristina Bellés-Obrero and Judit Vall Castello
The impact of macroeconomic fluctuations on health and mortality rates has been studied extensively in economics. Many studies, using fixed-effects models, find that mortality is procyclical in many countries, such as the United States, Germany, Spain, France, Pacific-Asian nations, Mexico, and Canada. On the other hand, a smaller number of studies find that mortality decreases during economic expansions. Differences in social insurance systems and labor market institutions across countries may explain some of the disparities found in the literature. Studies examining the effects of more recent recessions are less conclusive, finding mortality to be less procyclical, or even countercyclical. This new finding could be explained by changes over time in the mechanisms linking business cycle conditions and mortality.
A related strand of the literature has focused on understanding the effect of economic fluctuations on infant health at birth and/or child mortality. While infant mortality is found to be procyclical in countries like the United States and Spain, the opposite is found in developing countries.
Even though the association between business cycle conditions and mortality has been extensively documented, a much stronger effort is needed to understand the mechanisms behind the relationship between business cycle conditions and health. Many studies have examined the association between macroeconomic fluctuations and smoking, drinking, weight disorders, eating habits, and physical activity, although results are rather mixed. The only well-established finding is that mental health deteriorates during economic slowdowns.
An important challenge is that comparing the main results across studies proves complicated due to the variety of empirical methods and time spans used. Furthermore, estimates have been found to be sensitive to the use of different levels of geographic aggregation, model specifications, and proxies of macroeconomic fluctuations.
Diane McIntyre, Amarech G. Obse, Edwine W. Barasa, and John E. Ataguba
Within the context of the Sustainable Development Goals, it is important to critically review research on healthcare financing in sub-Saharan Africa (SSA) from the perspective of the universal health coverage (UHC) goals of financial protection and access to quality health services for all. There is a concerning reliance on direct out-of-pocket payments in many SSA countries, accounting for an average of 36% of current health expenditure compared to only 22% in the rest of the world. Health insurance contributions, whether voluntary or mandatory, account for a small share of current health expenditure. While domestic mandatory prepayment mechanisms (tax and mandatory insurance) are the next-largest category of healthcare financing in SSA (35%), a relatively large share of funding in SSA (14%, compared to less than 1% in the rest of the world) comes from external sources, which are sometimes unstable. There is growing recognition of the need to reduce out-of-pocket payments and increase domestic mandatory prepayment financing to move towards UHC. Many SSA countries have declared a preference for achieving this through contributory health insurance schemes, particularly for formal sector workers, with service entitlements tied to contributions. Policy debates about whether a contributory approach is the most efficient, equitable, and sustainable means of financing progress to UHC are emotive and infused with “conventional wisdom.” A range of research questions must be addressed to provide a more comprehensive empirical evidence base for these debates and to support progress to UHC.
In many countries of the world, consumers choose their health insurance coverage from a large menu of often complex options supplied by private insurance companies. Economic benefits of the wide choice of health insurance options depend on the extent to which the consumers are active, well informed, and sophisticated decision makers capable of choosing plans that are well-suited to their individual circumstances.
There are many ways in which consumers’ actual decision making in the health insurance domain can depart from the standard model of health insurance demand of a rational, risk-averse consumer. For example, consumers can hold inaccurate subjective beliefs about the characteristics of alternative plans in their choice set, or about the distribution of health expenditure risk, because of cognitive or informational constraints; or they can rely on heuristics when the plan choice problem features a large number of options with complex cost-sharing designs.
The second decade of the 21st century has seen a burgeoning number of studies assessing the quality of consumer choices of health insurance, both in the lab and in the field, and financial and welfare consequences of poor choices in this context. These studies demonstrate that consumers often find it difficult to make efficient choices of private health insurance due to reasons such as inertia, misinformation, and the lack of basic insurance literacy. These findings challenge the conventional rationality assumptions of the standard economic model of insurance choice and call for policies that can enhance the quality of consumer choices in the health insurance domain.
In the wake of the 2008 financial collapse, clearinghouses have emerged as critical players in the implementation of the post-crisis regulatory reform agenda. Recognizing serious shortcomings in the design of the over-the-counter derivatives market for swaps, regulators are now relying on clearinghouses to cure these deficiencies by taking on a central role in mitigating the risks of these instruments. Rather than leave trading firms to manage the risks of transacting in swaps privately, as was largely the case prior to 2008, post-crisis regulation requires that clearinghouses assume responsibility for ensuring that trades are properly settled, reported to authorities, and supported by strong cushions of protective collateral. With clearinghouses effectively guaranteeing that the terms of a trade will be honored—even if one of the trading parties cannot perform—the market can operate with reduced levels of counterparty risk, opacity, and the threat of systemic collapse brought on by recklessness and over-complexity.
But despite their obvious benefits for regulators, clearinghouses also pose risks of their own. First, given their deepening significance for market stability, ensuring that clearinghouses themselves operate safely is a matter of the highest policy priority. Yet overseeing clearinghouses is far from easy, and understanding what works best to undergird their safe operation can be a contentious and uncertain matter. U.S. and EU authorities, for example, have diverged in important ways on what rules should apply to the workings of international clearinghouses. Second, clearinghouse oversight is critical because these institutions now warehouse enormous levels of counterparty risk. By promising counterparties across the market that their trades will settle as agreed, even if one or the other firm goes bust, clearinghouses assume almost inconceivably large and complicated risks within their institutions. For swaps in particular—whose obligations can last for months, or even years—the scale of these risks can be far more extensive than that entailed in a one-off sale of a stock or bond. In this way, commentators note that by becoming the go-to bulwark against risk-taking and its spread in the financial system, clearinghouses have themselves become the too-big-to-fail institution par excellence.
The cointegrated VAR (CVAR) approach combines differences of variables with cointegration among them and, by doing so, allows the user to study both long-run and short-run effects in the same model. The CVAR describes an economic system where variables have been pushed away from long-run equilibria by exogenous shocks (the pushing forces) and where short-run adjustment forces pull them back toward long-run equilibria (the pulling forces). In this model framework, basic assumptions underlying a theory model can be translated into testable hypotheses on the order of integration and cointegration of key variables and their relationships. The set of hypotheses describes the empirical regularities we would expect to see in the data if the long-run properties of a theory model are empirically relevant.
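In its standard vector-equilibrium-correction form (standard notation, assumed here), the CVAR for a vector of variables $x_t$ is

$$ \Delta x_t = \alpha \beta' x_{t-1} + \sum_{i=1}^{k-1} \Gamma_i \Delta x_{t-i} + \mu + \varepsilon_t, $$

where the cointegrating relations $\beta' x_{t-1}$ define the long-run equilibria, the adjustment coefficients $\alpha$ capture the pulling forces that restore them, and the cumulated shocks $\varepsilon_t$ generate the stochastic trends that push the system away from them.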
A patent is a legal right to exclude, granted by the state to the inventor of a novel and useful invention. Much legal ink has been spilled on the meaning of these terms. “Novel” means that the invention has not been anticipated in the prior art before its creation by the inventor. “Useful” means that the invention has a practical application. The words “inventor” and “invention” are also legal terms of art. An invention is a work that advances a particular field, moving practitioners forward not simply through accretions of knowledge but through concrete implementations. An inventor is someone who contributes to an invention, either as an individual or as part of a team. The exclusive right, finally, is not granted gratuitously. The inventor must apply for the patent and have the invention undergo a review process. Furthermore, the price of the grant is full and clear disclosure by the inventor of how to practice the invention. The public can use this disclosure once the patent expires, or during the patent's term under a license.
These institutional details are common features of all patent systems. What is interesting is the economic justification for patents. As a property right, a patent resolves certain externality problems that arise in markets for knowledge. The establishment of property rights allows for trade in the invention and the dissemination of knowledge. However, the economic case for property rights is made complex because of the institutional need to apply for a patent. While in theory, patent grants could be automatic, inventions must meet certain standards for the grant to be justified. These procedural hurdles create possibilities for gamesmanship in how property rights are allocated.
Furthermore, even if granted correctly, property rights can become murky because of the problems of enforcement through litigation. Courts must determine when an invention has been used, made, or sold without permission by a third party in violation of the rights of the patent owner. This legal process can lead to gamesmanship as patent owners try to force settlements from alleged infringers. Meanwhile, third parties may act opportunistically to take advantage of the uncertain boundaries of patent rights and engage in undetectable infringement. Exacerbating these tendencies are the difficulties in determining damages and the possibility of injunctive relief.
Some caution against these criticisms by observing that most patents are never enforced. In fact, most granted patents turn out to be worthless when gauged by commercial value. But worthless patents still have potential litigation value. While a patent owner might view a worthless patent as a sunk cost, there is an incentive to recoup the investment by selling worthless patents to parties willing to assume the risk of litigation. Hence the phenomenon of “trolling,” or the rise of non-practicing entities, troubles the patent landscape. This phenomenon gives rise to concerns about the anticompetitive uses of patents, demonstrating the need for some limitations on patent enforcement.
With all the policy concerns arising from patents, it is no surprise that patent law has been ripe for reform. Economic analysis can inform these reform efforts by identifying ways in which patents fail to create a vibrant market for inventions. Appreciation of the political economy of patents invites a rich academic and policy debate over the direction of patent law.
Michael P. Clements and Ana Beatriz Galvão
At a given point in time, a forecaster will have access to data on macroeconomic variables that have been subject to different numbers of rounds of revision, leading to varying degrees of data maturity. Observations referring to the very recent past will be first-release data or data that have so far been revised only a few times; observations referring to a decade ago will typically have been revised many times. How should the forecaster use these data to generate forecasts of the future? The conventional approach is to estimate the forecasting model on the latest vintage of data available at the time, implicitly ignoring the differences in data maturity across observations.
The conventional approach to real-time forecasting thus treats the data as given, that is, it ignores the fact that they will be revised. In some cases the cost of this approach is point predictions and assessments of forecast uncertainty that are less accurate than those from approaches that explicitly allow for data revisions. There are several ways to “allow for data revisions,” including modeling the data revisions explicitly, an agnostic or reduced-form approach, and using only largely unrevised data. The choice of method partly depends on whether the aim is to forecast an earlier release or the fully revised values.
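To make “data maturity” concrete, a real-time data set can be arranged as a reference-period-by-vintage table. The sketch below (toy numbers; labels and magnitudes are purely illustrative) extracts the first-release series and the latest-vintage series, the latter being what the conventional approach conditions on:

```python
import numpy as np
import pandas as pd

# Toy real-time table: rows = reference quarters, columns = data vintages.
# Entry (q, v) is the estimate of quarter q's growth published in vintage v;
# NaN means quarter q had not yet been observed when vintage v was released.
vintages = pd.DataFrame(
    {"2023Q2": [0.5, np.nan, np.nan],
     "2023Q3": [0.7, 0.4,    np.nan],
     "2023Q4": [0.6, 0.5,    0.8]},
    index=["2023Q1", "2023Q2", "2023Q3"],  # reference quarters
)

# First release: the earliest published figure for each reference quarter.
first_release = vintages.apply(lambda row: row.dropna().iloc[0], axis=1)

# Latest vintage: the most recent figure, mixing data of all maturities.
latest_vintage = vintages.apply(lambda row: row.dropna().iloc[-1], axis=1)

print(pd.DataFrame({"first_release": first_release,
                    "latest_vintage": latest_vintage}))
```

The latest vintage mixes maturities: in the toy table the oldest quarter has been revised twice, while the newest observation is still a first release.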
Denzil G. Fiebig and Hong Il Yoo
Stated preference methods are used to collect individual-level data on what respondents say they would do when faced with a hypothetical but realistic situation. The hypothetical nature of the data has long been a source of concern among researchers, as such data stand in contrast to revealed preference data, which record the choices made by individuals in actual market situations. But there is considerable support for stated preference methods: they are a cost-effective means of generating data that can be specifically tailored to a research question, and in some cases, such as gauging preferences for a new product or non-market good, there may be no practical alternative source of data. While stated preference data come in many forms, the primary focus in this article will be data generated by discrete choice experiments, and thus the econometric methods will be those associated with modeling binary and multinomial choices with panel data.
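As a minimal illustration of the workhorse econometric model for such data, the sketch below estimates a conditional (multinomial) logit by maximum likelihood on simulated choice-experiment data; the design, variable names, and parameter values are assumptions made for the sketch, not taken from the article:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulated discrete choice experiment: N choice sets, J alternatives,
# K alternative-specific attributes (e.g., price, quality).
N, J, K = 1000, 3, 2
X = rng.standard_normal((N, J, K))
beta_true = np.array([-1.0, 0.5])

# Utility = X'beta + Gumbel error -> multinomial logit choice probabilities.
util = X @ beta_true + rng.gumbel(size=(N, J))
choice = util.argmax(axis=1)

def neg_loglik(beta):
    v = X @ beta                                   # systematic utilities
    v -= v.max(axis=1, keepdims=True)              # numerical stability
    log_p = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -log_p[np.arange(N), choice].sum()

res = minimize(neg_loglik, x0=np.zeros(K), method="BFGS")
print("estimated beta:", res.x)   # should be close to beta_true
```

Richer models used with panel data from discrete choice experiments, such as the mixed logit, extend this likelihood by letting the coefficients vary randomly across respondents.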
Michael Drummond, Rosanna Tarricone, and Aleksandra Torbica
There are a number of challenges in the economic evaluation of medical devices (MDs). They are typically less regulated than pharmaceuticals, and the clinical evidence requirements for market authorization are generally lower. There are also specific characteristics of MDs, such as the device–user interaction (learning curve), the incremental nature of innovation, the dynamic nature of pricing, and the broader organizational impact. Therefore, a number of initiatives need to be taken in order to facilitate the economic evaluation of MDs. First, the regulatory processes for MDs need to be strengthened and more closely aligned to the needs of economic evaluation. Second, the methods of economic evaluation need to be enhanced by improving the analysis of the available clinical data, establishing high-quality clinical registries, and better recognizing MDs’ specific characteristics. Third, the market entry and diffusion of MDs need to be better managed by understanding the key influences on MD diffusion and linking diffusion with cost-effectiveness evidence through the use of performance-based risk-sharing arrangements.
Anthony J. Venables
Economic activity is unevenly distributed across space, both internationally and within countries. What determines this spatial distribution, and how is it shaped by trade? Classical trade theory gives the insights of comparative advantage and gains from trade but is firmly aspatial, modeling countries as points and trade (in goods and factors of production) as either perfectly frictionless or impossible. Modern theory places this in a spatial context in which geographical considerations influence the volume of trade between places. Gravity models tell us that distance is important, with each doubling of distance between places halving the volume of trade. Modeling the location decisions of firms gives a theory of location of activity based on factor costs (as in classical theory) and also on proximity to markets, proximity to suppliers, and the extent of competition in each market. It follows from this that—if there is a high degree of mobility—firms and economic activity as a whole may tend to cluster, providing an explanation of observed spatial unevenness. In some circumstances falling trade barriers may trigger the deindustrialization of some areas as activity clusters in fewer places. In other circumstances falling barriers may enable activity to spread out, reducing inequalities within and between countries. Research over the past several decades has established the mechanisms that cause these changes and placed them in full general equilibrium models of the economy. Empirical work has quantified many of the important relationships. However, geography and trade remains an area where progress is needed to develop robust tools that can be used to inform place-based policies (concerning trade, transport, infrastructure, and local economic development), particularly in view of the huge expenditures that such policies incur.
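The gravity relationship mentioned above is commonly written (standard textbook notation, assumed here) as

$$ T_{ij} = G\,\frac{Y_i\,Y_j}{D_{ij}^{\,\theta}}, $$

where $T_{ij}$ is trade between places $i$ and $j$, $Y_i$ and $Y_j$ are their economic sizes, $D_{ij}$ is the distance between them, and $\theta$ is the distance elasticity; the statement that each doubling of distance halves trade corresponds to $\theta \approx 1$.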
Antony W. Dnes
Economists increasingly connect legal changes to behavioral responses that many family law experts fail to see. Incentives matter in families, which respond to changes in legal regulation. Changing incentive structures linked to family law have mainly affected marriage, cohabitation, and divorce. Economic analysis has been applied to assess the causes of falling marriage rates and delays in marriage. Much analysis has focused on increases in divorce rates, which appear to respond to legal changes making divorce easier and to different settlement regimes. Less work has been done in relation to children, but some research shows how children are affected by changes in incentives affecting adults.
Jason M. Fletcher
Two interrelated advances in genetics have ushered in the growing field of genoeconomics. The first is the rapid expansion of so-called big data featuring genetic information collected from large population-based samples. The second is enhanced computational and predictive power to aggregate small genetic effects across the genome into single summary measures called polygenic scores (PGSs). Together, these advances will be incorporated broadly into economic research, with strong possibilities for new insights and methodological techniques.
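Mechanically, a polygenic score is a weighted sum of genotypes, with weights taken from a genome-wide association study (GWAS). The following sketch uses simulated genotypes and weights; all names and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated genotype matrix: N individuals x M variants (SNPs),
# coded as 0, 1, or 2 copies of the effect allele.
N, M = 1000, 5000
genotypes = rng.integers(0, 3, size=(N, M))

# GWAS effect sizes: many tiny effects spread across the genome.
gwas_weights = rng.normal(0.0, 0.01, size=M)

# Polygenic score: aggregate the small effects into one summary measure,
# then standardize for use as a regressor in economic models.
pgs = genotypes @ gwas_weights
pgs = (pgs - pgs.mean()) / pgs.std()

print("PGS for first five individuals:", pgs[:5].round(2))
```

In applications the weights come from published GWAS results rather than being simulated, and the standardized score enters regressions like any other covariate.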
Despite the drop in transport and commuting costs since the mid-19th century, sizable and lasting differences across locations at very different spatial scales remain the most striking feature of the space-economy. The main challenges of the economics of agglomeration are therefore (a) to explain why people and economic activities are agglomerated in a few places and (b) to understand why some places fare better than others.
To meet these challenges, the usual route is to appeal to the fundamental trade-off between (internal and external) increasing returns and various mobility costs. This trade-off has a major implication for the organization of the space-economy: High transport and commuting costs foster the dispersion of economic activities, while strong increasing returns act as a powerful agglomeration force.
The first issue is to explain the existence of large and persistent regional disparities within nations or continents. At that spatial scale, the mobility of commodities and production factors is critical. By combining new trade theories with the mobility of firms and workers, economic geography shows that a core-periphery structure can emerge as a stable market outcome.
Second, at the urban scale, cities stem from the interplay between agglomeration and dispersion forces: The former explain why firms and consumers want to be close to each other whereas the latter put an upper limit on city sizes. Housing and commuting costs, which increase with population size, are the most natural candidates for the dispersion force. What generates agglomeration forces is less obvious. The literature on urban economics has highlighted the fact that urban size is the source of various benefits, which increase firm productivity and consumer welfare.
Within cities, agglomeration also occurs in the form of shopping districts where firms selling differentiated products congregate. Strategic location considerations and product differentiation play a central role in the emergence of commercial districts because firms compete with a small number of close retailers.
Ya-Chen Tina Shih
The goal of cancer prevention and control is to reduce cancer risk, morbidity, and mortality through transdisciplinary collaborations across biomedical, behavioral, and social sciences. Risk reduction, early detection, and timely treatment are the rationales behind policy efforts to promote cancer prevention. Economics makes three important contributions to cancer prevention and control research. Firstly, research built upon the human capital model by Grossman and the insurance model by Ehrlich and Becker offers solid theoretical foundations to study human behaviors related to preventive care. Secondly, economic evaluation provides useful analytical tools to assess the “cancer premium” (through the stated preference research approach) and to identify the optimal screening strategy (through cost-effectiveness analysis). Lastly, the rich set of quantitative methods in applied economics contributes to the estimation of the relative contribution of prevention versus treatment in the reduction of cancer mortality and the evaluation of the impact of guidelines to regulate screening practices or policy initiatives to promote cancer screening.
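The cost-effectiveness logic referenced above is usually summarized (standard notation, assumed here) by the incremental cost-effectiveness ratio comparing a screening strategy with its comparator,

$$ \mathrm{ICER} = \frac{C_1 - C_0}{E_1 - E_0}, $$

where $C$ and $E$ denote expected costs and health effects (e.g., quality-adjusted life years); a strategy is deemed cost-effective when its ICER falls below the decision maker's willingness-to-pay threshold.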