1–20 of 75 Results for: Econometrics, Experimental and Quantitative Methods

Article

Age-Period-Cohort Models  

Zoë Fannon and Bent Nielsen

Outcomes of interest often depend on the age, period, or cohort of the individual observed, where cohort and age add up to period. An example is consumption: consumption patterns change over the lifecycle (age) but are also affected by the availability of products at different times (period) and by birth-cohort-specific habits and preferences (cohort). Age-period-cohort (APC) models are additive models where the predictor is a sum of three time effects, which are functions of age, period, and cohort, respectively. Variations of these models are available for data aggregated over age, period, and cohort, and for data drawn from repeated cross-sections, where the time effects can be combined with individual covariates. The age, period, and cohort time effects are intertwined. Inclusion of an indicator variable for each level of age, period, and cohort results in perfect collinearity, which is referred to as “the age-period-cohort identification problem.” Estimation can be done by dropping some indicator variables, but dropping indicators has adverse consequences: the time effects are no longer individually interpretable, and inference becomes complicated. These consequences are avoided by instead decomposing the time effects into linear and non-linear components and noting that the identification problem relates to the linear components, whereas the non-linear components are identifiable. Thus, confusion is avoided by keeping the identifiable non-linear components of the time effects and the unidentifiable linear components apart. A variety of hypotheses of practical interest can be expressed in terms of the non-linear components.
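As a schematic illustration of the structure just described (the notation below is assumed for exposition, not taken from the article), the additive predictor and the source of the identification problem can be written as follows: adding a linear trend to one time effect and offsetting it in the others leaves the predictor unchanged, whereas the non-linear (second-difference) components are unaffected.

```latex
% Additive APC predictor with age a, period p, cohort c, where p = a + c
\mu_{a,c} = \alpha_a + \beta_p + \gamma_c , \qquad p = a + c .
% For any slope \delta, the reparametrization
%   \alpha_a \mapsto \alpha_a + \delta a, \quad
%   \beta_p  \mapsto \beta_p  - \delta p, \quad
%   \gamma_c \mapsto \gamma_c + \delta c
% leaves \mu_{a,c} unchanged because \delta(a - p + c) = 0:
% the linear components are unidentified, while second differences such as
%   \Delta^2 \alpha_a = \alpha_a - 2\alpha_{a-1} + \alpha_{a-2}
% are invariant to the transformation and hence identifiable.
```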

Article

Aging and Healthcare Costs  

Martin Karlsson, Tor Iversen, and Henning Øien

An open issue in the economics literature is whether healthcare expenditure (HCE) is so concentrated in the last years before death that the age profiles in spending will change when longevity increases. The seminal article “Ageing of Population and Health Care Expenditure: A Red Herring?” by Zweifel and colleagues argued that age is a distraction in explaining growth in HCE. The argument was based on the observation that age did not predict HCE after controlling for time to death (TTD). The authors were soon criticized for the use of a Heckman selection model in this context. Most of the recent literature makes use of variants of a two-part model and seems to give some role to age as well in the explanation. Age seems to matter more for long-term care expenditures (LTCE) than for acute hospital care. When disability is accounted for, the effects of age and TTD diminish. Not many articles validate their approach by comparing properties of different estimation models. In order to evaluate popular models used in the literature and to gain an understanding of the divergent results of previous studies, an empirical analysis based on a claims data set from Germany is conducted. This analysis generates a number of useful insights. There is a significant age gradient in HCE, most pronounced for LTCE, and costs of dying are substantial. These “costs of dying” have, however, a limited impact on the age gradient in HCE. These findings are interpreted as evidence against the red herring hypothesis as initially stated. The results indicate that the choice of estimation method makes little difference, and when results do differ, ordinary least squares regression tends to perform better than the alternatives. When validating the methods out of sample and out of period, there is no evidence that including TTD leads to better predictions of aggregate future HCE. It appears that the literature might benefit from focusing on the predictive power of the estimators instead of their actual fit to the data within the sample.
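To make the modeling choices concrete, here is a minimal Python sketch (on simulated data) of a generic two-part model of the kind referred to above, with age and time to death as regressors; the variable names, coefficients, and data-generating assumptions are illustrative only, not the specification used in the article.

```python
# Two-part model sketch: part 1 models whether any healthcare expenditure (HCE)
# is incurred, part 2 models log expenditure conditional on positive spending,
# with age and time-to-death (TTD) as regressors. All numbers are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000
age = rng.uniform(50, 95, n)
ttd = rng.uniform(0, 10, n)                   # years until death
x = sm.add_constant(np.column_stack([age, ttd]))

# simulate: spending more likely and higher when old and close to death
p_any = 1 / (1 + np.exp(-(-2 + 0.04 * age - 0.15 * ttd)))
any_hce = rng.binomial(1, p_any)
log_hce = 5 + 0.02 * age - 0.10 * ttd + rng.normal(0, 1, n)

part1 = sm.Logit(any_hce, x).fit(disp=False)                   # extensive margin
part2 = sm.OLS(log_hce[any_hce == 1], x[any_hce == 1]).fit()   # intensive margin

print(part1.params, part2.params, sep="\n")
```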

Article

Applications of Web Scraping in Economics and Finance  

Piotr Śpiewanowski, Oleksandr Talavera, and Linh Vi

The 21st-century economy is increasingly built around data. Firms and individuals upload and store enormous amounts of data. Most of the produced data is stored on private servers, but a considerable part is made publicly available across the 1.83 billion websites available online. These data can be accessed by researchers using web-scraping techniques. Web scraping refers to the process of collecting data from web pages either manually or using automation tools or specialized software. Web scraping is possible and relatively simple thanks to the regular structure of the code used for websites designed to be displayed in web browsers. Websites built with HTML can be scraped using standard text-mining tools, either with scripts in popular (statistical) programming languages such as Python, Stata, or R, or with stand-alone dedicated web-scraping tools. Some of those tools do not even require any prior programming skills. Since about 2010, with the omnipresence of social and economic activities on the Internet, web scraping has become increasingly popular among academic researchers. In contrast to proprietary data, which might not be affordable due to substantial costs, web scraping can make interesting data sources accessible to everyone. Thanks to web scraping, data are now available in real time and with significantly more detail than what has traditionally been offered by statistical offices or commercial data vendors. In fact, many statistical offices have started using web-scraped data, for example, for calculating price indices. Data collected through web scraping have been used in numerous economics and finance projects and can easily complement traditional data sources.
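As a concrete illustration, the following is a minimal Python web-scraping sketch of the kind described above, using the widely available requests and BeautifulSoup libraries; the URL and the page structure (table and class names) are placeholders for illustration only.

```python
# Download an HTML page and extract posted prices from a table.
# The URL and CSS selectors below are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup

url = "https://example.com/products"          # hypothetical page
html = requests.get(url, timeout=30).text
soup = BeautifulSoup(html, "html.parser")

prices = []
for row in soup.select("table.prices tr"):    # assumed page structure
    cells = [td.get_text(strip=True) for td in row.find_all("td")]
    if len(cells) == 2:                       # (product name, price)
        prices.append((cells[0], cells[1]))

print(prices[:10])
```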

Article

A Survey of Econometric Approaches to Convergence Tests of Emissions and Measures of Environmental Quality  

Junsoo Lee, James E. Payne, and Md. Towhidul Islam

The analysis of convergence behavior with respect to emissions and measures of environmental quality can be categorized into four types of tests: absolute and conditional β-convergence, σ-convergence, club convergence, and stochastic convergence. In the context of emissions, absolute β-convergence occurs when countries with high initial levels of emissions have a lower emission growth rate than countries with low initial levels of emissions. Conditional β-convergence allows for possible differences among countries through the inclusion of exogenous variables to capture country-specific effects. Given that absolute and conditional β-convergence do not account for the dynamics of the growth process, which can potentially lead to dynamic panel data bias, σ-convergence evaluates the dynamics and intradistributional aspects of emissions to determine whether the cross-section variance of emissions decreases over time. The more recent club convergence approach tests the decline in the cross-sectional variation in emissions among countries over time and whether heterogeneous time-varying idiosyncratic components converge over time after controlling for a common growth component in emissions among countries. In essence, the club convergence approach evaluates both conditional σ- and β-convergence within a panel framework. Finally, stochastic convergence examines the time series behavior of a country’s emissions relative to another country or group of countries. Using univariate or panel unit root/stationarity tests, stochastic convergence is present if relative emissions, defined as the log of emissions for a particular country relative to another country or group of countries, is trend-stationary. The majority of the empirical literature analyzes carbon dioxide emissions and varies in terms of both the convergence tests deployed and the results. While the results supportive of emissions convergence for large global country coverage are limited, empirical studies that focus on country groupings defined by income classification, geographic region, or institutional structure (i.e., EU, OECD, etc.) are more likely to provide support for emissions convergence. The vast majority of studies have relied on tests of stochastic convergence, with tests of σ-convergence and the distributional dynamics of emissions used less often. With respect to tests of stochastic convergence, an alternative testing procedure that accounts for structural breaks and cross-correlations simultaneously is presented. Using data for OECD countries, the results based on the inclusion of both structural breaks and cross-correlations through a factor structure provide less support for stochastic convergence when compared to unit root tests with the inclusion of structural breaks alone. Future studies should focus more on other air pollutants, including greenhouse gas emissions and their components, as well as expand the range of geographical regions analyzed and conduct more robust analysis of the various types of convergence tests to render a more comprehensive view of convergence behavior. The examination of convergence through the use of eco-efficiency indicators that capture both the environmental and economic effects of production may be more fruitful in contributing to the debate on mitigation strategies and allocation mechanisms.
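As a hedged sketch of the stochastic-convergence test described above, the following Python snippet applies an augmented Dickey-Fuller test with a constant and trend to simulated log relative emissions; the data, country setup, and lag choices are illustrative only.

```python
# Stochastic convergence check: test log relative emissions (a country's log
# emissions minus a group average) for trend-stationarity with an ADF test.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
T = 60                                                # years of data
group_mean = np.cumsum(rng.normal(0.02, 0.05, T))     # log emissions, group
country = group_mean + 0.3 + rng.normal(0, 0.05, T)   # converging country

rel_emissions = country - group_mean                  # log relative emissions
stat, pval, *_ = adfuller(rel_emissions, regression="ct")  # constant + trend
print(f"ADF statistic = {stat:.2f}, p-value = {pval:.3f}")
# Rejecting the unit root (small p-value) is evidence of trend-stationary
# relative emissions, i.e., stochastic convergence.
```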

Article

Bayesian Vector Autoregressions: Applications  

Silvia Miranda-Agrippino and Giovanni Ricco

Bayesian vector autoregressions (BVARs) are standard multivariate autoregressive models routinely used in empirical macroeconomics and finance for structural analysis, forecasting, and scenario analysis in an ever-growing number of applications. A preeminent field of application of BVARs is forecasting. BVARs with informative priors have often proved to be superior tools compared to standard frequentist/flat-prior VARs. In fact, VARs are highly parametrized autoregressive models, whose number of parameters grows with the square of the number of variables times the number of lags included. Prior information, in the form of prior distributions on the model parameters, helps in forming sharper posterior distributions of parameters, conditional on an observed sample. Hence, BVARs can be effective in reducing parameter uncertainty and improving forecast accuracy compared to standard frequentist/flat-prior VARs. This feature in particular has favored the use of Bayesian techniques to address “big data” problems, in what is arguably one of the most active frontiers in the BVAR literature. Large-information BVARs have in fact proven to be valuable tools to handle empirical analysis in data-rich environments. BVARs are also routinely employed to produce conditional forecasts and scenario analysis. Of particular interest for policy institutions, these applications permit evaluating the “counterfactual” time evolution of the variables of interest conditional on a predetermined path for some other variables, such as the path of interest rates over a certain horizon. The “structural interpretation” of estimated VARs as the data generating process of the observed data requires the adoption of strict “identifying restrictions.” From a Bayesian perspective, such restrictions can be seen as dogmatic prior beliefs about some regions of the parameter space that determine the contemporaneous interactions among variables and for which the data are uninformative. More generally, Bayesian techniques offer a framework for structural analysis through priors that incorporate uncertainty about the identifying assumptions themselves.
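A back-of-the-envelope calculation illustrates the dimensionality point: with n variables, p lags, and an intercept, each VAR equation has np + 1 conditional-mean coefficients, so the system has n(np + 1) of them plus n(n + 1)/2 free covariance terms. The short sketch below (numbers purely illustrative) makes the count explicit.

```python
# Count the free parameters of an n-variable VAR(p) with intercepts.
def var_param_count(n_vars: int, n_lags: int) -> int:
    mean_coeffs = n_vars * (n_vars * n_lags + 1)   # lag matrices + intercepts
    cov_terms = n_vars * (n_vars + 1) // 2         # symmetric covariance matrix
    return mean_coeffs + cov_terms

for n in (3, 7, 20):          # small, medium, "large-information" VAR
    print(n, "variables, 4 lags:", var_param_count(n, 4), "parameters")
```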

Article

Bayesian Vector Autoregressions: Estimation  

Silvia Miranda-Agrippino and Giovanni Ricco

Vector autoregressions (VARs) are linear multivariate time-series models able to capture the joint dynamics of multiple time series. Bayesian inference treats the VAR parameters as random variables, and it provides a framework to estimate the “posterior” probability distribution of the model parameters by combining information provided by a sample of observed data and prior information derived from a variety of sources, such as other macro or micro datasets, theoretical models, other macroeconomic phenomena, or introspection. In empirical work in economics and finance, informative prior probability distributions are often adopted. These are intended to summarize stylized representations of the data generating process. For example, “Minnesota” priors, one of the most commonly adopted macroeconomic priors for the VAR coefficients, express the belief that an independent random-walk model for each variable in the system is a reasonable “center” for the beliefs about their time-series behavior. Other commonly adopted priors, the “single-unit-root” and the “sum-of-coefficients” priors, are used to enforce beliefs about relations among the VAR coefficients, such as the existence of co-integrating relationships among variables, or of independent unit roots. Priors for macroeconomic variables are often adopted as “conjugate prior distributions”—that is, distributions that yield a posterior distribution in the same family as the prior p.d.f.—in the form of Normal-Inverse-Wishart distributions, which are the conjugate prior for the likelihood of a VAR with normally distributed disturbances. Conjugate priors allow direct sampling from the posterior distribution and fast estimation. When this is not possible, numerical techniques such as Gibbs and Metropolis-Hastings sampling algorithms are adopted. Bayesian techniques allow for the estimation of an ever-expanding class of sophisticated autoregressive models that includes conventional fixed-parameter VAR models; large VARs incorporating hundreds of variables; panel VARs, which permit analyzing the joint dynamics of multiple time series of heterogeneous and interacting units; and VAR models that relax the assumption of fixed coefficients, such as time-varying parameter, threshold, and Markov-switching VARs.
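The following Python sketch spells out the Minnesota-prior structure described above: the prior mean centers each equation on an independent random walk, while prior standard deviations shrink higher-lag and cross-variable coefficients toward zero. The hyperparameter names and values are illustrative assumptions, and actual implementations differ in details.

```python
# Minnesota prior moments for the coefficients of an n-variable VAR(p).
import numpy as np

def minnesota_prior(sigma, p, lam=0.2, theta=0.5):
    """Prior mean and std. dev. for the VAR coefficient matrices A_1, ..., A_p.

    sigma : residual scale of each variable (e.g., from univariate AR fits)
    p     : number of lags
    lam   : overall tightness; theta : cross-variable tightness (assumed names)
    """
    n = len(sigma)
    mean = np.zeros((p, n, n))
    mean[0] = np.eye(n)                       # own first lag centered on 1 (random walk)
    std = np.empty((p, n, n))
    for lag in range(1, p + 1):
        for i in range(n):                    # equation i
            for j in range(n):                # coefficient on lag of variable j
                scale = 1.0 if i == j else theta * sigma[i] / sigma[j]
                std[lag - 1, i, j] = lam * scale / lag   # tighter at longer lags
    return mean, std

prior_mean, prior_std = minnesota_prior(sigma=np.array([1.0, 0.5, 2.0]), p=4)
print(prior_mean[0], prior_std[0], sep="\n")
```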

Article

Behavioral Experiments in Health Economics  

Matteo M. Galizzi and Daniel Wiesen

The state-of-the-art literature at the interface between experimental and behavioral economics and health economics is reviewed by identifying and discussing 10 areas of potential debate about behavioral experiments in health. In doing so, the different streams and areas of application of the growing field of behavioral experiments in health are reviewed, the significant questions that remain open are discussed, and the rationale and the scope for the further development of behavioral experiments in health in the years to come are highlighted.

Article

Boom-Bust Capital Flow Cycles  

Graciela Laura Kaminsky

This article examines the new trends in research on capital flows fueled by the 2007–2009 Global Crisis. Previous studies on capital flows focused on current account imbalances and net capital flows. The Global Crisis changed that. The onset of this crisis was preceded by a dramatic increase in gross financial flows while net capital flows remained mostly subdued. Academic attention zoomed in on gross inflows and outflows, with special attention to cross-border banking flows before the crisis erupted and the shift toward corporate bond issuance in its aftermath. The boom and bust in capital flows around the Global Crisis also stimulated a new area of research: capturing the “global factor.” This research adopts two different approaches. The traditional literature on push–pull factors, which before the crisis was mostly focused on monetary policy in the financial center as the “push factor,” started to explore what other factors contribute to the co-movement of capital flows and amplify the effect of monetary policy in the financial center on capital flows. This new research focuses on global banks’ leverage, risk appetite, and global uncertainty. Since the “global factor” is not directly observed, a second branch of the literature has captured this factor indirectly using dynamic common factors extracted from actual capital flows or movements in asset prices.
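As a stylized example of the second approach, the sketch below extracts a common factor from a simulated panel of capital-flow series by principal components; the panel dimensions, loadings, and standardization choices are illustrative, not drawn from the article.

```python
# Extract a common ("global") factor from a panel of capital-flow series
# by principal components. The panel is simulated for illustration.
import numpy as np

rng = np.random.default_rng(2)
T, N = 120, 30                                 # quarters x countries
global_factor = np.cumsum(rng.normal(0, 1, T)) # unobserved common driver
loadings = rng.uniform(0.5, 1.5, N)
flows = np.outer(global_factor, loadings) + rng.normal(0, 1, (T, N))

z = (flows - flows.mean(0)) / flows.std(0)     # standardize each series
_, _, vt = np.linalg.svd(z, full_matrices=False)
estimated_factor = z @ vt[0]                   # first principal component

corr = np.corrcoef(estimated_factor, global_factor)[0, 1]
print(f"correlation with true factor: {abs(corr):.2f}")
```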

Article

Bootstrapping in Macroeconometrics  

Helmut Herwartz and Alexander Lange

Unlike traditional first-order asymptotic approximations, the bootstrap is a simulation method to solve inferential issues in statistics and econometrics conditional on the available sample information (e.g., constructing confidence intervals, generating critical values for test statistics). Even though econometric theory by now provides sophisticated central limit theory covering various data characteristics, bootstrap approaches are of particular appeal if establishing asymptotic pivotalness of (econometric) diagnostics is infeasible or requires rather complex assessments of estimation uncertainty. Moreover, empirical macroeconomic analysis is typically constrained by short- to medium-sized time windows of sample information, and convergence of macroeconometric model estimates toward their asymptotic limits is often slow. Consistent bootstrap schemes have the potential to improve empirical significance levels in macroeconometric analysis and, moreover, can avoid explicit assessments of estimation uncertainty. In addition, as time-varying (co)variance structures and unmodeled serial correlation patterns are frequently diagnosed in macroeconometric analysis, more advanced bootstrap techniques (e.g., wild bootstrap, moving-block bootstrap) have been developed to account for nonpivotalness as a result of such data characteristics.
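For concreteness, the following Python sketch implements a simple moving-block bootstrap, one of the advanced schemes mentioned above, to build a confidence interval for the mean of a serially correlated series; the block length, replication count, and simulated AR(1) data are illustrative choices.

```python
# Moving-block bootstrap for the sampling distribution of a sample mean
# under serial correlation (simulated AR(1) data).
import numpy as np

rng = np.random.default_rng(3)
T = 200
e = np.empty(T)
e[0] = rng.normal()
for t in range(1, T):                       # AR(1) disturbances
    e[t] = 0.5 * e[t - 1] + rng.normal()
y = 1.0 + e                                 # series with unknown mean 1.0

def moving_block_bootstrap(x, block_len, n_boot, rng):
    T = len(x)
    blocks = np.array([x[i:i + block_len] for i in range(T - block_len + 1)])
    n_blocks = int(np.ceil(T / block_len))
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(blocks), n_blocks)
        resample = np.concatenate(blocks[idx])[:T]   # glue blocks, trim to T
        stats[b] = resample.mean()
    return stats

boot_means = moving_block_bootstrap(y, block_len=10, n_boot=2_000, rng=rng)
lo, hi = np.quantile(boot_means, [0.025, 0.975])
print(f"95% bootstrap interval for the mean: [{lo:.2f}, {hi:.2f}]")
```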

Article

Choice Inconsistencies in the Demand for Private Health Insurance  

Olena Stavrunova

In many countries of the world, consumers choose their health insurance coverage from a large menu of often complex options supplied by private insurance companies. Economic benefits of the wide choice of health insurance options depend on the extent to which the consumers are active, well informed, and sophisticated decision makers capable of choosing plans that are well-suited to their individual circumstances. There are many ways in which consumers’ actual decision making in the health insurance domain can depart from the standard model of health insurance demand of a rational risk-averse consumer. For example, consumers can have inaccurate subjective beliefs about characteristics of alternative plans in their choice set or about the distribution of health expenditure risk because of cognitive or informational constraints; or they can prefer to rely on heuristics when the plan choice problem features a large number of options with complex cost-sharing design. The second decade of the 21st century has seen a burgeoning number of studies assessing the quality of consumer choices of health insurance, both in the lab and in the field, and the financial and welfare consequences of poor choices in this context. These studies demonstrate that consumers often find it difficult to make efficient choices of private health insurance due to reasons such as inertia, misinformation, and the lack of basic insurance literacy. These findings challenge the conventional rationality assumptions of the standard economic model of insurance choice and call for policies that can enhance the quality of consumer choices in the health insurance domain.

Article

The Cointegrated VAR Methodology  

Katarina Juselius

The cointegrated VAR approach combines differences of variables with cointegration among them and, by doing so, allows the user to study both long-run and short-run effects in the same model. The CVAR describes an economic system where variables have been pushed away from long-run equilibria by exogenous shocks (the pushing forces) and where short-run adjustment forces pull them back toward long-run equilibria (the pulling forces). In this model framework, basic assumptions underlying a theory model can be translated into testable hypotheses on the order of integration and cointegration of key variables and their relationships. The set of hypotheses describes the empirical regularities we would expect to see in the data if the long-run properties of a theory model are empirically relevant.
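As a minimal illustration of how such hypotheses are confronted with data, the snippet below runs a Johansen test for the cointegration rank of a small simulated system using statsmodels; the lag order, deterministic specification, and data are illustrative only and do not reproduce any analysis in the article.

```python
# Johansen cointegration-rank test on simulated I(1) data sharing one
# common stochastic trend.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(4)
T = 250
trend = np.cumsum(rng.normal(0, 1, T))        # common stochastic trend
y1 = trend + rng.normal(0, 0.5, T)
y2 = 0.8 * trend + rng.normal(0, 0.5, T)      # cointegrated with y1
data = np.column_stack([y1, y2])

res = coint_johansen(data, det_order=0, k_ar_diff=2)
print("trace statistics:", res.lr1)           # test rank 0, then rank <= 1
print("5% critical values:", res.cvt[:, 1])   # columns: 90%, 95%, 99%
```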

Article

COVID-19 and Mental Health: Natural Experiments of the Costs of Lockdowns  

Climent Quintana-Domeque and Jingya Zeng

The global impact of the COVID-19 pandemic has been profound, leaving a significant imprint on physical health, the economy, and mental well-being. Researchers have undertaken empirical investigations across different countries, with a primary focus on understanding the association between lockdown measures—an essential public health intervention—and mental health. These studies aim to discern the causal effect of lockdowns on mental well-being. Three notable studies have adopted natural experiments to explore the causal effect of lockdowns on mental health in diverse countries. Despite variations in their research methodologies, these studies collectively support the conclusion that lockdowns have had detrimental consequences on mental health. Furthermore, they reveal that the intensity of these negative effects varies among distinct population groups. Certain segments of the population, such as women, have borne a more profound burden of the mental health costs associated with lockdown measures. In light of these findings, it becomes imperative to consider the implications for mental health when implementing public health interventions, especially during crises like the COVID-19 pandemic. While rigorous measures like lockdowns are essential for safeguarding public health, striking a balance with robust mental health support policies becomes crucial to mitigating the adverse impacts on mental well-being.

Article

Data Revisions and Real-Time Forecasting  

Michael P. Clements and Ana Beatriz Galvão

At a given point in time, a forecaster will have access to data on macroeconomic variables that have been subject to different numbers of rounds of revisions, leading to varying degrees of data maturity. Observations referring to the very recent past will be first-release data, or data that have so far been revised only a few times. Observations referring to a decade ago will typically have been subject to many rounds of revisions. How should the forecaster use the data to generate forecasts of the future? The conventional approach would be to estimate the forecasting model using the latest vintage of data available at that time, implicitly ignoring the differences in data maturity across observations. The conventional approach for real-time forecasting treats the data as given, that is, it ignores the fact that they will be revised. In some cases, the costs of this approach are point predictions and assessments of forecasting uncertainty that are less accurate than those from approaches to forecasting that explicitly allow for data revisions. There are several ways to “allow for data revisions,” including modeling the data revisions explicitly, an agnostic or reduced-form approach, and using only largely unrevised data. The choice of method partly depends on whether the aim is to forecast an earlier release or the fully revised values.
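The sketch below illustrates the real-time data structure being discussed: each column is a data vintage (the series as published at a given date), each row an observation period, so the first release and the latest available estimate of the same period can differ. All numbers are invented for illustration.

```python
# Real-time data set: rows are observation periods, columns are vintages.
import pandas as pd

vintages = pd.DataFrame(
    {"2023Q4 vintage": [0.4, 0.6, None],
     "2024Q1 vintage": [0.5, 0.7, 0.3],
     "2024Q2 vintage": [0.5, 0.8, 0.2]},
    index=["2023Q3", "2023Q4", "2024Q1"],     # observation period
)

latest = vintages.iloc[:, -1]                  # latest-vintage series
# first release of each period = first non-missing value in its row
first_release = vintages.apply(lambda row: row.dropna().iloc[0], axis=1)
print(pd.DataFrame({"first release": first_release, "latest": latest}))
```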

Article

Econometric Methods for Business Cycle Dating  

Máximo Camacho Alonso and Lola Gadea

Over time, the reference cycle of an economy is determined by a sequence of non-observable business cycle turning points involving a partition of the time calendar into non-overlapping episodes of expansions and recessions. Dating these turning points helps develop economic analysis and is useful for economic agents, whether policymakers, investors, or academics. Aiming to be transparent and reproducible, determining the reference cycle with statistical frameworks that automatically date turning points from a set of coincident economic indicators has been the source of remarkable advances in this research context. These methods can be classified into several broad categories. Depending on the assumptions made about the data-generating process, the dating methods are either parametric or non-parametric. There are two main approaches to dealing with multivariate data sets: average then date and date then average. The former approach focuses on computing a reference series for the aggregate economy, usually by averaging the indicators across the cross-sectional dimension. Then, the global turning points are dated on the aggregate indicator using one of the business cycle dating models available in the literature. The latter approach consists of dating the peaks and troughs in a set of coincident business cycle indicators separately, assessing the reference cycle itself in those periods where the individual turning points cohere. In the early 21st century, the literature has shown that future work on dating the reference cycle will require dealing with a set of challenges. First, new tools have become available, which, being increasingly sophisticated, may enlarge the existing academic–practitioner gap. Compiling the codes that implement the dating methods and facilitating their practical implementation may reduce this gap. Second, the pandemic shock hitting worldwide economies led most industrialized countries to record in 2020 both the largest fall and the largest rebound in national economic indicators since records began. In the presence of such influential observations, the outcomes of dating methods could misrepresent the actual reference cycle, especially in the case of parametric approaches. Exploring non-parametric approaches, big data sources, and the classification ability offered by machine learning methods could help improve the performance of dating analyses.
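As a toy example of a non-parametric, "date then average"-style rule, the following sketch marks a peak (trough) whenever an indicator is the highest (lowest) value within a symmetric window; the window length and the simulated indicator are illustrative and far simpler than the dating algorithms used in practice.

```python
# Toy non-parametric turning-point detector on a single coincident indicator.
import numpy as np

def turning_points(x, window=2):
    peaks, troughs = [], []
    for t in range(window, len(x) - window):
        segment = x[t - window: t + window + 1]
        if x[t] == segment.max():
            peaks.append(t)
        elif x[t] == segment.min():
            troughs.append(t)
    return peaks, troughs

rng = np.random.default_rng(5)
indicator = np.cumsum(rng.normal(0.1, 1.0, 80))   # simulated coincident indicator
peaks, troughs = turning_points(indicator)
print("peaks at:", peaks)
print("troughs at:", troughs)
```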

Article

Econometrics for Modelling Climate Change  

Jennifer L. Castle and David F. Hendry

Shared features of economic and climate time series imply that tools for empirically modeling nonstationary economic outcomes are also appropriate for studying many aspects of observational climate-change data. Greenhouse gas emissions, such as carbon dioxide, nitrous oxide, and methane, are a major cause of climate change as they cumulate in the atmosphere and reradiate the sun’s energy. As these emissions are currently mainly due to economic activity, economic and climate time series have commonalities, including considerable inertia, stochastic trends, and distributional shifts, and hence the same econometric modeling approaches can be applied to analyze both phenomena. Moreover, both disciplines lack complete knowledge of their respective data-generating processes (DGPs), so model search retaining viable theory but allowing for shifting distributions is important. Reliable modeling of both climate and economic-related time series requires finding an unknown DGP (or close approximation thereto) to represent multivariate evolving processes subject to abrupt shifts. Consequently, to ensure that the DGP is nested within a much larger set of candidate determinants, model formulations to search over should comprise all potentially relevant variables, their dynamics, indicators for perturbing outliers, shifts, trend breaks, and nonlinear functions, while retaining well-established theoretical insights. Econometric modeling of climate-change data requires a sufficiently general model selection approach to handle all these aspects. Machine learning with multipath block searches commencing from very general specifications, usually with more candidate explanatory variables than observations, to discover well-specified and undominated models of the nonstationary processes under analysis, offers a rigorous route to analyzing such complex data. Doing so requires applying appropriate indicator saturation estimators (ISEs), a class that includes impulse indicators for outliers, step indicators for location shifts, multiplicative indicators for parameter changes, and trend indicators for trend breaks. All ISEs entail more candidate variables than observations, often by a large margin when implementing combinations, yet they can detect the impacts of shifts and policy interventions so as to avoid nonconstant parameters in models, as well as improve forecasts. To characterize nonstationary observational data, one must handle all substantively relevant features jointly: a failure to do so leads to nonconstant and mis-specified models and hence incorrect theory evaluation and policy analyses.
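The sketch below gives a deliberately simplified, split-half version of impulse-indicator saturation, one member of the ISE class mentioned above: an impulse dummy is created for every observation and selection proceeds block by block. The retention threshold, data, and block design are illustrative assumptions rather than the procedure used by the authors.

```python
# Split-half impulse-indicator saturation (IIS) sketch: one dummy per
# observation, selected in two blocks, to flag outliers in a simple regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
T = 100
x = rng.normal(0, 1, T)
y = 1.0 + 0.5 * x + rng.normal(0, 1, T)
y[40] += 6.0                                   # one outlier to be detected

X = sm.add_constant(x)

def retained_impulses(block):
    """Return indices in `block` whose impulse dummies have |t| above a cutoff."""
    dummies = np.zeros((T, len(block)))
    dummies[block, np.arange(len(block))] = 1.0
    fit = sm.OLS(y, np.column_stack([X, dummies])).fit()
    tvals = fit.tvalues[X.shape[1]:]           # t-ratios of the dummies
    return [i for i, t in zip(block, tvals) if abs(t) > 2.5]

half1, half2 = np.arange(0, T // 2), np.arange(T // 2, T)
candidates = retained_impulses(half1) + retained_impulses(half2)
print("retained impulse indicators:", candidates)   # should flag t = 40
```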

Article

Econometrics of Stated Preferences  

Denzil G. Fiebig and Hong Il Yoo

Stated preference methods are used to collect individual-level data on what respondents say they would do when faced with a hypothetical but realistic situation. The hypothetical nature of the data has long been a source of concern among researchers as such data stand in contrast to revealed preference data, which record the choices made by individuals in actual market situations. But there is considerable support for stated preference methods as they are a cost-effective means of generating data that can be specifically tailored to a research question and, in some cases, such as gauging preferences for a new product or non-market good, there may be no practical alternative source of data. While stated preference data come in many forms, the primary focus in this article is data generated by discrete choice experiments, and thus the econometric methods will be those associated with modeling binary and multinomial choices with panel data.
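As a bare-bones illustration of the econometric machinery involved, the snippet below fits a multinomial logit to simulated choices among three hypothetical alternatives. Stated-preference analyses of discrete choice experiments typically rely on conditional or mixed logit models defined over alternative attributes and panel data, so this is only a sketch of the mechanics.

```python
# Multinomial logit on simulated choices among three alternatives.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 2_000
income = rng.normal(0, 1, n)
age = rng.normal(0, 1, n)
X = sm.add_constant(np.column_stack([income, age]))

# utilities of alternatives 0, 1, 2 (coefficients are made up)
u = np.column_stack([np.zeros(n),
                     0.8 * income - 0.2 * age,
                     -0.5 * income + 0.6 * age])
u += rng.gumbel(size=u.shape)                  # extreme-value taste shocks
choice = u.argmax(axis=1)

res = sm.MNLogit(choice, X).fit(disp=False)
print(res.summary())
```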

Article

Economic Evaluation of Medical Screening  

Eline Aas, Emily Burger, and Kine Pedersen

The objective of medical screening is to prevent future disease (secondary prevention) or to improve prognosis by detecting the disease at an earlier stage (early detection). This involves examination of individuals with no symptoms of disease. Introducing a screening program is resource demanding; therefore, stakeholders emphasize the need for comprehensive evaluation, where costs and health outcomes are reasonably balanced, prior to population-based implementation. Economic evaluation of population-based screening programs involves quantifying the health benefits (e.g., life-years gained) and monetary costs of all relevant screening strategies. The alternative strategies can vary by starting and stopping age, screening frequency, and follow-up regimens after a positive test result. Following evaluation of all strategies, the efficiency frontier displays the efficient strategies, and the country-specific cost-effectiveness threshold is used to determine the optimal, i.e., most cost-effective, screening strategy. Similar to other preventive interventions, the costs of screening are immediate, while the health benefits accumulate after several years. Hence, the effect of discounting can be substantial when estimating the net present value (NPV) of each strategy. Reporting both discounted and undiscounted results is recommended. In addition, intermediate outcome measures, such as the number of positive tests, cases detected, and events prevented, can be valuable supplemental outcomes to report. Estimating the cost-effectiveness of alternative screening strategies is often based on decision-analytic models, synthesizing evidence from clinical trials, literature, guidelines, and registries. Decision-analytic modeling can include evidence from trials with intermediate or surrogate endpoints and extrapolate to long-term endpoints, such as incidence and mortality, by means of sophisticated calibration methods. Furthermore, decision-analytic models are unique in that a large number of screening alternatives can be evaluated simultaneously, which is not feasible in a randomized controlled trial (RCT). Still, evaluation of screening based on RCT data is valuable, as both costs and health benefits are measured for the same individual, enabling more advanced analysis of the interaction of costs and health benefits. Evaluation of screening involves multiple stakeholders, and considerations besides cost-effectiveness, such as distributional concerns, severity of the disease, and capacity, influence decision-making. Analysis of harm-benefit trade-offs is a useful tool to supplement cost-effectiveness analyses. Decision-analytic models are often based on 100% participation, which is rarely the case in practice. If those participating differ from those choosing not to participate with regard to, for instance, the risk of the disease or condition, this would result in selection bias, and the result in practice could deviate from the results based on 100% participation. The development of new diagnostics or preventive interventions requires re-evaluation of the cost-effectiveness of screening. For example, if treatment of a disease becomes more efficient, screening becomes less cost-effective. Similarly, the introduction of vaccines (e.g., HPV vaccination for cervical cancer) may influence the cost-effectiveness of screening. With access to individual-level data from registries, there is an opportunity to better represent heterogeneity and the long-term consequences of screening on health behavior in the analysis.
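A small numerical sketch of the discounting point made above: the cost of a hypothetical screening strategy is incurred immediately while the life-years gained accrue decades later, so discounting raises the cost per life-year gained substantially. All numbers are invented for illustration.

```python
# Discounted vs. undiscounted cost per life-year gained for a hypothetical
# screening strategy whose benefit arrives 30 years after the cost.
def present_value(amount, year, rate):
    return amount / (1 + rate) ** year

cost_now = 1_000.0                             # screening cost per person, year 0
life_years_gained = 0.10                       # per person, realized in year 30

for r in (0.0, 0.035):                         # undiscounted vs. assumed 3.5% rate
    pv_cost = present_value(cost_now, 0, r)
    pv_benefit = present_value(life_years_gained, 30, r)
    print(f"rate={r:.3f}: cost per life-year gained = {pv_cost / pv_benefit:,.0f}")
```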

Article

The Economics of End-of-Life Spending  

Hans Olav Melberg

End-of-life spending is commonly defined as all health costs in the 12 months before death. Typically, the costs represent about 10% of all health expenses in many countries, and there is a large debate about the effectiveness of the spending and whether it should be increased or decreased. Assuming that health spending is effective in improving health, and using a wide definition of benefits from end-of-life spending, several economists have argued for increased spending in the last years of life. Others remain skeptical about the effectiveness of such spending based on both experimental evidence and the observation that geographic within-country variations in spending are not correlated with variations in mortality.

Article

Equality of Opportunity in Health and Healthcare  

Florence Jusot and Sandy Tubeuf

Recent developments in the analysis of inequality in health and healthcare have turned toward an explicit normative understanding of the sources of inequalities that calls upon the concept of equality of opportunity. According to this concept, some sources of inequality are more objectionable than others and could represent priorities for policies aiming to reduce inequality in healthcare use, access, or health status. Equality of opportunity draws a distinction between “legitimate” and “illegitimate” sources of inequality. While legitimate sources of differences can be attributed to the consequences of individual effort (i.e., determinants within the individual’s control), illegitimate sources of differences are related to circumstances (i.e., determinants beyond the individual’s responsibility). The study of inequality of opportunity is rooted in social justice research, and the last decade has seen a rapid growth in empirical work placing this literature at the core of its approach in both developed and developing countries. Empirical research on inequality of opportunity in health and healthcare is mainly driven by data availability. Most studies in adult populations are based on data from European countries, especially from the UK, while studies analyzing inequalities of opportunity among children are usually based on data from low- or middle-income countries and focus on children under five years old. Regarding the choice of circumstances, most studies have considered social background to be an illegitimate source of inequality in health and healthcare. Geographical dimensions have also been taken into account, but to a lesser extent, and more frequently in studies focusing on children or those based on data from countries outside Europe. Regarding effort variables or legitimate sources of health inequality, there is wide use of smoking-related variables. Regardless of the population, health outcome, and circumstances considered, scholars have provided evidence of illegitimate inequality in health and healthcare. Studies on inequality of opportunity in healthcare mainly focus on child populations; this emphasizes the need to tackle inequality as early as possible.

Article

Estimation and Inference for Cointegrating Regressions  

Martin Wagner

Widely used modified least squares estimators for estimation and inference in cointegrating regressions are discussed. The standard case with cointegration in the I(1) setting is examined and some relevant extensions are sketched. These include cointegration analysis with panel data as well as nonlinear cointegrating relationships. Extensions to higher order (co)integration, seasonal (co)integration and fractional (co)integration are very briefly mentioned. Recent developments and some avenues for future research are discussed.
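As a compact illustration of the kind of corrected least-squares estimation discussed here, the sketch below runs a dynamic OLS (DOLS) regression, one widely used estimator in this literature, which augments the levels regression with leads and lags of the differenced regressor; the data and the lead/lag window are illustrative choices on simulated cointegrated series.

```python
# Dynamic OLS (DOLS) for a bivariate cointegrating regression on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
T = 300
x = np.cumsum(rng.normal(0, 1, T))             # I(1) regressor
y = 2.0 + 1.5 * x + rng.normal(0, 1, T)        # cointegrated with beta = 1.5

k = 2                                          # leads and lags of dx
dx = np.diff(x, prepend=x[0])
lead_lag = np.column_stack([np.roll(dx, -j) for j in range(-k, k + 1)])
keep = slice(k, T - k)                         # drop boundary observations

X = sm.add_constant(np.column_stack([x, lead_lag]))[keep]
fit = sm.OLS(y[keep], X).fit()
print("DOLS estimate of the cointegrating coefficient:", round(fit.params[1], 3))
```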