Article

While machine learning (ML) methods have received a lot of attention in recent years, these methods are primarily designed for prediction. Empirical researchers conducting policy evaluations are, on the other hand, preoccupied with causal problems, trying to answer counterfactual questions: what would have happened in the absence of a policy? Because these counterfactuals can never be directly observed (described as the “fundamental problem of causal inference”), prediction tools from the ML literature cannot be readily used for causal inference. In the last decade, major innovations have taken place incorporating supervised ML tools into estimators for causal parameters such as the average treatment effect (ATE). This holds the promise of attenuating model misspecification issues and increasing transparency in model selection. One particularly mature strand of the literature includes approaches that incorporate supervised ML in the estimation of the ATE of a binary treatment, under the unconfoundedness and positivity assumptions (also known as the exchangeability and overlap assumptions). This article begins by reviewing popular supervised machine learning algorithms, including tree-based methods and the lasso, as well as ensembles, with a focus on the Super Learner. Then, some specific uses of machine learning for treatment effect estimation are introduced and illustrated, namely (1) creating balance between treated and control groups, (2) estimating so-called nuisance models (e.g., the propensity score, or conditional expectations of the outcome) within semi-parametric estimators that target causal parameters (e.g., targeted maximum likelihood estimation or the double ML estimator), and (3) using machine learning for variable selection in situations with a high number of covariates. Since there is no universal best estimator, whether parametric or data-adaptive, it is best practice to adopt a semi-automated approach that can select the models best supported by the observed data, thus attenuating the reliance on subjective choices.
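The article's illustrations are not reproduced here, but a minimal sketch may help fix ideas about use (2), estimating nuisance models with supervised ML inside a semi-parametric estimator. The snippet below implements a cross-fitted augmented inverse probability weighting (AIPW) estimator of the ATE in the spirit of double ML; the choice of scikit-learn random forests as nuisance learners, the function name aipw_ate, and the data layout (NumPy arrays X, a, y for covariates, binary treatment, and outcome) are illustrative assumptions rather than the article's own code.

import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

def aipw_ate(X, a, y, n_splits=5, clip=0.01, seed=0):
    """Cross-fitted AIPW (doubly robust) estimate of the ATE of a binary treatment."""
    psi = np.zeros(len(y))
    folds = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train, test in folds.split(X):
        # Propensity score e(x) = P(A = 1 | X), fitted on the training folds only.
        e_hat = RandomForestClassifier(random_state=seed).fit(
            X[train], a[train]).predict_proba(X[test])[:, 1]
        e_hat = np.clip(e_hat, clip, 1 - clip)  # guard the positivity/overlap assumption
        # Outcome regressions fitted separately on treated and control units.
        m1 = RandomForestRegressor(random_state=seed).fit(
            X[train][a[train] == 1], y[train][a[train] == 1]).predict(X[test])
        m0 = RandomForestRegressor(random_state=seed).fit(
            X[train][a[train] == 0], y[train][a[train] == 0]).predict(X[test])
        # Efficient influence-function contribution for each held-out unit.
        psi[test] = (m1 - m0
                     + a[test] * (y[test] - m1) / e_hat
                     - (1 - a[test]) * (y[test] - m0) / (1 - e_hat))
    return psi.mean(), psi.std(ddof=1) / np.sqrt(len(y))

Cross-fitting, i.e., estimating the nuisance models on folds that exclude the units being evaluated, is what allows data-adaptive learners (or a Super Learner ensemble in their place) to be plugged in without compromising inference, provided the unconfoundedness and positivity assumptions stated above hold.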

Article

Martin Karlsson, Daniel Kühnle, and Nikolaos Prodromidis

Because of its similarities to the COVID-19 pandemic, there has been renewed interest in the 1918–1919 influenza pandemic, the most severe pandemic of the 20th century, with an estimated total death toll of between 30 and 100 million. This rapidly growing literature in economics and economic history has devoted attention to contextual determinants of excess mortality in the pandemic; to the impact of the pandemic on economic growth, inequality, and a range of other outcomes; and to the impact of nonpharmaceutical interventions. Estimating the effects of the pandemic, or the effects of countermeasures, is challenging. There may not be much exogenous variation to go by, and the historical datasets available are typically small and often of questionable quality. Yet the 1918–1919 pandemic offers a unique opportunity to learn how large pandemics play out in the long run. The studies evaluating effects of the pandemic, or of policies enacted to combat it, typically rely on some version of difference-in-differences or instrumental variables. The assumptions required for these designs to achieve identification of causal effects have rarely been systematically evaluated in this particular historical context. Using a purpose-built dataset covering the entire Swedish population, such an assessment is provided here. The empirical analysis indicates that the identifying assumptions used in previous work may indeed be satisfied. However, the results cast some doubt on the general external validity of previous findings, as the analysis fails to replicate several results in the Swedish context. These disagreements highlight the need for additional studies in other populations and contexts, which puts the spotlight on further digitization and linkage of historical datasets.
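For reference, the canonical two-way fixed effects difference-in-differences specification underlying many of these studies can be written as follows; the notation (unit i, year t, and an exposure or intervention indicator D_{it}) is an illustrative sketch and not taken from the article:

y_{it} = \alpha_i + \gamma_t + \beta \, D_{it} + \varepsilon_{it},

where \alpha_i and \gamma_t are unit and time fixed effects, and \beta recovers the causal effect only under the parallel trends assumption that, absent the intervention, treated and untreated units would have followed the same outcome path.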

Article

Matteo Lippi Bruni, Irene Mammi, and Rossella Verzulli

In developed countries, the role of public authorities as financing bodies and regulators of the long-term care sector is pervasive and calls for well-planned and informed policy actions. Poor quality in nursing homes has been a recurrent concern at least since the 1980s and has triggered a heated policy and scholarly debate. The economic literature on nursing home quality has thoroughly investigated the impact of regulatory interventions and of market characteristics on an array of input-, process-, and outcome-based quality measures. Most existing studies refer to the U.S. context, although important insights can also be drawn from the smaller set of works covering European countries. The major contribution of health economics to the empirical analysis of the nursing home industry lies in important methodological advances that apply rigorous policy evaluation techniques to properly identify the causal effects of interest. In addition, the increased availability of rich datasets covering either process or outcome measures has made it possible to investigate changes in nursing home quality while properly accounting for its multidimensional features. The use of up-to-date econometric methods that, in most cases, exploit policy shocks and longitudinal data has allowed researchers to achieve causal identification and an accurate quantification of the impact of a wide range of policy initiatives, including the introduction of nurse staffing thresholds, price regulation, and public reporting of quality indicators. This has helped to counteract part of the contradictory evidence highlighted by the strand of work based on more descriptive approaches. Possible lines for future research include further exploration of the consequences of policy interventions in terms of equity and access to nursing home care.

Article

George Batta and Fan Yu

Corporate credit derivatives are over-the-counter (OTC) contracts whose payoffs are determined by a single corporate credit event or a portfolio of such events. Credit derivatives became popular in the late 1990s and early 2000s as a way for financial institutions to reduce their regulatory capital requirements, and early research treated them as redundant securities whose pricing is tied to the underlying corporate bonds and equities, with liquidity and counterparty risk factors playing supplementary roles. Research in the 2010s and beyond, however, increasingly focused on the effects of market frictions on the pricing of credit default swaps (CDSs); on how CDS trading has affected corporate behaviors and outcomes, as well as the price efficiency and liquidity of other related markets; and on the microstructure of the CDS market itself. This was made possible by the availability of market statistics and more granular trade and quote data, a result of the broad movement of the OTC derivatives market toward central clearing.