Machine Learning in Policy Evaluation: New Tools for Causal Inference

Summary and Keywords

While machine learning (ML) methods have received a lot of attention in recent years, these methods are primarily designed for prediction. Empirical researchers conducting policy evaluations are, on the other hand, preoccupied with causal problems, trying to answer counterfactual questions: what would have happened in the absence of a policy? Because these counterfactuals can never be directly observed (described as the “fundamental problem of causal inference”), prediction tools from the ML literature cannot be readily used for causal inference. In the last decade, major innovations have taken place incorporating supervised ML tools into estimators for causal parameters such as the average treatment effect (ATE). This holds the promise of attenuating model misspecification issues and increasing transparency in model selection. One particularly mature strand of the literature includes approaches that incorporate supervised ML approaches in the estimation of the ATE of a binary treatment, under the unconfoundedness and positivity assumptions (also known as the exchangeability and overlap assumptions).

This article begins by reviewing popular supervised machine learning algorithms, including tree-based methods and the lasso, as well as ensembles, with a focus on the Super Learner. Then, some specific uses of machine learning for treatment effect estimation are introduced and illustrated, namely (1) to create balance between treated and control groups, (2) to estimate so-called nuisance models (e.g., the propensity score, or conditional expectations of the outcome) in semi-parametric estimators that target causal parameters (e.g., targeted maximum likelihood estimation or the double ML estimator), and (3) to select variables in situations with a high number of covariates.

Since there is no universal best estimator, whether parametric or data-adaptive, it is best practice to incorporate a semi-automated approach that can select the models best supported by the observed data, thus attenuating reliance on subjective choices.

Keywords: machine learning, causal inference, treatment effects, health economics, program evaluation, policy evaluation, doubly robust methods, matching

Overview

Most scientific questions, such as those asked when evaluating policies, are causal in nature, even if they are not specifically framed as such. Causal inference reasoning helps clarify the scientific question, and define the corresponding causal estimand, that is, the quantity of interest, such as the average treatment effect (ATE). It also makes clear the assumptions necessary to express the estimand in terms of the observed data, known as identification. Once this is achieved, the focus shifts to estimation and inference. While machine learning methods have received a lot of attention in recent years, these methods are primarily geared for prediction. There are many excellent texts covering machine learning focused on prediction (Friedman, Hastie, & Tibshirani, 2001; James, Witten, Hastie, & Tibshirani, 2013), but not dealing with causal problems. Recently, some authors within the economics community have started examining the usefulness of machine learning for the causal questions that are typically the subject of applied econometric research (Athey, 2017a, 2017b; Athey & Imbens, 2017; Kleinberg, Ludwig, Mullainathan, & Obermeyer, 2015; Mullainathan & Spiess, 2017; Varian, 2014).

In this article, we contribute to this literature by providing an overview and an illustration of machine learning methods for causal inference, with a view to answering typical causal questions in policy evaluation, and show how these can be implemented with widely used statistical packages. We draw on innovations from a wide range of quantitative social and health sciences, including economics, biostatistics, and political science.

We focus on methods to estimate the ATE of a binary treatment (or exposure), under “no unobserved confounding” assumptions (see “Estimands and Assumptions for Identification”). The remainder of this article is structured as follows. First, we introduce the “Notation and Assumptions” for the identification of causal effects. Then we outline our “Illustrative Example” of a treatment effect estimation problem, an impact evaluation of a social health insurance program in Indonesia. Next, we provide a brief “Introduction to Machine Learning for Causal Inference.” In the following sections, we review methods for estimating the ATE. These can be roughly categorized into three main types: methods that aim to balance covariate distributions (“Propensity Score” [PS] and other matching methods), methods that fit outcome regressions to “impute” potential outcomes and estimate causal effects, and so-called “Double-Robust” methods that combine the two. We also discuss the use of machine learning for “Variable Selection,” a challenge that is increasingly important with “big data,” especially when the number of variables is large. In the last sections, we provide a brief overview of developments for other settings (“Further Topics”) and a “Discussion.”

Estimands and Assumptions for Identification

Notation and Assumptions

Let $A$ be an indicator variable for treatment, and $Y$ be the outcome of interest. Denote by $Y_i^a$ the potential outcome that would manifest if the $i$-th subject were exposed to level $a$ of the treatment, with $a \in \{0, 1\}$. The observed outcome can then be written as $Y_i = Y_i^0 (1 - A_i) + Y_i^1 A_i$ (Rubin, 1978).

Throughout, we assume the Stable Unit Treatment Value Assumption (SUTVA) holds, which comprises no interference, that is, the potential outcomes of the $i$-th individual are unrelated to the treatment status of all other individuals, and consistency, that is, for all individuals $i = 1, \ldots, N$, if $A_i = a$ then $Y_i^a = Y_i$, for all $a$ (Cole & Frangakis, 2009; Pearl, 2010; Robins, Hernan, & Brumback, 2000; VanderWeele, 2009).

Denote the observed data of each individual by $O_i = (X_i, A_i, Y_i)$, where $X_i$ is a vector of confounding variables, that is, factors that simultaneously influence the potential outcomes and the treatment. We assume that the data are an independent and identically distributed sample of size $N$. Individual-level causal effects are defined as the difference between these “potential outcomes,” $Y_i^1 - Y_i^0$. Researchers are often interested in the average of these individual causal effects over some population. A widely considered causal estimand is the ATE, defined as $\psi = E[Y_i^1 - Y_i^0]$. Further estimands include the average taken over the treated subjects (the average treatment effect on the treated, ATT) or the conditional average treatment effect (CATE), which takes the expectation over individuals with certain observed characteristics. Here we focus on the ATE.

Since the potential outcomes can never be simultaneously directly observed, these estimands cannot be expressed in terms of observed data, or identified, without further assumptions. A commonly invoked assumption which we will make throughout is ignorability or unconfoundedness of the treatment assignment (also known as conditional exchangeability). This assumption requires that the potential outcomes are independent of treatment, conditional on the observed covariates,

$A_i \perp (Y_i^0, Y_i^1) \mid X_i = x.$
(1)

The plausibility of this assumption needs to be carefully argued in each case, ideally supported by careful data collection and subject matter knowledge about the variables that may be associated with the outcome as well as influence the treatment, as it cannot be tested using the observed data (Rubin, 2005).

The second necessary assumption is the positivity of the treatment assignment (also referred to as “overlap”):

$0 < P(A_i = 1 \mid X_i = x) < 1,$
(2)

implying that for any combination of covariates, there is a nonzero probability of being in each of the treatment and control states.

Using the unconfoundedness and the positivity assumptions, the conditional mean of the potential outcomes corresponds with the conditional mean of the observed outcomes,

$E[Y_i^1 \mid X_i, A_i = 1] = E[Y_i \mid X_i, A_i = 1]$ and $E[Y_i^0 \mid X_i, A_i = 0] = E[Y_i \mid X_i, A_i = 0]$, and the ATE can be identified by:

$\psi = E\{E[Y_i \mid X_i, A_i = 1] - E[Y_i \mid X_i, A_i = 0]\}.$
(3)

Illustrative Example: The Impact of Health Insurance on Assisted Birth in Indonesia

We illustrate the methods by applying them each in turn to an impact evaluation of a national health insurance program in Indonesia (more details in Kreif et al., 2018). The dataset consists of births between 2002 and 2014, extracted from the Indonesian Family Life Survey (IFLS). The policy of interest, that is, the treatment, is “being covered by the health insurance offered for those in formal employment and their families” (contributory health insurance). We are interested in the ATE of such health insurance on the probability of the birth being assisted by a healthcare professional (physician or midwife). We construct a list of observed covariates including the mother’s characteristics (age, education, wealth in quintiles) and the household’s characteristics (receipt of social assistance, having experienced a natural disaster, rurality, and availability of health services: a village midwife, birth clinic, or hospital).

We expect that the variables describing socioeconomic status may be particularly important, because those with contributory insurance tend to work in the formal sector, and have higher education than those uninsured, and these characteristics would make a mother more likely to use health services even in the absence of health insurance. Similarly, the availability of health services is expected to be an important confounder, as those who have health insurance may live in areas where it is easier to access healthcare, with or without health insurance.

The final dataset reflects typical characteristics of survey data: the majority of variables are binary, with two variables categorical and one continuous (altogether 34 variables). Due to the nature of the survey, for around one-third of women we could not measure confounder information from the past, but had to impute it with information from the time of the survey. Two binary variables indicate imputed observations.

For simplicity, any records with any other missing data have been listwise deleted. This approach provides unbiased estimates of the ATE as long as missingness does not depend on both the treatment and the outcome (Bartlett, Harel, & Carpenter, 2015). The resulting complete-case dataset consists of 10,985 births, of which 1,181 are in the treated group, as the mother had health insurance in the year of the child’s birth, while 8,574 babies had their birth assisted by a health professional.

Introduction to Machine Learning for Causal Inference

Supervised Machine Learning

The type of machine learning tools most useful for causal inference are those labeled “supervised machine learning.” These tools, similarly to regression, can summarize linear and nonlinear relationships in the data and can predict some Y variable given new values of covariates (A, X) (Varian, 2014). A “good” prediction is defined in relation to a loss function, for example the mean sum of squared errors. A commonly used measure of this is the test mean squared error (test MSE), defined as the average squared prediction error among observations not previously seen. This quantity differs from the usual MSE calculated among the observations that were used to fit the model. In the absence of a very large dataset that can be used to directly estimate the test MSE, it can be estimated by holding out a subset of the observations from the model fitting process, using the so-called “V-fold cross-validation” procedure (see, e.g., Zhang, 1993). When performing V-fold cross-validation, the researcher randomly divides the set of observations into V groups (folds). The first group is withheld from the fitting process, and thus referred to as the test data. The algorithm is then fitted using the data in the remaining V − 1 folds, called the training data. Finally, the MSE is calculated using the test data, thus evaluating the performance of the algorithm. This process is repeated for each fold, resulting in V estimates of the test MSE, which are then averaged to obtain the so-called cross-validated MSE. In principle, it is possible to perform cross-validation with just one split of the sample, though the results are then highly dependent on the sample split. Thus, typically, V = 5 or V = 10 is used in practice.
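
To make the procedure concrete, the following minimal R sketch computes a 10-fold cross-validated MSE by hand, using simulated data and a simple linear model standing in for the prediction algorithm (all names are illustrative):

```r
# A minimal sketch of V-fold cross-validation, using simulated data
# and a linear model standing in for the prediction algorithm.
set.seed(1)
n <- 1000
x <- matrix(rnorm(n * 5), n, 5)
y <- x[, 1] - 0.5 * x[, 2]^2 + rnorm(n)
dat <- data.frame(y, x)

V <- 10
folds <- sample(rep(1:V, length.out = n))  # random fold assignment
fold_mse <- numeric(V)
for (v in 1:V) {
  train <- dat[folds != v, ]  # V - 1 folds form the training data
  test  <- dat[folds == v, ]  # the held-out fold forms the test data
  fit <- lm(y ~ ., data = train)
  pred <- predict(fit, newdata = test)
  fold_mse[v] <- mean((test$y - pred)^2)  # test MSE for this fold
}
cv_mse <- mean(fold_mse)  # cross-validated estimate of the test MSE
```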

The ultimate goal of machine learning algorithms is to obtain good out-of-sample predictions by minimizing the test MSE (Varian, 2014), typically achieved by a combination of flexibility and simplicity, often described as the “bias–variance trade-off.” The decomposition $\mathrm{MSE}(\hat{\theta}) = \mathrm{Var}(\hat{\theta}) + \mathrm{Bias}(\hat{\theta}, \theta)^2$ shows why minimizing the MSE requires balancing the two. For example, a nonlinear model with many higher-order terms is likely to fit the data at hand better than a simpler model; however, it is unlikely to fit a new dataset similarly well, which is often referred to as “overfitting.”

Regularization is a general approach that aims to achieve a balance between flexibility and complexity by penalizing more complex models. With less regularization, one does a better job of approximating the within-sample variation, but for this very reason the out-of-sample fit will typically get worse, as the variance increases. The key question is the level of regularization: how to tune the algorithm so that it neither over-smooths nor overfits.

In the context of selecting regularization parameters, cross-validation can be used to perform so-called empirical tuning to find the optimal level of complexity, where complexity is often indexed by the tuning parameters. The potential range of tuning parameters is divided into a grid (e.g., ten possible values), and the V-fold cross-validation process is performed for each parameter value, enabling the researcher to choose the value of the tuning parameter with the lowest test MSE. Finally, the algorithm with the selected tuning parameter is refitted using all observations.
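
For instance, the caret R package automates this kind of grid search. The sketch below is a minimal example, assuming the simulated dat from the previous sketch (and the randomForest package, which caret calls for method = "rf"); it tunes the mtry parameter of a random forest by 10-fold cross-validation:

```r
library(caret)  # reuses the simulated 'dat' from the sketch above

ctrl <- trainControl(method = "cv", number = 10)  # 10-fold CV
grid <- expand.grid(mtry = 1:5)  # grid of candidate tuning values

# train() fits the algorithm for each grid value, estimates the
# cross-validated error, and refits the winner on all observations.
rf_tuned <- train(y ~ ., data = dat, method = "rf",
                  trControl = ctrl, tuneGrid = grid)
rf_tuned$bestTune  # tuning value with the lowest cross-validated error
```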

Prediction Versus Causal Inference

Machine learning is naturally suited to prediction problems, which have traditionally been treated as distinct from causal questions (Mullainathan & Spiess, 2017). It may be tempting to interpret the output of machine learning predictions causally; however, making inferences from machine learning models is complicated by (1) the lack of interpretable coefficients for some of the algorithms, and (2) the lack of standard errors (Athey, 2017a). Moreover, for certain “regression-like” algorithms (e.g., the lasso), selecting the best model using cross-validation and then conducting inference for the model parameters while ignoring the selection process, though common in practice, should be avoided, as it leads to potential biases stemming from the shrunken coefficients, and to underestimation of the true variance of the parameter estimates (Mullainathan & Spiess, 2017).

The causal inference literature (see, e.g., Kennedy, 2016; Petersen, 2014; van der Laan & Rose, 2011) stresses the importance of first defining the causal estimand of interest (also referred to as the “target parameter”), and then carefully thinking about the assumptions necessary for identification. Once the causal estimand has been mapped to an estimator (a functional of the observed data) via the identification assumptions, the problem becomes an estimation exercise. In practice, many estimators involve models for quantities (e.g., conditional distributions or means) that are not of interest per se, but are necessary to estimate the target parameter; these are called nuisance models. Nuisance model estimation can be thought of as a prediction problem for which machine learning can be used. Examples include the estimation of propensity scores, or of outcome regressions that can later be used to predict potential outcomes (see “Further Topics”). So while most machine learning methods cannot be readily used to infer causal effects, they can help the process. A key potential advantage of using machine learning for the nuisance models is that many alternative algorithms can be fitted and compared using, for example, cross-validation (which can be used to select among parametric models as well). Selecting models based on a well-defined loss function (e.g., the cross-validated MSE) can, beyond improving model fit, benefit the overall transparency of the research process (Athey, 2017b). This is in contrast to the way that model selection is usually viewed in economics, where a model is chosen based on theory and estimated only once.

This has led many researchers to use machine learning for the estimation of the nuisance parameters of standard estimators (e.g., outcome regression, or inverse probability weighting by the propensity score; see, e.g., Lee, Lessler, & Stuart, 2010; Westreich, Lessler, & Funk, 2010). However, the behavior of these estimators can be poor, resulting in slower convergence rates and confidence intervals that are difficult to construct (van der Vaart, 2014). In addition, the resulting estimators are irregular, and the nonparametric bootstrap is in general not valid (Bickel, Götze, & van Zwet, 1997).

An increasingly popular strategy to avoid these biases and obtain valid inference is to use the so-called doubly robust (DR) estimators (combining nuisance models for the outcome regression and the propensity score), which we review in “Double-Robust Estimation With Machine Learning.” This is because DR estimators can converge to the true parameter at fast ($\sqrt{N}$) rates, and are therefore consistent and asymptotically normal, even when the nuisance models have been estimated via machine learning.

In the following sections, we briefly describe the machine learning approaches that have been most widely used for nuisance model prediction in causal inference, either because of their similarity to traditional regression approaches, their easy implementation due to the availability of statistical packages, their superior performance in prediction, or a combination of these.

Lasso

LASSO (Least Absolute Shrinkage and Selection Operator) is a penalized linear (or generalized linear) regression algorithm, fitting a model that includes all $d$ predictors. It aims to find the set of coefficients that minimize the sum-of-squares loss function, subject to a constraint that the sum of the absolute values (the $\ell_1$ norm) of the coefficients be at most a constant $c$, often referred to as the budget, that is, $\sum_{j=1}^{d} |\beta_j| \leq c$. This results in a (generalized) linear regression in which only a small number of covariates have nonzero coefficients: the absolute-value regularizer induces a sparse coefficient vector. The nonzero coefficient estimates are also shrunk towards zero, which significantly reduces their variance at the “price” of increasing the bias. An equivalent formulation of the lasso is

$\min_{\beta \in \mathbb{R}^d} \left\{ \| y - X\beta \|_2^2 + \lambda \| \beta \|_1 \right\},$
(4)

with the penalty $\lambda$ being the tuning parameter.

As $\lambda$ increases, the flexibility of the lasso regression fit decreases, leading to decreased variance but increased bias. Beyond a certain point, however, the decrease in variance due to increasing $\lambda$ slows, and the shrinkage causes the coefficients to be significantly underestimated, resulting in a large increase in the bias. Thus the choice of $\lambda$ is critical. This is usually done by cross-validation, as implemented in several packages such as glmnet and caret.
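
A minimal sketch of this tuning step with the glmnet package, reusing the simulated x and y from the cross-validation sketch above, might look as follows:

```r
library(glmnet)  # reuses the simulated x and y from above

# Cross-validate the lasso over an automatic grid of lambda values;
# alpha = 1 requests the lasso penalty (alpha = 0 would be ridge).
cv_fit <- cv.glmnet(x, y, alpha = 1, nfolds = 10)

cv_fit$lambda.min               # penalty with the lowest CV MSE
coef(cv_fit, s = "lambda.min")  # coefficients; some are exactly zero
```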

Because the lasso sets some of the coefficients exactly to zero when the penalty $\lambda$ is sufficiently large, it essentially performs variable selection. The variable selection, however, is driven by the tuning parameter, and it can happen that some variables are selected in some of the CV partitions but not in others. This problem is common when variables are correlated with each other and explain very similar “portions” of the outcome variability. A practical implication is that the researcher should remove from the set of candidate variables those that are irrelevant, in the sense that they are highly correlated with a combination of other, more relevant ones.

Another problem is inference after model selection, with results (Leeb & Pötscher, 2008) showing that it is not possible to obtain (uniform) model selection consistency. As we demonstrate in the section “Variable Selection,” some uses of the lasso enable valid estimation and inference after variable selection.

Tree-Based Methods

Regression Trees

Tree-based methods, also known as classification and regression trees or “CARTs,” have a similar logic to decision trees familiar to economists, but here the “decision” is a choice about how to classify the observation. The goal is to construct (or “grow”) a decision tree that leads to good out-of-sample predictions. They can be used for classification with binary or multicategory outcomes (“classification trees”) or with continuous outcomes (“regression trees”). A regression tree uses a recursive algorithm to estimate a function describing the relationship between a multivariate set of independent variables and a single dependent variable, such as treatment assignment.

Trees tend to work well in settings with nonlinearities and interactions in the outcome–covariate relationship. In such cases, they can improve upon traditional classification algorithms such as logistic regression. To avoid overfitting, trees are pruned by applying tuning parameters that penalize complexity (the number of leaves). A major challenge with tree methods is that they are sensitive to the initial split of the data, leading to high variance. Hence, single trees are rarely used in practice; instead, ensembles of trees (algorithms that combine many trees, for example by averaging or adding them together) are used, such as random forests or boosted CARTs.

Random Forests

Random forests are constructed by taking bootstrapped samples of the data and growing, on each sample, a tree in which only a random subset of covariates is considered for creating the splits (and thus the leaves). These trees are then averaged, which leads to a reduction in variance. The tuning parameters, which can be set or selected using cross-validation, include the number of trees, the depth of each tree, and the number of covariates to be randomly selected at each split (usually recommended to be approximately $\sqrt{d}$, where $d$ is the number of available independent variables). Popular implementations include the R packages caret and ranger.
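
A minimal ranger sketch, again reusing the simulated dat from above, could be:

```r
library(ranger)  # reuses the simulated 'dat' from above

# Random forest with 500 trees; mtry is the number of covariates
# considered at each split (here roughly the square root of 5).
rf_fit <- ranger(y ~ ., data = dat, num.trees = 500, mtry = 2)

rf_fit$prediction.error  # out-of-bag estimate of the MSE
```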

Boosting

Boosting generates a sequence of trees where the first tree’s residuals are used as outcomes for the construction of the next tree. Generalized boosted models add together many simple functions to estimate a smooth function of a large number of covariates. Each individual simple function lacks smoothness and is a poor approximation to the function of interest, but added together they can approximate a smooth function just like a sequence of line segments can approximate a smooth curve. In the implementation in the R package gbm (McCaffrey, Ridgeway, & Morral, 2004), each simple function is a regression tree with limited depth. Another popular package is xgboost.
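
A hedged sketch with xgboost follows; the parameter values are purely illustrative, not recommendations:

```r
library(xgboost)  # reuses the simulated x and y from above

# xgboost expects a numeric matrix of predictors and a numeric label.
dtrain <- xgb.DMatrix(data = x, label = y)

# 1,000 shallow trees (depth 2), learning slowly (eta is the shrinkage).
boost_fit <- xgb.train(params = list(max_depth = 2, eta = 0.01,
                                     objective = "reg:squarederror"),
                       data = dtrain, nrounds = 1000)

pred <- predict(boost_fit, newdata = dtrain)  # in-sample predictions
```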

Bayesian Additive Regression Trees

Bayesian Additive Regression Trees (BART) can be distinguished from other tree-based ensembling algorithms by its underlying probability model (Kapelner & Bleich, 2016). As a Bayesian model, BART consists of a set of priors for the tree structure and the leaf parameters, and a likelihood for the data in the terminal nodes. The aim of the priors is to provide regularization, preventing any single regression tree from dominating the total fit. To do this, BART employs so-called “Bayesian backfitting,” where the $j$-th tree is fit iteratively, holding all other $m - 1$ trees constant, by exposing only the residual response that remains unfitted. Over many MCMC iterations, the trees evolve to capture the fit left currently unexplained (Kapelner & Bleich, 2016).

BART is described as particularly well-suited to detecting interactions and discontinuities, and typically requires little parameter tuning (Hahn, Murray, & Carvalho, 2017). There is ample evidence of BART’s good performance in prediction and even in causal inference (Dorie, Hill, Shalit, Scott, & Cervone, 2017; Hill, 2011), and it is implemented in several R packages (bartMachine, dbarts). Despite its excellent performance in practice, there are limited theoretical results about BART.
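
A minimal dbarts sketch, with the simulated data as before and the package defaults otherwise left in place (in keeping with BART requiring little tuning), could be:

```r
library(dbarts)  # reuses the simulated x and y from above

# BART with 200 trees and the default priors; little tuning is needed.
bart_fit <- bart(x.train = x, y.train = y, ntree = 200, verbose = FALSE)

yhat <- colMeans(bart_fit$yhat.train)  # posterior mean predictions
```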

Super Learner Ensembling

Varian (2014) highlights the importance of recognizing uncertainty due to the model selection process, and the potential role ensembling can play in combining several models to create one that outperforms single models. Here we focus on the Super Learner (SL) (van der Laan & Dudoit, 2003), a machine learning algorithm that uses cross-validation to find the optimal weighted convex combination of multiple candidate prediction algorithms. The algorithms prespecified by the analyst form the library, and can include parametric and machine learning approaches. The Super Learner has the oracle property, that is, asymptotically it produces predictions at least as good as those of the best algorithm included in the library (see van der Laan, Polley, & Hubbard, 2007; van der Laan & Rose, 2011 for details).

Beyond its use for prediction (Polley & van der Laan, 2010; Rose, 2013), it has been used for PS and outcome model estimation (see, for example, Eliseeva, Hubbard, & Tager, 2013; Gruber, Logan, Jarrín, Monge, & Hernán, 2015; van der Laan & Luedtke, 2014), and has been shown to reduce bias from model misspecification (Kreif, Gruber, Radice, Grieve, & Sekhon, 2016; Pirracchio, Petersen, & van der Laan, 2015; Porter, Gruber, van der Laan, & Sekhon, 2011). Implementations of the Super Learner include the SuperLearner, h2oEnsemble, and subsemble R packages, the latter two with increased computational speed to suit large datasets.
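
A minimal SuperLearner sketch with a small illustrative library (the wrapper names are those shipped with the package) could be:

```r
library(SuperLearner)  # reuses the simulated x and y from above

# A small illustrative library: main-terms GLM, lasso, random forest.
sl_lib <- c("SL.glm", "SL.glmnet", "SL.ranger")

sl_fit <- SuperLearner(Y = y, X = data.frame(x), family = gaussian(),
                       SL.library = sl_lib, cvControl = list(V = 10))

sl_fit$coef  # cross-validated weights given to each candidate
```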

Machine Learning Methods to Create Balance Between Covariate Distributions

Propensity Score Methods

The propensity score (PS) (Rosenbaum & Rubin, 1983), defined as the conditional probability of treatment A given observed covariates, that is, $p(x_i) = P(A_i = 1 \mid X_i = x_i)$, is referred to as a “balancing score,” due to its property of balancing the distributions of observed confounders between the treatment and control groups. The propensity score has been widely used to control for confounding: for subclassification (Rosenbaum & Rubin, 1984), as a metric to establish matched pairs in nearest neighbor matching (Abadie & Imbens, 2016; Rubin & Thomas, 1996), and for reweighting, using inverse probability of treatment weights (Hirano, Imbens, & Ridder, 2003). The latter two approaches have been demonstrated to have the best performance (Austin, 2009; Lunceford & Davidian, 2004).

The PS matching estimator constructs the missing potential outcome using the observed outcome of the closest observation(s) from the other group, and calculates the ATE as a simple mean difference between these predicted potential outcomes (Abadie & Imbens, 2006, 2011, 2016). The inverse probability of treatment weighting (IPTW) estimator for the ATE is simply a weighted mean difference between the observed outcomes of the treatment and control groups, where the weights, $w_i$, are constructed from the estimated propensity score as

$w_i = \frac{A_i}{\hat{p}(X_i)} + \frac{1 - A_i}{1 - \hat{p}(X_i)}.$
(5)

With a correctly specified $p(X)$, $\hat{\psi}_{IPTW}$ is consistent and efficient (Hirano et al., 2003). The IPTW estimator can be expressed as

$\hat{\psi}_{IPTW} = \frac{1}{N} \sum_{i=1}^{N} \left[ \frac{A_i Y_i}{\hat{p}(X_i)} - \frac{(1 - A_i) Y_i}{1 - \hat{p}(X_i)} \right].$
(6)

SEs for IPTW estimators can be obtained via the delta method (treating the PS as known), via a robust covariance matrix (the so-called sandwich estimator) that acknowledges that the PS was estimated, or by bootstrapping. IPTW estimators are sensitive to large weights.
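
A minimal sketch of the IPTW estimator with a logistic PS, written for a hypothetical data frame df containing a binary outcome y, a binary treatment a, and covariates x1 and x2, could be:

```r
# Hypothetical data frame 'df' with a binary outcome y, a binary
# treatment a, and covariates x1 and x2.
ps_fit <- glm(a ~ x1 + x2, data = df, family = binomial())
ps <- predict(ps_fit, type = "response")  # estimated propensity scores

# Inverse probability of treatment weights for the ATE (equation 5)
w <- df$a / ps + (1 - df$a) / (1 - ps)

# IPTW estimate of the ATE (equation 6)
ate_iptw <- mean(df$a * df$y / ps - (1 - df$a) * df$y / (1 - ps))
```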

The validity of these methods depends on correctly specifying the PS model. In empirical work, typically probit or logistic regression models are used without interactions or higher-order terms. However, the assumptions necessary for these to be correctly specified, for example the linearity of the relationship between the covariates and the probability of treatment on the logit scale, are rarely assessed (Westreich et al., 2010). More flexible modeling approaches, such as series regression estimation (Hirano et al., 2003), and machine learning methods, including decision trees, neural networks and linear classifiers (Westreich et al., 2010), generalized boosting methods (Lee et al., 2010; McCaffrey et al., 2004; Westreich et al., 2010; Wyss et al., 2014) or the Super Learner (Pirracchio et al., 2012), have been proposed to improve the specification of the PS. However, even such methods may have poor properties if their loss function targets measures of model fit (e.g., log likelihood, area under the curve) instead of balance on the covariates that are important to reduce bias (Westreich, Cole, Funk, Brookhart, & Stürmer, 2011). Imai and Ratkovic (2014) proposed a score that explicitly balances the covariates, exploiting moment conditions that capture the desired mean independence between the treatment variable and the covariates that the balancing aims to achieve.

A machine learning method for estimating propensity scores that aims to maximize balance is the boosted CART approach (McCaffrey et al., 2004; Lee et al., 2010), implemented in the TWANG R package. This approach minimizes a chosen loss function, based on the covariate balance achieved in the IPTW weighted data, by iteratively forming a collection of simple regression tree models and adding them together to estimate the propensity score. It directly models the log-odds of treatment rather than the propensity score, to simplify computations. The algorithm can be specified to stop when the best balance is achieved. A recommended stopping rule is based on the average standardized absolute mean difference (ASAM) in the covariates. Beyond the balance metric, the number of iterations, the depth of interactions, and the shrinkage parameter need to be specified. The boosted CART approach to estimating the PS has been demonstrated to improve balance and reduce bias in the estimated ATE (Lee et al., 2010; Setoguchi, Schneeweiss, Brookhart, Glynn, & Cook, 2008) and has been extended to settings with continuous treatments (Zhu, Coffman, & Ghosh, 2015).
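
A sketch of the boosted CART PS with the twang package, using settings of the kind described above (the hypothetical df as before), might be:

```r
library(twang)  # hypothetical 'df' as in the previous sketch

# Boosted CART propensity score, stopping at the iteration that
# minimizes the mean standardized effect size ("es.mean").
ps_boost <- ps(a ~ x1 + x2, data = df, estimand = "ATE",
               n.trees = 5000, interaction.depth = 2, shrinkage = 0.005,
               stop.method = "es.mean", verbose = FALSE)

bal.table(ps_boost)  # covariate balance at the chosen iteration
w_boost <- get.weights(ps_boost, stop.method = "es.mean")  # ATE weights
```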

Methods Aiming to Directly Create Balanced Samples

There is an extensive literature on methods that aim to create matched samples that are automatically balanced on the covariates, instead of estimating and matching on a PS. An extension of Mahalanobis distance matching, the “Genetic Matching” algorithm (Diamond & Sekhon, 2013), searches a large space of potential matched treatment and control groups to minimize loss functions based on test statistics describing covariate imbalance (e.g., Kolmogorov–Smirnov tests). The accompanying Matching R package (Mebane & Sekhon, 2011; Sekhon, 2011) offers a wide range of matching methods (including propensity score matching), matching options (e.g., with or without replacement, 1:1 or 1:m matching), estimands (ATE vs. ATT), and balance statistics. The “genetic” component of the matching algorithm chooses weights giving relative importance to the matching covariates so as to optimize the specified loss function. The algorithm proposes batches of weights, “a generation,” and moves towards the batch of weights that maximizes overall balance. Each generation is then used iteratively to produce a subsequent generation with better candidate weights. The “population size,” the size of each generation, is the tuning parameter to be specified by the user.
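
A minimal sketch with the Matching package (the hypothetical df as before; GenMatch additionally requires the rgenoud package) could be:

```r
library(Matching)  # hypothetical 'df' as before; needs rgenoud

X <- df[, c("x1", "x2")]

# Genetic search for covariate weights that optimize balance
gm <- GenMatch(Tr = df$a, X = X, estimand = "ATE", pop.size = 500)

# 1:1 matching with replacement using the selected weights
m_out <- Match(Y = df$y, Tr = df$a, X = X, estimand = "ATE",
               Weight.matrix = gm)
summary(m_out)  # ATE estimate with Abadie-Imbens standard errors
```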

Similar approaches to creating optimal matched samples, with different algorithmic solutions, are offered by Zubizarreta (2012) and Hainmueller (2012). Both approaches use integer programming optimization algorithms to construct comparison groups given balance constraints (maximum allowed imbalance) specified by the user, in the former case by one-to-many matching, in the latter case by constructing optimal weights.

Demonstration of Balancing Methods Using the Birth Dataset

We estimate a range of propensity scores: first, using a main-terms logistic regression to estimate the conditional probability of being enrolled in health insurance, followed by two data-adaptive propensity scores. We include all covariates in the prediction algorithms, without prior covariate selection. The first is a boosted CART, with 5,000 trees, a maximum interaction depth of two, and a shrinkage of 0.005. The loss function used is the “average standardized difference.”

Second, we use the Super Learner with a library containing a range of increasingly data-adaptive prediction algorithms:

  • logistic regression with and without all pairwise interactions,

  • generalized additive models with two, three, and four degrees of freedom,

  • random forests, including four random forest learners varying the number of trees (500 and 2,000) and the number of covariates to split on (five and eight), implemented in the ranger R package,

  • boosting, using the R package xgboost, with varying numbers of trees (100 and 1,000), shrinkage (0.001 and 0.1), and maximum tree depth (one and four),

  • a BART prediction algorithm using 200 regression trees, with the tuning parameters set to the defaults of the dbarts R package.

We use 10-fold cross-validation and the mean squared error loss function. For the purposes of comparison, we have also implemented two 1:1 matching estimators with replacement, for the ATE parameter. First, we created a matched dataset based on the boosted CART propensity score, implemented without calipers. Second, we implemented the Genetic Matching algorithm, using a population size of 500 and a loss function that aims to maximize the smallest p-value from paired t-tests. We then reassessed balance in the pair-matched data. Throughout, we evaluate balance based on standardized mean differences, a metric that is comparable across weighting and matching methods (Austin, 2009). We calculate the ATE using IPTW and matching. The SEs for IPTW are “sandwich” SEs, while for the matching estimators the Abadie–Imbens formula is used (Abadie & Imbens, 2006), which accounts for matching with replacement.

Figure 1 can be inspected to assess the relative performance of the candidate algorithms included in the SL. It displays the cross-validated estimates of the loss function (MSE) after an additional layer of cross-validation, so that the out-of-sample performance of each individual algorithm and of the convex combination of these algorithms (the Super Learner) can be compared. The “Discrete SL” is defined as an algorithm that gives the best candidate a weight of 1. We see that the convex Super Learner performs best. Table 1 shows the weights attributed to the different candidate algorithms in the final prediction algorithm that was used to estimate the PS.

Table 1: Nonzero Weights Corresponding to the Algorithms in the Convex Super Learner, for Estimating Propensity Scores

Algorithm                                    Weight in Ensemble
Random Forest (500 trees, 5 variables)       0.18
Random Forest (2,000 trees, 5 variables)     0.36
Generalized additive models (degree 3)       0.46
GLM with all two-way interactions            0.01

For each propensity score and matching approach, we compare balance on the covariates in the data (reweighted by IPTW weights or by frequency weights from the matching, respectively).


Figure 1: Estimated Mean Squared Error loss from candidate algorithms of the Super Learner.

Algorithms labeled by Rg are variations of random forests, algorithms labeled by xgb are variations of the boosting algorithm. Algorithms labeled with SL are implemented in the SuperLearner R package.

Figure 2 displays the absolute standardized differences (ASD) for all the covariates, starting from the variables that were least imbalanced in the unweighted data and moving towards the more imbalanced. Generally, all weighting approaches tend to improve balance compared to the unweighted data, except for variables that were well balanced (ASD < 0.05) to begin with. Using the rule of thumb of 0.1 as a metric of significant imbalance, we find that TWANG, when used for weighting, achieves acceptable balance on all covariates, except for the binary variable indicating having at least secondary education. Based on the more stringent criterion of ASD < 0.05, however, TWANG leaves several covariates imbalanced, including the indicator of rural community, the availability of a health center, the availability of a birth clinic, and whether the mother can write in Indonesian. When used for pair-matching, the boosted CART based propensity score leaves high imbalances. This is expected, as the balance metric in the loss function used the weighted, not the matched, data. The SL-based propensity score results in the largest imbalance, again reflecting that its loss function was set to minimize the cross-validated MSE of the propensity score model, and not to optimize balance.


Figure 2: Covariate balance compared across balancing estimators for the ATE.

The unadjusted comparison suggests a 13 % increase in the probability of giving birth attended by a health professional among those with contributory insurance (vs. those without) (see Table 2). With all the adjustment methods, this effect decreases, indicating an important role of adjusting for observed confounders. As expected, the method that reported the largest imbalances, IPTW SL, reports an ATE closest to the unadjusted estimate.

Limitations of Balancing Methods

Balancing methods allow for the consistent estimation of treatment effects, provided the assumptions of no unobserved confounding and positivity hold. Crucially, the analyst does not need to model outcomes, thus increasing transparency by avoiding cherry-picking of models. While machine learning can help by making the choice of the PS model or of a distance metric data-adaptive, subjective choices remain. For example, the loss function needs to be specified, and if the loss function is based on balance, so does the choice of balance metric. The ASAM chosen for demonstration purposes creates a metric of average imbalance in the means. This ignores two potential complexities. First, imbalances in higher moments of the distribution are not taken into account by this metric. While Kolmogorov–Smirnov test statistics can take into account imbalance in the entire covariate distribution (Diamond & Sekhon, 2013; Stuart, 2010), and can be selected to enter the loss function for both the boosted CART and the Genetic Matching algorithms, a further issue remains: how should the researcher trade off imbalance across covariates? With a large number of covariates, balancing one variable may decrease balance on another covariate. Moreover, it is unclear how univariate balance measures should be summarized. The default of the TWANG package is to look at average covariate balance, while the Genetic Matching algorithm, by default, prioritizes covariate balance on the variable that is the most imbalanced. This, however, may not prioritize variables that are relatively strong predictors of the outcome, and any remaining imbalance would translate into a larger bias. Hence, there is an increasing consensus that exploiting information from the outcome regression generally improves on the properties of balance-based estimators (Abadie & Imbens, 2011; Kang & Schafer, 2007). ML methods to estimate the nuisance models for the outcome can provide reassurance against subjectively selecting outcome models that provide the most favorable treatment effect estimate. We review these methods in the following section.

Table 2: ATEs and 95 % CIs Estimated Using IPTW and Matching Methods

                      ATE     95 % CI L    95 % CI U
Unadjusted (naive)    0.13    0.11         0.16
IPTW logistic         0.06    0.02         0.11
IPTW TWANG            0.08    0.04         0.12
IPTW SL               0.11    0.09         0.13
PS matching TWANG     0.08    0.04         0.12
Genetic Matching      0.06    0.03         0.09

Machine Learning Methods for the Outcome Model

Recall that under the unconfoundedness and positivity assumptions, the ATE can be identified by $E\{E[Y \mid A = 1, X] - E[Y \mid A = 0, X]\}$, reducing the problem to one of estimating these conditional expectations (Hill, 2011; Imbens & Wooldridge, 2009). Denoting the true conditional expectation function for the observed outcome as $\mu(A, X) = E[Y \mid A, X]$, the regression estimator for the ATE can be obtained as

$\hat{\psi}_{reg} = \frac{1}{N} \sum_{i=1}^{N} \left\{ \hat{\mu}(A = 1, X_i) - \hat{\mu}(A = 0, X_i) \right\},$
(7)

where $\hat{\mu}(A = a, X = x)$ can be interpreted as the predicted potential outcome for level $a$ of the treatment among individuals with covariates $X = x$, and can be estimated by, for example, a regression $E[Y \mid A = a, X = x] = \eta_0(x) + \beta_1 a$, with $\eta_0(x)$ the nuisance function and $\beta_1$ the parameter of interest. Under correct specification of the model for $\mu(A, X)$, the outcome regression estimator is consistent (Bang & Robins, 2005), but it is prone to extrapolation.
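
A minimal sketch of this regression (“imputation”) estimator with a parametric outcome model, for the hypothetical df used in earlier sketches, could be:

```r
# Hypothetical 'df' as before; a parametric outcome regression
out_fit <- glm(y ~ a + x1 + x2, data = df, family = binomial())

# Predict both potential outcomes for every individual
mu1 <- predict(out_fit, newdata = transform(df, a = 1), type = "response")
mu0 <- predict(out_fit, newdata = transform(df, a = 0), type = "response")

ate_reg <- mean(mu1 - mu0)  # regression estimator of the ATE (equation 7)
```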

The problem can now be viewed as a prediction problem, making it appealing to use ML to obtain good predictions for μ(A,X). Indeed, some methods do this: BARTs have been successfully used to obtain ATEs (Hahn et al., 2017; Hill, 2011). Austin (2012) demonstrates the use of a wide range of tree-based machine learning techniques to obtain regression estimators for the ATE.

However, there are three reasons why ML is generally not recommended for the outcome regression. First, the asymptotic properties of such estimators are unknown: typically, the convergence of the resulting regression estimator of the causal effect will be slower than $\sqrt{N}$ when using ML fits. Second, there is the so-called “regularization bias” (Athey, Imbens, & Wager, 2018; Chernozhukov et al., 2017). Data-adaptive methods use regularization to achieve an optimal bias–variance trade-off, which shrinks the estimates towards zero, introducing bias (Mullainathan & Spiess, 2017), especially if the shrunken coefficients correspond to variables that are strong confounders (Athey et al., 2018). This problem grows as the number of parameters increases relative to the sample size. Third, it is difficult to conduct inference for causal parameters, as in general there is no way of constructing valid confidence intervals, and the nonparametric bootstrap is not generally valid (Bickel et al., 1997).

This motivates going beyond single nuisance model ML plug-in estimators, and using double-robust estimators with ML nuisance model fits, reviewed in the next section (Athey et al., 2018; Chernozhukov et al., 2017; Farrell, 2015; Seaman & Vansteelandt, 2018).

Double-Robust Estimation With Machine Learning

Methods that combine the strengths of outcome regression modeling with the balancing properties of the propensity score have long been advocated. The intuition is that using propensity score matching or weighting as a “preprocessing” step can be followed by regression adjustment to control further for any residual confounding (Abadie & Imbens, 2006, 2011; Imbens & Wooldridge, 2009; Rubin & Thomas, 2000; Stuart, 2010). While these methods have performed well in simulations (Busso, DiNardo, & McCrary, 2014; Kreif, Grieve, Radice, & Sekhon, 2013; Kreif et al., 2016), their asymptotic properties are not well understood.

A formal approach to combining outcome and treatment modeling was originally developed to improve the efficiency of IPTW estimators (Robins, Rotnitzky, & Zhao, 1995). Double-robust (DR) estimators use two nuisance models, and have the special property that they are consistent as long as at least one of the two nuisance models is correctly specified. In addition, some DR estimators are shown to be semi-parametrically efficient if both components are correctly specified (Robins, Sued, Lei-Gomez, & Rotnitzky, 2007). A simple DR method is the augmented inverse probability weighting (AIPTW) estimator (Robins, Rotnitzky, & Zhao, 1994). The AIPTW can be written as $\hat{\psi}_{AIPTW} = \hat{\psi}_{AIPTW}(1) - \hat{\psi}_{AIPTW}(0)$, where

$\hat{\psi}_{AIPTW}(a) = \frac{1}{N} \sum_{i=1}^{N} \left( \frac{Y_i\, I(A_i = a)}{p_a(X_i)} - \frac{I(A_i = a) - p_a(X_i)}{p_a(X_i)}\, \mu(X_i, a) \right),$
(8)

where $p_a(X_i) = P(A_i = a \mid X_i)$ and $\mu(A, X) = E[Y \mid A, X]$, as before.
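
A minimal sketch of the AIPTW estimator, combining the parametric nuisance fits from the earlier IPTW and outcome regression sketches (ps, mu1, and mu0), could be:

```r
# AIPTW for the ATE (equation 8), reusing ps, mu1, and mu0 from the
# earlier propensity score and outcome regression sketches.
psi1 <- mean(df$a * df$y / ps - (df$a - ps) / ps * mu1)
psi0 <- mean((1 - df$a) * df$y / (1 - ps) +
               (df$a - ps) / (1 - ps) * mu0)
ate_aiptw <- psi1 - psi0
```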

The variance of DR estimators is based on the variance of their influence function. Let $\hat{\psi}$ be an estimator of a scalar parameter $\psi_0$, satisfying

$\sqrt{N}(\hat{\psi} - \psi_0) = \frac{1}{\sqrt{N}} \sum_{i=1}^{N} \phi(O_i) + o_p(1),$
(9)

where $o_p(1)$ denotes a term that converges in probability to 0, and where $E[\phi(O)] = 0$ and $0 < E[\phi(O)^2] < \infty$, that is, $\phi(O)$ has zero mean and finite variance. Then $\phi(O)$ is the influence function (IF) of $\hat{\psi}$.

By the central limit theorem, the estimator $\hat{\psi}$ is asymptotically normal, with asymptotic variance $N^{-1}$ times the variance of its influence function. Thus, the empirical variance of the IF can be used to construct normal-based confidence intervals.

A consequence of this convergence behavior is that good asymptotic properties of DR estimators can be achieved even when the nuisance models converge at rates slower than the conventional $N^{-1/2}$: the DR estimator $\hat{\psi}$ can still converge at the fast $\sqrt{N}$ rate, as long as the product of the nuisance models’ convergence rates is faster than $N^{-1/2}$ (under regularity and empirical process conditions, e.g., that the nuisance estimators fall in a Donsker class, a condition that can be avoided via sample splitting (Bickel & Kwon, 2001; Chernozhukov et al., 2017; van der Laan & Robins, 2003), described later).

This discovery allows for the use of flexible machine learning-based estimation of the nuisance functions, leading to an increased applicability of DR estimators, which were previously criticized on the grounds that most likely both nuisance models are misspecified (Kang & Schafer, 2007). Concerns about the sensitivity to extreme propensity score weights remain (Petersen, Porter, Gruber, Wang, & van der Laan, 2012).

To improve on AIPTW estimators, van der Laan and Rubin (2006) introduced targeted minimum loss based estimation (TMLE), a class of double-robust, semiparametric efficient estimators. TMLEs “target,” or de-bias, an initial estimate of the parameter of interest in two stages (Gruber & van der Laan, 2009). In the first stage, an initial estimate $\mu^0(A, X)$ of $E[Y \mid A, X]$ is obtained (typically by machine learning), and used to predict the potential outcomes under both exposures for each individual (van der Laan & Rose, 2011).

In the second stage, these initial estimates are “updated” by fitting a generalized linear model for $E[Y \mid A, X]$, typically with a logit link, an offset term $\mathrm{logit}\{\mu^0(A, X)\}$, and a single so-called clever covariate. When the outcome is continuous but bounded, the update can also be performed on the logit scale (Gruber & van der Laan, 2010). For the ATE, the clever covariate is $h(A, X) = \frac{A}{p(X)} - \frac{1 - A}{1 - p(X)}$. The coefficient $\varepsilon$ corresponding to the clever covariate is then used to update (de-bias) the estimate of $\mu^0(A, X)$. The updating procedure continues until a step is reached where $\varepsilon = 0$. The final update $\mu^*(A, X)$ is the TMLE. For the special case of the ATE, convergence is mathematically guaranteed in one step, so there is no need to iterate.

This procedure exploits the information in the treatment assignment mechanism and also ensures that the resulting estimator stays in the appropriate model space, that is, it is a substitution estimator. Again, data-adaptive estimation of the propensity score is recommended (van der Laan & Rose, 2011). The available software implementation of TMLE (the R package tmle; Gruber & van der Laan, 2011) incorporates a Super Learner algorithm to provide the initial predictions of the potential outcomes and the propensity scores.
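
A minimal tmle sketch, for the hypothetical df as before and with a deliberately small illustrative Super Learner library, could be:

```r
library(tmle)  # hypothetical 'df' as before

W <- df[, c("x1", "x2")]  # covariate set

tmle_fit <- tmle(Y = df$y, A = df$a, W = W, family = "binomial",
                 Q.SL.library = c("SL.glm", "SL.ranger"),
                 g.SL.library = c("SL.glm", "SL.ranger"))

tmle_fit$estimates$ATE  # point estimate, variance, and 95 % CI
```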

Another DR estimator with machine learning is the so-called double machine learning (DML) estimator (Chernozhukov et al., 2017). For a simple setting of the ATE, this estimator combines the residuals of an outcome regression and the residuals of a propensity score model in a new regression, motivated by the partially linear regression approach of Robinson (1988). For the more general case when the treatment can have an interactive effect with covariates, the form of the estimator corresponds to the AIPTW estimator, where the nuisance parameters are estimated using machine learning algorithms. While the estimator does not claim “double-robustness” (as it does not aim to “correctly specify” either of the models), it aims to de-bias estimates of average treatment effects by combining “good enough” estimates of the nuisance parameters. The machine learning methods used can be highly data-adaptive. This estimator is also semiparametric efficient, under weak conditions, due to an extra step of sample splitting (thus avoiding empirical process conditions; Bickel & Kwon, 2001). The estimator is constructed using “cross-fitting,” which divides the data into K random folds, fits the nuisance models on all but one of the folds, and uses the held-out fold to obtain predictions and construct the ATE. This is repeated K times, and the average of the resulting estimates is the DML estimate of the ATE. The standard errors are based on the influence function (Chernozhukov et al., 2017). Sample splitting is designed to help avoid overfitting, and thus reduces bias. A further adaptation of the method also takes into account the uncertainty due to the particular sample split, by performing a large number of re-partitionings of the data, taking the mean or median of the resulting estimates as the final ATE, and correcting the estimated standard errors to capture the spread of the estimates.
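
A schematic of the cross-fitting step, with K = 2 splits and parametric nuisance fits standing in for the machine learning algorithms (hypothetical df as before), could be:

```r
# Schematic two-fold cross-fitting for the ATE, with parametric
# nuisance fits standing in for machine learning algorithms.
set.seed(2)
split <- sample(rep(1:2, length.out = nrow(df)))
psi_k <- numeric(2)

for (k in 1:2) {
  train <- df[split != k, ]  # nuisance models are fit here
  eval  <- df[split == k, ]  # the ATE is constructed here

  ps_k <- predict(glm(a ~ x1 + x2, data = train, family = binomial()),
                  newdata = eval, type = "response")
  out_k <- glm(y ~ a + x1 + x2, data = train, family = binomial())
  m1 <- predict(out_k, newdata = transform(eval, a = 1), type = "response")
  m0 <- predict(out_k, newdata = transform(eval, a = 0), type = "response")

  # AIPTW contribution computed on the held-out fold
  psi_k[k] <- mean(m1 - m0 + eval$a * (eval$y - m1) / ps_k -
                     (1 - eval$a) * (eval$y - m0) / (1 - ps_k))
}
ate_dml <- mean(psi_k)  # average over the folds
```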

Demonstration of DR and Double Machine Learning Approaches Using the Birth Data

We begin by fitting a parametric AIPTW using logistic models for both the PS and the outcome regression. SEs are estimated by the nonparametric bootstrap. We then use Super Learner (SL) fits for both nuisance models (with the libraries described in “Demonstration of Balancing Methods Using the Birth Dataset”). For the outcome models, we fit two separate prediction models, for the treated and control observations, and obtain predictions for the two expected potential outcomes, the probabilities of assisted birth under no health insurance and under health insurance, given the individual’s observed covariates. We plug these predictions into the standard AIPTW. The SEs are based on the influence function (without further modification). Next, we implement the TMLE using the same nuisance model SL fits. SEs are based on the efficient influence function, as coded in the R package tmle. Finally, the double machine learning estimates for the ATE are obtained using one split of the sample into two parts of approximately equal size. The nuisance models are estimated in the first half of the sample using the SL with the same libraries as before, obtaining predictions for the other half. We then (in the cross-fitting step) switch the roles of the two samples, and average the resulting estimates, with SEs based on the influence function, as in Chernozhukov et al. (2017).

The relative weights of the candidate algorithms in the SL library are displayed in Table 3, showing that the highly data-adaptive algorithms (boosting, random forests, and BART) received the majority of the weight. The estimated ATEs with 95 % CIs are reported in Table 4. While the point estimates are of similar magnitude (a 5–8 % increase in the probability of assisted birth), their confidence intervals show large variation. The SL AIPTW and TMLE appear to be very precisely estimated, displaying narrower CIs than the parametric AIPTW, whose CIs were obtained using the nonparametric bootstrap. One potential explanation is that, without sample splitting, the nuisance parameters may be overfitted, and the influence function based standard errors do not take this into account. This is consistent with the finding of Dorie et al. (2017), in simulated datasets, that TMLE results in undercoverage of 95 % CIs. Indeed, the DML estimator, which only differs from the SL AIPTW estimator in the cross-fitting stage, displays the widest CIs, including zero.

Table 3: Nonzero Weights Corresponding to the Algorithms in the Convex Super Learner, for Modeling the Outcome

Algorithm (weight in SL)                            Model for Control    Model for Treated
Boosting (100 trees, depth of 4, shrinkage 0.1)     0.02                 0.38
Boosting (1,000 trees, depth of 4, shrinkage 0.1)   0.00                 0.04
Random forest (500 trees, 5 variables)              0.00                 0.36
Random forest (500 trees, 8 variables)              0.00                 0.09
Random forest (2,000 trees, 8 variables)            0.32                 0.00
BART                                                0.54                 0.00
GLM with no interaction                             0.00                 0.11
GLM with all two-way interactions                   0.12                 0.01

Variable Selection

A problem empirical researchers face when relying on conditioning on a sufficient set of observed covariates for confounding control is variable selection, that is, identifying which covariates to include in the model(s) for conditional exchangeability to hold. In principle, subject matter knowledge should be used to select a sufficient control set (Rubin, 2007). In practice, however, there is often little prior knowledge on which variables in a given data set are confounders. Hence data-adaptive procedures to select the variables to adjust for become increasingly necessary when the number of potential confounders is very large. There is a lack of clear guidance about what procedures to use, and about how to obtain valid inferences after variable selection. In this section, we consider some approaches for variable selection when the focus is on the estimation of causal effects.

Table 4: ATE Obtained With Logistic AIPTW, Super Learner Fits for the PS and Outcome Model AIPTW (SL AIPTW), TMLE, and DML Estimators Applied to the Birth Data

                   ATE      95 % CI L    95 % CI U
AIPTW (boot SE)    0.066    0.029        0.103
SL AIPTW           0.081    0.077        0.086
TMLE               0.073    0.065        0.081
DML (1 split)      0.053    -0.026       0.133

Decisions on whether to include a covariate in a regression model, whether made manually or by automated methods such as stepwise regression, are usually based on the strength of evidence for the residual association with the outcome, for example, by iteratively testing for significance in models that include or exclude the variable, and comparing the resulting p-value to a prespecified significance level. Stepwise procedures (backwards or forwards selection) are, however, widely recognized to perform poorly (Heinze, Wallisch, & Dunkler, 2018), for two main reasons. First, collinearity can be an issue, which is especially problematic for forward selection, while in high-dimensional settings backward selection may be infeasible. Second, tests performed during the variable selection process are not prespecified, and this is typically not acknowledged in the subsequent analysis, compromising the validity and the interpretability of inferences derived from models after variable selection.

Decisions about which covariates to adjust for in a regression should ideally be based on the evidence of confounding, taking into account the covariate–exposure association. Yet causal inference procedures that rely only on the strength of covariate–treatment relationships (e.g., propensity score methods) may also be problematic. For example, they may lead to adjusting for variables that are causes of the exposure only (so-called pure instruments), inducing bias (Vansteelandt, Bekaert, & Claeskens, 2012). On the other hand, if variable selection relies only on modeling the outcome, using, for example, lasso regression, it may introduce regularization bias, due to underestimated coefficients, and as a result mistakenly exclude variables with nonzero coefficients.

To address these challenges, Belloni, Chernozhukov, and Hansen (2014) proposed a solution that offers principled variable selection, taking into account both the covariate–outcome and the covariate–treatment assignment associations, resulting in valid inferences after variable selection. Their framework, referred to as “post-double selection” or “double-lasso,” also allows one to extend the space of possible confounding variables to include higher-order terms. Following Belloni et al. (2014), we consider the partially linear model $Y_i = g(X_i) + \beta_0 A_i + \zeta_i$, where $X_i$ is a set of confounder-control variables, and $\zeta_i$ is an error term satisfying $E[\zeta_i \mid A_i, X_i] = 0$. We examine the problem of selecting a set of variables $V$ from among $d_2$ potential variables $W_i = f(X_i)$, which include $X_i$ and transformations of $X_i$ so as to adequately approximate $g(X_i)$, allowing for $d_2 > N$. Crucially, pure instruments (variables associated with the treatment but not the outcome) do not need to be identified and excluded in advance.

We identify covariates for inclusion in our estimators of causal effects in two selection steps: first, we find those that predict the outcome; second, and separately, those that predict the treatment. A lasso linear regression calibrated to avoid overfitting is used for both models. The union of the variables selected in either step is then used as the confounder control set in the causal parameter estimator. The control set can also include additional covariates identified beforehand.

Belloni et al. (2014) show that the double lasso yields valid estimates of the ATE under the key assumption of ultra-sparsity, that is, that conditional exchangeability holds after controlling for a relatively small number $s \ll N$ of variables in $X$, not known a priori. Implementation is straightforward, for example using the glmnet R package. We use cross-validation to choose the tuning parameter, following Dukes, Avagyan, and Vansteelandt (2018). Once the confounder control set is selected, a standard method of estimation is used, for example ordinary least squares estimation of the outcome regression.
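The following R sketch illustrates the procedure with glmnet; the names X (matrix of candidate controls), A (treatment), and Y (outcome) are illustrative. In keeping with the description above, a linear lasso is used for both models; a logistic lasso for the binary treatment would be a natural alternative.

```r
library(glmnet)

# Step 1: lasso for the outcome; Step 2: lasso for the treatment.
# Cross-validation selects the penalty in each case.
fit_y <- cv.glmnet(X, Y)
fit_a <- cv.glmnet(X, A)

# Variables with nonzero coefficients at the chosen penalty
# (the first coefficient is the intercept, hence [-1])
sel_y <- which(coef(fit_y, s = "lambda.min")[-1] != 0)
sel_a <- which(coef(fit_a, s = "lambda.min")[-1] != 0)

# Final step: the union of the selected variables forms the confounder
# control set, used in a standard (unpenalized) outcome regression
control_set <- union(sel_y, sel_a)
ate_fit <- lm(Y ~ A + X[, control_set])
coef(ate_fit)["A"]   # estimated coefficient on the treatment
```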

Application of Double Lasso to the Birth Data

We now apply the double-lasso approach for variable selection to our birth data example. We begin by running a lasso linear regression (using the glmnet R package) for the outcome and the treatment separately, including all the available variables and using cross-validation to select the penalty. The union of the variables selected across the two models comprised all 35 available covariates. These variables are used to control for confounding, first in a parametric logistic outcome regression model, which we use to predict the potential outcomes and obtain the ATE; the SE for this estimator is obtained by bootstrap. We also calculate IPTW and AIPTW estimates using weights from a parametric logistic model for the PS, with sandwich SEs.

We then expand the covariate space to include all the two-way interactions between the covariates (excluding the exposure and the outcome), resulting in a total of 595 candidate covariates. Applying the double lasso to this extended covariate space selects 156 covariates for the outcome model and 89 for the treatment model, leaving a union set of 211 covariates for confounder control. We then recompute the three estimators based on this expanded control set.
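In R, the expanded dictionary of candidate controls can be built with model.matrix; Xdf below is an illustrative name for a data frame holding the baseline covariates.

```r
# Main terms plus all pairwise interactions of the baseline covariates;
# [, -1] drops the intercept column
X2 <- model.matrix(~ .^2, data = Xdf)[, -1]
ncol(X2)   # size of the expanded candidate control set
```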

Table 5 reports the estimated ATEs and 95% CIs. The CIs for the double-lasso outcome regression were obtained using the nonparametric bootstrap, while those for IPTW and AIPTW were obtained, as before, using sandwich SEs. The top panel of the table shows estimates using the main-term covariates without interactions, and the bottom panel shows estimates based on the expanded control set that includes the main terms and the selected interactions. The point estimates change very little, suggesting a minor role for the interactions in controlling for confounding.

Table 5: ATE Post-Double-Lasso for Selection of Confounders Applied to the Birth Data

Main terms only
  Estimator            ATE      95% CI
  Outcome Regression   0.027    (0.012, 0.041)
  IPTW                 0.083    (0.016, 0.107)
  AIPTW                0.066    (0.029, 0.103)

With two-way interactions
  Estimator            ATE      95% CI
  Outcome Regression   0.032    (0.016, 0.046)
  IPTW                 0.078    (0.019, 0.107)
  AIPTW                0.063    (0.026, 0.101)

Collaborative TMLE

Covariate selection for the propensity score can also be done within the TMLE framework. Previously, we have seen that for a standard TMLE estimator of the ATE, the estimation of the propensity score model is performed independently from the estimation of the initial estimator of the outcome model, that is, without seeking to optimize the fit of the PS for the estimation of the target parameter.


However, it is possible, and even desirable, to choose the treatment model so that it is optimized to reduce the mean squared error of the target parameter. An extension of the TMLE framework, the so-called collaborative TMLE (CTMLE), does just this.

The original version of CTMLE (van der Laan & Gruber, 2010) is often referred to as “greedy CTMLE.” Here, a sequence of nested logistic regression PS models is created by a greedy forward stepwise selection algorithm: at each stage (i.e., among all PS models with $k$ main terms), we select the PS model that, when used in the targeting step to update the initial estimator of the outcome model, results in the smallest estimated MSE of the ATE parameter. If the resulting TMLE does not improve upon the empirical fit of the initial outcome regression, then this TMLE is not included in the sequence of TMLEs. Instead, the initial outcome regression estimator is replaced by the last previously accepted TMLE and the search starts over. This procedure is performed iteratively until all $d$ covariates have been incorporated into the PS model. The greedy nature of the algorithm makes it computationally intensive: the total number of models explored is $d + (d-1) + (d-2) + \cdots + 1$, which is of the order $O(d^2)$ (Ju et al., 2019a).

This has led to the development of scalable versions of CTMLE (Ju et al., 2019a), which replace the greedy search with a data-adaptive preordering of the candidate PS model estimators. The time burden of these scalable CTMLE algorithms is of order $O(d)$. Two preordering strategies are proposed: logistic and correlation. The logistic preordering constructs a univariate estimator $p_k$ of the PS from each available baseline covariate $X_k$, $k = 1, \ldots, d$. Using the resulting predicted PS, we construct the clever covariate corresponding to the TMLE for the ATE, namely $h_k(A) = A/p_k - (1-A)/(1-p_k)$, and fluctuate the initial estimate $\mu_0$ of the outcome regression using this clever covariate (and a logistic log-likelihood), as usual in the TMLE literature. We obtain the targeted estimate and compute the empirical loss, which could be, for instance, the mean of the squared errors. Finally, we order the covariates by increasing value of this loss.
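As a rough illustration of the logistic preordering, the following sketch assumes a binary outcome Y, treatment A, a covariate matrix W, and initial outcome regression predictions mu0 in (0, 1); all names are illustrative.

```r
# For each covariate: fit a univariate PS, fluctuate the initial outcome
# fit with the corresponding clever covariate, and record the empirical
# loss of the targeted estimate; covariates are then ordered by this loss.
loss <- sapply(seq_len(ncol(W)), function(k) {
  ps_k <- fitted(glm(A ~ W[, k], family = binomial))  # univariate PS
  h_k  <- A / ps_k - (1 - A) / (1 - ps_k)             # clever covariate
  flu  <- glm(Y ~ -1 + h_k, offset = qlogis(mu0), family = binomial)
  mu_k <- plogis(qlogis(mu0) + coef(flu) * h_k)       # targeted estimate
  mean((Y - mu_k)^2)                                  # empirical loss (MSE)
})
covariate_order <- order(loss)   # increasing loss, as described above
```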

The correlation preordering is motivated by noting that we would like the $k$-th covariate added to the PS model to be the one that best explains the current residual, that is, the residual between $Y$ and the current targeted estimate. Thus, the correlation preordering ranks the covariates by their correlation with the residual between $Y$ and the initial outcome regression estimate $\mu_0$ (Ju et al., 2019a).

For both preorderings, at each step of the CTMLE we add the covariates to the PS prediction model in this order, as long as the value of the loss function continues to decrease. Another version of CTMLE exploits lasso regression to select the variables included in the PS estimation (Ju et al., 2019b). This algorithm also constructs a sequence of propensity score estimators, each a lasso logistic model with penalty $\lambda_k$, where the $\lambda_k$ are monotonically decreasing and the sequence is initialized at $\lambda_1$, the minimum $\lambda$ selected by cross-validation. The corresponding TMLE for the ATE is then constructed for each PS model, and cross-validation finally chooses the TMLE that minimizes the loss function (MSE). The SEs for all CTMLE versions are computed from the variance of the influence function, as implemented in the R package ctmle.
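A minimal sketch of fitting these estimators with the ctmle R package follows; the exact interface may differ across package versions, and Y, A, W (a data frame of candidate covariates), and covariate_order (a preordering such as the one computed above) are illustrative names.

```r
library(ctmle)

# Greedy CTMLE: forward stepwise selection of PS covariates,
# targeting the estimated MSE of the ATE
fit_greedy <- ctmleDiscrete(Y = Y, A = A, W = W, preOrder = FALSE)

# Scalable CTMLE: covariates enter the PS model following a
# user-supplied preordering (logistic or correlation, as described above)
fit_scalable <- ctmleDiscrete(Y = Y, A = A, W = W,
                              preOrder = TRUE, order = covariate_order)

summary(fit_greedy)
# The lasso-based variant is available via ctmleGlmnet()
```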

CTMLE Applied to the Birth Data

We now apply these variants of CTMLE to the case study. The final logistic preordering CTMLE is based on a PS model containing nine covariates, including variables indicating the year of birth and variables capturing education and socioeconomic status. The CTMLE based on correlation preordering selected three covariates for the PS: one variable capturing participation in a social assistance program, and two variables measuring the availability of healthcare providers in the community. The latter variables are indeed expected to have a strong association with the outcome, assisted birth, so it is not surprising that they were selected, given their role in reducing the MSE of the ATE. Finally, the lasso CTMLE is based on a penalty of 0.000098, chosen by cross-validation, which results in all variables having nonzero coefficients in the PS.

The results are reported in Table 6. The estimated ATEs are somewhat larger than those obtained using TMLE, all indicating an increase of around 8 percentage points in the probability of giving birth assisted by a healthcare professional for those with social health insurance versus the uninsured. However, the CTMLE estimate resulting from the lasso, which selected all variables into the PS model, is (unsurprisingly) very similar to the TMLE estimate reported in “Double-Robust Estimation With Machine Learning,” and thus further away from the naive estimate than the estimates from the CTMLEs that use only a subset of the variables. The lasso CTMLE also has a wider 95% CI than the rest, while the greedy CTMLE has the tightest. This may be empirical evidence of the bias–variance trade-off: by using fewer covariates in the PS, the estimators that select variables aggressively are slightly biased, but have lower variance.

Table 6: ATE Obtained With CTML Estimators Applied to the Birth Data

Estimator                                  ATE      95% CI
Greedy CTMLE                               0.082    (0.071, 0.094)
Scalable CTMLE, logistic preordering       0.085    (0.067, 0.104)
Scalable CTMLE, correlation preordering    0.082    (0.068, 0.097)
Lasso CTMLE                                0.076    (0.048, 0.105)

Further Topics

Throughout this article, we have seen how machine learning can be used in estimating the ATE, by employing it as a prediction tool for the outcome regression or the PS model. The same logic applies to many other estimation problems in which nuisance parameters must be predicted as part of estimating the parameter of interest.

Estimating Treatment Effect Heterogeneity

We have focused the discussion on the most common target parameter, the ATE. Most of the methods considered are also available for the ATT parameter (see, e.g., Chernozhukov et al., 2018). The difference between the ATE and the ATT stems from heterogeneous treatment effects, and this heterogeneity, in particular heterogeneity with respect to observed covariates, can itself be an interesting target of causal inference. For example, in the birth cohort example, we may be interested in how the effect of health insurance varies along the socioeconomic gradient. To answer this question, one may specify a “treatment effect function,” a possibly nonparametric function expressing the treatment effect in terms of deprivation and possibly other covariates such as age. Alternatively, one may perform subgroup analyses based on variables that have been selected in a data-adaptive way.

Imai and Ratkovic (2013) propose a variable selection method using support vector machines to estimate heterogeneous treatment effects in randomized trials. Hahn, Murray, and Carvalho (2017) further develop the BART framework to estimate treatment effect heterogeneity, flexibly using the estimated propensity score to correct for regularization bias. Athey and Imbens (2016) propose a regression tree method, referred to as “causal trees,” to identify subgroups with treatment effect heterogeneity, using sample splitting to avoid overfitting. Wager and Athey (2018) extend this idea to random-forest-based algorithms, referred to as “causal forests,” for which they establish theoretical properties. A second interesting question concerns optimal policies: if a health policy maker has limited resources to provide free insurance for those currently uninsured, what would be an optimal allocation mechanism that maximizes health benefits? The literature on “optimal policy learning” is rapidly growing. Kitagawa and Tetenov (2017) focus on estimating optimal policies from a set of candidate policies with limited complexity, while Athey and Wager (2017) further develop the double machine learning approach (Chernozhukov et al., 2018) to estimate optimal policies. Further approaches have been proposed, for example based on balancing (Kallus, 2017) and within the TMLE framework (Luedtke & Chambaz, 2017; van der Laan & Luedtke, 2015).
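As an illustration, a causal forest can be fitted with the grf R package, which implements the method of Wager and Athey (2018); this is a minimal sketch, with X a covariate matrix, A a binary treatment, and Y the outcome (illustrative names).

```r
library(grf)

# Causal forest: estimates the conditional average treatment effect
# tau(x) = E[Y(1) - Y(0) | X = x] via honest random forests
cf <- causal_forest(X = X, Y = Y, W = A)   # grf calls the treatment W

tau_hat <- predict(cf)$predictions   # per-observation CATE estimates
average_treatment_effect(cf)         # doubly robust ATE with its SE
```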

Instrumental Variables

In certain situations, even after adjusting for observed covariates, there may be doubts as to whether conditional exchangeability holds. Other methods can be used when an instrumental variable (IV) is available, that is, a variable that is associated with the exposure, is not associated with any confounder of the exposure–outcome relationship, and affects the outcome through no pathway other than the exposure. Depending on the additional assumptions the analyst is prepared to make, different estimands can be identified. Here, we focus on the local average treatment effect (LATE), identified under a monotonicity assumption on the treatment (Angrist, Imbens, & Rubin, 1996).

Consider the (partially) linear instrumental variable model, which in its simplest form can be thought of as a two-stage procedure. The first stage is a linear regression of the endogenous exposure $A$ on the instrument $Z$, $A = \alpha_0 + \alpha_1 Z + \varepsilon_A$. In the second stage, we regress the outcome on the predicted exposure $\hat{A}$: $Y = \beta_0 + \beta_1 \hat{A} + \varepsilon_Y$.

Usually, the first stage is treated as an estimation step, with the coefficients obtained by OLS. In fact, we are only interested in the predicted exposure for each observation; the parameters of the first stage are merely nuisance parameters that must be estimated to compute the fitted values of the exposure. Thus, we can treat this directly as a prediction problem and use machine learning algorithms for the first stage. This can help alleviate some of the finite-sample bias often observed in IV estimates, which are typically biased toward the OLS estimate as a consequence of overfitting the first-stage regression, a problem that is more serious with small sample sizes or weak instruments (Mullainathan & Spiess, 2017).
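A minimal sketch using a cross-validated lasso for the first stage follows; Z denotes the instrument(s), X additional exogenous covariates, A the exposure, and Y the outcome (illustrative names). Note that naive second-stage SEs are not valid after an ML first stage; in practice, sample splitting or corrected variance estimators would be used.

```r
library(glmnet)

# First stage as a prediction problem: predict the exposure from the
# instrument(s) and covariates with a cross-validated lasso
ZX    <- cbind(Z, X)
first <- cv.glmnet(ZX, A)
A_hat <- as.numeric(predict(first, newx = ZX, s = "lambda.min"))

# Second stage: regress the outcome on the predicted exposure
second <- lm(Y ~ A_hat + X)
coef(second)["A_hat"]   # IV estimate of the exposure effect
```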

A number of recent studies have used ML for the first stage of IV models. Belloni, Chen, Chernozhukov, and Hansen (2012) use the lasso, while Hansen and Kozbur (2014) use ridge regression. More recently, a TMLE has been developed for IV models, which also uses ML fits in the initial estimation of the target parameter (Tóth & van der Laan, 2016). Double-robust IV estimators can likewise be combined with machine learning predictions of the nuisance models in the second stage, as shown in Chernozhukov et al. (2018) and DiazOrdaz, Daniel, and Kreif (2018).

Discussion

We have attempted to provide an overview of the current use of ML methods for causal inference, in the setting of evaluating the average treatment effects of binary static treatments, assuming no unobserved confounding. We used a case study of an evaluation of a health insurance scheme on healthcare utilization in Indonesia. The case study displayed characteristics typical of applied evaluations: a binary outcome, and a mixture of binary, categorical, and continuous covariates. A practical limitation of the presented case study is the presence of missing data. For simplicity, we used a complete case analysis, assuming that missingness does not depend jointly on the treatment and the outcome (Bartlett et al., 2015). If this assumption does not hold, the resulting estimates may be biased. Several options exist to handle missing data under less restrictive assumptions: for example, multiple imputation (Rubin, 1987), which is in general valid under missing-at-random assumptions, or the missing indicator method, which includes indicators for missingness in the set of potential variables to adjust for and relies on the assumption that the covariates act as confounders only when observed (D’Agostino, Lang, Walkup, Morgan, & Karter, 2001). Another alternative is to use inverse probability of “being a complete case” weights, which can easily be combined with many of the methods described in this article (Seaman & White, 2014).
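A minimal sketch of the complete-case weighting approach; R_cc is an indicator of being a complete case and dat a data frame of predictors of completeness (illustrative names).

```r
# Model the probability of being a complete case, then weight complete
# cases by the inverse of that probability; these weights can be
# multiplied into, e.g., IPTW weights
cc_fit <- glm(R_cc ~ ., family = binomial, data = dat)
w_cc   <- ifelse(R_cc == 1, 1 / fitted(cc_fit), 0)
```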

We have highlighted the limitations of naively interpreting the output of machine learning prediction methods as causal estimates, and provided a review of recent innovations that plug ML predictions of nuisance parameters into ATE estimators. We have demonstrated how ML can make the estimation of the PS more principled, and also illustrated a multivariate matching approach that uses ML to select balanced comparison groups data-adaptively. We also highlighted the limitations of such “design-based” approaches: they may not improve balance on the variables that matter most for reducing bias in the estimated ATE, as they cannot easily take into account information on the relative importance of confounders for the outcome variable.

We gave a brief overview of the possibility of using ML for estimating ATEs via outcome regressions. We emphasized that obtaining valid confidence intervals after such procedures is complicated, and the bootstrap is not valid. Some methods, such as BART, are able to provide inferences based on the corresponding posterior distributions, and have been used in practice with success (Dorie et al., 2017). Nevertheless, there are currently no theoretical results underpinning this use (Wager & Athey, 2018), and thus BART inferences should be treated with caution. Instead, we illustrated double-robust approaches that combine the strengths of PS estimation and outcome modeling, and that are able to incorporate ML predictors in a principled way. These approaches, specifically the TMLE pioneered by van der Laan and colleagues and the double machine learning estimators developed by Chernozhukov and colleagues, have appealing theoretical properties and growing evidence of good finite-sample performance (Dorie et al., 2017; Porter et al., 2011). All estimation approaches demonstrated in this article rely on the assumption that selection into treatment is based on observable covariates only. In many policy evaluation settings, this assumption is not tenable. In such settings, beyond the methods discussed in “Instrumental Variables,” panel data approaches are commonly used to control for one source of unobserved confounding, namely unobservables that remain constant over time. To date, ML approaches have rarely been combined with panel data econometric methods; exceptions include Bajari, Nekipelov, Ryan, and Yang (2015) and Chernozhukov et al. (2017), who demonstrate ML approaches for demand estimation using panel data.

We stress once again that ML methods can improve the estimation of causal effects only once the identification step has been firmed up, and only when used within estimators with appropriate convergence rates, so that these remain consistent even when using ML fits. However, with the increasing availability of Big Data, in particular in settings with a very large number of covariates, assumptions such as “no unobserved confounders” may become more plausible (Titiunik, 2015). In such d ≫ n datasets, ML methods are indispensable for variable selection as well as for the estimation of low-dimensional parameters such as average treatment effects. Indeed, many innovations in ML for causal inference are taking place in such d ≫ n settings (e.g., Belloni & Chernozhukov, 2011; Belloni et al., 2014; Wager & Athey, 2018).

Finally, we believe that, paradoxically, ML methods, often criticized for their “black box” nature, may increase the transparency of applied research. In particular, ensemble learning algorithms such as the Super Learner can provide a safeguard against having to hand-pick the best model or algorithm. The use of ML should be encouraged as a complement to expert substantive knowledge when selecting confounders and specifying models.

Acknowledgments

Karla DiazOrdaz was supported by the U.K. Wellcome Trust Institutional Strategic Support Fund–LSHTM Fellowship 204928/Z/16/Z.

Noémi Kreif gratefully acknowledges her co-authors on the impact evaluation of the Indonesian public health insurance scheme: Andrew Mirelman, Rodrigo Moreno-Serra, Marc Suhrcke (Centre for Health Economics, University of York) and Budi Hidayat (University of Indonesia).

Further Reading

Athey, S. (2017a). Beyond prediction: Using big data for policy problems. Science, 355(6324), 483–485.

Athey, S., Imbens, G. W., & Wager, S. (2018). Approximate residual balancing: Debiased inference of average treatment effects in high dimensions. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 80(4), 597–623.

Athey, S., & Imbens, G. W. (2017). The state of applied econometrics: Causality and policy evaluation. Journal of Economic Perspectives, 31(2), 3–32.

Belloni, A., Chernozhukov, V., & Hansen, C. (2014). Inference on treatment effects after selection among high-dimensional controls. Review of Economic Studies, 81(2), 608–650.

Chernozhukov, V., Chetverikov, D., Demirer, M., Duflo, E., Hansen, C., & Newey, W. (2017). Double/debiased/Neyman machine learning of treatment effects. American Economic Review, 107(5), 261–265.

Chernozhukov, V., Chetverikov, D., Demirer, M., Duflo, E., Hansen, C., Newey, W., & Robins, J. (2018). Double/debiased machine learning for treatment and structural parameters. Econometrics Journal, 21(1), C1–C68.

Diamond, A., & Sekhon, J. S. (2013). Genetic matching for estimating causal effects: A general multivariate matching method for achieving balance in observational studies. Review of Economics and Statistics, 95(3), 932–945.

James, G., Witten, D., Hastie, T., & Tibshirani, R. (2013). An introduction to statistical learning, Vol. 112. New York: Springer.

Kleinberg, J., Ludwig, J., Mullainathan, S., & Obermeyer, Z. (2015). Prediction policy problems. American Economic Review, 105(5), 491–495.

Lee, B. K., Lessler, J., & Stuart, E. A. (2010). Improving propensity score weighting using machine learning. Statistics in Medicine, 29(3), 337–346.

McCaffrey, D. F., Ridgeway, G., & Morral, A. R. (2004). Propensity score estimation with boosted regression for evaluating causal effects in observational studies. Psychological Methods, 9(4), 403.

Mullainathan, S., & Spiess, J. (2017). Machine learning: An applied econometric approach. Journal of Economic Perspectives, 31(2), 87–106.

Polley, E. C., & van der Laan, M. J. (2010, May). Super Learner in prediction. U.C. Berkeley Division of Biostatistics Working Paper Series, Working Paper 266. Berkeley: University of California.

van der Laan, M. J., & Gruber, S. (2010). Collaborative double robust targeted maximum likelihood estimation. International Journal of Biostatistics, 6(1).

van der Laan, M. J., & Rose, S. (2011). Targeted learning: Causal inference for observational and experimental data. New York: Springer.

van der Laan, M. J., & Rubin, D. (2006). Targeted maximum likelihood learning. International Journal of Biostatistics, 2(1).

Varian, H. R. (2014). Big data: New tricks for econometrics. Journal of Economic Perspectives, 28(2), 3–28.

References

Abadie, A., & Imbens, G. W. (2006). Large sample properties of matching estimators for average treatment effects. Econometrica, 74(1), 235–267.

Abadie, A., & Imbens, G. W. (2011). Bias-corrected matching estimators for average treatment effects. Journal of Business & Economic Statistics, 29(1), 1–11.

Abadie, A., & Imbens, G. W. (2016). Matching on the estimated propensity score. Econometrica, 84(2), 781–807.

Angrist, J. D., Imbens, G. W., & Rubin, D. B. (1996). Identification of causal effects using instrumental variables. Journal of the American Statistical Association, 91(434), 444–455.

Athey, S. (2017a). Beyond prediction: Using big data for policy problems. Science, 355(6324), 483–485.

Athey, S. (2017b). The impact of machine learning on economics. In A. Agrawal, J. Gans, & A. Goldfarb (Eds.), Economics of artificial intelligence. Chicago, IL: University of Chicago Press.

Athey, S., & Imbens, G. W. (2016). Recursive partitioning for heterogeneous causal effects. Proceedings of the National Academy of Sciences, 113(27), 7353–7360.

Athey, S., & Imbens, G. W. (2017). The state of applied econometrics: Causality and policy evaluation. Journal of Economic Perspectives, 31(2), 3–32.

Athey, S., Imbens, G. W., & Wager, S. (2018). Approximate residual balancing: Debiased inference of average treatment effects in high dimensions. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 80(4), 597–623.

Athey, S., & Wager, S. (2017). Efficient policy learning. arXiv preprint arXiv:1702.02896.

Austin, P. C. (2009). Some methods of propensity-score matching had superior performance to others: Results of an empirical investigation and Monte Carlo simulations. Biometrical Journal, 51(1), 171–184.

Austin, P. C. (2012). Using ensemble-based methods for directly estimating causal effects: An investigation of tree-based g-computation. Multivariate Behavioral Research, 47(1), 115–135.

Bajari, P., Nekipelov, D., Ryan, S. P., & Yang, M. (2015). Machine learning methods for demand estimation. American Economic Review, 105(5), 481–485.

Bang, H., & Robins, J. M. (2005). Doubly robust estimation in missing data and causal inference models. Biometrics, 61(4), 962–973.

Bartlett, J. W., Harel, O., & Carpenter, J. R. (2015). Asymptotically unbiased estimation of exposure odds ratios in complete records logistic regression. American Journal of Epidemiology, 182(8), 730–736.

Belloni, A., Chen, D., Chernozhukov, V., & Hansen, C. (2012). Sparse models and methods for optimal instruments with an application to eminent domain. Econometrica, 80(6), 2369–2429.

Belloni, A., & Chernozhukov, V. (2011). L1-penalized quantile regression in high-dimensional sparse models. Annals of Statistics, 39(1), 82–130.

Belloni, A., Chernozhukov, V., & Hansen, C. (2014). Inference on treatment effects after selection among high-dimensional controls. Review of Economic Studies, 81(2), 608–650.

Bickel, P. J., Götze, F., & van Zwet, W. R. (1997). Resampling fewer than n observations: Gains, losses, and remedies for losses. Statistica Sinica, 7, 1–31.

Bickel, P. J., & Kwon, J. (2001). Inference for semiparametric models: Some questions and an answer. Statistica Sinica, 11, 920–936.

Busso, M., DiNardo, J., & McCrary, J. (2014). New evidence on the finite sample properties of propensity score reweighting and matching estimators. Review of Economics and Statistics, 96(5), 885–897.

Chernozhukov, V., Chetverikov, D., Demirer, M., Duflo, E., Hansen, C., & Newey, W. (2017). Double/debiased/Neyman machine learning of treatment effects. American Economic Review, 107(5), 261–265.

Chernozhukov, V., Chetverikov, D., Demirer, M., Duflo, E., Hansen, C., Newey, W., & Robins, J. (2018). Double/debiased machine learning for treatment and structural parameters. Econometrics Journal, 21(1), C1–C68.

Chernozhukov, V., Goldman, M., Semenova, V., & Taddy, M. (2017). Orthogonal ML for demand estimation: High dimensional causal inference in dynamic panels. arXiv preprint arXiv:1712.09988.

Cole, S. R., & Frangakis, C. E. (2009). The consistency statement in causal inference: A definition or an assumption? Epidemiology, 20(1), 3–5.

D’Agostino, R., Lang, W., Walkup, M., Morgan, T., & Karter, A. (2001). Examining the impact of missing data on propensity score estimation in determining the effectiveness of self-monitoring of blood glucose (SMBG). Health Services and Outcomes Research Methodology, 2(3), 291–315.

Diamond, A., & Sekhon, J. S. (2013). Genetic matching for estimating causal effects: A general multivariate matching method for achieving balance in observational studies. Review of Economics and Statistics, 95(3), 932–945.

DiazOrdaz, K., Daniel, R., & Kreif, N. (2018). Data-adaptive doubly robust instrumental variable methods for treatment effect heterogeneity. arXiv preprint arXiv:1802.02821v1.

Dorie, V., Hill, J., Shalit, U., Scott, M., & Cervone, D. (2017). Automated versus do-it-yourself methods for causal inference: Lessons learned from a data analysis competition. arXiv preprint arXiv:1707.02641.

Dukes, O., Avagyan, V., & Vansteelandt, S. (2018). High-dimensional doubly robust tests for regression parameters. arXiv preprint arXiv:1805.06714.

Eliseeva, E., Hubbard, A. E., & Tager, I. B. (2013). An application of machine learning methods to the derivation of exposure-response curves for respiratory outcomes. U.C. Berkeley Division of Biostatistics Working Paper Series, Working Paper 309. Berkeley: University of California.

Farrell, M. H. (2015). Robust inference on average treatment effects with possibly more covariates than observations. Journal of Econometrics, 189(1), 1–23.

Friedman, J., Hastie, T., & Tibshirani, R. (2001). The elements of statistical learning, Vol. 1. New York: Springer.

Gruber, S., Logan, R. W., Jarrín, I., Monge, S., & Hernán, M. A. (2015). Ensemble learning of inverse probability weights for marginal structural modeling in large observational datasets. Statistics in Medicine, 34(1), 106–117.

Gruber, S., & van der Laan, M. J. (2009). Targeted maximum likelihood estimation: A gentle introduction. U.C. Berkeley Division of Biostatistics Working Paper Series, Working Paper 252. Berkeley: University of California.

Gruber, S., & van der Laan, M. J. (2010). A targeted maximum likelihood estimator of a causal effect on a bounded continuous outcome. International Journal of Biostatistics, 6(1).

Gruber, S., & van der Laan, M. J. (2012). tmle: An R package for targeted maximum likelihood estimation. Journal of Statistical Software, 51(13).

Hahn, P. R., Murray, J. S., & Carvalho, C. M. (2017). Bayesian regression tree models for causal inference: Regularization, confounding, and heterogeneous effects. arXiv preprint arXiv:1706.09523.

Hainmueller, J. (2012). Entropy balancing for causal effects: A multivariate reweighting method to produce balanced samples in observational studies. Political Analysis, 20(1), 25–46.

Hansen, C., & Kozbur, D. (2014). Instrumental variables estimation with many weak instruments using regularized JIVE. Journal of Econometrics, 182(2), 290–308.

Heinze, G., Wallisch, C., & Dunkler, D. (2018). Variable selection—A review and recommendations for the practicing statistician. Biometrical Journal, 60(3), 431–449.

Hill, J. L. (2011). Bayesian nonparametric modeling for causal inference. Journal of Computational and Graphical Statistics, 20(1), 217–240.

Hirano, K., Imbens, G. W., & Ridder, G. (2003). Efficient estimation of average treatment effects using the estimated propensity score. Econometrica, 71(4), 1161–1189.

Imai, K., & Ratkovic, M. (2013). Estimating treatment effect heterogeneity in randomized program evaluation. Annals of Applied Statistics, 7(1), 443–470.

Imai, K., & Ratkovic, M. (2014). Covariate balancing propensity score. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 76(1), 243–263.

Imbens, G. W., & Wooldridge, J. M. (2009). Recent developments in the econometrics of program evaluation. Journal of Economic Literature, 47(1), 5–86.

James, G., Witten, D., Hastie, T., & Tibshirani, R. (2013). An introduction to statistical learning, Vol. 112. New York: Springer.

Ju, C., Gruber, S., Lendle, S. D., Chambaz, A., Franklin, J. M., Wyss, R., ... van der Laan, M. J. (2019a). Scalable collaborative targeted learning for high-dimensional data. Statistical Methods in Medical Research, 28(2), 532–554.

Ju, C., Wyss, R., Franklin, J. M., Schneeweiss, S., Häggström, J., & van der Laan, M. J. (2019b). Collaborative-controlled LASSO for constructing propensity score-based estimators in high-dimensional data. Statistical Methods in Medical Research, 28(4), 1044–1063.

Kallus, N. (2017). Balanced policy evaluation and learning. arXiv preprint arXiv:1705.07384.

Kang, J. D., & Schafer, J. L. (2007). Demystifying double robustness: A comparison of alternative strategies for estimating a population mean from incomplete data. Statistical Science, 22(4), 523–539.

Kapelner, A., & Bleich, J. (2016). bartMachine: Machine learning with Bayesian additive regression trees. Journal of Statistical Software, 70(4), 1–40.

Kennedy, E. H. (2016). Semiparametric theory and empirical processes in causal inference. In H. He, P. Wu, & P. Chen (Eds.), Statistical causal inferences and their applications in public health research (pp. 141–167). Cham, Switzerland: Springer.

Kitagawa, T., & Tetenov, A. (2017). Who should be treated? Empirical welfare maximization methods for treatment choice. cemmap Working Paper CWP24/17. London: Institute for Fiscal Studies.

Kleinberg, J., Ludwig, J., Mullainathan, S., & Obermeyer, Z. (2015). Prediction policy problems. American Economic Review, 105(5), 491–495.

Kreif, N., Grieve, R., Radice, R., & Sekhon, J. S. (2013). Regression-adjusted matching and double-robust methods for estimating average treatment effects in health economic evaluation. Health Services and Outcomes Research Methodology, 13(2–4), 174–202.

Kreif, N., Gruber, S., Radice, R., Grieve, R., & Sekhon, J. S. (2016). Evaluating treatment effectiveness under model misspecification: A comparison of targeted maximum likelihood estimation with bias-corrected matching. Statistical Methods in Medical Research, 25(5), 2315–2336.

Kreif, N., Mirelman, A., Moreno Serra, R., Hidayat, B., Erlannga, D., & Suhrcke, M. (2018). Evaluating the impact of public health insurance reforms on infant mortality in Indonesia. Technical report. Cited in Kreif, N., & DiazOrdaz, K. (2019). Machine learning in policy evaluation: New tools for causal inference. arXiv:1903.00402v1.

Lee, B. K., Lessler, J., & Stuart, E. A. (2010). Improving propensity score weighting using machine learning. Statistics in Medicine, 29(3), 337–346.

Leeb, H., & Pötscher, B. M. (2008). Can one estimate the unconditional distribution of post-model-selection estimators? Econometric Theory, 24, 338–376.

Luedtke, A., & Chambaz, A. (2017). Faster rates for policy learning. arXiv preprint arXiv:1704.06431.

Lunceford, J. K., & Davidian, M. (2004). Stratification and weighting via the propensity score in estimation of causal treatment effects: A comparative study. Statistics in Medicine, 23(19), 2937–2960.

McCaffrey, D. F., Ridgeway, G., & Morral, A. R. (2004). Propensity score estimation with boosted regression for evaluating causal effects in observational studies. Psychological Methods, 9(4), 403.

Mebane, W. R., Jr., & Sekhon, J. S. (2011). Genetic optimization using derivatives: The rgenoud package for R. Journal of Statistical Software, 42(11), 1–26.

Mullainathan, S., & Spiess, J. (2017). Machine learning: An applied econometric approach. Journal of Economic Perspectives, 31(2), 87–106.

Pearl, J. (2010). On the consistency rule in causal inference: Axiom, definition, assumption, or theorem? Epidemiology, 21(6), 872–875.

Petersen, M. L. (2014). Applying a causal road map in settings with time-dependent confounding. Epidemiology, 25(6), 898.

Petersen, M. L., Porter, K. E., Gruber, S., Wang, Y., & van der Laan, M. J. (2012). Diagnosing and responding to violations in the positivity assumption. Statistical Methods in Medical Research, 21(1), 31–54.

Pirracchio, R., & Carone, M. (2018). The Balance Super Learner: A robust adaptation of the Super Learner to improve estimation of the average treatment effect in the treated based on propensity score matching. Statistical Methods in Medical Research, 27(8), 2504–2518.

Pirracchio, R., Petersen, M. L., & van der Laan, M. J. (2015). Improving propensity score estimators’ robustness to model misspecification using Super Learner. American Journal of Epidemiology, 181(2), 108–119.

Pirracchio, R., Resche-Rigon, M., & Chevret, S. (2012). Evaluation of the propensity score methods for estimating marginal odds ratios in case of small sample size. BMC Medical Research Methodology, 12(1), 70.

Polley, E. C., & van der Laan, M. J. (2010, May). Super Learner in prediction. U.C. Berkeley Division of Biostatistics Working Paper Series, Working Paper 266. Berkeley: University of California.

Porter, K. E., Gruber, S., van der Laan, M. J., & Sekhon, J. S. (2011). The relative performance of targeted maximum likelihood estimators. International Journal of Biostatistics, 7(1), 1–34.

Robins, J. M., Hernán, M. A., & Brumback, B. (2000). Marginal structural models and causal inference in epidemiology. Epidemiology, 11(5), 550–560.

Robins, J. M., Rotnitzky, A., & Zhao, L. P. (1994). Estimation of regression coefficients when some regressors are not always observed. Journal of the American Statistical Association, 89(427), 846–866.

Robins, J. M., Rotnitzky, A., & Zhao, L. P. (1995). Analysis of semiparametric regression models for repeated outcomes in the presence of missing data. Journal of the American Statistical Association, 90(429), 106–121.

Robins, J., Sued, M., Lei-Gomez, Q., & Rotnitzky, A. (2007). Comment: Performance of double-robust estimators when “inverse probability” weights are highly variable. Statistical Science, 22(4), 544–559.

Robinson, P. M. (1988). Root-n-consistent semiparametric regression. Econometrica, 56(4), 931–954.

Rose, S. (2013). Mortality risk score prediction in an elderly population using machine learning. American Journal of Epidemiology, 177(5), 443–452.

Rosenbaum, P. R., & Rubin, D. B. (1983). The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1), 41–55.

Rosenbaum, P. R., & Rubin, D. B. (1984). Reducing bias in observational studies using subclassification on the propensity score. Journal of the American Statistical Association, 79(387), 516–524.

Rubin, D. B. (1978). Bayesian inference for causal effects: The role of randomization. Annals of Statistics, 6(1), 34–58.

Rubin, D. B. (1987). In discussion of Tanner, M. A., and Wong, W. H., The calculation of posterior distributions by data augmentation. Journal of the American Statistical Association, 82, 543–546.

Rubin, D. B. (2005). Causal inference using potential outcomes: Design, modeling, decisions. Journal of the American Statistical Association, 100(469), 322–331.

Rubin, D. B. (2007). The design versus the analysis of observational studies for causal effects: Parallels with the design of randomized trials. Statistics in Medicine, 26(1), 20–36.

Rubin, D. B., & Thomas, N. (1996). Matching using estimated propensity scores: Relating theory to practice. Biometrics, 52(1), 249–264.

Rubin, D. B., & Thomas, N. (2000). Combining propensity score matching with additional adjustments for prognostic covariates. Journal of the American Statistical Association, 95(450), 573–585.

Seaman, S., & White, I. (2014). Inverse probability weighting with missing predictors of treatment assignment or missingness. Communications in Statistics–Theory and Methods, 43(16), 3499–3515.

Seaman, S. R., & Vansteelandt, S. (2018). Introduction to double robust methods for incomplete data. Statistical Science, 33(2), 184–197.

Sekhon, J. S. (2011). Multivariate and propensity score matching software with automated balance optimization: The Matching package for R. Journal of Statistical Software, 42(7).

Setoguchi, S., Schneeweiss, S., Brookhart, M. A., Glynn, R. J., & Cook, E. F. (2008). Evaluating uses of data mining techniques in propensity score estimation: A simulation study. Pharmacoepidemiology and Drug Safety, 17(6), 546–555.

Stuart, E. A. (2010). Matching methods for causal inference: A review and a look forward. Statistical Science, 25(1), 1–21.

Titiunik, R. (2015). Can big data solve the fundamental problem of causal inference? PS: Political Science & Politics, 48(1), 75–79.

Tóth, B., & van der Laan, M. J. (2016). TMLE for marginal structural models based on an instrument. U.C. Berkeley Division of Biostatistics Working Paper Series, Working Paper 350. Berkeley: University of California.

van der Laan, M. J., & Dudoit, S. (2003). Unified cross-validation methodology for selection among estimators and a general cross-validated adaptive epsilon-net estimator: Finite sample oracle inequalities and examples. U.C. Berkeley Division of Biostatistics Working Paper Series, Working Paper 130. Berkeley: University of California.

van der Laan, M. J., & Gruber, S. (2010). Collaborative double robust targeted maximum likelihood estimation. International Journal of Biostatistics, 6(1).

van der Laan, M. J., & Luedtke, A. R. (2014). Targeted learning of an optimal dynamic treatment, and statistical inference for its mean outcome. U.C. Berkeley Division of Biostatistics Working Paper Series, Working Paper 317. Berkeley: University of California.

van der Laan, M. J., & Luedtke, A. R. (2015). Targeted learning of the mean outcome under an optimal dynamic treatment rule. Journal of Causal Inference, 3(1), 61–95.

van der Laan, M. J., Polley, E. C., & Hubbard, A. E. (2007). Super Learner. U.C. Berkeley Division of Biostatistics Working Paper Series, Working Paper 222. Berkeley: University of California.

van der Laan, M. J., & Robins, J. M. (2003). Unified methods for censored longitudinal data and causality. New York: Springer.

van der Laan, M. J., & Rose, S. (2011). Targeted learning: Causal inference for observational and experimental data. New York: Springer.

van der Laan, M. J., & Rubin, D. (2006). Targeted maximum likelihood learning. International Journal of Biostatistics, 2(1).

van der Vaart, A. (2014). Higher order tangent spaces and influence functions. Statistical Science, 29(4), 679–686.

VanderWeele, T. J. (2009). Concerning the consistency assumption in causal inference. Epidemiology, 20(6), 880–883.

Vansteelandt, S., Bekaert, M., & Claeskens, G. (2012). On model selection and model misspecification in causal inference. Statistical Methods in Medical Research, 21(1), 7–30.

Varian, H. R. (2014). Big data: New tricks for econometrics. Journal of Economic Perspectives, 28(2), 3–28.

Wager, S., & Athey, S. (2018). Estimation and inference of heterogeneous treatment effects using random forests. Journal of the American Statistical Association, 113(523), 1228–1242.

Westreich, D., Cole, S. R., Funk, M. J., Brookhart, M. A., & Stürmer, T. (2011). The role of the c-statistic in variable selection for propensity score models. Pharmacoepidemiology and Drug Safety, 20(3), 317–320.

Westreich, D., Lessler, J., & Funk, M. J. (2010). Propensity score estimation: Neural networks, support vector machines, decision trees (CART), and meta-classifiers as alternatives to logistic regression. Journal of Clinical Epidemiology, 63(8), 826–833.

Wyss, R., Ellis, A. R., Brookhart, M. A., Girman, C. J., Jonsson Funk, M., LoCasale, R., & Stürmer, T. (2014). The role of prediction modeling in propensity score estimation: An evaluation of logistic regression, bCART, and the covariate-balancing propensity score. American Journal of Epidemiology, 180(6), 645–655.

Zhang, P. (1993). Model selection via multifold cross validation. Annals of Statistics, 21(1), 299–313.

Zhu, Y., Coffman, D. L., & Ghosh, D. (2015). A boosting algorithm for estimating generalized propensity scores with continuous treatments. Journal of Causal Inference, 3(1), 25–40.

Zubizarreta, J. R. (2012). Using mixed integer programming for matching in an observational study of kidney failure after surgery. Journal of the American Statistical Association, 107(500), 1360–1371.

Notes:

(1.) We note that an extension of the SL that optimizes balance has been proposed by Pirracchio and Carone (2018).