1-3 of 3 results for:

  • Econometrics, Experimental and Quantitative Methods
  • Economic Development

Article

Growth Econometrics  

Jonathan R. W. Temple

Growth econometrics is the application of statistical methods to the study of economic growth and levels of national output or income per head. Researchers often seek to understand why growth rates differ across countries. The field developed rapidly in the 1980s and 1990s, but the early work often proved fragile. Cross-section analyses are limited by the relatively small number of countries in the world and problems of endogeneity, parameter heterogeneity, model uncertainty, and cross-section error dependence. The long-term prospects look better for approaches using panel data. Overall, the quality of the evidence has improved over time, due to better measurement, more data, and new methods. As longer spans of data become available, the methods of growth econometrics will shed light on fundamental questions that are hard to answer any other way.

Article

Mergers and Acquisitions: Long-Run Performance and Success Factors  

Luc Renneboog and Cara Vansteenkiste

Despite the aggregate value of M&A market transactions amounting to several trillion dollars on an annual basis, acquiring firms often underperform relative to non-acquiring firms, especially in public takeovers. Although hundreds of academic studies have investigated the deal- and firm-level factors associated with M&A announcement returns, many factors that increase M&A performance in the short run fail to relate to sustained long-run returns. In order to understand value creation in M&As, it is key to identify the firm and deal characteristics that can reliably predict long-run performance. Broadly speaking, long-run underperformance in M&A deals results from poor acquirer governance (reflected by CEO overconfidence and a lack of (institutional) shareholder monitoring) as well as from poor merger execution and integration (as captured by the degree of acquirer-target relatedness in the post-merger integration process). Although many more dimensions affect immediate deal transaction success, their effect on long-run performance is non-existent, or mixed at best.

Article

Machine Learning in Policy Evaluation: New Tools for Causal Inference  

Noémi Kreif and Karla DiazOrdaz

While machine learning (ML) methods have received much attention in recent years, these methods are primarily designed for prediction. Empirical researchers conducting policy evaluations are, on the other hand, preoccupied with causal problems, trying to answer counterfactual questions: what would have happened in the absence of a policy? Because these counterfactuals can never be directly observed (described as the “fundamental problem of causal inference”), prediction tools from the ML literature cannot be readily used for causal inference. In the last decade, major innovations have taken place incorporating supervised ML tools into estimators for causal parameters such as the average treatment effect (ATE). This holds the promise of attenuating model misspecification and increasing transparency in model selection. One particularly mature strand of the literature includes approaches that incorporate supervised ML in the estimation of the ATE of a binary treatment, under the unconfoundedness and positivity assumptions (also known as exchangeability and overlap assumptions). This article begins by reviewing popular supervised machine learning algorithms, including tree-based methods and the lasso, as well as ensembles, with a focus on the Super Learner. Then, some specific uses of machine learning for treatment effect estimation are introduced and illustrated, namely (1) to create balance among treated and control groups, (2) to estimate so-called nuisance models (e.g., the propensity score, or conditional expectations of the outcome) in semi-parametric estimators that target causal parameters (e.g., targeted maximum likelihood estimation or the double ML estimator), and (3) to select variables in settings with a large number of covariates.
Since there is no universal best estimator, whether parametric or data-adaptive, it is best practice to adopt a semi-automated approach that can select the models best supported by the observed data, thus reducing reliance on subjective choices.
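The semi-parametric estimators described in the abstract above can be illustrated with a minimal sketch. This is not code from the article: it uses plain logistic and linear regressions from scikit-learn (rather than the data-adaptive learners such as the lasso or the Super Learner that the article discusses) to fit the two nuisance models, the propensity score and the arm-specific conditional outcome expectations, and then combines them in a doubly robust (AIPW) estimator of the ATE under unconfoundedness and positivity. The simulated data and the true effect of 2.0 are assumptions made purely for the demonstration.

```python
# Sketch of a doubly robust (AIPW) estimator of the ATE of a binary
# treatment, combining two ML-estimated "nuisance" models: the propensity
# score and the conditional outcome expectations. Simulated data only.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 3))

# Treatment assignment depends on covariates (confounding);
# the true ATE is set to 2.0 by construction.
p_true = 1.0 / (1.0 + np.exp(-x[:, 0]))
t = rng.binomial(1, p_true)
y = 2.0 * t + x @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=n)

# Nuisance model 1: propensity score e(x) = P(T = 1 | X).
ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]

# Nuisance model 2: outcome regressions mu_1(x), mu_0(x), fit per arm.
mu1 = LinearRegression().fit(x[t == 1], y[t == 1]).predict(x)
mu0 = LinearRegression().fit(x[t == 0], y[t == 0]).predict(x)

# AIPW estimator: outcome-model prediction plus an inverse-probability-
# weighted residual correction, giving double robustness.
ate = np.mean(
    mu1 - mu0
    + t * (y - mu1) / ps
    - (1 - t) * (y - mu0) / (1 - ps)
)
```

In practice, the semi-parametric estimators the article covers (e.g., targeted maximum likelihood or double ML) would replace these parametric nuisance fits with flexible learners and use cross-fitting, but the doubly robust combination of the two nuisance models is the same.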