Results 41-46 of 46 for: Econometrics, Experimental and Quantitative Methods

Article

Health behaviors are a major source of morbidity and mortality in the developed and much of the developing world. The social nature of many of these behaviors, such as eating or using alcohol, and the normative connotations that accompany others (e.g., sexual behavior, illegal drug use) make them quite susceptible to peer influence. This article assesses the role of social interactions in the determination of health behaviors. It highlights the methodological progress of the past two decades in addressing the multiple challenges inherent in the estimation of peer effects, and notes methodological issues that still need to be confronted. A comprehensive review of the empirical economics literature (mostly for developed countries) shows strong and robust peer effects across a wide set of health behaviors, including alcohol use, body weight, food intake, body fitness, teen pregnancy, and sexual behaviors. The evidence is mixed for tobacco use, illicit drug use, and mental health. The article also explores the still-incipient literature on the mechanisms behind peer influence and on new developments in the study of social networks that are shedding light on the dynamics of social influence. There is suggestive evidence that social norms and social conformism lie behind peer effects in substance use, obesity, and teen pregnancy, while social learning has been identified as a channel behind fertility decisions, mental health care utilization, and uptake of medication. Future research needs to deepen the understanding of the mechanisms behind peer influence in health behaviors in order to design more targeted welfare-enhancing policies.
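A key methodological challenge alluded to above is separating genuine peer influence from shared group-level environments. The following minimal simulation (purely illustrative, not from the article; all names are hypothetical) shows how a naive regression of an outcome on the leave-one-out peer mean produces a spurious "peer effect" even when the true peer effect is zero, because group members share a common shock:

```python
# Correlated effects masquerading as peer influence: regress an outcome on
# the leave-one-out peer mean when groups share a common shock but there is
# no genuine social influence.
import numpy as np

rng = np.random.default_rng(0)
n_groups, group_size = 500, 10
true_peer_effect = 0.0                 # no genuine peer influence

# common group shock (e.g., a shared neighborhood environment) + individual noise
group_shock = rng.normal(0, 1, n_groups).repeat(group_size)
y = group_shock + rng.normal(0, 1, n_groups * group_size)

# leave-one-out peer mean within each group
groups = np.arange(n_groups).repeat(group_size)
group_sums = np.bincount(groups, weights=y)
peer_mean = (group_sums[groups] - y) / (group_size - 1)

# naive OLS slope of y on the peer mean
X = np.column_stack([np.ones_like(y), peer_mean])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"estimated 'peer effect': {beta[1]:.3f} (true value: {true_peer_effect})")
```

The large positive slope is one instance of the identification problems (correlated effects and, more generally, the reflection problem) that the methodological literature reviewed in the article addresses.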

Article

Elisa Tosetti, Rita Santos, Francesco Moscone, and Giuseppe Arbia

The spatial dimension of supply and demand factors is a very important feature of healthcare systems. Differences in health and behavior across individuals are due not only to personal characteristics but also to external forces, such as contextual factors, social interaction processes, and global health shocks. These factors generate various forms of spatial patterning and correlation that are often observed in the data and that it is desirable to incorporate in health econometric models. This article describes a set of exploratory techniques and econometric methods to visualize, summarize, test, and model spatial patterns in health economics phenomena, demonstrating their scientific and policy value when addressing health economics issues characterized by a strong spatial dimension. Exploring and modeling the spatial dimension of both the supply and demand sides of healthcare provision may help reduce inequalities in access to healthcare services and support policymakers in the design of financially sustainable healthcare systems.
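As a concrete instance of the exploratory techniques mentioned above, the following sketch computes Moran's I, a standard summary statistic for spatial correlation (the abstract does not single out specific statistics; this choice and the toy data are illustrative assumptions):

```python
# Moran's I: a classical statistic for spatial autocorrelation. Values near
# +1 indicate clustering of similar values in nearby regions; values near 0
# indicate spatial randomness.
import numpy as np

def morans_i(x, W):
    """Moran's I for observations x and a spatial weights matrix W (n x n)."""
    z = np.asarray(x, dtype=float)
    z = z - z.mean()
    n = len(z)
    s0 = W.sum()                              # total weight
    return (n / s0) * (z @ W @ z) / (z @ z)

# toy example: four regions on a line; neighbors share an edge
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
regional_rates = [2.0, 2.2, 5.1, 5.3]        # hypothetical regional health outcome rates
print(f"Moran's I: {morans_i(regional_rates, W):.3f}")   # positive: spatial clustering
```

In practice the statistic is compared against its distribution under spatial randomness (analytically or by permutation) to test whether the observed spatial pattern is significant.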

Article

The recent “replication crisis” in the social sciences has led to increased attention to what statistically significant results entail. There are many reasons why false positive results may be published in the scientific literature, such as low statistical power and “researcher degrees of freedom” in the analysis (where researchers, when testing a hypothesis, more or less actively seek results with p < .05). The results from three large replication projects in psychology, experimental economics, and the social sciences are discussed, with most of the focus on the last project, where the statistical power in the replications was substantially higher than in the other projects. The results suggest that a substantial share of published results in top journals do not replicate. While several replication indicators have been proposed, the main indicator of whether a result replicates is whether the replication study, using the same statistical test, finds a statistically significant effect (p < .05 in a two-sided test). For the project with very high statistical power, the various replication indicators agree to a larger extent than in the other replication projects, most likely because of the higher power. Although the replications discussed are mainly experiments, there is no reason to believe that replicability would be higher in other parts of economics and finance; if anything, the opposite holds, owing to greater researcher degrees of freedom. There is also a discussion of solutions to the often-observed low replicability, including lowering the p-value threshold for statistical significance to .005 and increasing the use of preanalysis plans and registered reports for new studies as well as replications, followed by a discussion of measures of peer beliefs. Recent attempts to gauge the extent to which the academic community is aware of the limited reproducibility and can predict replication outcomes using prediction markets and surveys suggest that peer beliefs may be viewed as an additional reproducibility indicator.
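To make the role of statistical power concrete, here is a minimal sketch (an illustrative calculation, not taken from the replication projects) of the probability that a study of a true standardized effect reaches p < .05 in a two-sided test, as a function of sample size:

```python
# Power of a two-sample, two-sided z-test (unit-variance outcomes) for a
# standardized effect size (Cohen's d), as a function of per-arm sample size.
from scipy.stats import norm

def power_two_sided(effect_size, n_per_arm, alpha=0.05):
    se = (2 / n_per_arm) ** 0.5          # SE of the difference in means
    z_crit = norm.ppf(1 - alpha / 2)     # two-sided critical value
    z_eff = effect_size / se             # noncentrality
    # probability of landing beyond either critical boundary
    return norm.sf(z_crit - z_eff) + norm.cdf(-z_crit - z_eff)

for n in (20, 80, 320, 1280):
    print(f"n per arm = {n:4d}: power = {power_two_sided(0.2, n):.2f}")
```

With a small true effect (d = 0.2), a 20-per-arm replication reaches significance less than 10% of the time even though the effect is real, which is why the high-powered project discussed above gives a cleaner read on replicability.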

Article

Alessandro Casini and Pierre Perron

This article covers methodological issues related to estimation, testing, and computation for models involving structural changes. Our aim is to review developments as they relate to econometric applications based on linear models. Substantial advances have been made to cover models at a level of generality that allows a host of interesting practical applications. These include models with general stationary regressors and errors that can exhibit temporal dependence and heteroskedasticity, models with trending variables and possible unit roots, and cointegrated models, among others. Advances have been made pertaining to computational aspects of constructing estimates, their limit distributions, tests for structural changes, and methods to determine the number of changes present. A variety of topics are covered, including recent developments: testing for common breaks, models with endogenous regressors (emphasizing that simply using least squares is preferable to instrumental variables methods), quantile regressions, methods based on the Lasso, panel data models, testing for changes in forecast accuracy, factor models, and methods of inference based on a continuous-record asymptotic framework. Our focus is on so-called off-line methods, whereby one wants to retrospectively test for breaks in a given sample of data and form confidence intervals for the break dates. The aim is to provide readers with an overview of methods that are of direct use in practice, as opposed to issues mostly of theoretical interest.
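As a flavor of the off-line testing methods reviewed, the following sketch (a stylized illustration, not the authors' code; critical values and multiple-break extensions are omitted) computes a sup-F statistic for a single break in a linear regression by scanning over trimmed candidate break dates:

```python
# Sup-F test for a single structural break: compare the full-sample OLS fit
# against the best two-regime split, scanning break dates over a trimmed range.
import numpy as np

def ssr(y, X):
    """Sum of squared OLS residuals."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    return e @ e

def sup_f(y, X, trim=0.15):
    n, k = X.shape
    ssr_full = ssr(y, X)
    candidates = []
    for tb in range(int(trim * n), int((1 - trim) * n)):
        ssr_split = ssr(y[:tb], X[:tb]) + ssr(y[tb:], X[tb:])
        f = ((ssr_full - ssr_split) / k) / (ssr_split / (n - 2 * k))
        candidates.append((f, tb))
    return max(candidates)       # (sup-F value, SSR-minimizing break date)

# toy example: a mean shift halfway through the sample
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0.0, 1, 100), rng.normal(1.0, 1, 100)])
X = np.ones((200, 1))            # intercept-only model
f_stat, tb = sup_f(y, X)
print(f"sup-F = {f_stat:.1f}, estimated break date = {tb}")
```

The date at which the split minimizes the sum of squared residuals is also the least-squares estimate of the break date; in practice the sup-F value is compared against the nonstandard critical values tabulated in this literature.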

Article

Thomas J. Kniesner and W. Kip Viscusi

The value of a statistical life (VSL) is the local tradeoff rate between fatality risk and money. When the tradeoff values are derived from choices in market contexts, the VSL serves as both a measure of the population’s willingness to pay for risk reduction and the marginal cost of enhancing safety. Given its fundamental economic role, policy analysts have adopted the VSL as the economically correct measure of the benefit individuals receive from enhancements to their health and safety. Estimates of the VSL for the United States are around $10 million (in 2017 dollars), and estimates for other countries are generally lower, given the positive income elasticity of the VSL. Because of the prominence of mortality risk reductions as the justification for government policies, the VSL is a crucial component of the benefit-cost analyses that are part of the regulatory process in the United States and other countries. The VSL is also foundationally related to the concepts of the value of a statistical life year (VSLY) and the value of a statistical injury (VSI), which also permeate the labor and health economics literatures. Thus, the same types of valuation approaches can be used to monetize non-fatal injuries and mortality risks that have very small effects on life expectancy. In addition to formalizing the concept and measurement of the VSL and presenting representative estimates for the United States and other countries, our Encyclopedia selection addresses the most important nuances of interest to researchers and policymakers.
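A stylized numerical illustration of the tradeoff-rate definition (the numbers are hypothetical but consistent with the $10 million figure above): if each worker in a large group requires a $1,000 annual wage premium to accept an additional annual fatality risk of 1 in 10,000, the implied VSL is

```latex
\mathrm{VSL} \;=\; \frac{\Delta\,\text{wage}}{\Delta\,\text{risk}}
             \;=\; \frac{\$1{,}000}{1/10{,}000}
             \;=\; \$10\ \text{million}.
```

Equivalently, 10,000 such workers collectively receive $10 million in compensation and expect one additional fatality among them, which is the sense in which the life valued is "statistical" rather than identified.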

Article

High-dimensional dynamic factor models have their origin in macroeconomics, more specifically in empirical research on business cycles. The central idea, going back to the work of Burns and Mitchell in the 1940s, is that the fluctuations of all the macro and sectoral variables in the economy are driven by a “reference cycle,” that is, a one-dimensional latent cause of variation. After a fairly long process of generalization and formalization, the literature settled at the beginning of the 2000s on a model in which (a) both n, the number of variables in the data set, and T, the number of observations for each variable, may be large; and (b) all the variables in the data set depend dynamically on a fixed number of common shocks, independent of n, plus variable-specific components, usually called idiosyncratic. The structure of the model can be exemplified as follows: $(*)\;\; x_{it} = \alpha_i u_t + \beta_i u_{t-1} + \xi_{it}$, $i = 1, \ldots, n$, $t = 1, \ldots, T$, where the observable variables $x_{it}$ are driven by the white noise $u_t$, which is common to all the variables (the common shock), and by the idiosyncratic component $\xi_{it}$. The common shock $u_t$ is orthogonal to the idiosyncratic components $\xi_{it}$, and the idiosyncratic components are mutually orthogonal (or weakly correlated). Last, the variations of the common shock $u_t$ affect the variable $x_{it}$ dynamically, that is, through the lag polynomial $\alpha_i + \beta_i L$. Asymptotic results for high-dimensional factor models, in particular consistency of estimators of the common shocks, are obtained for both n and T tending to infinity. The time-domain approach to these factor models is based on the transformation of dynamic equations into static representations. For example, equation $(*)$ becomes $x_{it} = \alpha_i F_{1t} + \beta_i F_{2t} + \xi_{it}$, with $F_{1t} = u_t$ and $F_{2t} = u_{t-1}$. Instead of the dynamic equation $(*)$ there is now a static equation, and instead of the white noise $u_t$ there are now two factors, also called static factors, which are dynamically linked: $F_{1t} = u_t$, $F_{2t} = F_{1,t-1}$. This transformation into a static representation, whose general form is $x_{it} = \lambda_{i1} F_{1t} + \cdots + \lambda_{ir} F_{rt} + \xi_{it}$, is extremely convenient for estimation and forecasting of high-dimensional dynamic factor models. In particular, the factors $F_{jt}$ and the loadings $\lambda_{ij}$ can be consistently estimated from the principal components of the observable variables $x_{it}$. Assumptions allowing consistent estimation of the factors and loadings are discussed in detail. Moreover, it is argued that in general the vector of the factors is singular; that is, it is driven by a number of shocks smaller than its dimension. This fact has very important consequences. In particular, singularity implies that the fundamentalness problem, which is hard to solve in structural vector autoregressive (VAR) analysis of macroeconomic aggregates, disappears when the latter are studied as part of a high-dimensional dynamic factor model.
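The principal-components step described above can be sketched in a few lines (an illustrative simulation, not the article's code; the normalization $\hat F' \hat F / T = I_r$ follows the standard large-panel treatment):

```python
# Simulate the two-factor static representation of equation (*) and recover
# factors and loadings from principal components of the panel x_it.
import numpy as np

rng = np.random.default_rng(0)
n, T, r = 100, 200, 2

# dynamic model (*): x_it = alpha_i u_t + beta_i u_{t-1} + xi_it
u = rng.normal(size=T + 1)                       # common white-noise shock
F = np.column_stack([u[1:], u[:-1]])             # static factors: F_1t = u_t, F_2t = u_{t-1}
Lam = rng.normal(size=(n, r))                    # loadings (alpha_i, beta_i)
X = F @ Lam.T + 0.5 * rng.normal(size=(T, n))    # plus idiosyncratic components xi_it

# principal-components estimator: top-r eigenvectors of XX'/(nT), scaled by
# sqrt(T), estimate the factors; loadings follow by OLS given the factors
Xc = X - X.mean(axis=0)
_, eigvec = np.linalg.eigh(Xc @ Xc.T / (n * T))  # eigenvalues in ascending order
F_hat = np.sqrt(T) * eigvec[:, -r:]
Lam_hat = Xc.T @ F_hat / T

# factors are identified only up to an invertible rotation, so judge the fit
# by the common component rather than factor by factor
common, common_hat = F @ Lam.T, F_hat @ Lam_hat.T
print("R^2 of estimated common component:",
      1 - np.mean((common - common_hat) ** 2) / np.var(common))
```

The estimated factors span (approximately) the space of $u_t$ and $u_{t-1}$, illustrating how the singular, dynamically linked static factors are recovered even though individual factors are identified only up to rotation.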