1-10 of 74 Results for: Econometrics, Experimental and Quantitative Methods

Article

COVID-19 and Mental Health: Natural Experiments of the Costs of Lockdowns  

Climent Quintana-Domeque and Jingya Zeng

The global impact of the COVID-19 pandemic has been profound, leaving a significant imprint on physical health, the economy, and mental well-being. Researchers have undertaken empirical investigations across different countries, with a primary focus on understanding the association between lockdown measures—an essential public health intervention—and mental health. These studies aim to discern the causal effect of lockdowns on mental well-being. Three notable studies have adopted natural experiments to explore the causal effect of lockdowns on mental health in diverse countries. Despite variations in their research methodologies, these studies collectively support the conclusion that lockdowns have had detrimental consequences on mental health. Furthermore, they reveal that the intensity of these negative effects varies among distinct population groups. Certain segments of the population, such as women, have borne a more profound burden of the mental health costs associated with lockdown measures. In light of these findings, it becomes imperative to consider the implications for mental health when implementing public health interventions, especially during crises like the COVID-19 pandemic. While rigorous measures like lockdowns are essential for safeguarding public health, striking a balance with robust mental health support policies becomes crucial to mitigating the adverse impacts on mental well-being.

Article

Persistence Change and Segmented Cointegration Testing  

Paulo M. M. Rodrigues

The change in persistence of a time series refers to a shift in its order of integration. Rather than displaying stationary or nonstationary behavior throughout the whole sample period, as is frequently assumed in empirical work, many time series display changes in persistence over time. The analysis of possible changes in persistence, and of their impact, has been an important topic of research and has led to a large literature devoted to the development of procedures to detect such behavior. This review explores different tests designed to detect changes in persistence and in the long-run equilibrium of time series.
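
To make the notion of a change in persistence concrete, the short simulation below (an illustrative sketch, not code from the article; all variable names are my own) generates a series that behaves as a stationary AR(1), i.e., I(0), in the first half of the sample and as a random walk, i.e., I(1), in the second half. The ratio at the end contrasts partial-sum variability before and after the candidate break, one common ingredient of persistence-change statistics.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 400                      # total sample size
tau = T // 2                 # break point (unknown in practice)
eps = rng.standard_normal(T)

x = np.zeros(T)
# First half: stationary AR(1), i.e., I(0) behavior
for t in range(1, tau):
    x[t] = 0.5 * x[t - 1] + eps[t]
# Second half: random walk, i.e., I(1) behavior
for t in range(tau, T):
    x[t] = x[t - 1] + eps[t]

# Ratio-type quantity: variability of demeaned partial sums after vs. before
# the candidate break (illustrative only, not a formal test statistic).
num = np.sum(np.cumsum(x[tau:] - x[tau:].mean()) ** 2) / (T - tau) ** 2
den = np.sum(np.cumsum(x[:tau] - x[:tau].mean()) ** 2) / tau ** 2
print("ratio statistic:", num / den)
```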

Article

Econometric Methods for Business Cycle Dating  

Máximo Camacho Alonso and Lola Gadea

Over time, the reference cycle of an economy is determined by a sequence of unobservable business cycle turning points that partition the calendar into non-overlapping episodes of expansions and recessions. Dating these turning points supports economic analysis and is useful for economic agents, whether policymakers, investors, or academics. In the interest of transparency and reproducibility, statistical frameworks that automatically date turning points from a set of coincident economic indicators have been the source of remarkable advances in this area of research. These methods can be classified along several broad dimensions. Depending on the assumptions made about the data-generating process, dating methods are either parametric or non-parametric. There are also two main approaches to dealing with multivariate data sets: average then date, and date then average. The former focuses on computing a reference series for the aggregate economy, usually by averaging the indicators across the cross-sectional dimension; the global turning points are then dated on the aggregate indicator using one of the business cycle dating models available in the literature. The latter consists of dating the peaks and troughs in a set of coincident business cycle indicators separately and then determining the reference cycle from those periods in which the individual turning points cohere. In the early 21st century, the literature has shown that future work on dating the reference cycle will require dealing with a set of challenges. First, new tools have become available, and, being increasingly sophisticated, they may enlarge the existing academic-practitioner gap; compiling the codes that implement the dating methods and facilitating their practical implementation may reduce this gap. Second, the pandemic shock that hit economies worldwide led most industrialized countries to record in 2020 both the largest fall and the largest rebound in national economic indicators since records began. In the presence of these influential observations, the outcomes of dating methods could misrepresent the actual reference cycle, especially in the case of parametric approaches. Exploring non-parametric approaches, big data sources, and the classification ability offered by machine learning methods could help improve the performance of dating analyses.
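
As a minimal illustration of the non-parametric, "average then date" route described above, the sketch below (all data and names are hypothetical, not from the article) averages a small panel of coincident indicators and then applies a simple two-consecutive-declines rule to flag recessionary episodes. Operational dating algorithms impose additional censoring rules on phase and cycle length, so this is only a stylized stand-in.

```python
import numpy as np

def date_turning_points(growth):
    """Flag recession periods with a simple rule: a recession starts after two
    consecutive declines and ends after two consecutive rises (stylized)."""
    in_recession = False
    regime = []
    for t in range(len(growth)):
        if not in_recession and t >= 1 and growth[t] < 0 and growth[t - 1] < 0:
            in_recession = True
        elif in_recession and t >= 1 and growth[t] > 0 and growth[t - 1] > 0:
            in_recession = False
        regime.append(in_recession)
    return np.array(regime)

# "Average then date": average the coincident indicators, then date the average.
rng = np.random.default_rng(1)
indicators = rng.standard_normal((80, 4)) + 0.3   # quarterly growth rates, 4 series
aggregate = indicators.mean(axis=1)               # cross-sectional average
print(date_turning_points(aggregate))
```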

Article

Happiness and Productivity in the Workplace  

Mahnaz Nazneen and Daniel Sgroi

Happiness has become an important concept in economics, both as a target for government policy at the national level and through a growing understanding of the microeconomic effects of increased happiness. While correlational studies have for many years documented a relationship between individual-level happiness and productivity, more recent work provides causal evidence that a positive shock to happiness can boost productivity significantly. This evidence spans three strands of research. The first comprises longitudinal surveys that have generated evidence linking happiness to productivity but that run the risk of confounding happiness with other related variables that may be driving the relationship. The second consists of laboratory experiments that simulate a workplace under tightly controlled conditions; this strand has established a clear relationship between positive happiness shocks and rises in productivity. The third examines experimental field data, which sacrifices the control of the laboratory but offers greater realism. There is still work to be done, however, in generalizing these findings to more complex work environments, especially those that involve cooperative and team-based tasks, where increases in happiness may have other consequences.

Article

Real-Time Transaction Data for Nowcasting and Short-Term Economic Forecasting  

John W. Galbraith

Transaction data from consumer purchases are used for monitoring, nowcasting, or short-term forecasting of important macroeconomic aggregates such as personal consumption expenditure and national income. Data on individual purchase transactions, recorded electronically at the point of sale or online, offer the potential for accurate and rapid estimation of retail sales expenditure, itself an important component of personal consumption expenditure and therefore of national income. Such data may therefore allow policymakers to base actions on more up-to-date estimates of the state of the economy. However, while transaction data may be obtained from a number of sources, such as national payments systems, individual banks, or financial technology companies, data from each of these sources have limitations. Data sets differ in the information contained in a record, the degree to which the samples are representative of the relevant population of consumers, and the types of payments that are observed and captured in the record. As well, the commercial nature of the data may constrain the researcher's ability to make data sets available for replication. Regardless of the source, the data will generally require filtering and aggregation in order to provide a clear signal of changes in economic activity. The resulting series may then be incorporated, along with other data, into any of a variety of model types for nowcasting and short-term forecasting.
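
A minimal sketch of the filtering-and-aggregation step described above, using pandas on entirely hypothetical transaction records (column names and thresholds are my own): raw card transactions are cleaned of refunds and outliers, aggregated to a monthly total, and converted to year-over-year growth before entering a nowcasting model.

```python
import numpy as np
import pandas as pd

# Hypothetical raw transaction records (date, amount)
rng = np.random.default_rng(2)
dates = pd.date_range("2022-01-01", "2023-12-31", freq="D")
tx = pd.DataFrame({
    "date": rng.choice(dates, size=5000),
    "amount": rng.gamma(shape=2.0, scale=30.0, size=5000),
})
tx.loc[rng.random(5000) < 0.02, "amount"] *= -1   # a few refunds/chargebacks

# Filter: drop refunds and implausibly large transactions
clean = tx[(tx["amount"] > 0) & (tx["amount"] < 5000)]

# Aggregate to a monthly spending series and compute year-over-year growth
monthly = clean.set_index("date")["amount"].resample("MS").sum()
yoy_growth = monthly.pct_change(12)
print(yoy_growth.dropna().tail())
```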

Article

The Implications of Pupil Rank for Achievement  

Richard Murphy and Felix Weinhardt

Social interaction has become an increasingly important part of economic thought and modeling through work on peer effects, social norms, and networks. Within this literature, a novel focus on ranking within groups has emerged. The rank of an individual is usually defined as the ordinal position within a specific group. This could be the work environment or a classroom, and much of this literature focuses on rank effects in education settings. The literature studies rank effects for various age groups. There is evidence that a rank position even during early life phases, such as in elementary education, has lasting effects on education outcomes such as test scores, subject specializations, choices during college, and wages. A first-order challenge in the study of rank effects is to separate them from other highly correlated effects. For example, individuals with a high academic rank in a group are likely to have high academic ability in absolute terms. Papers in this field directly account for measured ability and so rely on the variation in rank that exists across groups for any given ability measure; a score of 80, for example, might place a student at the top of one group but near the bottom of another. The comparability of achievement measures across settings is key; one commonly employed solution is to account for level differences across settings. While the literature has now established the importance of rank, there are several (potentially non-competing) ideas about the precise behavioral mechanisms of why rank matters so much. Future work will most likely focus on integrating rank effects into the literature on social interactions to discuss implications for optimal group formation.
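
The identification idea described above can be illustrated with a short sketch (hypothetical data and variable names, not taken from the article): the same test score maps into different within-class percentile ranks, and it is this cross-group variation in rank, conditional on the score itself, that the literature exploits.

```python
import pandas as pd

# Hypothetical pupils: the same score of 80 ranks differently across classes
df = pd.DataFrame({
    "class_id": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "score":    [80, 62, 55, 47, 95, 91, 88, 80],
})

# Within-class percentile rank (pct=True: top pupil gets 1.0, bottom gets 1/n)
df["rank_pct"] = df.groupby("class_id")["score"].rank(pct=True)
print(df)
# A score of 80 is at the top of class A but at the bottom of class B.
```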

Article

Publication Bias in Asset Pricing Research  

Andrew Y. Chen and Tom Zimmermann

Researchers are more likely to share notable findings. As a result, published findings tend to overstate the magnitude of real-world phenomena. This bias is a natural concern for asset pricing research, which has found hundreds of return predictors and little consensus on their origins. Empirical evidence on publication bias comes from large-scale metastudies. Metastudies of cross-sectional return predictability have settled on four stylized facts that demonstrate publication bias is not a dominant factor: (a) almost all findings can be replicated, (b) predictability persists out-of-sample, (c) empirical t-statistics are much larger than 2.0, and (d) predictors are weakly correlated. Each of these facts has been demonstrated in at least three metastudies. Empirical Bayes statistics turn these facts into publication bias corrections. Estimates from three metastudies find that the average correction (shrinkage) accounts for only 10%–15% of in-sample mean returns and that the risk of inference going in the wrong direction (the false discovery rate) is less than 10%. Metastudies also find that t-statistic hurdles exceed 3.0 in multiple testing algorithms and that returns are 30%–50% weaker in alternative portfolio tests. These facts are easily misinterpreted as evidence of publication bias. Other misinterpretations include the conflating of phrases such as “mostly false findings” with “many insignificant findings,” “data snooping” with “liquidity effects,” and “failed replications” with “insignificant ad-hoc trading strategies.” Cross-sectional predictability may not be representative of other fields. Metastudies of real-time equity premium prediction imply a much larger effect of publication bias, although the evidence is not nearly as abundant as it is in the cross section. Measuring publication bias in areas other than cross-sectional predictability remains an important area for future research.
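The empirical Bayes logic behind the shrinkage corrections mentioned above can be sketched in a few lines (illustrative numbers and a simple normal-normal model of my own, not estimates from the metastudies): if published t-statistics are modeled as true signal plus unit-variance noise, the posterior mean shrinks each observed statistic toward the cross-predictor average, and the shrinkage weight is estimated from the dispersion of the published statistics.

```python
import numpy as np

# Hypothetical in-sample t-statistics for a set of published return predictors
t_obs = np.array([2.5, 3.1, 4.0, 2.2, 5.3, 3.6, 2.9])

# Empirical Bayes normal-normal model: t_obs ~ N(theta, 1), theta ~ N(mu, tau^2).
# Method-of-moments estimates of the prior, then posterior-mean shrinkage.
mu_hat = t_obs.mean()
tau2_hat = max(t_obs.var(ddof=1) - 1.0, 0.0)   # subtract the sampling variance of 1
weight = tau2_hat / (tau2_hat + 1.0)           # posterior weight on the data

theta_post = mu_hat + weight * (t_obs - mu_hat)
print("shrinkage toward the mean:", round(1 - weight, 3))
print("bias-corrected t-stats:", np.round(theta_post, 2))
```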

Article

Stochastic Volatility in Bayesian Vector Autoregressions  

Todd E. Clark and Elmar Mertens

Vector autoregressions with stochastic volatility (SV) are widely used in macroeconomic forecasting and structural inference. The SV component of the model conveniently allows for time variation in the variance-covariance matrix of the model’s forecast errors. In turn, that feature of the model generates time variation in predictive densities. The models are most commonly estimated with Bayesian methods, most typically Markov chain Monte Carlo methods, such as Gibbs sampling. Equation-by-equation methods developed since 2018 enable the estimation of models with large variable sets at much lower computational cost than the standard approach of estimating the model as a system of equations. The Bayesian framework also facilitates the accommodation of mixed frequency data, non-Gaussian error distributions, and nonparametric specifications. With advances made in the 21st century, researchers are also addressing some of the framework’s outstanding challenges, particularly the dependence of estimates on the ordering of variables in the model and reliable estimation of the marginal likelihood, which is the fundamental measure of model fit in Bayesian methods.
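One common formulation of the model described above is the reduced-form VAR with random-walk log volatilities (the notation here is generic rather than specific to any single paper):

```latex
\begin{aligned}
y_t &= c + B_1 y_{t-1} + \dots + B_p y_{t-p} + u_t,
\qquad u_t = A^{-1} \Lambda_t^{1/2} \varepsilon_t, \quad \varepsilon_t \sim N(0, I_n),\\
\Lambda_t &= \operatorname{diag}(\lambda_{1,t}, \dots, \lambda_{n,t}),
\qquad \log \lambda_{i,t} = \log \lambda_{i,t-1} + \nu_{i,t}, \quad \nu_{i,t} \sim N(0, \phi_i),
\end{aligned}
```

where A is lower triangular with ones on the diagonal, so the forecast-error variance-covariance matrix A^{-1} \Lambda_t A^{-1\prime} varies over time. A typical Gibbs sampler alternates between drawing the VAR coefficients, the free elements of A, the volatility paths (commonly via a mixture-of-normals approximation to the log chi-squared errors), and the volatility-of-volatility parameters.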

Article

Unobserved Components Models  

Joanne Ercolani

Unobserved components models (UCMs), sometimes referred to as structural time-series models, decompose a time series into its salient time-dependent features. These typically characterize the trending behavior, seasonal variation, and (nonseasonal) cyclical properties of the time series. The components are usually specified in a stochastic way so that they can evolve over time, for example, to capture changing seasonal patterns. Among many other features, the UCM framework can incorporate explanatory variables, allows outliers and structural breaks to be captured, and deals easily with daily or weekly effects and calendar issues such as moving holidays. UCMs are easily cast in state space form. This enables the application of Kalman filter algorithms, through which maximum likelihood estimates of the structural parameters are obtained, optimal predictions are made about the future state vector and the time series itself, and smoothed estimates of the unobserved components can be determined. The stylized facts of the series are then established and the components can be illustrated graphically, so that one can, for example, visualize the cyclical patterns in the time series or see how the seasonal patterns change over time. If required, these characteristics can be removed, so that the data can be detrended, seasonally adjusted, or have business cycles extracted, without the need for ad hoc filtering techniques. Overall, UCMs have an intuitive interpretation and yield results that are simple to understand and communicate to others. Factoring in its competitive forecasting ability, the UCM framework is hugely appealing as a modeling tool.
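
As an illustration of the decomposition described above, a basic structural model (local linear trend plus stochastic seasonal plus irregular) can be estimated in state space form with the Kalman filter. The sketch below uses the UnobservedComponents class from statsmodels on a simulated monthly series of my own; it is a minimal example rather than a recommended specification.

```python
import numpy as np
import statsmodels.api as sm

# Simulated monthly series: linear trend + seasonal pattern + noise
rng = np.random.default_rng(3)
t = np.arange(240)
y = 0.05 * t + 2.0 * np.sin(2 * np.pi * t / 12) + rng.standard_normal(240)

# Basic structural model: local linear trend + stochastic seasonal + irregular
model = sm.tsa.UnobservedComponents(y, level="local linear trend", seasonal=12)
res = model.fit(disp=False)

print(res.summary())
trend = res.level.smoothed        # smoothed estimate of the trend level
seasonal = res.seasonal.smoothed  # smoothed seasonal component
forecast = res.forecast(steps=12) # model-based predictions of the series
```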

Article

Estimation Error in Optimal Portfolio Allocation Problems  

Jose Olmo

Markowitz showed that an investor who cares only about the mean and variance of portfolio returns should hold a portfolio on the efficient frontier. The application of this investment strategy proceeds in two steps: first, the statistical moments of asset returns are estimated from historical time series; second, the mean-variance portfolio selection problem is solved separately, as if the estimates were the true parameters. The literature on portfolio decisions acknowledges the difficulty of estimating means and covariances in many instances, particularly in high-dimensional settings. Merton notes that it is more difficult to estimate means than covariances and that errors in estimates of means have a larger impact on portfolio weights than errors in covariance estimates. Recent developments in high-dimensional settings have stressed the importance of correcting the estimation error of traditional sample covariance estimators for portfolio allocation. The literature has proposed shrinkage estimators of the sample covariance matrix and regularization methods founded on the principle of sparsity. Both methodologies are nested in a more general framework that constructs optimal portfolios under constraints on different norms of the portfolio weights, including short-sale restrictions. On the one hand, shrinkage methods use a target covariance matrix and trade off bias and variance between the standard sample covariance matrix and the target. More prominence has been given to low-dimensional factor models that incorporate theoretical insights from asset pricing models; in these cases, one has to trade off estimation risk against model risk. On the other hand, the literature on regularization of the sample covariance matrix uses different penalty functions to reduce the number of parameters to be estimated. Recent methods extend the idea of regularization to a conditional setting based on factor models, in which the dimension of the problem increases with the number of assets, and apply regularization to the residual covariance matrix.
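
A minimal sketch of the shrinkage idea discussed above, on simulated returns with a simple diagonal target and a hand-picked shrinkage intensity (Ledoit-Wolf-type estimators choose the intensity optimally; all names and numbers here are illustrative): the shrunk covariance matrix is plugged into the global minimum-variance weights.

```python
import numpy as np

rng = np.random.default_rng(4)
T, N = 120, 50                        # few observations relative to assets
returns = rng.standard_normal((T, N)) * 0.05

# Sample covariance matrix (noisy when N is large relative to T)
S = np.cov(returns, rowvar=False)

# Shrinkage toward a diagonal target; delta trades off bias and variance
target = np.diag(np.diag(S))
delta = 0.5
S_shrunk = delta * target + (1 - delta) * S

# Global minimum-variance weights: w = S^{-1} 1 / (1' S^{-1} 1)
ones = np.ones(N)
w = np.linalg.solve(S_shrunk, ones)
w /= w.sum()
print("largest absolute weight:", np.abs(w).max())
```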