1-10 of 61 Results for:

  • Econometrics, Experimental and Quantitative Methods

Article

Parental choice over public schools has become a major policy tool to combat inequality in access to schools. Traditional neighborhood-based assignment is being replaced by school choice programs, broadening families’ access to schools beyond their residential location. Demand and supply in school choice programs are cleared by centralized admissions algorithms. Heterogeneous parental preferences and admissions policies create trade-offs between efficiency and equity. The data generated by centralized admissions algorithms support credible research designs for better understanding school effectiveness, which in turn can inform school portfolio planning and student assignment based on the match quality between students and schools.
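
As a concrete illustration of how such centralized mechanisms clear demand and supply, here is a minimal sketch of student-proposing deferred acceptance, one widely used admissions algorithm. The abstract does not name a specific mechanism, and the preference lists, priorities, and capacities below are invented for illustration.

```python
def deferred_acceptance(student_prefs, school_prefs, capacity):
    """Student-proposing deferred acceptance.

    student_prefs: dict student -> ordered list of schools (most preferred first).
    school_prefs:  dict school  -> ordered list of students (highest priority first).
    capacity:      dict school  -> number of seats.
    """
    rank = {s: {stu: i for i, stu in enumerate(prefs)} for s, prefs in school_prefs.items()}
    next_choice = {stu: 0 for stu in student_prefs}   # index of next school to propose to
    assigned = {s: [] for s in school_prefs}          # tentative acceptances
    unmatched = list(student_prefs)
    while unmatched:
        stu = unmatched.pop()
        if next_choice[stu] >= len(student_prefs[stu]):
            continue                                  # student has exhausted their list
        school = student_prefs[stu][next_choice[stu]]
        next_choice[stu] += 1
        assigned[school].append(stu)
        # Keep only the highest-priority students up to capacity; reject the rest
        assigned[school].sort(key=lambda x: rank[school][x])
        while len(assigned[school]) > capacity[school]:
            unmatched.append(assigned[school].pop())
    return {stu: s for s, studs in assigned.items() for stu in studs}

students = {"a": ["s1", "s2"], "b": ["s1", "s2"], "c": ["s2", "s1"]}
schools = {"s1": ["b", "a", "c"], "s2": ["a", "c", "b"]}
print(deferred_acceptance(students, schools, {"s1": 1, "s2": 2}))
```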

Article

Gianluca Cubadda and Alain Hecq

Reduced rank regression (RRR) has been extensively employed for modelling economic and financial time series. The main goals of RRR are to specify and estimate models that are capable of reproducing the presence of common dynamics among variables, such as the serial correlation common feature and the multivariate autoregressive index models. Although cointegration analysis is likely the most prominent example of the use of RRR in econometrics, a large body of research is aimed at detecting and modelling co-movements in time series that are stationary or that have been stationarized after proper transformations. The motivations for the use of RRR in time series econometrics include dimension reduction, which simplifies complex dynamics and thus makes interpretation easier, as well as the pursuit of efficiency gains in both estimation and prediction. Via the final equation representation, RRR also provides the nexus between multivariate time series and parsimonious marginal ARIMA (autoregressive integrated moving average) models. RRR’s drawback, which is common to all dimension reduction techniques, is that the underlying restrictions may or may not be present in the data.
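
A minimal numerical sketch of the least-squares reduced rank regression estimator, assuming the unweighted criterion: the rank-r coefficient matrix is obtained by truncating the SVD of the OLS fitted values, which factors the coefficients into loadings and a small number of common components. The rank, sample size, and variable names below are illustrative, not taken from the article.

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Least-squares RRR: Y ~ X @ B with rank(B) <= rank, returned as B = A @ C."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)   # full-rank OLS coefficients
    Y_hat = X @ B_ols
    _, _, Vt = np.linalg.svd(Y_hat, full_matrices=False)
    V_r = Vt[:rank].T                                # leading right singular vectors
    A = B_ols @ V_r                                  # loadings (k x r)
    C = V_r.T                                        # common components (r x m)
    return A, C

# Simulated data sharing a single common component (true coefficient matrix has rank 1)
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
B_true = np.outer(rng.standard_normal(5), rng.standard_normal(4))
Y = X @ B_true + 0.1 * rng.standard_normal((200, 4))
A, C = reduced_rank_regression(X, Y, rank=1)
print(np.round(A @ C - B_true, 2))                   # estimation error is small
```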

Article

Jennifer L. Castle and David F. Hendry

Shared features of economic and climate time series imply that tools for empirically modeling nonstationary economic outcomes are also appropriate for studying many aspects of observational climate-change data. Greenhouse gas emissions, such as carbon dioxide, nitrous oxide, and methane, are a major cause of climate change as they cumulate in the atmosphere and reradiate the sun’s energy. As these emissions are currently mainly due to economic activity, economic and climate time series have commonalities, including considerable inertia, stochastic trends, and distributional shifts, and hence the same econometric modeling approaches can be applied to analyze both phenomena. Moreover, both disciplines lack complete knowledge of their respective data-generating processes (DGPs), so model search retaining viable theory but allowing for shifting distributions is important. Reliable modeling of both climate and economic-related time series requires finding an unknown DGP (or a close approximation thereto) to represent multivariate evolving processes subject to abrupt shifts. Consequently, to ensure that the DGP is nested within a much larger set of candidate determinants, model formulations to search over should comprise all potentially relevant variables, their dynamics, indicators for perturbing outliers, shifts, and trend breaks, and nonlinear functions, while retaining well-established theoretical insights. Econometric modeling of climate-change data therefore requires a sufficiently general model selection approach to handle all these aspects. Machine learning with multipath block searches, commencing from very general specifications that usually have more candidate explanatory variables than observations, offers a rigorous route to discovering well-specified and undominated models of the nonstationary processes under analysis. Doing so requires applying appropriate indicator saturation estimators (ISEs), a class that includes impulse indicators for outliers, step indicators for location shifts, multiplicative indicators for parameter changes, and trend indicators for trend breaks. All ISEs entail more candidate variables than observations, often by a large margin when combinations are implemented, yet they can detect the impacts of shifts and policy interventions, avoiding nonconstant parameters in models, as well as improving forecasts. To characterize nonstationary observational data, one must handle all substantively relevant features jointly: a failure to do so leads to nonconstant and mis-specified models and hence to incorrect theory evaluation and policy analyses.
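
As a rough illustration of one member of the ISE class, the following is a minimal split-half impulse indicator saturation sketch. The abstract describes multipath block searches over many candidate variables with diagnostic checking; this two-block version only conveys the core idea, and the critical value and simulated data are placeholders.

```python
import numpy as np
import statsmodels.api as sm

def impulse_indicator_saturation(y, X, t_crit=2.58):
    """Split-half impulse indicator saturation (IIS), illustrative version.

    Adds an impulse dummy for every observation in each half of the sample,
    retains dummies whose |t| exceeds t_crit, then re-estimates with the
    union of retained dummies.
    """
    n = len(y)
    half = n // 2
    retained = []
    for block in (range(half), range(half, n)):
        dummies = np.zeros((n, len(block)))
        for j, t in enumerate(block):
            dummies[t, j] = 1.0
        res = sm.OLS(y, np.column_stack([X, dummies])).fit()
        tvals = res.tvalues[X.shape[1]:]
        retained += [t for j, t in enumerate(block) if abs(tvals[j]) > t_crit]
    if retained:
        D = np.zeros((n, len(retained)))
        for j, t in enumerate(retained):
            D[t, j] = 1.0
        final = sm.OLS(y, np.column_stack([X, D])).fit()
    else:
        final = sm.OLS(y, X).fit()
    return final, sorted(retained)

# Example: a constant-mean series with one large outlier at t = 30
rng = np.random.default_rng(0)
y = rng.standard_normal(100); y[30] += 8.0
X = np.ones((100, 1))
fit, outliers = impulse_indicator_saturation(y, X)
print(outliers)   # typically recovers [30]
```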

Article

Martin Karlsson, Daniel Kühnle, and Nikolaos Prodromidis

Due to the similarities with the COVID-19 pandemic, there has been renewed interest in the 1918–1919 influenza pandemic, the most severe pandemic of the 20th century, with an estimated total death toll ranging between 30 and 100 million. A rapidly growing literature in economics and economic history has devoted attention to contextual determinants of excess mortality in the pandemic; to the impact of the pandemic on economic growth, inequality, and a range of other outcomes; and to the impact of nonpharmaceutical interventions. Estimating the effects of the pandemic, or the effects of countermeasures, is challenging. There may not be much exogenous variation to go by, and the historical datasets available are typically small and often of questionable quality. Yet the 1918–1919 pandemic offers a unique opportunity to learn how large pandemics play out in the long run. The studies evaluating effects of the pandemic, or of policies enacted to combat it, typically rely on some version of difference-in-differences or instrumental variables. The assumptions required for these designs to identify causal effects have rarely been systematically evaluated in this particular historical context. Using a purpose-built dataset covering the entire Swedish population, such an assessment is provided here. The empirical analysis indicates that the identifying assumptions used in previous work may indeed be satisfied. However, the results cast some doubt on the general external validity of previous findings, as the analysis fails to replicate several results in the Swedish context. These disagreements highlight the need for additional studies in other populations and contexts, which puts the spotlight on further digitization and linkage of historical datasets.
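
As a minimal illustration of the difference-in-differences designs discussed above, the sketch below estimates a two-way fixed-effects DiD on simulated region-year data. The data, variable names, and effect size are invented; this is not the Swedish dataset used in the article.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
regions, years = 40, 10
df = pd.DataFrame([(r, t) for r in range(regions) for t in range(years)],
                  columns=["region", "year"])
df["treated"] = (df["region"] < 20).astype(int)   # e.g., regions hit harder by the pandemic
df["post"] = (df["year"] >= 5).astype(int)        # post-pandemic years
effect = -0.3
df["outcome"] = (0.1 * df["region"] + 0.05 * df["year"]
                 + effect * df["treated"] * df["post"]
                 + rng.normal(0, 0.5, len(df)))
# Two-way fixed effects; the coefficient on treated:post is the DiD estimate
res = smf.ols("outcome ~ treated:post + C(region) + C(year)", data=df).fit()
print(res.params["treated:post"])                  # close to -0.3
```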

Article

The specification of model equations for nominal wage setting has important implications for the properties of macroeconometric models and requires system thinking and multiple-equation modeling. The main model classes are the Phillips curve model (PCM), the wage–price equilibrium correction model (WP-ECM), and the New Keynesian Phillips curve model (NKPCM). The PCM was included in the macroeconometric models of the 1960s. The WP-ECM arrived in the late 1980s. The NKPCM is central in dynamic stochastic general equilibrium (DSGE) models. The three model classes can be interpreted as different specifications of the system of stochastic difference equations that define the supply side of a medium-term macroeconometric model. This calls for an appraisal of the different wage models, in particular in relation to the concept of the non-accelerating inflation rate of unemployment (NAIRU, or natural rate of unemployment), and of the methods and research strategies used. The construction of macroeconometric models used to be based on the combination of theoretical and practical skills in economic modeling. Wage formation was viewed as being forged between the forces of markets and national institutions. In the age of DSGE models, macroeconomics has become more of a theoretical discipline. Nevertheless, producers of DSGE models make use of hybrid forms if an initial theoretical specification fails to meet a benchmark for acceptable data fit. A common ground therefore exists between the NKPCM, WP-ECM, and PCM, and it is feasible to compare the model types empirically.
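
A minimal sketch of a wage–price equilibrium correction equation of the WP-ECM type, estimated by OLS on simulated data: wage growth responds to price and productivity growth, lagged unemployment, and the lagged deviation of the real wage from productivity. The coefficients, series, and lag structure are invented for illustration and are not the article's specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
T = 200
prod = np.cumsum(0.02 + 0.01 * rng.standard_normal(T))   # log productivity
p = np.cumsum(0.02 + 0.01 * rng.standard_normal(T))      # log price level
u = 0.05 + 0.01 * rng.standard_normal(T)                  # unemployment rate
w = np.zeros(T)
w[0] = p[0] + prod[0]
for t in range(1, T):
    ecm = w[t-1] - p[t-1] - prod[t-1]                      # lagged wage-share deviation
    w[t] = w[t-1] + 0.5 * (p[t] - p[t-1]) + 0.3 * (prod[t] - prod[t-1]) \
           - 0.2 * ecm - 0.1 * u[t-1] + 0.005 * rng.standard_normal()

dw, dp, dprod = np.diff(w), np.diff(p), np.diff(prod)
X = sm.add_constant(np.column_stack([dp, dprod,
                                     (w - p - prod)[:-1],  # equilibrium correction term
                                     u[:-1]]))
print(sm.OLS(dw, X).fit().params)                          # recovers the ECM dynamics
```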

Article

The analysis of convergence behavior with respect to emissions and measures of environmental quality can be categorized into four types of tests: absolute and conditional β-convergence, σ-convergence, club convergence, and stochastic convergence. In the context of emissions, absolute β-convergence occurs when countries with high initial levels of emissions have a lower emission growth rate than countries with low initial levels of emissions. Conditional β-convergence allows for possible differences among countries through the inclusion of exogenous variables to capture country-specific effects. Given that absolute and conditional β-convergence do not account for the dynamics of the growth process, which can potentially lead to dynamic panel data bias, σ-convergence evaluates the dynamics and intradistributional aspects of emissions to determine whether the cross-section variance of emissions decreases over time. The more recent club convergence approach tests the decline in the cross-sectional variation in emissions among countries over time and whether heterogeneous time-varying idiosyncratic components converge over time after controlling for a common growth component in emissions among countries. In essence, the club convergence approach evaluates both conditional σ- and β-convergence within a panel framework. Finally, stochastic convergence examines the time series behavior of a country’s emissions relative to another country or group of countries. Using univariate or panel unit root/stationarity tests, stochastic convergence is present if relative emissions, defined as the log of emissions for a particular country relative to another country or group of countries, are trend-stationary. The majority of the empirical literature analyzes carbon dioxide emissions and varies in terms of both the convergence tests deployed and the results. While results supportive of emissions convergence for large global country coverage are limited, empirical studies that focus on country groupings defined by income classification, geographic region, or institutional structure (e.g., EU, OECD) are more likely to provide support for emissions convergence. The vast majority of studies have relied on tests of stochastic convergence, with tests of σ-convergence and the distributional dynamics of emissions used less often. With respect to tests of stochastic convergence, an alternative testing procedure that accounts for structural breaks and cross-correlations simultaneously is presented. Using data for OECD countries, the results based on the inclusion of both structural breaks and cross-correlations through a factor structure provide less support for stochastic convergence than unit root tests that include only structural breaks. Future studies should devote more attention to other air pollutants, including greenhouse gas emissions and their components, expand the range of geographical regions analyzed, and undertake more robust analysis of the various types of convergence tests to render a more comprehensive view of convergence behavior. The examination of convergence through the use of eco-efficiency indicators that capture both the environmental and economic effects of production may be more fruitful in contributing to the debate on mitigation strategies and allocation mechanisms.
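
A minimal sketch of a stochastic convergence test: an augmented Dickey–Fuller test with constant and trend applied to a log relative-emissions series, where rejecting the unit root null is read as evidence of trend stationarity and hence stochastic convergence. The series and parameters below are simulated, not actual emissions data.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)
T = 60
# A trend-stationary relative-emissions series: AR(1) fluctuations around a linear trend
rel = np.zeros(T)
for t in range(1, T):
    rel[t] = 0.002 * t + 0.6 * (rel[t-1] - 0.002 * (t-1)) + 0.01 * rng.standard_normal()

stat, pvalue, *_ = adfuller(rel, regression="ct")   # ADF test with constant and trend
print(f"ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")
# A rejection of the unit root null supports stochastic convergence for this series.
```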

Article

For nearly 25 years, advances in panel data and quantile regression developed almost completely in parallel, with no intersection until the work by Koenker in the mid-2000s. The early theoretical work in statistics and economics raised more questions than answers, but it encouraged the development of several promising new approaches and research that offered a better understanding of the challenges and possibilities at the intersection of the two literatures. Panel data quantile regression allows the estimation of effects that are heterogeneous throughout the conditional distribution of the response variable while controlling for individual and time-specific confounders. This type of heterogeneous effect is not well summarized by the average effect. For instance, the relationship between the number of students in a class and average educational achievement has been extensively investigated, but research also shows that class size affects low-achieving and high-achieving students differently. Advances in this literature include several methods and algorithms that have created opportunities for more informative and robust empirical analysis in models with subject heterogeneity and factor structure.
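
A minimal sketch of quantile regression with individual dummies on simulated panel data, showing a slope that differs across quantiles of the conditional distribution. This deliberately ignores the incidental-parameter and penalization issues that the panel quantile regression literature addresses, and the data-generating process is invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n, T = 50, 8
df = pd.DataFrame({"id": np.repeat(np.arange(n), T)})
alpha = rng.normal(0, 1, n)                       # individual effects
df["x"] = rng.uniform(0, 2, len(df))
eps = rng.standard_normal(len(df))
# x also scales the error, so its effect grows across the conditional distribution
df["y"] = alpha[df["id"]] + df["x"] + (0.5 + 0.5 * df["x"]) * eps
for q in (0.25, 0.5, 0.75):
    fit = smf.quantreg("y ~ x + C(id)", df).fit(q=q)
    print(q, round(fit.params["x"], 2))           # slope increases with the quantile
```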

Article

Since the late 1990s, spatial models have become an increasingly prominent part of econometric research. They are characterized by the attention paid to the location of observations (i.e., their position in space) and to the interaction among them. Specifically, spatial models formally express spatial interaction by including variables observed at other locations in the regression specification. This can take different forms, mostly based on an averaging of values at neighboring locations through a so-called spatially lagged variable, or spatial lag. The spatial lag can be applied to the dependent variable, to explanatory variables, and/or to the error terms. This yields a range of specifications for cross-sectional dependence, as well as for static and dynamic spatial panels. A critical element in the spatially lagged variable is the definition of neighbor relations in a so-called spatial weights matrix. Historically, the spatial weights matrix has been taken as given and exogenous, but research has evolved toward estimating the weights from the data and accounting for potential endogeneity in the weights. Because observations are unevenly spaced and asymptotic properties are obtained in a more complex way, results from time series analysis are not applicable, and specialized laws of large numbers and central limit theorems need to be developed. This requirement has yielded an active body of research into the asymptotics of spatial models.
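
A minimal sketch of a row-standardized spatial weights matrix and the spatially lagged variable it implies. The coordinates and distance cutoff are invented; real applications typically build the weights from contiguity or k-nearest neighbors, and estimating a model with a spatial lag of the dependent variable requires maximum likelihood or IV/GMM rather than OLS.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
coords = rng.uniform(0, 1, (n, 2))                    # locations of observations
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
W = ((dist > 0) & (dist < 0.5)).astype(float)         # neighbors within a distance cutoff
row_sums = W.sum(axis=1, keepdims=True)
W = W / np.where(row_sums == 0, 1, row_sums)          # row-standardize, guarding empty rows
y = rng.standard_normal(n)
spatial_lag = W @ y                                   # average of neighboring y values
print(np.round(spatial_lag, 2))
```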

Article

Subhasish M. Chowdhury

Conflicts are a ubiquitous part of our life. One of the main reasons behind the initiation and escalation of conflict is the identity, or the sense of self, of the engaged parties. It is hence not surprising that there is a substantial area of academic literature that focuses on identity, conflict, and their interaction. This area models conflicts as contests and draws on the theoretical, experimental, and empirical literature from economics, political science, and psychology. The theoretical literature investigates behavioral aspects, such as preferences and beliefs, to explain the reasons for and the effects of identity on human behavior. The theoretical literature also analyzes issues such as identity-dependent externalities, the endogenous choice of joining a group, and so on. The applied literature consists of laboratory and field experiments as well as empirical studies from the field. The experimental studies find that the salience of an identity can increase conflict in a field setting. Laboratory experiments show that whereas real identity indeed increases conflict, a mere classification does not do so. It is also observed that priming a majority–minority identity affects the conflict behavior of the majority, but not of the minority. Further investigations explain these results in terms of parochial altruism. The empirical literature in this area focuses on the effects of various measures of identity, identity distribution, and other economic variables on conflict behavior. Religious polarization can explain conflict behavior better than linguistic differences. Moreover, polarization is a more significant determinant of conflict when the winners of the conflict enjoy a public good reward, whereas fractionalization is a better determinant when the winners enjoy a private good reward. As a whole, this area of literature is still emerging, and the theoretical literature can be extended in various directions, such as sabotage, affirmative action, intragroup conflict, and endogenous group formation. For empirical and experimental research, exploring new conflict resolution mechanisms, the endogeneity between identity and conflict, and biological mechanisms for identity-related conflict will be of interest.
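
As a minimal illustration of modeling conflict as a contest, the sketch below computes the equilibrium of a symmetric two-player Tullock contest by best-response iteration. The prize value and starting efforts are arbitrary, and the identity considerations discussed in the article are not modeled here.

```python
import numpy as np

# Each party chooses effort x, wins a prize V with probability x / (x + rival's effort),
# and pays its effort as a cost.  Best-response iteration converges to the well-known
# symmetric equilibrium effort V / 4.
V = 100.0
x, y = 10.0, 10.0
for _ in range(100):
    x = max(np.sqrt(V * y) - y, 0.0)   # best response to the rival's effort
    y = max(np.sqrt(V * x) - x, 0.0)
print(x, y, V / 4)                      # both efforts converge to 25.0
```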

Article

Jonathan R. W. Temple

Growth econometrics is the application of statistical methods to the study of economic growth and levels of national output or income per head. Researchers often seek to understand why growth rates differ across countries. The field developed rapidly in the 1980s and 1990s, but the early work often proved fragile. Cross-section analyses are limited by the relatively small number of countries in the world and problems of endogeneity, parameter heterogeneity, model uncertainty, and cross-section error dependence. The long-term prospects look better for approaches using panel data. Overall, the quality of the evidence has improved over time, due to better measurement, more data, and new methods. As longer spans of data become available, the methods of growth econometrics will shed light on fundamental questions that are hard to answer any other way.
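
As a minimal illustration of the kind of cross-section analysis the field began with, the sketch below runs an unconditional β-convergence regression of growth on initial income using simulated data. The country data and coefficients are invented; the caveats noted above (endogeneity, parameter heterogeneity, model uncertainty, and cross-section error dependence) are precisely why such regressions proved fragile.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 150                                              # roughly the number of countries
log_y0 = rng.normal(8.5, 1.0, n)                     # initial log income per head
growth = 0.05 - 0.003 * log_y0 + rng.normal(0, 0.01, n)
res = sm.OLS(growth, sm.add_constant(log_y0)).fit()
print(res.params)                                    # negative slope read as convergence
```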