11–20 of 74 Results for: Econometrics, Experimental and Quantitative Methods

Article

Fractional Integration and Cointegration  

Javier Hualde and Morten Ørregaard Nielsen

Fractionally integrated and fractionally cointegrated time series are classes of models that generalize standard notions of integrated and cointegrated time series. The fractional models are characterized by a small number of memory parameters that control the degree of fractional integration and/or cointegration. In classical work, the memory parameters are assumed known and equal to 0, 1, or 2. In the fractional integration and fractional cointegration context, however, these parameters are real-valued and are typically assumed unknown and estimated. Thus, fractionally integrated and fractionally cointegrated time series can display very general types of stationary and nonstationary behavior, including long memory, and this more general framework entails important additional challenges compared to the traditional setting. Modeling, estimation, and testing in the context of fractional integration and fractional cointegration have been developed in time and frequency domains. Related to both alternative approaches, theory has been derived under parametric or semiparametric assumptions, and as expected, the obtained results illustrate the well-known trade-off between efficiency and robustness against misspecification. These different developments form a large and mature literature with applications in a wide variety of disciplines.
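
As a minimal sketch (not taken from the article), the fractional difference operator (1 − L)^d can be applied through the binomial expansion of its coefficients; the memory parameter d = 0.4 below is an arbitrary illustration:

```python
import numpy as np

def frac_diff(x, d):
    """Apply the fractional difference operator (1 - L)^d to a series.

    Binomial-expansion weights: w_0 = 1, w_j = w_{j-1} * (j - 1 - d) / j,
    with the filter truncated at the start of the sample (type II definition).
    """
    n = len(x)
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (j - 1 - d) / j
    return np.array([w[:t + 1] @ x[t::-1] for t in range(n)])

# Fractionally integrate white noise to obtain a stationary long-memory
# I(0.4) series, then difference it back with d = 0.4 to recover the noise.
rng = np.random.default_rng(0)
eps = rng.standard_normal(1000)
x = frac_diff(eps, -0.4)   # (1 - L)^{-0.4} eps: long-memory series
u = frac_diff(x, 0.4)      # approximately equal to eps again
```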

Article

Score-Driven Models: Methods and Applications  

Mariia Artemova, Francisco Blasques, Janneke van Brummelen, and Siem Jan Koopman

The flexibility, generality, and feasibility of score-driven models have contributed much to their impact in both research and policy. Score-driven models provide a unified framework for modeling the time-varying features in parametric models for time series. The predictive likelihood function is used as the driving mechanism for updating the time-varying parameters. It leads to a flexible, general, and intuitive way of modeling the dynamic features in the time series while the estimation and inference remain relatively simple. These properties remain valid when models rely on non-Gaussian densities and nonlinear dynamic structures. The class of score-driven models has become even more appealing as developments in theory and methodology have progressed rapidly. Furthermore, new formulations of empirical dynamic models in this class have shown their relevance in economics and finance. In the context of macroeconomic studies, the key examples are nonlinear autoregressive, dynamic factor, dynamic spatial, and Markov-switching models. In the context of finance studies, the major examples are models for integer-valued time series, multivariate scale models, and dynamic copula models. In finance applications, score-driven models are especially important because they provide particular updating mechanisms for time-varying parameters that limit the effect of the influential observations and outliers that are often present in financial time series.
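
As a hedged sketch of the score-driven updating mechanism, the filter below uses a Gaussian predictive density for a time-varying log-variance; the parameter values are illustrative placeholders, where in practice they would be estimated by maximum likelihood:

```python
import numpy as np

def gas_volatility_filter(y, omega, alpha, beta):
    """Score-driven (GAS) filter for time-varying log-variance f_t = log sigma_t^2.

    With a Gaussian predictive density, the scaled score is
    s_t = y_t^2 / sigma_t^2 - 1, so the update
    f_{t+1} = omega + beta * f_t + alpha * s_t raises the variance after
    surprisingly large observations and lowers it after small ones.
    """
    n = len(y)
    f = np.empty(n + 1)
    f[0] = omega / (1.0 - beta)        # start at the unconditional level
    for t in range(n):
        sigma2 = np.exp(f[t])
        score = y[t] ** 2 / sigma2 - 1.0
        f[t + 1] = omega + beta * f[t] + alpha * score
    return np.exp(f[:-1])              # fitted conditional variances

# Illustrative (not estimated) parameter values on simulated data:
rng = np.random.default_rng(1)
y = rng.standard_normal(500)
sigma2 = gas_volatility_filter(y, omega=0.0, alpha=0.05, beta=0.95)
```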

Article

Applications of Web Scraping in Economics and Finance  

Piotr Śpiewanowski, Oleksandr Talavera, and Linh Vi

The 21st-century economy is increasingly built around data. Firms and individuals upload and store enormous amounts of data. Most of the produced data are stored on private servers, but a considerable part is made publicly available across the 1.83 billion websites available online. These data can be accessed by researchers using web-scraping techniques. Web scraping refers to the process of collecting data from web pages either manually or using automation tools or specialized software. Web scraping is possible and relatively simple thanks to the regular structure of the code used for websites designed to be displayed in web browsers. Websites built with HTML can be scraped using standard text-mining tools, either scripts in popular (statistical) programming languages such as Python, Stata, and R, or stand-alone dedicated web-scraping tools. Some of those tools do not even require any prior programming skills. Since about 2010, with the omnipresence of social and economic activities on the Internet, web scraping has become increasingly popular among academic researchers. In contrast to proprietary data, which may be out of reach due to substantial costs, web scraping can make interesting data sources accessible to everyone. Thanks to web scraping, the data are now available in real time and with significantly more detail than what has traditionally been offered by statistical offices or commercial data vendors. In fact, many statistical offices have started using web-scraped data, for example, for calculating price indices. Data collected through web scraping have been used in numerous economics and finance projects and can easily complement traditional data sources.
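
A minimal sketch of this approach using the widely used requests and BeautifulSoup Python libraries; the URL and the CSS class below are hypothetical placeholders, not a real data source:

```python
import requests
from bs4 import BeautifulSoup

url = "https://example.com/listings"   # hypothetical page of price listings
response = requests.get(url, timeout=30)
response.raise_for_status()            # stop early on HTTP errors

soup = BeautifulSoup(response.text, "html.parser")
# Extract text from elements assumed to carry a "price" class on this page;
# the selector would be adapted to the actual HTML structure of the site.
prices = [node.get_text(strip=True) for node in soup.select(".price")]
print(prices)
```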

Article

The Implications of School Assignment Mechanisms for Efficiency and Equity  

Atila Abdulkadiroğlu

Parental choice over public schools has become a major policy tool to combat inequality in access to schools. Traditional neighborhood-based assignment is being replaced by school choice programs, broadening families’ access to schools beyond their residential location. Demand and supply in school choice programs are cleared via centralized admissions algorithms. Heterogeneous parental preferences and admissions policies create trade-offs between efficiency and equity. The data from centralized admissions algorithms can be used for credible research designs that improve understanding of school effectiveness, which in turn can inform school portfolio planning and student assignment based on match quality between students and schools.
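
As an illustration of such an algorithm, the sketch below implements student-proposing deferred acceptance, a mechanism commonly used in centralized school admissions; the preferences, priorities, and capacities are hypothetical:

```python
def deferred_acceptance(student_prefs, school_rank, capacity):
    """student_prefs: {student: [schools, best first]};
    school_rank: {school: {student: priority, lower is better}};
    capacity: {school: seats}. Returns {student: school or None}."""
    next_choice = {s: 0 for s in student_prefs}        # next school to try
    held = {c: [] for c in capacity}                   # tentatively held students
    unmatched = list(student_prefs)
    while unmatched:
        s = unmatched.pop()
        if next_choice[s] >= len(student_prefs[s]):
            continue                                   # preference list exhausted
        c = student_prefs[s][next_choice[s]]
        next_choice[s] += 1
        held[c].append(s)
        held[c].sort(key=lambda x: school_rank[c][x])  # best priority first
        if len(held[c]) > capacity[c]:
            unmatched.append(held[c].pop())            # reject lowest priority
    matching = {s: None for s in student_prefs}
    for c, students in held.items():
        for s in students:
            matching[s] = c
    return matching

# Hypothetical example: three students, two schools with one seat each.
students = {"a": ["X", "Y"], "b": ["X", "Y"], "c": ["Y", "X"]}
ranks = {"X": {"a": 2, "b": 1, "c": 3}, "Y": {"a": 1, "b": 3, "c": 2}}
print(deferred_acceptance(students, ranks, {"X": 1, "Y": 1}))
```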

Article

Reduced Rank Regression Models in Economics and Finance  

Gianluca Cubadda and Alain Hecq

Reduced rank regression (RRR) has been extensively employed for modelling economic and financial time series. The main goals of RRR are to specify and estimate models that are capable of reproducing the presence of common dynamics among variables, such as the serial correlation common feature and the multivariate autoregressive index models. Although cointegration analysis is likely the most prominent example of the use of RRR in econometrics, a large body of research is aimed at detecting and modelling co-movements in time series that are stationary or that have been stationarized after proper transformations. The motivations for the use of RRR in time series econometrics include dimension reduction, which simplifies complex dynamics and thus makes interpretation easier, as well as the pursuit of efficiency gains in both estimation and prediction. Via the final equation representation, RRR also establishes the nexus between multivariate time series and parsimonious marginal ARIMA (autoregressive integrated moving average) models. RRR’s drawback, which is common to all dimension reduction techniques, is that the underlying restrictions may or may not be present in the data.
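
A minimal numerical sketch under an identity weight matrix (one of several possible weighting choices): the rank-constrained least-squares solution projects the OLS coefficient matrix onto the leading right singular vectors of the OLS fitted values:

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Rank-constrained least squares: min ||Y - X C|| s.t. rank(C) <= rank."""
    C_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    fitted = X @ C_ols
    _, _, Vt = np.linalg.svd(fitted, full_matrices=False)
    V_r = Vt[:rank].T              # leading right singular vectors
    return C_ols @ V_r @ V_r.T     # factors as C = A B' with reduced rank

# Simulated example: five series driven through a true rank-1 coefficient map.
rng = np.random.default_rng(2)
X = rng.standard_normal((200, 4))
B = rng.standard_normal((4, 1)) @ rng.standard_normal((1, 5))  # rank 1
Y = X @ B + 0.1 * rng.standard_normal((200, 5))
C_hat = reduced_rank_regression(X, Y, rank=1)
```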

Article

Econometrics for Modelling Climate Change  

Jennifer L. Castle and David F. Hendry

Shared features of economic and climate time series imply that tools for empirically modeling nonstationary economic outcomes are also appropriate for studying many aspects of observational climate-change data. Greenhouse gas emissions, such as carbon dioxide, nitrous oxide, and methane, are a major cause of climate change as they cumulate in the atmosphere and reradiate the sun’s energy. As these emissions are currently mainly due to economic activity, economic and climate time series have commonalities, including considerable inertia, stochastic trends, and distributional shifts, and hence the same econometric modeling approaches can be applied to analyze both phenomena. Moreover, both disciplines lack complete knowledge of their respective data-generating processes (DGPs), so model search retaining viable theory but allowing for shifting distributions is important. Reliable modeling of both climate and economic-related time series requires finding an unknown DGP (or a close approximation thereto) to represent multivariate evolving processes subject to abrupt shifts. Consequently, to ensure that the DGP is nested within a much larger set of candidate determinants, model formulations to search over should comprise all potentially relevant variables, their dynamics, indicators for outliers, location shifts, and trend breaks, and nonlinear functions, while retaining well-established theoretical insights. Econometric modeling of climate-change data requires a sufficiently general model selection approach to handle all these aspects. Machine learning with multipath block searches, commencing from very general specifications that usually include more candidate explanatory variables than observations, offers a rigorous route to discovering well-specified and undominated models of the nonstationary processes under analysis. Doing so requires applying appropriate indicator saturation estimators (ISEs), a class that includes impulse indicators for outliers, step indicators for location shifts, multiplicative indicators for parameter changes, and trend indicators for trend breaks. All ISEs entail more candidate variables than observations, often by a large margin when implementing combinations, yet can detect the impacts of shifts and policy interventions, avoiding nonconstant parameters in models and improving forecasts. To characterize nonstationary observational data, one must handle all substantively relevant features jointly: a failure to do so leads to nonconstant and misspecified models and hence incorrect theory evaluation and policy analyses.
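
As a stylized sketch of the simplest indicator saturation idea, the code below runs a split-half impulse-indicator search on simulated data: each half of the sample is saturated with impulse dummies in turn, and significant ones are retained. Production tools (e.g., Autometrics or the R package gets) use multipath searches instead; this is only the textbook two-block version:

```python
import numpy as np
import statsmodels.api as sm

def split_half_iis(y, X, alpha=0.01):
    """Split-half impulse-indicator saturation: returns candidate outlier dates."""
    n = len(y)
    retained = []
    for block in (range(0, n // 2), range(n // 2, n)):
        dummies = np.zeros((n, len(block)))
        for j, t in enumerate(block):
            dummies[t, j] = 1.0                        # impulse dummy for date t
        res = sm.OLS(y, np.column_stack([X, dummies])).fit()
        pvals = res.pvalues[X.shape[1]:]               # p-values of the dummies
        retained += [t for j, t in enumerate(block) if pvals[j] < alpha]
    return sorted(retained)

# Simulated stable regression with one large outlier at t = 120.
rng = np.random.default_rng(3)
n = 200
X = sm.add_constant(rng.standard_normal(n))
y = X @ np.array([1.0, 0.5]) + rng.standard_normal(n)
y[120] += 8.0
print(split_half_iis(y, X))   # should flag 120 (plus occasional false positives)
```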

Article

The 1918–1919 Influenza Pandemic in Economic History  

Martin Karlsson, Daniel Kühnle, and Nikolaos Prodromidis

Due to the similarities with the COVID-19 pandemic, there has been a renewed interest in the 1918–1919 influenza pandemic, which represents the most severe pandemic of the 20th century, with an estimated total death toll of between 30 and 100 million. This rapidly growing literature in economics and economic history has devoted attention to contextual determinants of excess mortality in the pandemic; to the impact of the pandemic on economic growth, inequality, and a range of other outcomes; and to the impact of nonpharmaceutical interventions. Estimating the effects of the pandemic, or the effects of countermeasures, is challenging. There may not be much exogenous variation to go by, and the historical data sets available are typically small and often of questionable quality. Yet the 1918–1919 pandemic offers a unique opportunity to learn how large pandemics play out in the long run. The studies evaluating effects of the pandemic, or of policies enacted to combat it, typically rely on some version of difference-in-differences or instrumental variables. The assumptions required for these designs to achieve identification of causal effects have rarely been systematically evaluated in this particular historical context. Using a purpose-built dataset covering the entire Swedish population, such an assessment is provided here. The empirical analysis indicates that the identifying assumptions used in previous work may indeed be satisfied. However, the results cast some doubt on the general external validity of previous findings, as the analysis fails to replicate several results in the Swedish context. These disagreements highlight the need for additional studies in other populations and contexts, which puts the spotlight on the further digitization and linkage of historical datasets.
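
As a minimal sketch of the difference-in-differences designs used in this literature (the panel, variable names, and effect size are purely illustrative, not from the article's data):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic region-by-year panel in which the outcome falls by 2.0 in
# "treated" regions after 1918.
rng = np.random.default_rng(4)
rows = [(r, t) for r in range(20) for t in range(1910, 1931)]
df = pd.DataFrame(rows, columns=["region", "year"])
df["treated"] = (df["region"] < 10).astype(int)   # hypothetical exposure
df["post"] = (df["year"] >= 1919).astype(int)
df["y"] = (0.1 * df["region"] + 0.05 * (df["year"] - 1910)
           - 2.0 * df["treated"] * df["post"] + rng.standard_normal(len(df)))

# Two-way fixed-effects difference-in-differences; the treated and post main
# effects are absorbed by the region and year fixed effects.
fit = smf.ols("y ~ treated:post + C(region) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["region"]})
print(fit.params["treated:post"])                 # close to -2.0
```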

Article

The Role of Wage Formation in Empirical Macroeconometric Models  

Ragnar Nymoen

The specification of model equations for nominal wage setting has important implications for the properties of macroeconometric models and requires system thinking and multiple-equation modeling. The main model classes are the Phillips curve model (PCM), the wage–price equilibrium correction model (WP-ECM), and the New Keynesian Phillips curve model (NKPCM). The PCM was included in the macroeconometric models of the 1960s. The WP-ECM arrived in the late 1980s. The NKPCM is central in dynamic stochastic general equilibrium (DSGE) models. The three model classes can be interpreted as different specifications of the system of stochastic difference equations that define the supply side of a medium-term macroeconometric model. This calls for an appraisal of the different wage models, in particular in relation to the concept of the non-accelerating inflation rate of unemployment (NAIRU, or natural rate of unemployment), and of the methods and research strategies used. The construction of macroeconometric models used to be based on the combination of theoretical and practical skills in economic modeling. Wage formation was viewed as being forged between the forces of markets and national institutions. In the age of DSGE models, macroeconomics has become more of a theoretical discipline. Nevertheless, producers of DSGE models make use of hybrid forms if an initial theoretical specification fails to meet a benchmark for acceptable data fit. A common ground therefore exists between the NKPCM, WP-ECM, and PCM, and it is feasible to compare the model types empirically.
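
For concreteness, stylized single-equation forms of the three model classes can be written as follows (illustrative textbook versions, not the article's own notation):

```latex
% w: log wage, p: log price level, a: log productivity, u: unemployment rate
% Phillips curve model (PCM):
\Delta w_t = \beta_0 + \beta_1 \Delta p_t - \beta_2 u_t + \varepsilon_t
% Wage-price equilibrium correction model (WP-ECM):
\Delta w_t = \beta_0 + \beta_1 \Delta p_t
  - \alpha \, (w_{t-1} - p_{t-1} - a_{t-1} + \gamma u_{t-1}) + \varepsilon_t
% New Keynesian Phillips curve model (NKPCM), with a forcing variable x_t:
\Delta w_t = \beta_0 + \beta_f \, E_t[\Delta w_{t+1}] + \beta_x x_t + \varepsilon_t
```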

Article

A Survey of Econometric Approaches to Convergence Tests of Emissions and Measures of Environmental Quality  

Junsoo Lee, James E. Payne, and Md. Towhidul Islam

The analysis of convergence behavior with respect to emissions and measures of environmental quality can be categorized into four types of tests: absolute and conditional β-convergence, σ-convergence, club convergence, and stochastic convergence. In the context of emissions, absolute β-convergence occurs when countries with high initial levels of emissions have a lower emission growth rate than countries with low initial levels of emissions. Conditional β-convergence allows for possible differences among countries through the inclusion of exogenous variables to capture country-specific effects. Given that absolute and conditional β-convergence do not account for the dynamics of the growth process, which can potentially lead to dynamic panel data bias, σ-convergence evaluates the dynamics and intradistributional aspects of emissions to determine whether the cross-section variance of emissions decreases over time. The more recent club convergence approach tests the decline in the cross-sectional variation in emissions among countries over time and whether heterogeneous time-varying idiosyncratic components converge over time after controlling for a common growth component in emissions among countries. In essence, the club convergence approach evaluates both conditional σ- and β-convergence within a panel framework. Finally, stochastic convergence examines the time series behavior of a country’s emissions relative to another country or group of countries. Using univariate or panel unit root/stationarity tests, stochastic convergence is present if relative emissions, defined as the log of emissions for a particular country relative to another country or group of countries, are trend-stationary. The majority of the empirical literature analyzes carbon dioxide emissions and varies in terms of both the convergence tests deployed and the results. While results supportive of emissions convergence for large global country coverage are limited, empirical studies that focus on country groupings defined by income classification, geographic region, or institutional structure (e.g., the EU or OECD) are more likely to provide support for emissions convergence. The vast majority of studies have relied on tests of stochastic convergence, with tests of σ-convergence and the distributional dynamics of emissions examined less often. With respect to tests of stochastic convergence, an alternative testing procedure that accounts for structural breaks and cross-correlations simultaneously is presented. Using data for OECD countries, the results based on the inclusion of both structural breaks and cross-correlations through a factor structure provide less support for stochastic convergence when compared to unit root tests with the inclusion of just structural breaks. Future studies should focus more on other air pollutants, including greenhouse gas emissions and their components, expand the range of geographical regions analyzed, and undertake more robust analysis of the various types of convergence tests to render a more comprehensive view of convergence behavior. The examination of convergence through the use of eco-efficiency indicators that capture both the environmental and economic effects of production may be more fruitful in contributing to the debate on mitigation strategies and allocation mechanisms.
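
A minimal sketch of a stochastic convergence test on simulated data: relative emissions (the log of one country's emissions minus the group average) are tested for trend stationarity with an augmented Dickey-Fuller test, without the structural breaks or factor structure discussed above:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# Simulated log emissions: a common stochastic trend shared by all countries
# plus stationary country-specific deviations, so convergence holds by design.
rng = np.random.default_rng(5)
T, N = 60, 10
common = np.cumsum(rng.standard_normal(T))
log_e = common[:, None] + 0.5 * rng.standard_normal((T, N))

rel = log_e[:, 0] - log_e.mean(axis=1)             # relative emissions, country 0
stat, pvalue, *_ = adfuller(rel, regression="ct")  # constant + trend
print(f"ADF statistic {stat:.2f}, p-value {pvalue:.3f}")
# Rejecting a unit root in relative emissions indicates stochastic convergence.
```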

Article

Quantile Regression for Panel Data and Factor Models  

Carlos Lamarche

For nearly 25 years, advances in panel data and quantile regression developed almost completely in parallel, with no intersection until the work of Koenker in the mid-2000s. The early theoretical work in statistics and economics raised more questions than answers, but it encouraged the development of several promising new approaches and research that offered a better understanding of the challenges and possibilities at the intersection of the two literatures. Panel data quantile regression allows the estimation of effects that are heterogeneous throughout the conditional distribution of the response variable while controlling for individual and time-specific confounders. This type of heterogeneous effect is not well summarized by the average effect. For instance, the relationship between the number of students in a class and average educational achievement has been extensively investigated, but research also shows that class size affects low-achieving and high-achieving students differently. Advances in panel data quantile regression include several methods and algorithms that have created opportunities for more informative and robust empirical analysis in models with subject heterogeneity and factor structure.
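
As an illustrative sketch (not the article's estimator), panel quantile regression with individual fixed effects can be approximated by adding unit dummies to statsmodels' QuantReg; Koenker's (2004) estimator additionally shrinks these effects with an ℓ1 penalty, which this unpenalized version omits. The simulated slope varies across quantiles by construction:

```python
import numpy as np
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(6)
N, T = 30, 20
unit = np.repeat(np.arange(N), T)
alpha = rng.standard_normal(N)                  # individual effects
x = rng.uniform(0.0, 2.0, N * T)
u = rng.standard_normal(N * T)
y = alpha[unit] + x + (1.0 + 0.5 * x) * u       # location-scale model

# Unit dummies play the role of (unpenalized) individual fixed effects.
dummies = (unit[:, None] == np.arange(N)).astype(float)
X = np.column_stack([x, dummies])
for tau in (0.25, 0.5, 0.75):
    fit = QuantReg(y, X).fit(q=tau)
    print(tau, fit.params[0])   # slope roughly 1 + 0.5 * Phi^{-1}(tau)
```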