The analysis of convergence behavior with respect to emissions and measures of environmental quality can be categorized into four types of tests: absolute and conditional β-convergence, σ-convergence, club convergence, and stochastic convergence. In the context of emissions, absolute β-convergence occurs when countries with high initial levels of emissions have a lower emission growth rate than countries with low initial levels of emissions. Conditional β-convergence allows for possible differences among countries through the inclusion of exogenous variables to capture country-specific effects. Because absolute and conditional β-convergence do not account for the dynamics of the growth process, and can therefore suffer from dynamic panel data bias, σ-convergence instead evaluates the dynamics and intradistributional aspects of emissions to determine whether the cross-sectional variance of emissions decreases over time. The more recent club convergence approach tests whether the cross-sectional variation in emissions among countries declines over time and whether heterogeneous time-varying idiosyncratic components converge after controlling for a common growth component in emissions. In essence, the club convergence approach evaluates both conditional σ- and β-convergence within a panel framework. Finally, stochastic convergence examines the time series behavior of a country’s emissions relative to another country or group of countries. Using univariate or panel unit root/stationarity tests, stochastic convergence is present if relative emissions, defined as the log of emissions for a particular country relative to another country or group of countries, are trend-stationary.
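As an illustration of the stochastic convergence test just described, the following sketch applies an augmented Dickey-Fuller test with a trend to the log of one country's emissions relative to a group average; the series, country labels, and data-generating process are hypothetical and are not taken from the studies surveyed. Rejection of the unit root in favor of trend stationarity is consistent with stochastic convergence.

```python
# Minimal sketch of a stochastic convergence test; the emissions series and
# country names are hypothetical placeholders, not data from the survey.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
years = pd.RangeIndex(1960, 2020)
# Hypothetical per-capita CO2 emissions for a small group of countries,
# generated as exponentiated random walks.
emissions = pd.DataFrame(
    np.exp(1.0 + 0.02 * rng.standard_normal((len(years), 3)).cumsum(axis=0)),
    index=years, columns=["A", "B", "C"],
)

# Relative emissions: log of country A's emissions relative to the group average.
rel = np.log(emissions["A"]) - np.log(emissions.mean(axis=1))

# ADF test with a constant and linear trend; rejecting the unit root is
# consistent with trend-stationarity and hence stochastic convergence.
stat, pvalue, *_ = adfuller(rel, regression="ct", autolag="AIC")
print(f"ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")
```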
The majority of the empirical literature analyzes carbon dioxide emissions and varies in terms of both the convergence tests deployed and the results. While results supportive of emissions convergence for large global country coverage are limited, empirical studies that focus on country groupings defined by income classification, geographic region, or institutional structure (i.e., EU, OECD, etc.) are more likely to provide support for emissions convergence. The vast majority of studies have relied on tests of stochastic convergence, with tests of σ-convergence and the distributional dynamics of emissions used less frequently. With respect to tests of stochastic convergence, an alternative testing procedure that accounts for structural breaks and cross-correlations simultaneously is presented. Using data for OECD countries, the results based on the inclusion of both structural breaks and cross-correlations through a factor structure provide less support for stochastic convergence than unit root tests with the inclusion of structural breaks alone.
Future studies should devote greater attention to other air pollutants, including greenhouse gas emissions and their components, expand the range of geographical regions analyzed, and apply more robust analyses of the various types of convergence tests to render a more comprehensive view of convergence behavior. The examination of convergence through the use of eco-efficiency indicators that capture both the environmental and economic effects of production may be more fruitful in contributing to the debate on mitigation strategies and allocation mechanisms.
Article
A Survey of Econometric Approaches to Convergence Tests of Emissions and Measures of Environmental Quality
Junsoo Lee, James E. Payne, and Md. Towhidul Islam
Article
Econometrics for Modelling Climate Change
Jennifer L. Castle and David F. Hendry
Shared features of economic and climate time series imply that tools for empirically modeling nonstationary economic outcomes are also appropriate for studying many aspects of observational climate-change data. Greenhouse gas emissions, such as carbon dioxide, nitrous oxide, and methane, are a major cause of climate change as they cumulate in the atmosphere and reradiate the sun’s energy. As these emissions are currently mainly due to economic activity, economic and climate time series have commonalities, including considerable inertia, stochastic trends, and distributional shifts, and hence the same econometric modeling approaches can be applied to analyze both phenomena. Moreover, both disciplines lack complete knowledge of their respective data-generating processes (DGPs), so model search retaining viable theory but allowing for shifting distributions is important. Reliable modeling of both climate and economic-related time series requires finding an unknown DGP (or close approximation thereto) to represent multivariate evolving processes subject to abrupt shifts. Consequently, to ensure that the DGP is nested within a much larger set of candidate determinants, model formulations to search over should comprise all potentially relevant variables, their dynamics, indicators for perturbing outliers, shifts, trend breaks, and nonlinear functions, while retaining well-established theoretical insights. Econometric modeling of climate-change data requires a sufficiently general model selection approach to handle all these aspects. Machine learning with multipath block searches commencing from very general specifications, usually with more candidate explanatory variables than observations, to discover well-specified and undominated models of the nonstationary processes under analysis, offers a rigorous route to analyzing such complex data. To do so requires applying appropriate indicator saturation estimators (ISEs), a class that includes impulse indicators for outliers, step indicators for location shifts, multiplicative indicators for parameter changes, and trend indicators for trend breaks. All ISEs entail more candidate variables than observations, often by a large margin when implementing combinations, yet can detect the impacts of shifts and policy interventions to avoid nonconstant parameters in models, as well as improve forecasts. To characterize nonstationary observational data, one must handle all substantively relevant features jointly: A failure to do so leads to nonconstant and mis-specified models and hence incorrect theory evaluation and policy analyses.
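The following toy sketch illustrates the step-indicator-saturation idea in a crude split-sample form: every candidate step indicator is entered in one of two blocks, indicators significant within a block are retained, and the retained set is re-estimated jointly. It is only an illustrative approximation on simulated data, not the multipath block-search (Autometrics-style) algorithm described above.

```python
# Highly simplified split-sample sketch of step-indicator saturation (SIS).
# Illustrative approximation only; the simulated series is hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
T = 100
y = rng.standard_normal(T)
y[60:] += 8.0                      # large location shift at t = 60

# Full set of step indicators S_j[t] = 1(t >= j): more candidates than can
# sensibly be entered at once, so they are searched in two blocks.
steps = np.tril(np.ones((T, T)))[:, 1:]   # columns are step dummies

def retain(block, y, crit=2.58):
    """Keep indicators whose t-statistics exceed |crit| in a joint OLS fit."""
    X = sm.add_constant(block)
    res = sm.OLS(y, X).fit()
    return np.abs(res.tvalues[1:]) > crit

half = steps.shape[1] // 2
kept = np.concatenate([retain(steps[:, :half], y),
                       retain(steps[:, half:], y)])
final = sm.OLS(y, sm.add_constant(steps[:, kept])).fit()
# +1 maps a column index back to the step's start date (0-based time index).
print("retained step indicators start at t =", np.flatnonzero(kept) + 1)
print("final-model coefficients:", final.params.round(2))
```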
Article
Fractional Integration and Cointegration
Javier Hualde and Morten Ørregaard Nielsen
Fractionally integrated and fractionally cointegrated time series are classes of models that generalize standard notions of integrated and cointegrated time series. The fractional models are characterized by a small number of memory parameters that control the degree of fractional integration and/or cointegration. In classical work, the memory parameters are assumed known and equal to 0, 1, or 2. In the fractional integration and fractional cointegration context, however, these parameters are real-valued and are typically assumed unknown and estimated. Thus, fractionally integrated and fractionally cointegrated time series can display very general types of stationary and nonstationary behavior, including long memory, and this more general framework entails important additional challenges compared to the traditional setting. Modeling, estimation, and testing in the context of fractional integration and fractional cointegration have been developed in time and frequency domains. Related to both alternative approaches, theory has been derived under parametric or semiparametric assumptions, and as expected, the obtained results illustrate the well-known trade-off between efficiency and robustness against misspecification. These different developments form a large and mature literature with applications in a wide variety of disciplines.
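As a minimal illustration of semiparametric estimation of the memory parameter, the sketch below simulates an ARFIMA(0, d, 0) series and estimates d with a log-periodogram (GPH-type) regression; the simulated data, bandwidth, and estimator choice are assumptions for illustration rather than a specific procedure from the article.

```python
# Minimal sketch: simulate an ARFIMA(0, d, 0) series and estimate the memory
# parameter d with a log-periodogram (GPH-type) regression.
import numpy as np

rng = np.random.default_rng(2)
T, d_true = 2000, 0.35

# Coefficients of (1 - L)^{-d}: psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k.
psi = np.ones(T)
for k in range(1, T):
    psi[k] = psi[k - 1] * (k - 1 + d_true) / k
eps = rng.standard_normal(T)
x = np.array([psi[:t + 1][::-1] @ eps[:t + 1] for t in range(T)])

# Log-periodogram regression over the first m Fourier frequencies.
m = int(np.sqrt(T))
j = np.arange(1, m + 1)
lam = 2 * np.pi * j / T
dft = np.fft.fft(x)[1:m + 1]
I = np.abs(dft) ** 2 / (2 * np.pi * T)          # periodogram ordinates
regressor = -np.log(4 * np.sin(lam / 2) ** 2)
d_hat = np.polyfit(regressor, np.log(I), 1)[0]  # slope estimates d
print(f"true d = {d_true}, GPH estimate = {d_hat:.3f}")
```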
Article
Frequency-Domain Approach in High-Dimensional Dynamic Factor Models
Marco Lippi
High-Dimensional Dynamic Factor Models have their origin in macroeconomics, more precisely in empirical research on Business Cycles. The central idea, going back to the work of Burns and Mitchell in the 1940s, is that the fluctuations of all the macro and sectoral variables in the economy are driven by a “reference cycle,” that is, a one-dimensional latent cause of variation. After a fairly long process of generalization and formalization, the literature settled at the beginning of the 2000s on a model in which (1) both n, the number of variables in the dataset, and T, the number of observations for each variable, may be large, and (2) all the variables in the dataset depend dynamically on a fixed number, independent of n, of “common factors,” plus variable-specific, usually called “idiosyncratic,” components. The structure of the model can be exemplified as follows:
x_it = α_i u_t + β_i u_(t−1) + ξ_it,   i = 1, …, n,   t = 1, …, T,   (*)
where the observable variables x_it are driven by the white noise u_t, which is common to all the variables (the common factor), and by the idiosyncratic component ξ_it. The common factor u_t is orthogonal to the idiosyncratic components ξ_it, and the idiosyncratic components are mutually orthogonal (or weakly correlated). Lastly, the variations of the common factor u_t affect the variable x_it dynamically, that is, through the lag polynomial α_i + β_i L. Asymptotic results for High-Dimensional Factor Models, particularly consistency of estimators of the common factors, are obtained for both n and T tending to infinity.
Model (*), generalized to allow for more than one common factor and a rich dynamic loading of the factors, has been studied in a fairly vast literature, with many applications based on macroeconomic datasets: (a) forecasting of inflation, industrial production, and unemployment; (b) structural macroeconomic analysis; and (c) construction of indicators of the Business Cycle. This literature can be broadly classified as belonging to the time- or the frequency-domain approach. The works based on the second are the subject of the present chapter.
We start with a brief description of early work on Dynamic Factor Models. Formal definitions and the main Representation Theorem follow. The latter determines the number of common factors in the model by means of the spectral density matrix of the vector (x_1t x_2t ⋯ x_nt). Dynamic principal components, based on the spectral density of the x’s, are then used to construct estimators of the common factors.
These results, obtained in the early 2000s, are compared to the literature based on the time-domain approach, in which the covariance matrix of the x’s and its (static) principal components are used instead of the spectral density and dynamic principal components. Dynamic principal components produce two-sided estimators, which are good within the sample but unfit for forecasting. The estimators based on the time-domain approach are simple and one-sided. However, they require the restriction of finite dimension for the space spanned by the factors.
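A minimal simulation may help fix ideas: the sketch below generates data from model (*) and recovers the common components with the static principal-components estimator of the time-domain literature; because the single dynamic factor u_t loads with one lag, two static factors are extracted. The dynamic, frequency-domain estimators discussed in this chapter are more involved and are not implemented here.

```python
# Simulate model (*) with one common white-noise factor loaded dynamically,
# then estimate the common components by static principal components.
# Illustrative only: one dynamic factor (q = 1) spans a two-dimensional
# static factor space (u_t, u_{t-1}), so r = 2 static factors are used.
import numpy as np

rng = np.random.default_rng(3)
n, T = 100, 500
u = rng.standard_normal(T + 1)               # common factor (white noise)
alpha = rng.uniform(0.5, 1.5, n)
beta = rng.uniform(0.5, 1.5, n)
chi = np.outer(u[1:], alpha) + np.outer(u[:-1], beta)   # common components
x = chi + rng.standard_normal((T, n))        # add idiosyncratic noise

# Static principal components: eigenvectors of the sample covariance matrix.
xc = x - x.mean(axis=0)
eigval, eigvec = np.linalg.eigh(xc.T @ xc / T)
loadings = eigvec[:, -2:]                    # two largest eigenvalues
chi_hat = (xc @ loadings) @ loadings.T       # projection on the factor space

corr = np.corrcoef(chi.ravel(), chi_hat.ravel())[0, 1]
print(f"correlation between true and estimated common components: {corr:.3f}")
```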
Recent papers have constructed one-sided estimators based on the frequency-domain method for the unrestricted model. They exploit results on stochastic processes of dimension n that are driven by a q-dimensional white noise, with q < n, that is, singular vector stochastic processes. The main features of this literature are described in some detail.
Lastly, we report and comment on the results of an empirical paper, the last in a long list, comparing predictions obtained with time- and frequency-domain methods. The paper uses a large monthly U.S. dataset covering the Great Moderation and the Great Recession.
Article
The Implications of School Assignment Mechanisms for Efficiency and Equity
Atila Abdulkadiroğlu
Parental choice over public schools has become a major policy tool to combat inequality in access to schools. Traditional neighborhood-based assignment is being replaced by school choice programs, broadening families’ access to schools beyond their residential location. Demand and supply in school choice programs are cleared via centralized admissions algorithms. Heterogeneous parental preferences and admissions policies create trade-offs between efficiency and equity. The data from centralized admissions algorithms can be used effectively for credible research design toward a better understanding of school effectiveness, which in turn can be used for school portfolio planning and student assignment based on match quality between students and schools.
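The article does not single out a particular mechanism, but a commonly used centralized admissions algorithm is student-proposing deferred acceptance; the sketch below, with hypothetical preferences, priorities, and capacities, is shown only to illustrate how such an assignment is computed.

```python
# Illustrative sketch of a student-proposing deferred acceptance assignment
# with hypothetical preferences, priorities, and capacities.
def deferred_acceptance(student_prefs, school_priorities, capacities):
    """student_prefs: dict student -> ordered list of schools.
    school_priorities: dict school -> ordered list of students (best first).
    capacities: dict school -> number of seats."""
    rank = {s: {stu: i for i, stu in enumerate(order)}
            for s, order in school_priorities.items()}
    next_choice = {stu: 0 for stu in student_prefs}   # next school to propose to
    held = {s: [] for s in school_priorities}          # tentative holds
    unassigned = set(student_prefs)

    while unassigned:
        stu = unassigned.pop()
        if next_choice[stu] >= len(student_prefs[stu]):
            continue                                   # preference list exhausted
        school = student_prefs[stu][next_choice[stu]]
        next_choice[stu] += 1
        held[school].append(stu)
        # Keep the highest-priority students up to capacity; reject the rest.
        held[school].sort(key=lambda s: rank[school][s])
        while len(held[school]) > capacities[school]:
            unassigned.add(held[school].pop())
    return held

# Tiny example: three students, two schools with one seat each.
matching = deferred_acceptance(
    student_prefs={"i": ["A", "B"], "j": ["A", "B"], "k": ["B", "A"]},
    school_priorities={"A": ["j", "i", "k"], "B": ["i", "k", "j"]},
    capacities={"A": 1, "B": 1},
)
print(matching)   # expected: A holds j, B holds i; k remains unassigned
```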
Article
Improving on Simple Majority Voting by Alternative Voting Mechanisms
Jacob K. Goeree, Philippos Louis, and Jingjing Zhang
Majority voting is the predominant mechanism for collective decision making. It is used in a broad range of applications, ranging from national referenda to small group decision making. It is simple, transparent, and induces voters to vote sincerely. However, it is increasingly recognized that it has some weaknesses. First of all, majority voting may lead to inefficient outcomes. This happens because it does not allow voters to express the intensity of their preferences. As a result, an indifferent majority may win over an intense minority. In addition, majority voting suffers from the “tyranny of the majority,” i.e., the risk of repeatedly excluding minority groups from representation. A final drawback is the “winner-take-all” nature of majority voting, i.e., it offers no compensation for losing voters. Economists have recently proposed various alternative mechanisms that aim to produce more efficient and more equitable outcomes. These can be classified into three different approaches. With storable votes, voters allocate a budget of votes across several issues. Under vote trading, voters can exchange votes for money. Under linear voting or quadratic voting, voters can buy votes at a linear or quadratic cost, respectively. The properties of the different alternative mechanisms can be characterized using theoretical modeling and game-theoretic analysis. Lab experiments are used to test theoretical predictions and evaluate their fitness for actual use in applications. Overall, these alternative mechanisms hold the promise of improving on majority voting but have their own shortcomings. Additional theoretical analysis and empirical testing are needed to produce a mechanism that robustly delivers efficient and equitable outcomes.
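A small numeric sketch may clarify the intensity-expression point: under a quadratic cost a voter's optimal vote purchase scales with the strength of the preference, whereas under a linear cost the choice is all-or-nothing. The quasi-linear payoff, prices, and vote cap below are hypothetical assumptions for illustration, not the designs analyzed in this literature.

```python
# Minimal numeric sketch of the vote-buying intuition behind linear vs.
# quadratic voting. Values, prices, and the payoff form are hypothetical.
import numpy as np

def optimal_votes_quadratic(value, price=1.0):
    # Payoff value * v - price * v**2 is maximized at v = value / (2 * price):
    # votes bought are proportional to preference intensity.
    return value / (2 * price)

def optimal_votes_linear(value, price=1.0, cap=10):
    # Payoff value * v - price * v is linear in v, so the voter buys either
    # no votes or the maximum allowed: intensity is not expressed.
    return cap if value > price else 0

values = np.array([0.5, 1.5, 4.0])           # hypothetical intensities
print("quadratic:", [optimal_votes_quadratic(v) for v in values])
print("linear:   ", [optimal_votes_linear(v) for v in values])
```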
Article
Incentives and Performance of Healthcare Professionals
Martin Chalkley
Economists have long regarded healthcare as a unique and challenging area of economic activity on account of the specialized knowledge of healthcare professionals (HCPs) and the relatively weak market mechanisms that operate. This places a consideration of how motivation and incentives might influence performance at the center of research. As in other domains, economists have tended to focus on financial mechanisms and, when considering HCPs, have therefore examined how existing payment systems and potential alternatives might affect behavior. There has long been a concern that simple arrangements such as fee-for-service, capitation, and salary payments might induce poor performance, and that has led to extensive investigation, both theoretical and empirical, of the linkage between payment and performance. An extensive and rapidly expanding field in economics, contract theory and mechanism design, has been applied to study these issues. The theory has highlighted both the potential benefits and the risks of incentive schemes to deal with the information asymmetries that abound in healthcare. There has been some expansion of such schemes in practice, but these are often limited in application and the evidence for their effectiveness is mixed. Understanding why there is this relatively large gap between concept and application gives a guide to where future research can most productively be focused.
Article
Limited Dependent Variables and Discrete Choice Modelling
Badi H. Baltagi
Limited dependent variable models are regression models where the dependent variable takes limited values, such as zero and one in binary choice models, or a multinomial model where there are a few choices, like modes of transportation, for example, bus, train, or car. Binary choice examples in economics include a woman’s decision to participate in the labor force, or a worker’s decision to join a union. Other examples include whether a consumer defaults on a loan or a credit card, or whether they purchase a house or a car. This qualitative variable is recoded as one if the woman participates in the labor force (or the consumer defaults on a loan) and zero if she does not participate (or the consumer does not default). Least squares using a binary choice model is inferior to logit or probit regressions. When the dependent variable is a fraction or proportion, inverse logit regressions are appropriate as well as fractional logit quasi-maximum likelihood. An example of the inverse logit regression is the effect of the beer tax on reducing motor vehicle fatality rates from drunk driving. The fractional logit quasi-maximum likelihood is illustrated using an equation explaining the proportion of participants in a pension plan using firm data. The probit regression is illustrated with an empirical fertility example, showing that parental preferences for a mixed sibling-sex composition in developed countries have a significant and positive effect on the probability of having an additional child. Multinomial choice models, where the number of choices is more than two, like bond ratings in finance, may have a natural ordering. Another example is the response to an opinion survey, which could vary from strongly agree to strongly disagree. Alternatively, the choices may not have a natural ordering, like the choice of occupation or mode of transportation. The censored regression model is motivated by estimating expenditures on cars or the amount of mortgage lending. In this case, the observations are censored because we observe the expenditures on a car (or the mortgage amount) only if the car is bought or the mortgage is approved. In studying poverty, we exclude the rich from our sample. In this case, the sample is not random. Applying least squares to the truncated sample leads to biased and inconsistent results. This differs from censoring. In the latter case, no data are excluded; we observe the characteristics of all mortgage applicants, even those who do not actually get their mortgage approved. Selection bias occurs when the sample is not randomly drawn. This is illustrated with a labor force participation equation (the selection equation) and an earnings equation, where earnings are observed only if the worker participates in the labor force and are zero otherwise. Extensions to panel data limited dependent variable models are also discussed and empirical examples given.
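A minimal sketch of binary-choice estimation, using simulated labor-force participation data rather than any of the empirical examples cited above, might look as follows; the logit and probit coefficients and the average marginal effects are the typical outputs of interest.

```python
# Minimal sketch of binary-choice estimation on hypothetical, simulated
# labor-force participation data (not a replication of the cited studies).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1000
education = rng.normal(12, 2, n)
kids = rng.integers(0, 3, n)
X = sm.add_constant(np.column_stack([education, kids]))

# Latent-index DGP: participate if x'b + e > 0 with logistic errors.
beta = np.array([-2.0, 0.25, -0.5])
participate = (X @ beta + rng.logistic(size=n) > 0).astype(int)

logit_res = sm.Logit(participate, X).fit(disp=False)
probit_res = sm.Probit(participate, X).fit(disp=False)
print("logit coefficients: ", logit_res.params.round(3))
print("probit coefficients:", probit_res.params.round(3))
# Average marginal effects are usually the quantities of interest.
print(logit_res.get_margeff().summary())
```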
Article
Machine Learning Econometrics: Bayesian Algorithms and Methods
Dimitris Korobilis and Davide Pettenuzzo
Bayesian inference in economics is primarily perceived as a methodology for cases where the data are short, that is, not informative enough to obtain reliable econometric estimates of quantities of interest. In these cases, prior beliefs, such as the experience of the decision-maker or results from economic theory, can be explicitly incorporated into the econometric estimation problem and enhance the desired solution.
In contrast, in fields such as computing science and signal processing, Bayesian inference and computation have long been used for tackling challenges associated with ultra high-dimensional data. Such fields have developed several novel Bayesian algorithms that have gradually been established in mainstream statistics, and they now have a prominent position in machine learning applications in numerous disciplines.
While traditional Bayesian algorithms are powerful enough to allow for estimation of very complex problems (for instance, nonlinear dynamic stochastic general equilibrium models), they are not able to cope computationally with the demands of rapidly increasing economic data sets. Bayesian machine learning algorithms are able to provide rigorous and computationally feasible solutions to various high-dimensional econometric problems, thus supporting modern decision-making in a timely manner.
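As a minimal illustration of how prior information enters the estimation problem, the sketch below implements conjugate Bayesian linear regression with a normal-inverse-gamma prior on hypothetical data; the scalable Bayesian machine learning algorithms discussed in the article go well beyond this textbook case.

```python
# Conjugate Bayesian linear regression (normal-inverse-gamma prior) on
# hypothetical simulated data; hyperparameters are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(5)
n, k = 50, 3
X = rng.standard_normal((n, k))
y = X @ np.array([1.0, 0.0, -0.5]) + rng.standard_normal(n)

# Prior: beta | sigma^2 ~ N(m0, sigma^2 * V0), sigma^2 ~ IG(a0, b0).
m0, V0 = np.zeros(k), 10.0 * np.eye(k)
a0, b0 = 2.0, 2.0

# Conjugate posterior updates.
V0_inv = np.linalg.inv(V0)
Vn = np.linalg.inv(V0_inv + X.T @ X)
mn = Vn @ (V0_inv @ m0 + X.T @ y)
an = a0 + n / 2
bn = b0 + 0.5 * (y @ y + m0 @ V0_inv @ m0 - mn @ np.linalg.inv(Vn) @ mn)

print("posterior mean of beta:", mn.round(3))
print("posterior mean of sigma^2:", bn / (an - 1))
```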
Article
Preferential Trade Agreements: Recent Theoretical and Empirical Developments
James Lake and Pravin Krishna
In recent decades, there has been a dramatic proliferation of preferential trade agreements (PTAs) between countries that, while legal, contradict the non-discrimination principle of the world trade system. This raises various issues, both theoretical and empirical, regarding the evolution of trade policy within the world trade system and the welfare implications for PTA members and non-members. The survey starts with the Kemp-Wan-Ohyama and Panagariya-Krishna analyses in the literature that theoretically show PTAs can always be constructed so that they (weakly) increase the welfare of members and non-members. Considerable attention is then devoted to recent developments on the interaction between PTAs and multilateral trade liberalization, focusing on two key incentives: an “exclusion incentive” of PTA members and a “free riding incentive” of PTA non-members. While the baseline presumption one should have in mind is that these incentives lead PTAs to inhibit the ultimate degree of global trade liberalization, this presumption can be overturned when dynamic considerations are taken into account or when countries can negotiate the degree of multilateral liberalization rather than facing a binary choice over global free trade. Promising areas for pushing this theoretical literature forward include the growing use of quantitative trade models, incorporating rules of origin and global value chains, modeling the issues surrounding “mega-regional” agreements, and modelling the possibility of exit from PTAs. Empirical evidence in the literature is mixed regarding whether PTAs lead to trade diversion or trade creation, whether PTAs have significant adverse effects on non-member terms-of-trade, whether PTAs lead members to lower external tariffs on non-members, and the role of PTAs in facilitating deep integration among members.
Article
Quantile Regression for Panel Data and Factor Models
Carlos Lamarche
For nearly 25 years, the literatures on panel data and quantile regression developed almost completely in parallel, with no intersection until the work by Koenker in the mid-2000s. The early theoretical work in statistics and economics raised more questions than answers, but it encouraged the development of several promising new approaches and research that offered a better understanding of the challenges and possibilities at the intersection of the literatures. Panel data quantile regression allows the estimation of effects that are heterogeneous throughout the conditional distribution of the response variable while controlling for individual and time-specific confounders. This type of heterogeneous effect is not well summarized by the average effect. For instance, the relationship between the number of students in a class and average educational achievement has been extensively investigated, but research also shows that class size affects low-achieving and high-achieving students differently. Advances in panel data quantile regression include several methods and algorithms that have created opportunities for more informative and robust empirical analysis in models with subject heterogeneity and factor structure.
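A rough sketch of the basic idea, on simulated panel data, is to run quantile regressions that include individual indicator variables; the slope on the covariate then differs across quantiles when its effect is heterogeneous. This naive approach ignores the incidental-parameters problems that the penalized and other estimators discussed above are designed to address, and the data-generating process below is hypothetical.

```python
# Quantile regression on a simulated panel with individual dummies.
# Naive illustration only; data and effect sizes are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
N, T = 50, 20
ids = np.repeat(np.arange(N), T)
alpha = rng.normal(0, 1, N)[ids]             # individual effects
x = rng.uniform(0, 2, N * T)
e = rng.standard_normal(N * T)
# Location-scale DGP: the effect of x is larger at upper quantiles.
y = alpha + x + (1 + 0.5 * x) * e

df = pd.DataFrame({"y": y, "x": x, "id": ids})
for tau in (0.25, 0.5, 0.75):
    res = smf.quantreg("y ~ x + C(id)", df).fit(q=tau)
    print(f"tau = {tau}: slope on x = {res.params['x']:.3f}")
```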
Article
The Economics of Identity and Conflict
Subhasish M. Chowdhury
Conflicts are a ubiquitous part of our life. One of the main reasons behind the initiation and escalation of conflict is the identity, or the sense of self, of the engaged parties. It is hence not surprising that there is a sizable area of academic literature that focuses on identity, conflict, and their interaction. This literature models conflicts as contests and spans theoretical, experimental, and empirical work from economics, political science, and psychology. The theoretical literature investigates behavioral aspects, such as preferences and beliefs, to explain the reasons for and the effects of identity on human behavior. It also analyzes issues such as identity-dependent externalities, the endogenous choice of joining a group, and so on. The applied literature consists of laboratory and field experiments as well as empirical studies from the field. The experimental studies find that the salience of an identity can increase conflict in a field setting. Laboratory experiments show that whereas real identity indeed increases conflict, a mere classification does not do so. It is also observed that priming a majority–minority identity affects the conflict behavior of the majority, but not of the minority. Further investigations explain these results in terms of parochial altruism. The empirical literature in this area focuses on the effects of various measures of identity, identity distribution, and other economic variables on conflict behavior. Religious polarization can explain conflict behavior better than linguistic differences. Moreover, polarization is a more significant determinant of conflict when the winners of the conflict enjoy a public good reward, but fractionalization is a better determinant when the winners enjoy a private good reward. As a whole, this area of literature is still emerging, and the theoretical literature can be extended to various avenues such as sabotage, affirmative action, intra-group conflict, and endogenous group formation. For empirical and experimental research, exploring new conflict resolution mechanisms, the endogeneity between identity and conflict, and biological mechanisms for identity-related conflict will be of interest.
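The article does not present a specific model, but a standard way of modeling conflict as a contest is the lottery (Tullock) contest; the sketch below, with an assumed prize value and cost function, computes the symmetric equilibrium effort by best-response iteration purely for illustration.

```python
# Illustrative two-player lottery (Tullock) contest: win probability
# x_i / (x_i + x_j) and linear effort cost. Parameters are assumptions.
import numpy as np

V = 100.0                       # value of the contested prize

def best_response(x_other, V):
    # Maximize V * x / (x + x_other) - x; the first-order condition gives
    # x = sqrt(V * x_other) - x_other (truncated at zero).
    return max(np.sqrt(V * x_other) - x_other, 0.0)

# Iterate best responses to find the symmetric Nash equilibrium effort.
x1, x2 = 1.0, 1.0
for _ in range(100):
    x1, x2 = best_response(x2, V), best_response(x1, V)

print(f"equilibrium efforts: {x1:.2f}, {x2:.2f} (theory: V/4 = {V / 4:.2f})")
```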