### Article

## Spatial Models in Econometric Research

### Luc Anselin

Since the late 1990s, spatial models have become an increasingly prominent part of econometric research. They are characterized by the attention paid to the location of observations (i.e., ordered spatial locations) and the interaction among them. Specifically, spatial models formally express spatial interaction by including variables observed at other locations in the regression specification. This can take different forms, mostly based on an averaging of values at neighboring locations through a so-called spatially lagged variable, or spatial lag. The spatial lag can be applied to the dependent variable, to explanatory variables, and/or to the error terms. This yields a range of specifications for cross-sectional dependence, as well as for static and dynamic spatial panels.
A critical element in the spatially lagged variable is the definition of neighbor relations in a so-called spatial weights matrix. Historically, the spatial weights matrix has been taken as given and exogenous, but research has since evolved toward estimating the weights from the data and toward accounting for potential endogeneity in the weights.
Because of the uneven spacing of observations and the complex way in which asymptotic properties are obtained, results from time series analysis are not applicable, and specialized laws of large numbers and central limit theorems need to be developed. This requirement has yielded an active body of research into the asymptotics of spatial models.
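The construction of a spatial lag from a spatial weights matrix can be illustrated with a minimal sketch. The layout (four locations on a line with adjacent neighbors) and the values are hypothetical; the only substantive steps are the row-standardization of a binary contiguity matrix and the matrix-vector product that averages neighboring values.

```python
import numpy as np

# Hypothetical example: four locations on a line; neighbors are adjacent locations.
# Binary contiguity weights, then row-standardized so each row sums to one.
W = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)
W = W / W.sum(axis=1, keepdims=True)  # row-standardization

y = np.array([2.0, 4.0, 6.0, 8.0])   # variable observed at each location
Wy = W @ y                            # spatial lag: average of neighboring values
```

With row-standardized weights, each element of `Wy` is the mean of the values at that location's neighbors, which is the averaging the abstract describes.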

### Article

## The Economics of Identity and Conflict

### Subhasish M. Chowdhury

Conflicts are a ubiquitous part of our lives. One of the main reasons behind the initiation and escalation of conflict is the identity, or sense of self, of the engaged parties. It is hence not surprising that a substantial body of academic literature focuses on identity, conflict, and their interaction. This literature models conflicts as contests and spans theoretical, experimental, and empirical work from economics, political science, and psychology. The theoretical literature investigates behavioral aspects, such as preferences and beliefs, to explain the reasons for and the effects of identity on human behavior. It also analyzes issues such as identity-dependent externalities and the endogenous choice of joining a group. The applied literature consists of laboratory and field experiments as well as empirical studies from the field. The experimental studies find that the salience of an identity can increase conflict in a field setting. Laboratory experiments show that whereas real identity indeed increases conflict, a mere classification does not. It is also observed that priming a majority–minority identity affects the conflict behavior of the majority, but not of the minority. Further investigations explain these results in terms of parochial altruism. The empirical literature in this area focuses on the effects of various measures of identity, identity distribution, and other economic variables on conflict behavior. Religious polarization can explain conflict behavior better than linguistic differences can. Moreover, polarization is a more significant determinant of conflict when the winners enjoy a public good reward, whereas fractionalization is a better determinant when the winners enjoy a private good reward.
As a whole, this area of literature is still emerging, and the theoretical literature can be extended to various avenues such as sabotage, affirmative action, intra-group conflict, and endogenous group formation. For empirical and experimental research, exploring new conflict resolution mechanisms, endogeneity between identity and conflict, and evaluating biological mechanisms for identity-related conflict will be of interest.
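The abstract's framing of conflicts as contests is usually formalized with the Tullock contest success function, in which a party's winning probability depends on relative efforts. The sketch below is a minimal, textbook version of that model; the prize value and effort levels are hypothetical, and the closed-form equilibrium effort is the standard result for the symmetric n-player Tullock contest (valid for sufficiently small return parameter r).

```python
def win_prob(x_i, x_j, r=1.0):
    """Tullock contest success function: probability that player i wins
    when players i and j exert efforts x_i and x_j."""
    if x_i == 0 and x_j == 0:
        return 0.5  # conventional tie-breaking when nobody exerts effort
    return x_i**r / (x_i**r + x_j**r)

def symmetric_equilibrium_effort(v, n=2, r=1.0):
    """Equilibrium effort in a symmetric n-player Tullock contest
    with prize value v: x* = r * v * (n - 1) / n**2."""
    return r * v * (n - 1) / n**2

# Hypothetical numbers: a two-player contest over a prize worth 100.
p = win_prob(2.0, 1.0)                      # higher effort -> higher win probability
x_star = symmetric_equilibrium_effort(100.0)  # each player dissipates v/4 in equilibrium
```

Identity enters such models by, for example, shifting the perceived prize value or adding identity-dependent payoff terms, which is where the behavioral extensions discussed above come in.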

### Article

## Growth Econometrics

### Jonathan R. W. Temple

Growth econometrics is the application of statistical methods to the study of economic growth and levels of national output or income per head. Researchers often seek to understand why growth rates differ across countries. The field developed rapidly in the 1980s and 1990s, but the early work often proved fragile. Cross-section analyses are limited by the relatively small number of countries in the world and problems of endogeneity, parameter heterogeneity, model uncertainty, and cross-section error dependence. The long-term prospects look better for approaches using panel data. Overall, the quality of the evidence has improved over time, due to better measurement, more data, and new methods. As longer spans of data become available, the methods of growth econometrics will shed light on fundamental questions that are hard to answer any other way.

### Article

## Limited Dependent Variables and Discrete Choice Modelling

### Badi H. Baltagi

Limited dependent variable models are regression models where the dependent variable takes limited values, like zero and one for binary choice models, or a multinomial model where there are a few choices, like modes of transportation (for example, bus, train, or car). Binary choice examples in economics include a woman's decision to participate in the labor force or a worker's decision to join a union. Other examples include whether a consumer defaults on a loan or a credit card, or whether they purchase a house or a car. This qualitative variable is recoded as one if the woman participates in the labor force (or the consumer defaults on a loan) and zero if she does not participate (or the consumer does not default). Least squares applied to a binary choice model is inferior to logit or probit regression. When the dependent variable is a fraction or proportion, inverse logit regressions are appropriate, as is fractional logit quasi-maximum likelihood. An example of the inverse logit regression is the effect of a beer tax on reducing motor vehicle fatality rates from drunken driving. The fractional logit quasi-maximum likelihood is illustrated with an equation explaining the proportion of participants in a pension plan using firm data. The probit regression is illustrated with an empirical fertility example, showing that parental preference for a mixed sibling-sex composition in developed countries has a significant and positive effect on the probability of having an additional child. Multinomial choice models, where the number of choices is more than two, like bond ratings in finance, may have a natural ordering. Another example is the response to an opinion survey, which could vary from strongly agree to strongly disagree. Alternatively, the choices may not have a natural ordering, as with occupations or modes of transportation.
The censored regression model is motivated by estimating expenditures on cars or the amount of mortgage lending. In this case, the observations are censored because we observe the expenditure on a car (or the mortgage amount) only if the car is bought or the mortgage approved. In studying poverty, we exclude the rich from our sample; in this case, the sample is not random, and applying least squares to the truncated sample leads to biased and inconsistent results. This differs from censoring: in the latter case, no data are excluded, and in fact we observe the characteristics of all mortgage applicants, even those who do not actually get their mortgage approved. Selection bias occurs when the sample is not randomly drawn. This is illustrated with a labor participation equation (the selection equation) and an earnings equation, where earnings are observed only if the worker participates in the labor force and are zero otherwise. Extensions to panel data limited dependent variable models are also discussed, and empirical examples are given.
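A minimal sketch can make the logit estimation concrete. The code below fits a binary choice model by maximum likelihood using Newton-Raphson iterations; the labor-force-participation interpretation and all data (one covariate, coefficients, sample size) are hypothetical, generated only to illustrate the mechanics.

```python
import numpy as np

def fit_logit(X, y, iters=25):
    """Logit maximum likelihood via Newton-Raphson; X includes a constant column."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))        # predicted probabilities
        W = p * (1.0 - p)                          # logit variance weights
        grad = X.T @ (y - p)                       # score vector
        hess = (X * W[:, None]).T @ X              # observed information
        beta = beta + np.linalg.solve(hess, grad)  # Newton step
    return beta

# Hypothetical participation data: a constant plus one covariate.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
X = np.column_stack([np.ones(500), x])
true_beta = np.array([-0.5, 1.0])
p_true = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = (rng.random(500) < p_true).astype(float)       # 1 = participates, 0 = does not
beta_hat = fit_logit(X, y)
```

Unlike least squares on the zero-one outcome, the fitted probabilities from this model always lie strictly between zero and one.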

### Article

## Markov Switching

### Yong Song and Tomasz Woźniak

Markov switching models are a family of models that introduce time variation in the parameters in the form of state-, or regime-, specific values. This time variation is governed by a latent discrete-valued stochastic process with limited memory. More specifically, the current value of the state indicator is determined only by the value of the state indicator from the previous period, implying the Markov property. A transition matrix characterizes the Markov process by determining the probability with which each of the states can be visited next period, conditional on the state in the current period. This setup gives Markov switching models their two main advantages: the estimation of the probability of state occurrences in each of the sample periods by using filtering and smoothing methods, and the estimation of the state-specific parameters. These two features open the possibility of interpreting the parameters associated with specific regimes in combination with the corresponding regime probabilities.
The most commonly applied models from this family are those that presume a finite number of regimes and the exogeneity of the Markov process, which is defined as its independence from the model’s unpredictable innovations. In many such applications, the desired properties of the Markov switching model have been obtained either by imposing appropriate restrictions on transition probabilities or by making these probabilities time-dependent, determined by explanatory variables or functions of the state indicator. One extension of this basic specification is the infinite hidden Markov model, which provides great flexibility and excellent forecasting performance by allowing the number of states to go to infinity. Another extension, the endogenous Markov switching model, explicitly relates the state indicator to the model’s innovations, making it more interpretable and offering promising avenues for development.
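The filtering step mentioned above can be sketched with the standard (Hamilton-style) filter for a two-state Gaussian switching model: at each period, predicted state probabilities are updated by Bayes' rule using the regime-specific likelihoods and then propagated through the transition matrix. All numbers below (means, transition probabilities, data) are hypothetical.

```python
import numpy as np

def hamilton_filter(y, mu, sigma, P, pi0):
    """Filtered probabilities Pr(S_t = j | y_1..y_t) for a Gaussian Markov
    switching model with state means mu, state std devs sigma, transition
    matrix P[i, j] = Pr(S_t = j | S_{t-1} = i), and initial distribution pi0."""
    T, k = len(y), len(mu)
    filt = np.zeros((T, k))
    pred = pi0
    for t in range(T):
        # Gaussian likelihood of y[t] under each regime
        lik = np.exp(-0.5 * ((y[t] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        post = pred * lik
        filt[t] = post / post.sum()   # Bayes update: filtered state probabilities
        pred = filt[t] @ P            # one-step-ahead prediction for next period
    return filt

# Hypothetical example: a low-mean and a high-mean regime, both persistent.
P = np.array([[0.95, 0.05], [0.10, 0.90]])
y = np.array([0.1, -0.2, 0.0, 3.1, 2.8, 3.0])
filt = hamilton_filter(y, mu=np.array([0.0, 3.0]), sigma=np.array([1.0, 1.0]),
                       P=P, pi0=np.array([0.5, 0.5]))
```

The filtered probabilities attribute the early observations to the low-mean regime and the later ones to the high-mean regime, which is exactly the kind of regime-probability output that makes the state-specific parameters interpretable.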

### Article

## Methodology of Macroeconometrics

### Aris Spanos

The current discontent with the dominant macroeconomic theory paradigm, known as Dynamic Stochastic General Equilibrium (DSGE) models, calls for an appraisal of the methods and strategies employed in studying and modeling macroeconomic phenomena using aggregate time series data. The appraisal pertains to the effectiveness of these methods and strategies in accomplishing the primary objective of empirical modeling: to learn from data about phenomena of interest. The co-occurring developments in macroeconomics and econometrics since the 1930s provide the backdrop for the appraisal, with the Keynes vs. Tinbergen controversy at center stage. The overall appraisal is that the DSGE paradigm gives rise to estimated structural models that are both statistically and substantively misspecified, yielding untrustworthy evidence that contributes very little, if anything, to real learning from data about macroeconomic phenomena. A primary contributor to the untrustworthiness of evidence is the traditional econometric perspective of viewing empirical modeling as curve-fitting (structural models), guided by impromptu error term assumptions and evaluated on goodness-of-fit grounds. Regrettably, excellent fit is neither necessary nor sufficient for the reliability of inference and the trustworthiness of the ensuing evidence. Recommendations on how to improve the trustworthiness of empirical evidence revolve around a broader model-based (non-curve-fitting) modeling framework that attributes cardinal roles to both theory and data without undermining the credibility of either source of information. Two crucial distinctions hold the key to securing the trustworthiness of evidence. The first distinguishes between modeling (specification, misspecification testing, and respecification) and inference, and the second between a substantive (structural) model and a statistical model (the probabilistic assumptions imposed on the particular data).
This enables one to establish statistical adequacy (the validity of these assumptions) before relating it to the structural model and posing questions of interest to the data. The greatest enemy of learning from data about macroeconomic phenomena is not the absence of an alternative and more coherent empirical modeling framework, but the illusion that foisting highly formal structural models on the data can give rise to such learning just because their construction and curve-fitting rely on seemingly sophisticated tools. Regrettably, applying sophisticated tools to a statistically and substantively misspecified DSGE model does nothing to restore the trustworthiness of the evidence stemming from it.

### Article

## The Role of Uncertainty in Controlling Climate Change

### Yongyang Cai

Integrated assessment models (IAMs) of the climate and economy seek to analyze the impact and efficacy of policies designed to control climate change, such as carbon taxes and subsidies. A major characteristic of IAMs is that their geophysical sector determines the mean surface temperature increase over the preindustrial level, which in turn determines the damage function. Most existing IAMs assume that all future information is known. However, there are significant uncertainties in the climate and economic system, including parameter uncertainty, model uncertainty, climate tipping risks, and economic risks. For example, climate sensitivity, a well-known parameter that measures how much the equilibrium temperature will change if the atmospheric carbon concentration doubles, ranges from below 1°C to more than 10°C in the literature. Climate damages are also uncertain: some researchers assume that climate damages are proportional to instantaneous output, while others assume that they have a more persistent impact on economic growth. The spatial distribution of climate damages is also uncertain. Climate tipping risks represent (nearly) irreversible climate events that may lead to significant changes in the climate system, such as a collapse of the Greenland ice sheet, while the conditions, probability of tipping, duration, and associated damage are also uncertain. Technological progress in carbon capture and storage, adaptation, renewable energy, and energy efficiency is uncertain as well. Future international cooperation and implementation of international agreements in controlling climate change may vary over time, possibly due to economic risks, natural disasters, or social conflict. In the face of these uncertainties, policy makers have to provide a decision that considers important factors such as risk aversion, inequality aversion, and sustainability of the economy and ecosystem.
Solving this problem may require richer and more realistic models than standard IAMs and advanced computational methods. The recent literature has shown that these uncertainties can be incorporated into IAMs and may change optimal climate policies significantly.
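A small Monte Carlo exercise illustrates how parameter uncertainty of the kind described above propagates to outcomes. The sketch below samples climate sensitivity from a right-skewed lognormal distribution and converts each draw to equilibrium warming at an assumed CO2 concentration ratio; the distributional parameters and the 1.5x concentration ratio are hypothetical choices for illustration, not estimates from any particular IAM.

```python
import numpy as np

# Hypothetical parameter uncertainty: sample the climate sensitivity S
# (equilibrium warming per doubling of atmospheric CO2) from a right-skewed
# lognormal distribution centered at 3 degrees, then propagate each draw to
# equilibrium warming at an assumed 1.5x preindustrial CO2 concentration.
rng = np.random.default_rng(42)
S = rng.lognormal(mean=np.log(3.0), sigma=0.4, size=100_000)  # sensitivity draws
ratio = 1.5                            # CO2 concentration relative to preindustrial
warming = S * np.log2(ratio)           # warming scales with log2 of the ratio

median_w = np.median(warming)          # central outcome
p95_w = np.quantile(warming, 0.95)     # upper tail that drives risk-averse policy
```

The gap between the median and the 95th percentile is the point of the exercise: with a skewed sensitivity distribution, risk-averse policy evaluation is driven by the upper tail rather than the central estimate.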

### Article

## An Introduction to Bootstrap Theory in Time Series Econometrics

### Giuseppe Cavaliere, Heino Bohn Nielsen, and Anders Rahbek

While often simple to implement in practice, applying the bootstrap in econometric modeling of economic and financial time series requires establishing its validity. Establishing bootstrap asymptotic validity relies on verifying often nonstandard regularity conditions. In particular, bootstrap versions of classic convergence in probability and distribution, and hence of laws of large numbers and central limit theorems, are critical ingredients. Crucially, these depend on the type of bootstrap applied (e.g., wild or independently and identically distributed (i.i.d.) bootstrap) and on the underlying econometric model and data. Regularity conditions and their implications for possible improvements in terms of (empirical) size and power for bootstrap-based testing differ from standard asymptotic testing, which can be illustrated by simulations.
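The distinction between the i.i.d. and wild bootstrap mentioned above can be sketched for the simplest case, the slope of an OLS regression. The i.i.d. scheme redraws residuals with replacement, while the wild scheme multiplies each residual in place by a Rademacher draw, preserving conditional heteroskedasticity. The function and the simulated data are a hypothetical illustration, not the authors' setup.

```python
import numpy as np

def residual_bootstrap_se(x, y, B=199, wild=False, seed=0):
    """Bootstrap standard error of the OLS slope by resampling residuals.
    iid: residuals are redrawn with replacement.
    wild: residuals are multiplied by Rademacher (+/-1) draws."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    slopes = np.empty(B)
    for b in range(B):
        if wild:
            u = resid * rng.choice([-1.0, 1.0], size=len(y))  # wild bootstrap
        else:
            u = rng.choice(resid, size=len(y), replace=True)  # iid bootstrap
        y_b = X @ beta + u                      # regenerate data under the fit
        slopes[b] = np.linalg.lstsq(X, y_b, rcond=None)[0][1]
    return slopes.std(ddof=1)

# Hypothetical data for illustration.
rng = np.random.default_rng(3)
x = rng.normal(size=200)
y = 1.0 + 2.0 * x + rng.normal(size=200)
se_iid = residual_bootstrap_se(x, y)
se_wild = residual_bootstrap_se(x, y, wild=True)
```

For dependent data the abstract's point applies with force: schemes like these must be modified (e.g., block or sieve variants), and their asymptotic validity has to be verified for the specific model at hand.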

### Article

## Machine Learning Econometrics: Bayesian Algorithms and Methods

### Dimitris Korobilis and Davide Pettenuzzo

Bayesian inference in economics is primarily perceived as a methodology for cases where the data are short, that is, not informative enough to obtain reliable econometric estimates of quantities of interest. In these cases, prior beliefs, such as the experience of the decision-maker or results from economic theory, can be explicitly incorporated into the econometric estimation problem and enhance the desired solution.
In contrast, in fields such as computing science and signal processing, Bayesian inference and computation have long been used for tackling challenges associated with ultra high-dimensional data. Such fields have developed several novel Bayesian algorithms that have gradually been established in mainstream statistics, and they now have a prominent position in machine learning applications in numerous disciplines.
While traditional Bayesian algorithms are powerful enough to allow for estimation of very complex problems (for instance, nonlinear dynamic stochastic general equilibrium models), they are not able to cope computationally with the demands of rapidly increasing economic data sets. Bayesian machine learning algorithms are able to provide rigorous and computationally feasible solutions to various high-dimensional econometric problems, thus supporting modern decision-making in a timely manner.
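The role of prior information with short data can be sketched with the textbook conjugate normal-normal update for an unknown mean: the posterior mean is a precision-weighted average of the prior mean and the sample information, so a short sample is pulled toward the prior. All numbers below are hypothetical.

```python
import numpy as np

def posterior_mean_var(y, prior_mean, prior_var, sigma2):
    """Conjugate normal-normal update for an unknown mean theta with
    known data variance sigma2: precisions add, and the posterior mean
    is a precision-weighted average of prior mean and sample evidence."""
    n = len(y)
    post_var = 1.0 / (1.0 / prior_var + n / sigma2)
    post_mean = post_var * (prior_mean / prior_var + y.sum() / sigma2)
    return post_mean, post_var

# Hypothetical short sample: five observations from a process with mean 2.
rng = np.random.default_rng(1)
y = rng.normal(loc=2.0, scale=1.0, size=5)
post_mean, post_var = posterior_mean_var(y, prior_mean=0.0, prior_var=1.0, sigma2=1.0)
```

The posterior mean lands between the prior mean and the sample mean, and the posterior variance is smaller than either source's alone; with large samples the data dominate, which is why the machine learning algorithms discussed above focus instead on making such updates computationally feasible at scale.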

### Article

## Mergers and Acquisitions: Long-Run Performance and Success Factors

### Luc Renneboog and Cara Vansteenkiste

Despite the aggregate value of M&A market transactions amounting to several trillion dollars annually, acquiring firms often underperform relative to non-acquiring firms, especially in public takeovers. Although hundreds of academic studies have investigated the deal- and firm-level factors associated with M&A announcement returns, many factors that increase M&A performance in the short run fail to translate into sustained long-run returns. In order to understand value creation in M&As, it is key to identify the firm and deal characteristics that reliably predict long-run performance.
Broadly speaking, long-run underperformance in M&A deals results from poor acquirer governance (reflected by CEO overconfidence and a lack of (institutional) shareholder monitoring) as well as from poor merger execution and integration (as captured by the degree of acquirer-target relatedness in the post-merger integration process). Although many more dimensions affect immediate deal transaction success, their effect on long-run performance is non-existent, or mixed at best.