Article
A Review of Gender Differences in Negotiation
Iñigo Hernandez-Arenaz and Nagore Iriberri
Gender differences, both in entering negotiations and when negotiating, have been shown to exist: men are usually more likely than women to enter into negotiation and, when negotiating, they obtain better deals than women. These gender differences help to explain the gender gap in wages, as starting salaries and wage increases or promotions throughout an individual’s career are often the result of bilateral negotiations.
This article presents an overview of the literature on gender differences in negotiation. The article is organized in five main parts. The first section reviews the findings with respect to gender differences in the likelihood of engaging in a negotiation, that is, in deciding to start a negotiation. The second section discusses research on gender differences during negotiations, that is, while bargaining. The third section looks at the relevant psychological literature and discusses meta-analyses, looking for factors that trigger or moderate gender differences in negotiation, such as structural ambiguity and cultural traits. The fourth section presents a brief overview of research on gender differences in noncognitive traits, such as risk and social preferences, confidence, and taste for competition, and their role in explaining gender differences in bargaining. Finally, the fifth section discusses some policy implications.
An understanding of when gender differences are likely to arise, both on entering into negotiations and when negotiating, will enable the design of policies that can mitigate current gender differences in negotiations. This is an active and promising line of research.
Article
The Role of Uncertainty in Controlling Climate Change
Yongyang Cai
Integrated assessment models (IAMs) of the climate and economy aim to analyze the impact and efficacy of policies designed to control climate change, such as carbon taxes and subsidies. A major characteristic of IAMs is that their geophysical sector determines the mean surface temperature increase over the preindustrial level, which in turn determines the damage function. Most of the existing IAMs assume that all of the future information is known. However, there are significant uncertainties in the climate and economic system, including parameter uncertainty, model uncertainty, climate tipping risks, and economic risks. For example, climate sensitivity, a well-known parameter that measures how much the equilibrium temperature will change if the atmospheric carbon concentration doubles, can range from below 1 to more than 10 degrees Celsius in the literature. Climate damages are also uncertain. Some researchers assume that climate damages are proportional to instantaneous output, while others assume that climate damages have a more persistent impact on economic growth. The spatial distribution of climate damages is also uncertain. Climate tipping risks represent (nearly) irreversible climate events that may lead to significant changes in the climate system, such as a collapse of the Greenland ice sheet, while the conditions, probability of tipping, duration, and associated damage are also uncertain. Technological progress in carbon capture and storage, adaptation, renewable energy, and energy efficiency is uncertain as well. Future international cooperation and implementation of international agreements in controlling climate change may vary over time, possibly due to economic risks, natural disasters, or social conflict. In the face of these uncertainties, policy makers have to make decisions that take into account important factors such as risk aversion, inequality aversion, and sustainability of the economy and ecosystem. Solving this problem may require richer and more realistic models than standard IAMs, as well as advanced computational methods. The recent literature has shown that these uncertainties can be incorporated into IAMs and may change optimal climate policies significantly.
Article
The Role of Wage Formation in Empirical Macroeconometric Models
Ragnar Nymoen
The specification of model equations for nominal wage setting has important implications for the properties of macroeconometric models and requires system thinking and multiple-equation modeling. The main model classes are the Phillips curve model (PCM), the wage–price equilibrium correction model (WP-ECM), and the New Keynesian Phillips curve model (NKPCM). The PCM was included in the macroeconometric models of the 1960s. The WP-ECM arrived in the late 1980s. The NKPCM is central in dynamic stochastic general equilibrium (DSGE) models. The three model classes can be interpreted as different specifications of the system of stochastic difference equations that define the supply side of a medium-term macroeconometric model. This calls for an appraisal of the different wage models, in particular in relation to the concept of the non-accelerating inflation rate of unemployment (NAIRU, or natural rate of unemployment), and of the methods and research strategies used. The construction of macroeconomic models used to be based on the combination of theoretical and practical skills in economic modeling. Wage formation was viewed as being forged between the forces of markets and national institutions. In the age of DSGE models, macroeconomics has become more of a theoretical discipline. Nevertheless, producers of DSGE models make use of hybrid forms if an initial theoretical specification fails to meet a benchmark for acceptable data fit. A common ground therefore exists between the NKPCM, WP-ECM, and PCM, and it is feasible to compare the model types empirically.
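As a stylized illustration of how the three model classes differ (the notation is simplified and not taken from the article), let $w$ denote the log nominal wage, $p$ the log price level, $a$ log productivity, and $u$ the unemployment rate:
PCM: $\Delta w_t = \beta_0 + \beta_1 \Delta p_t - \beta_2 u_t + \varepsilon_t$
WP-ECM: $\Delta w_t = \beta_0 + \beta_1 \Delta p_t - \beta_2 u_t - \alpha (w - p - a)_{t-1} + \varepsilon_t$
NKPCM: $\Delta w_t = \beta_f \mathrm{E}_t \Delta w_{t+1} + \kappa x_t + \varepsilon_t$
Only the WP-ECM contains an equilibrium-correction term in the lagged wage share, while the NKPCM is forward looking, with $x_t$ a forcing variable such as the wage gap or real marginal cost; such differences are one reason the model classes carry different implications for the NAIRU.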
Article
Score-Driven Models: Methods and Applications
Mariia Artemova, Francisco Blasques, Janneke van Brummelen, and Siem Jan Koopman
The flexibility, generality, and feasibility of score-driven models have contributed much to their impact in both research and policy. Score-driven models provide a unified framework for modeling time-varying features in parametric time series models.
The predictive likelihood function is used as the driving mechanism for updating the time-varying parameters. It leads to a flexible, general, and intuitive way of modeling the dynamic features in the time series while the estimation and inference remain relatively simple. These properties remain valid when models rely on non-Gaussian densities and nonlinear dynamic structures. The class of score-driven models has become even more appealing as developments in theory and methodology have progressed rapidly. Furthermore, new formulations of empirical dynamic models in this class have shown their relevance in economics and finance. In the context of macroeconomic studies, the key examples are nonlinear autoregressive, dynamic factor, dynamic spatial, and Markov-switching models. In the context of finance studies, the major examples are models for integer-valued time series, multivariate scale models, and dynamic copula models. In finance applications, score-driven models are especially important because they provide particular updating mechanisms for time-varying parameters that limit the effect of influential observations and outliers that are often present in financial time series.
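As a minimal sketch of such an updating mechanism (a score-driven volatility filter with Student-t errors; the parameter values and the particular scaling of the score are illustrative assumptions, not taken from the article):

import numpy as np

def t_gas_volatility_filter(y, omega, alpha, beta, nu):
    # Observation density: y_t ~ Student-t(nu) with scale exp(f_t / 2),
    # where f_t is the time-varying log-variance.
    # Score-driven update: f_{t+1} = omega + beta * f_t + alpha * s_t,
    # with s_t the score of the predictive log-density with respect to f_t.
    T = len(y)
    f = np.empty(T)
    f[0] = omega / (1.0 - beta)  # start at the unconditional level
    for t in range(T - 1):
        sigma2 = np.exp(f[t])
        w = (nu + 1.0) * y[t] ** 2 / (nu * sigma2 + y[t] ** 2)
        s = 0.5 * (w - 1.0)      # score of the Student-t log-density w.r.t. f_t
        f[t + 1] = omega + beta * f[t] + alpha * s
    return f

# Illustrative use on simulated heavy-tailed returns.
rng = np.random.default_rng(0)
returns = 0.01 * rng.standard_t(df=5, size=500)
log_variance = t_gas_volatility_filter(returns, omega=-0.46, alpha=0.10, beta=0.95, nu=5.0)

The robustness to outliers mentioned above comes from the weight $w$: a very large observation receives a bounded weight, so it moves the volatility estimate far less than it would in a squared-return (GARCH-type) update.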
Article
Social Interactions in Health Behaviors and Conditions
Ana Balsa and Carlos Díaz
Health behaviors are a major source of morbidity and mortality in the developed and much of the developing world. The social nature of many of these behaviors, such as eating or using alcohol, and the normative connotations that accompany others (e.g., sexual behavior, illegal drug use) make them quite susceptible to peer influence. This article assesses the role of social interactions in the determination of health behaviors. It highlights the methodological progress of the past two decades in addressing the multiple challenges inherent in the estimation of peer effects, and notes methodological issues that still need to be confronted. A comprehensive review of the empirical economics literature—mostly for developed countries—shows strong and robust peer effects across a wide set of health behaviors, including alcohol use, body weight, food intake, body fitness, teen pregnancy, and sexual behaviors. The evidence is mixed when assessing tobacco use, illicit drug use, and mental health. The article also explores the as yet incipient literature on the mechanisms behind peer influence and on new developments in the study of social networks that are shedding light on the dynamics of social influence. There is suggestive evidence that social norms and social conformism lie behind peer effects in substance use, obesity, and teen pregnancy, while social learning has been pointed to as a channel behind fertility decisions, mental health care utilization, and uptake of medication. Future research needs to deepen the understanding of the mechanisms behind peer influence in health behaviors in order to design more targeted welfare-enhancing policies.
Article
The Spatial Dimension of Health Systems
Elisa Tosetti, Rita Santos, Francesco Moscone, and Giuseppe Arbia
The spatial dimension of supply and demand factors is a very important feature of healthcare systems. Differences in health and behavior across individuals are due not only to personal characteristics but also to external forces, such as contextual factors, social interaction processes, and global health shocks. These factors are responsible for various forms of spatial patterns and correlation often observed in the data, which it is desirable to include in health econometrics models.
This article describes a set of exploratory techniques and econometric methods to visualize, summarize, test, and model spatial patterns of health economics phenomena, showing their scientific and policy value when addressing health economics issues characterized by a strong spatial dimension. Exploring and modeling the spatial dimension of the two sides of healthcare provision (supply and demand) may help reduce inequalities in access to healthcare services and support policymakers in the design of financially sustainable healthcare systems.
Article
Spatial Models in Econometric Research
Luc Anselin
Since the late 1990s, spatial models have become a growing addition to econometric research. They are characterized by attention paid to the location of observations (i.e., ordered spatial locations) and the interaction among them. Specifically, spatial models formally express spatial interaction by including variables observed at other locations into the regression specification. This can take different forms, mostly based on an averaging of values at neighboring locations through a so-called spatially lagged variable, or spatial lag. The spatial lag can be applied to the dependent variable, to explanatory variables, and/or to the error terms. This yields a range of specifications for cross-sectional dependence, as well as for static and dynamic spatial panels.
A critical element in the spatially lagged variable is the definition of neighbor relations in a so-called spatial weights matrix. Historically, the spatial weights matrix has been taken to be given and exogenous, but this has evolved into research focused on estimating the weights from the data and on accounting for potential endogeneity in the weights.
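As a stylized illustration (notation not drawn from the article), a cross-sectional model with a spatial lag of the dependent variable can be written as $y = \rho W y + X\beta + u$, where the spatial lag has elements $(Wy)_i = \sum_{j} w_{ij} y_j$, the weights satisfy $w_{ii} = 0$ and $w_{ij} > 0$ only for neighbors $j$ of $i$ (often row-standardized so that the lag is a neighborhood average), and $\rho$ measures the strength of spatial dependence. Analogous lags applied to the explanatory variables or to the error term give the other specifications mentioned above.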
Due to the uneven spacing of observations and the complex way in which asymptotic properties are obtained, results from time series analysis are not applicable, and specialized laws of large numbers and central limit theorems need to be developed. This requirement has yielded an active body of research into the asymptotics of spatial models.
Article
Statistical Significance and the Replication Crisis in the Social Sciences
Anna Dreber and Magnus Johannesson
The recent “replication crisis” in the social sciences has led to increased attention to what statistically significant results entail. There are many reasons why false-positive results may be published in the scientific literature, such as low statistical power and “researcher degrees of freedom” in the analysis (where researchers, when testing a hypothesis, more or less actively seek to obtain results with p < .05). The results from three large replication projects in psychology, experimental economics, and the social sciences are discussed, with most of the focus on the last project, where the statistical power in the replications was substantially higher than in the other projects. The results suggest that a substantial share of published results in top journals do not replicate. While several replication indicators have been proposed, the main indicator for whether a result replicates or not is whether the replication study, using the same statistical test, finds a statistically significant effect (p < .05 in a two-sided test). For the project with very high statistical power, the various replication indicators agree to a larger extent than for the other replication projects, and this is most likely due to the higher statistical power. While the replications discussed are mainly experiments, there is no reason to believe that replicability would be higher in other parts of economics and finance; if anything, the opposite is likely due to more researcher degrees of freedom. There is also a discussion of solutions to the often-observed low replicability, including lowering the p-value threshold to .005 for statistical significance and increasing the use of preanalysis plans and registered reports for new studies as well as replications, followed by a discussion of measures of peer beliefs. Recent attempts to understand to what extent the academic community is aware of the limited reproducibility and can predict replication outcomes using prediction markets and surveys suggest that peer beliefs may be viewed as an additional reproducibility indicator.
Article
Stochastic Volatility in Bayesian Vector Autoregressions
Todd E. Clark and Elmar Mertens
Vector autoregressions with stochastic volatility (SV) are widely used in macroeconomic forecasting and structural inference. The SV component of the model conveniently allows for time variation in the variance–covariance matrix of the model’s forecast errors. In turn, that feature of the model generates time variation in predictive densities. The models are most commonly estimated with Bayesian methods, typically Markov chain Monte Carlo methods such as Gibbs sampling. Equation-by-equation methods developed since 2018 enable the estimation of models with large variable sets at much lower computational cost than the standard approach of estimating the model as a system of equations. The Bayesian framework also facilitates the accommodation of mixed-frequency data, non-Gaussian error distributions, and nonparametric specifications. With advances made in the 21st century, researchers are also addressing some of the framework’s outstanding challenges, particularly the dependence of estimates on the ordering of variables in the model and reliable estimation of the marginal likelihood, which is the fundamental measure of model fit in Bayesian methods.
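As a stylized example of such a model (the notation is generic rather than the authors’), consider $y_t = c + B_1 y_{t-1} + \cdots + B_p y_{t-p} + \varepsilon_t$ with $\varepsilon_t \sim N(0, \Sigma_t)$, $\Sigma_t = A^{-1} \Lambda_t (A^{-1})'$, $\Lambda_t = \mathrm{diag}(\lambda_{1,t}, \ldots, \lambda_{n,t})$, and $\log \lambda_{i,t} = \log \lambda_{i,t-1} + \nu_{i,t}$, $\nu_{i,t} \sim N(0, \phi_i)$, where $A$ is lower triangular with ones on the diagonal. Because $A$ is triangular, the decomposition of $\Sigma_t$, and hence the estimates, can depend on how the variables are ordered in $y_t$, which is the ordering issue noted above.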
Article
Structural Breaks in Time Series
Alessandro Casini and Pierre Perron
This article covers methodological issues related to estimation, testing, and computation for models involving structural changes. Our aim is to review developments as they relate to econometric applications based on linear models. Substantial advances have been made to cover models at a level of generality that allows a host of interesting practical applications. These include models with general stationary regressors and errors that can exhibit temporal dependence and heteroskedasticity, models with trending variables and possible unit roots, and cointegrated models, among others. Advances have been made pertaining to computational aspects of constructing estimates, their limit distributions, tests for structural changes, and methods to determine the number of changes present. A variety of topics are covered, including recent developments: testing for common breaks, models with endogenous regressors (emphasizing that simply using least squares is preferable to instrumental variables methods), quantile regressions, methods based on Lasso, panel data models, testing for changes in forecast accuracy, factor models, and methods of inference based on a continuous records asymptotic framework. Our focus is on so-called off-line methods, whereby one wants to retrospectively test for breaks in a given sample of data and form confidence intervals about the break dates. The aim is to provide readers with an overview of methods that are of direct use in practice, as opposed to issues mostly of theoretical interest.
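As a stylized example of the basic setup (illustrative notation only), consider a linear regression with a single break at an unknown date $T_1$: $y_t = x_t'\beta + z_t'\delta_1 \mathbf{1}\{t \le T_1\} + z_t'\delta_2 \mathbf{1}\{t > T_1\} + u_t$, $t = 1, \ldots, T$. The break date is typically estimated by minimizing the sum of squared residuals over candidate dates, a sup-Wald statistic over admissible break fractions tests the null of no change ($\delta_1 = \delta_2$), and confidence intervals for $T_1$ are built from the limit distribution of the break-date estimator.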
Article
The 1918–1919 Influenza Pandemic in Economic History
Martin Karlsson, Daniel Kühnle, and Nikolaos Prodromidis
Due to the similarities with the COVID-19 pandemic, there has been a renewed interest in the 1918–1919 influenza pandemic, which represents the most severe pandemic of the 20th century, with an estimated total death toll ranging between 30 and 100 million. This rapidly growing literature in economics and economic history has devoted attention to contextual determinants of excess mortality in the pandemic; to the impact of the pandemic on economic growth, inequality, and a range of other outcomes; and to the impact of nonpharmaceutical interventions.
Estimating the effects of the pandemic, or the effects of countermeasures, is challenging. There may not be much exogenous variation to go by, and the historical data sets available are typically small and often of questionable quality. Yet the 1918–1919 pandemic offers a unique opportunity to learn how large pandemics play out in the long run.
The studies evaluating effects of the pandemic, or of policies enacted to combat it, typically rely on some version of difference-in-differences or instrumental variables. The assumptions required for these designs to achieve identification of causal effects have rarely been systematically evaluated in this particular historical context. Using a purpose-built dataset covering the entire Swedish population, such an assessment is provided here. The empirical analysis indicates that the identifying assumptions used in previous work may indeed be satisfied. However, the results cast some doubt on the general external validity of previous findings, as the analysis fails to replicate several results in the Swedish context. These disagreements highlight the need for additional studies in other populations and contexts, which puts the spotlight on further digitization and linkage of historical datasets.
Article
The Economics of Identity and Conflict
Subhasish M. Chowdhury
Conflicts are a ubiquitous part of our life. One of the main reasons behind the initiation and escalation of conflict is the identity, or the sense of self, of the engaged parties. It is hence not surprising that there is a substantial body of academic literature that focuses on identity, conflict, and their interaction. This literature models conflicts as contests and spans theoretical, experimental, and empirical work from economics, political science, and psychology. The theoretical literature investigates behavioral aspects—such as preferences and beliefs—to explain the reasons for and the effects of identity on human behavior. The theoretical literature also analyzes issues such as identity-dependent externalities, the endogenous choice of joining a group, and so on. The applied literature consists of laboratory and field experiments as well as empirical studies from the field. The experimental studies find that the salience of an identity can increase conflict in a field setting. Laboratory experiments show that whereas real identity indeed increases conflict, a mere classification does not do so. It is also observed that priming a majority–minority identity affects the conflict behavior of the majority, but not of the minority. Further investigations explain these results in terms of parochial altruism. The empirical literature in this area focuses on the effects of various measures of identity, identity distribution, and other economic variables on conflict behavior. Religious polarization can explain conflict behavior better than linguistic differences. Moreover, polarization is a more significant determinant of conflict when the winners of the conflict enjoy a public good reward, whereas fractionalization is a better determinant when the winners enjoy a private good reward. As a whole, this area of literature is still emerging, and the theoretical literature can be extended to various avenues such as sabotage, affirmative action, intra-group conflict, and endogenous group formation. For empirical and experimental research, exploring new conflict resolution mechanisms, the endogeneity between identity and conflict, and evaluating biological mechanisms for identity-related conflict will be of interest.
Article
The Value of a Statistical Life
Thomas J. Kniesner and W. Kip Viscusi
The value of a statistical life (VSL) is the local tradeoff rate between fatality risk and money. When the tradeoff values are derived from choices in market contexts, the VSL serves as both a measure of the population’s willingness to pay for risk reduction and the marginal cost of enhancing safety. Given its fundamental economic role, policy analysts have adopted the VSL as the economically correct measure of the benefit individuals receive from enhancements to their health and safety. Estimates of the VSL for the United States are around $10 million (in 2017 dollars), and estimates for other countries are generally lower given the positive income elasticity of the VSL. Because of the prominence of mortality risk reductions as the justification for government policies, the VSL is a crucial component of the benefit-cost analyses that are part of the regulatory process in the United States and other countries. The VSL is also foundationally related to the concepts of the value of a statistical life year (VSLY) and the value of a statistical injury (VSI), which also permeate the labor and health economics literatures. Thus, the same types of valuation approaches can be used to monetize non-fatal injuries and mortality risks that have very small effects on life expectancy. In addition to formalizing the concept and measurement of the VSL and presenting representative estimates for the United States and other countries, our Encyclopedia selection addresses the most important questions concerning the nuances that are of interest to researchers and policymakers.
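As a simple numerical illustration of the concept (the numbers are purely hypothetical), if workers require an extra $1,000 per year to accept a job with an additional annual fatality risk of 1 in 10,000, the implied tradeoff rate is $\mathrm{VSL} = \Delta\text{wage} / \Delta\text{risk} = 1{,}000 / (1/10{,}000) = \$10$ million, which is the order of magnitude of the U.S. estimates cited above.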
Article
Time-Domain Approach in High-Dimensional Dynamic Factor Models
Marco Lippi
High-dimensional dynamic factor models have their origin in macroeconomics, more specifically in empirical research on business cycles. The central idea, going back to the work of Burns and Mitchell in the 1940s, is that the fluctuations of all the macro and sectoral variables in the economy are driven by a “reference cycle,” that is, a one-dimensional latent cause of variation. After a fairly long process of generalization and formalization, the literature settled at the beginning of the 2000s on a model in which (a) both n, the number of variables in the data set, and T, the number of observations for each variable, may be large; (b) all the variables in the data set depend dynamically on a fixed, independent of n, number of common shocks, plus variable-specific, usually called idiosyncratic, components. The structure of the model can be exemplified as follows:
(*) $x_{it} = \alpha_i u_t + \beta_i u_{t-1} + \xi_{it}, \quad i = 1, \ldots, n, \quad t = 1, \ldots, T,$
where the observable variables $x_{it}$ are driven by the white noise $u_t$ (the common shock), which is common to all the variables, and by the idiosyncratic component $\xi_{it}$. The common shock $u_t$ is orthogonal to the idiosyncratic components $\xi_{it}$, and the idiosyncratic components are mutually orthogonal (or weakly correlated). Last, the variations of the common shock $u_t$ affect the variable $x_{it}$ dynamically, that is, through the lag polynomial $\alpha_i + \beta_i L$. Asymptotic results for high-dimensional factor models, consistency of estimators of the common shocks in particular, are obtained for both $n$ and $T$ tending to infinity.
The time-domain approach to these factor models is based on the transformation of dynamic equations into static representations. For example, equation (*) becomes $x_{it} = \alpha_i F_{1t} + \beta_i F_{2t} + \xi_{it}$, with $F_{1t} = u_t$ and $F_{2t} = u_{t-1}$.
Instead of the dynamic equation (*) there is now a static equation, while instead of the white noise $u_t$ there are now two factors, also called static factors, which are dynamically linked: $F_{1t} = u_t$, $F_{2t} = F_{1,t-1}$.
This transformation into a static representation, whose general form is $x_{it} = \lambda_{i1} F_{1t} + \cdots + \lambda_{ir} F_{rt} + \xi_{it}$, is extremely convenient for estimation and forecasting of high-dimensional dynamic factor models. In particular, the factors $F_{jt}$ and the loadings $\lambda_{ij}$ can be consistently estimated from the principal components of the observable variables $x_{it}$.
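As an illustration of this principal components step, the following is a minimal sketch (the simulation design, normalization, and names are chosen for exposition, not taken from the article):

import numpy as np

def estimate_static_factors(x, r):
    # x: (T, n) array of observed variables, n large; each series is standardized.
    # Returns factors F (T, r), loadings Lam (n, r), and the estimated common component F @ Lam.T.
    T, n = x.shape
    x = (x - x.mean(axis=0)) / x.std(axis=0)
    cov = x.T @ x / T                          # sample covariance of the cross-section
    eigval, eigvec = np.linalg.eigh(cov)
    Lam = eigvec[:, ::-1][:, :r] * np.sqrt(n)  # leading eigenvectors as loadings (Lam'Lam/n = I)
    F = x @ Lam / n                            # factors as cross-sectional averages
    return F, Lam, F @ Lam.T

# Illustrative use: simulate the two-factor static representation of equation (*).
rng = np.random.default_rng(0)
T, n = 300, 100
u = rng.normal(size=T + 1)
alpha, beta = rng.normal(size=n), rng.normal(size=n)
x = (np.outer(u[1:], alpha) + np.outer(u[:-1], beta)
     + rng.normal(scale=0.5, size=(T, n)))    # idiosyncratic noise
F, Lam, common = estimate_static_factors(x, r=2)

With $r = 2$, the estimated common component approximates $\alpha_i u_t + \beta_i u_{t-1}$, even though the factors themselves are identified only up to an invertible rotation.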
Assumptions allowing consistent estimation of the factors and loadings are discussed in detail. Moreover, it is argued that in general the vector of the factors is singular; that is, it is driven by a number of shocks smaller than its dimension. This fact has very important consequences. In particular, singularity implies that the fundamentalness problem, which is hard to solve in structural vector autoregressive (VAR) analysis of macroeconomic aggregates, disappears when the latter are studied as part of a high-dimensional dynamic factor model.
Article
Unobserved Components Models
Joanne Ercolani
Unobserved components models (UCMs), sometimes referred to as structural time-series models, decompose a time series into its salient time-dependent features. These typically characterize the trending behavior, seasonal variation, and (nonseasonal) cyclical properties of the time series. The components are usually specified in a stochastic way so that they can evolve over time, for example, to capture changing seasonal patterns. Among many other features, the UCM framework can incorporate explanatory variables, allowing outliers and structural breaks to be captured, and can deal easily with daily or weekly effects and calendar issues like moving holidays.
UCMs are easily cast in state space form. This enables the application of Kalman filter algorithms, through which maximum likelihood estimates of the structural parameters are obtained, optimal predictions are made about the future state vector and the time series itself, and smoothed estimates of the unobserved components can be determined. The stylized facts of the series are then established, and the components can be illustrated graphically, so that one can, for example, visualize the cyclical patterns in the time series or look at how the seasonal patterns change over time. If required, these characteristics can be removed, so that the data can be detrended, seasonally adjusted, or have business cycles extracted, without the need for ad hoc filtering techniques. Overall, UCMs have an intuitive interpretation and yield results that are simple to understand and communicate to others. Factoring in its competitive forecasting ability, the UCM framework is hugely appealing as a modeling tool.
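A minimal sketch of this machinery for the simplest UCM, the local level model, is given below; the variance values, starting conditions, and function name are illustrative assumptions, and in practice the two variances would be estimated by maximizing the returned log-likelihood.

import numpy as np

def local_level_kalman_filter(y, sigma2_eps, sigma2_eta, a1=0.0, p1=1e7):
    # Local level model:
    #   y_t = mu_t + eps_t,        eps_t ~ N(0, sigma2_eps)
    #   mu_{t+1} = mu_t + eta_t,   eta_t ~ N(0, sigma2_eta)
    # Returns filtered estimates of mu_t and the Gaussian log-likelihood.
    n = len(y)
    a, p = a1, p1                    # (nearly) diffuse prior for the initial state
    filtered = np.empty(n)
    loglik = 0.0
    for t in range(n):
        v = y[t] - a                 # prediction error
        f = p + sigma2_eps           # prediction error variance
        k = p / f                    # Kalman gain
        a = a + k * v                # filtered state mean
        p = p * (1.0 - k)            # filtered state variance
        filtered[t] = a
        loglik += -0.5 * (np.log(2 * np.pi * f) + v ** 2 / f)
        p = p + sigma2_eta           # state prediction variance for t + 1
    return filtered, loglik

# Illustrative use on a simulated series.
rng = np.random.default_rng(1)
level = np.cumsum(rng.normal(scale=0.1, size=200))
obs = level + rng.normal(scale=0.5, size=200)
mu_hat, ll = local_level_kalman_filter(obs, sigma2_eps=0.25, sigma2_eta=0.01)

Smoothed estimates of the level would be obtained by a backward pass through the stored filtering quantities, and richer UCMs add trend, seasonal, and cycle components to the state vector.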