Marcus Berliant and Ping Wang
General equilibrium theories of spatial agglomeration are closed models of agent location that explain the formation and growth of cities. There are several types of such theories: conventional Arrow-Debreu competitive equilibrium models and monopolistic competition models, as well as game theoretic models including search and matching setups. Three types of spatial agglomeration forces often come into play: trade, production, and knowledge transmission, under which cities are formed in equilibrium as marketplaces, factory towns, and idea laboratories, respectively. Agglomeration dynamics are linked to urban growth in the long run.
Brant Abbott and Giovanni Gallipoli
This article focuses on the distribution of human capital and its implications for the accrual of economic resources to individuals and households. Human capital inequality can be thought of as measuring disparity in the ownership of labor factors of production, which are usually compensated in the form of wage income.
Earnings inequality is tightly related to human capital inequality. However, it only measures disparity in payments to labor rather than dispersion in the market value of the underlying stocks of human capital. Hence, measures of earnings dispersion provide a partial and incomplete view of the underlying distribution of productive skills and of the income generated by way of them.
Despite its shortcomings, a fairly common way to gauge the distributional implications of human capital inequality is to examine the distribution of labor income. While it is not always obvious what accounts for returns to human capital, an established approach in the empirical literature is to decompose measured earnings into permanent and transitory components.
A second approach focuses on the lifetime present value of earnings. Lifetime earnings are, by definition, an ex post measure only observable at the end of an individual’s working lifetime. One limitation of this approach is that it assigns a value based on one of the many possible realizations of human capital returns. Arguably, this ignores the option value associated with alternative, but unobserved, potential earning paths that may be valuable ex ante. Hence, ex post lifetime earnings reflect both the genuine value of human capital and the impact of the particular realization of unpredictable shocks (luck).
A different but related measure focuses on the ex ante value of expected lifetime earnings, which differs from ex post (realized) lifetime earnings insofar as they account for the value of yet-to-be-realized payoffs along different potential earning paths. Ex ante expectations reflect how much an individual reasonably anticipates earning over the rest of their life based on their current stock of human capital, averaging over possible realizations of luck and other income shifters that may arise. The discounted value of different potential paths of future earnings can be computed using risk-less or state-dependent discount factors.
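The ex ante notion can be made concrete with a toy calculation. The sketch below is purely illustrative (the function name, the numbers, and the use of a risk-less discount factor are assumptions, not taken from the literature under review): it averages the discounted value of earnings over several potential future paths.

```python
def expected_lifetime_earnings(paths, probs, discount=0.97):
    """Ex ante value of human capital returns: probability-weighted
    present value of earnings across potential future paths."""
    value = 0.0
    for path, p in zip(paths, probs):
        # Risk-less discounting; state-dependent factors could replace discount**t.
        pv = sum(y * discount**t for t, y in enumerate(path))
        value += p * pv
    return value

# Two equally likely paths: steady earnings vs. a bad shock in period 1.
ex_ante = expected_lifetime_earnings([[100.0, 100.0], [100.0, 0.0]], [0.5, 0.5])
```

Ex post lifetime earnings correspond to the present value of the single path that happened to be realized; the ex ante value averages over all of them, which is why the two measures diverge whenever luck matters.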
Punishment has been regarded as an important instrument to sustain human cooperation. A great deal of experimental research has been conducted to understand human punishment behavior, in particular, informal peer punishment. What drives individuals to incur cost to punish others? How does punishment influence human behavior?
Punishment behavior has been observed when the individual does not expect to meet the wrongdoers again in the future and thus has no monetary incentive to punish. Several reasons for such retributive punishment have been proposed and studied. Punishment can be used to express certain values, attitudes, or emotions. Egalitarianism triggers punishment when the transgression leads to inequality. The norm to punish the wrongdoers may also lead people to incur costs to punish even when it is not what they intrinsically want to do.
Individuals sometimes punish wrongdoers even when they are not the victim. The motivation underlying third-party punishment can differ from that behind second-party punishment. In addition, restricting punishment power to a third party can be important for mitigating antisocial punishment, as unrestricted second-party peer punishment can lead to antisocial punishment and escalating retaliation.
It is important to note that punishment does not always promote cooperation. Imposing fines can crowd out the intrinsic motivation to cooperate when it changes people’s perception of social interactions from a generous, non-market activity to a market commodity and leads to more selfish profit-maximizing behavior. To avoid this crowding-out effect, it is important to implement punishment in a way that sends a clear signal that the punished behavior violates social norms.
Jacob K. Goeree, Philippos Louis, and Jingjing Zhang
Majority voting is the predominant mechanism for collective decision making. It is used in a broad range of applications, spanning from national referenda to small group decision making. It is simple, transparent, and induces voters to vote sincerely. However, it is increasingly recognized that it has some weaknesses. First of all, majority voting may lead to inefficient outcomes. This happens because it does not allow voters to express the intensity of their preferences. As a result, an indifferent majority may win over an intense minority. In addition, majority voting suffers from the “tyranny of the majority,” i.e., the risk of repeatedly excluding minority groups from representation. A final drawback is the “winner-take-all” nature of majority voting, i.e., it offers no compensation for losing voters. Economists have recently proposed various alternative mechanisms that aim to produce more efficient and more equitable outcomes. These can be classified into three different approaches. With storable votes, voters allocate a budget of votes across several issues. Under vote trading, voters can exchange votes for money. Under linear voting or quadratic voting, voters can buy votes at a linear or quadratic cost respectively. The properties of different alternative mechanisms can be characterized using theoretical modeling and game theoretic analysis. Lab experiments are used to test theoretical predictions and evaluate their fitness for actual use in applications. Overall, these alternative mechanisms hold the promise to improve on majority voting but have their own shortcomings. Additional theoretical analysis and empirical testing is needed to produce a mechanism that robustly delivers efficient and equitable outcomes.
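The contrast between linear and quadratic vote pricing can be illustrated with a small calculation (illustrative code only, not an implementation of any specific mechanism): under quadratic pricing, the marginal cost of successive votes rises linearly, so buying many votes is worthwhile only for voters with intense preferences, whereas under linear pricing every additional vote costs the same.

```python
def total_cost(v, scheme):
    """Total cost of buying v votes under linear or quadratic pricing."""
    return v if scheme == "linear" else v**2

def marginal_cost(v, scheme="quadratic"):
    """Incremental cost of the v-th vote."""
    return total_cost(v, scheme) - total_cost(v - 1, scheme)

# Quadratic: marginal costs 1, 3, 5, ... ; linear: 1, 1, 1, ...
quadratic_mc = [marginal_cost(v) for v in (1, 2, 3)]
linear_mc = [marginal_cost(v, "linear") for v in (1, 2, 3)]
```

This rising marginal cost is what lets quadratic voting elicit preference intensity, addressing the "indifferent majority versus intense minority" problem noted above.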
Economists have long regarded healthcare as a unique and challenging area of economic activity on account of the specialized knowledge of healthcare professionals (HCPs) and the relatively weak market mechanisms that operate. This places a consideration of how motivation and incentives might influence performance at the center of research. As in other domains, economists have tended to focus on financial mechanisms and, when considering HCPs, have therefore examined how existing payment systems and potential alternatives might affect behavior. There has long been a concern that simple arrangements such as fee-for-service, capitation, and salary payments might induce poor performance, and that has led to extensive investigation, both theoretical and empirical, of the linkage between payment and performance. An extensive and rapidly expanding field in economics, contract theory and mechanism design, has been applied to study these issues. The theory has highlighted both the potential benefits and the risks of incentive schemes in dealing with the information asymmetries that abound in healthcare. There has been some expansion of such schemes in practice, but these are often limited in application and the evidence for their effectiveness is mixed. Understanding why there is this relatively large gap between concept and application gives a guide to where future research can most productively be focused.
Long memory models are statistical models that describe strong correlation or dependence across time series data. This kind of phenomenon is often referred to as “long memory” or “long-range dependence.” It refers to persisting correlation between distant observations in a time series. For scalar time series observed at equal intervals of time that are covariance stationary, so that the mean, variance, and autocovariances (between observations separated by a lag j) do not vary over time, it typically implies that the autocovariances decay so slowly, as j increases, as not to be absolutely summable. However, it can also refer to certain nonstationary time series, including ones with an autoregressive unit root, that exhibit even stronger correlation at long lags. Evidence of long memory has often been found in economic and financial time series, where the noted extension to possible nonstationarity can cover many macroeconomic time series, as well as in such fields as astronomy, agriculture, geophysics, and chemistry.
As long memory is now a technically well developed topic, formal definitions are needed. But by way of partial motivation, long memory models can be thought of as complementary to the very well known and widely applied stationary and invertible autoregressive and moving average (ARMA) models, whose autocovariances are not only summable but decay exponentially fast as a function of lag j. Such models are often referred to as “short memory” models, because there is negligible correlation across distant time intervals. These models are often combined with the most basic long memory ones, however, because together they offer the ability to describe both short and long memory features in many time series.
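The contrast between exponentially decaying (absolutely summable) and hyperbolically decaying (non-summable) autocorrelations can be seen numerically. The parameter values below are assumed purely for illustration: an AR(1) coefficient phi for the short memory case, and the hyperbolic rate j**(2d - 1) implied by a fractional model with memory parameter 0 < d < 1/2 for the long memory case.

```python
# Illustrative parameters (assumed, not estimated from any data).
phi, d, J = 0.5, 0.3, 10_000

short = [phi**j for j in range(1, J + 1)]             # exponential decay
long_mem = [j**(2 * d - 1) for j in range(1, J + 1)]  # hyperbolic decay

print(sum(short))     # partial sums converge (limit phi / (1 - phi) = 1.0)
print(sum(long_mem))  # partial sums keep growing with J: not absolutely summable
```

Increasing J leaves the first sum essentially unchanged while the second keeps growing, which is exactly the summability distinction drawn in the text.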
Noémi Kreif and Karla DiazOrdaz
While machine learning (ML) methods have received a lot of attention in recent years, these methods are primarily designed for prediction. Empirical researchers conducting policy evaluations are, on the other hand, preoccupied with causal problems, trying to answer counterfactual questions: what would have happened in the absence of a policy? Because these counterfactuals can never be directly observed (described as the “fundamental problem of causal inference”), prediction tools from the ML literature cannot be readily used for causal inference. In the last decade, major innovations have taken place incorporating supervised ML tools into estimators for causal parameters such as the average treatment effect (ATE). This holds the promise of attenuating model misspecification issues and increasing transparency in model selection. One particularly mature strand of the literature includes approaches that incorporate supervised ML approaches in the estimation of the ATE of a binary treatment, under the unconfoundedness and positivity assumptions (also known as exchangeability and overlap assumptions).
This article begins by reviewing popular supervised machine learning algorithms, including trees-based methods and the lasso, as well as ensembles, with a focus on the Super Learner. Then, some specific uses of machine learning for treatment effect estimation are introduced and illustrated, namely (1) to create balance among treated and control groups, (2) to estimate so-called nuisance models (e.g., the propensity score, or conditional expectations of the outcome) in semi-parametric estimators that target causal parameters (e.g., targeted maximum likelihood estimation or the double ML estimator), and (3) the use of machine learning for variable selection in situations with a high number of covariates.
Since there is no universal best estimator, whether parametric or data-adaptive, it is best practice to incorporate a semi-automated approach that can select the models best supported by the observed data, thus attenuating the reliance on subjective choices.
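As a stylized illustration of how ML-estimated nuisance models enter a semi-parametric estimator, the sketch below implements the augmented inverse-probability-weighted (doubly robust) form of the ATE estimator. The function name is illustrative; in practice the outcome predictions (mu1, mu0) and propensity scores (ps) would come from cross-fitted supervised ML fits, as in the double ML literature.

```python
def aipw_ate(y, t, mu1, mu0, ps):
    """Doubly robust (augmented inverse-probability-weighted) ATE estimate.

    y: observed outcomes; t: binary treatment indicators;
    mu1, mu0: predictions of the outcome under treatment / control;
    ps: estimated propensity scores. All are plain lists here."""
    terms = []
    for yi, ti, m1, m0, p in zip(y, t, mu1, mu0, ps):
        # Outcome-model contrast plus inverse-probability-weighted residual corrections.
        terms.append(m1 - m0
                     + ti * (yi - m1) / p
                     - (1 - ti) * (yi - m0) / (1 - p))
    return sum(terms) / len(terms)
```

The estimator is consistent if either the outcome models or the propensity score model is correctly specified, which is what makes it a natural host for data-adaptive nuisance estimation.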
Charles Ka Yui Leung and Cho Yiu Joe Ng
This article summarizes research on the macroeconomic aspects of the housing market. In terms of the macroeconomic stylized facts, this article demonstrates that with respect to business cycle frequency, there was a general decrease in the association between macroeconomic variables (MV), such as the real GDP and inflation rate, and housing market variables (HMV), such as the housing price and the vacancy rate, following the global financial crisis (GFC). However, there are macro-finance variables, such as different interest rate spreads, that exhibited a strong association with the HMV following the GFC. For the medium-term business cycle frequency, some but not all patterns prevail. These “new stylized facts” suggest that a reconsideration and refinement of existing “macro-housing” theories would be appropriate. This article also provides a review of the corresponding academic literature, which may enhance our understanding of the evolving macro-housing–finance linkage.
Simon van Norden
Most applied researchers in macroeconomics who work with official macroeconomic statistics (such as those found in the National Accounts, the Balance of Payments, national government budgets, labor force statistics, etc.) treat data as immutable rather than subject to measurement error and revision. Some of this error may be caused by disagreement or confusion about what should be measured. Some may be due to the practical challenges of producing timely, accurate, and precise estimates. The economic importance of measurement error may be accentuated by simple arithmetic transformations of the data, or by more complex but still common transformations to remove seasonal or other fluctuations. As a result, measurement error is seemingly omnipresent in macroeconomics.
Even the most widely used measures such as Gross Domestic Product (GDP) are acknowledged to be poor measures of aggregate welfare, as they omit leisure and non-market production activity and fail to consider intertemporal issues related to the sustainability of economic activity. But even modest attempts to improve GDP estimates can generate considerable controversy in practice. Common statistical approaches to allow for measurement errors, including most factor models, rely on assumptions that are at odds with common economic assumptions, which imply that measurement errors in published aggregate series should behave much like forecast errors. Fortunately, recent research has shown how multiple data releases may be combined in a flexible way to give improved estimates of the underlying quantities.
Increasingly, the challenge for macroeconomists is to recognize the impact that measurement error may have on their analysis and to condition their policy advice on a realistic assessment of the quality of their available information.
The majority of econometric models ignore the fact that many economic time series are sampled at different frequencies. A burgeoning literature pertains to econometric methods explicitly designed to handle data sampled at different frequencies. Broadly speaking these methods fall into two categories: (a) parameter driven, typically involving a state space representation, and (b) data driven, usually based on a mixed-data sampling (MIDAS)-type regression setting or related methods. The realm of applications of the class of mixed frequency models includes nowcasting—which is defined as the prediction of the present—as well as forecasting—typically the very near future—taking advantage of mixed frequency data structures. For multiple horizon forecasting, the topic of MIDAS regressions also relates to research regarding direct versus iterated forecasting.
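A common building block in MIDAS-type regressions is a parsimonious weighting of the high-frequency lags, for example via the exponential Almon lag polynomial. The sketch below uses illustrative parameter values (all numbers are assumptions for demonstration) to construct normalized weights and collapse several monthly observations into a single quarterly regressor.

```python
import math

def exp_almon_weights(k, theta1, theta2):
    """Exponential Almon lag polynomial: k positive weights summing to one,
    governed by just two parameters -- a parsimonious MIDAS weighting scheme."""
    raw = [math.exp(theta1 * j + theta2 * j**2) for j in range(1, k + 1)]
    total = sum(raw)
    return [w / total for w in raw]

# Collapse three monthly lags into one quarterly regressor (illustrative values).
weights = exp_almon_weights(3, 0.1, -0.05)
monthly_lags = [2.0, 1.0, 3.0]
midas_regressor = sum(w * x for w, x in zip(weights, monthly_lags))
```

Because the weights depend on only two parameters regardless of the number of lags, the low-frequency regression stays parsimonious even with many high-frequency observations per period.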
Pieter van Baal and Hendriek Boshuizen
In most countries, non-communicable diseases have overtaken infectious diseases as the most important causes of death. Many non-communicable diseases that were previously lethal have become chronic, and this has changed the healthcare landscape in terms of treatment and prevention options. Currently, a large part of healthcare spending is targeted at curing and caring for the elderly, who have multiple chronic diseases. In this context, prevention plays an important role, as there are many risk factors amenable to prevention policies that are related to multiple chronic diseases.
This article discusses the use of simulation modeling to better understand the relations between chronic diseases and their risk factors with the aim to inform health policy. Simulation modeling sheds light on important policy questions related to population aging and priority setting. The focus is on the modeling of multiple chronic diseases in the general population and how to consistently model the relations between chronic diseases and their risk factors by combining various data sources. Methodological issues in chronic disease modeling and how these relate to the availability of data are discussed. Here, a distinction is made between (a) issues related to the construction of the epidemiological simulation model and (b) issues related to linking outcomes of the epidemiological simulation model to economic relevant outcomes such as quality of life, healthcare spending and labor market participation. Based on this distinction, several simulation models are discussed that link risk factors to multiple chronic diseases in order to explore how these issues are handled in practice. Recommendations for future research are provided.
Karla DiazOrdaz and Richard Grieve
Health economic evaluations face the issues of noncompliance and missing data. Here, noncompliance is defined as non-adherence to a specific treatment, and occurs within randomized controlled trials (RCTs) when participants depart from their random assignment. Missing data arise if, for example, there is loss to follow-up, survey non-response, or the information available from routine data sources is incomplete. Appropriate statistical methods for handling noncompliance and missing data have been developed, but they have rarely been applied in health economics studies. Here, we illustrate the issues and outline some appropriate methods for handling them, with application to a health economic evaluation that uses data from an RCT.
In an RCT, the random assignment can be used as an instrument for treatment receipt, to obtain consistent estimates of the complier average causal effect, provided the underlying assumptions are met. Instrumental variable methods can accommodate essential features of the health economic context, such as the correlation between individuals’ costs and outcomes in cost-effectiveness studies. Methodological guidance for handling missing data encourages approaches such as multiple imputation or inverse probability weighting, which assume the data are Missing At Random, but also sensitivity analyses that recognize the data may be missing according to the true, unobserved values, that is, Missing Not At Random.
Future studies should subject the assumptions behind methods for handling noncompliance and missing data to thorough sensitivity analyses. Modern machine-learning methods can help reduce reliance on correct model specification. Further research is required to develop flexible methods for handling more complex forms of noncompliance and missing data.
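In the simplest case of a binary instrument with one-sided noncompliance, the complier average causal effect reduces to the Wald estimator: the intention-to-treat effect on the outcome divided by the intention-to-treat effect on treatment receipt. The sketch below uses hypothetical data and ignores the cost-outcome correlation discussed above.

```python
def wald_cace(y, z, d):
    """Complier average causal effect via the Wald estimator:
    ITT effect on the outcome divided by ITT effect on treatment receipt.
    y: outcomes; z: random assignment (instrument); d: treatment received."""
    n1 = sum(z)
    n0 = len(z) - n1
    itt_y = (sum(yi for yi, zi in zip(y, z) if zi) / n1
             - sum(yi for yi, zi in zip(y, z) if not zi) / n0)
    itt_d = (sum(di for di, zi in zip(d, z) if zi) / n1
             - sum(di for di, zi in zip(d, z) if not zi) / n0)
    return itt_y / itt_d
```

Because only a fraction of those assigned to treatment actually receive it, the ITT effect on the outcome is scaled up by the compliance rate, yielding the effect among compliers.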
Many nonlinear time series models have been around for a long time and originated outside of time series econometrics. The popular stochastic models (univariate, dynamic single-equation, and vector autoregressive) are presented and their properties considered. Deterministic nonlinear models are not reviewed. The use of nonlinear vector autoregressive models in macroeconometrics seems to be increasing, and because this may be viewed as a rather recent development, they receive somewhat more attention than their univariate counterparts. Vector threshold autoregressive, smooth transition autoregressive, Markov-switching, and random coefficient autoregressive models are covered, along with nonlinear generalizations of vector autoregressive models with cointegrated variables. Two nonlinear panel models are also included; although not typically macroeconometric models, they have frequently been applied to macroeconomic data. The use of all these models in macroeconomics is highlighted with applications in which model selection, an often difficult issue in nonlinear models, has received due attention. Given the large number of nonlinear time series models, no single best method of choosing between them seems to be available.
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Economics and Finance.
Detection of outliers is an important explorative step in empirical analysis. Once detected, the investigator will have to decide how to model the outliers depending on the context. Indeed, the outliers may represent noisy observations that are best left out of the analysis or they may be very informative observations that would have a particularly important role in the analysis. For regression analysis in time series a number of outlier algorithms are available, including impulse indicator saturation and methods from robust statistics. The algorithms are complex and their statistical properties are not fully understood. Extensive simulation studies have been made, but the formal theory is lacking. Some progress has been made toward an asymptotic theory of the algorithms. A number of asymptotic results are already available building on empirical process theory.
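The full algorithms (such as impulse indicator saturation) are too involved for a short sketch, but the flavor of robust detection can be conveyed with a much-simplified rule: flag observations that lie far from the median in robust (MAD) scale units. The function and the cutoff below are illustrative stand-ins, not the algorithms discussed above.

```python
def flag_outliers(x, c=3.0):
    """Flag observations more than c robust standard deviations from the median.
    Uses the median absolute deviation (MAD) as a robust scale estimate."""
    xs = sorted(x)
    n = len(xs)
    med = xs[n // 2] if n % 2 else 0.5 * (xs[n // 2 - 1] + xs[n // 2])
    abs_dev = sorted(abs(v - med) for v in x)
    mad = abs_dev[n // 2] if n % 2 else 0.5 * (abs_dev[n // 2 - 1] + abs_dev[n // 2])
    scale = 1.4826 * mad  # consistency factor for normally distributed data
    return [abs(v - med) > c * scale for v in x]
```

Because the median and MAD are themselves insensitive to extreme values, a single large outlier does not inflate the threshold used to detect it, unlike mean-and-variance-based rules.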
Jesús Gonzalo and Jean-Yves Pitarakis
Predictive regressions are a widely used econometric environment for assessing the predictability of economic and financial variables using past values of one or more predictors. The applications considered by practitioners often involve predictors with highly persistent, smoothly varying dynamics, in contrast to the much noisier nature of the variable being predicted. This imbalance tends to affect the accuracy of the estimates of the model parameters and the validity of inferences about them when one uses standard methods that do not explicitly recognize this and related complications. A growing literature has ensued, introducing novel techniques specifically designed to produce accurate inferences in such environments. The frequent use of these predictive regressions in applied work has also led practitioners to question the validity of viewing predictability within a linear setting that ignores the possibility that predictability may occasionally be switched off. This in turn has generated a new stream of research aiming to introduce regime-specific behavior within predictive regressions in order to explicitly capture phenomena such as episodic predictability.
James Lake and Pravin Krishna
In recent decades, there has been a dramatic proliferation of preferential trade agreements (PTAs) between countries that, while legal, contradict the non-discrimination principle of the world trade system. This raises various issues, both theoretical and empirical, regarding the evolution of trade policy within the world trade system and the welfare implications for PTA members and non-members. The survey starts with the Kemp-Wan-Ohyama and Panagariya-Krishna analyses in the literature that theoretically show PTAs can always be constructed so that they (weakly) increase the welfare of members and non-members. Considerable attention is then devoted to recent developments on the interaction between PTAs and multilateral trade liberalization, focusing on two key incentives: an “exclusion incentive” of PTA members and a “free riding incentive” of PTA non-members. While the baseline presumption one should have in mind is that these incentives lead PTAs to inhibit the ultimate degree of global trade liberalization, this presumption can be overturned when dynamic considerations are taken into account or when countries can negotiate the degree of multilateral liberalization rather than facing a binary choice over global free trade. Promising areas for pushing this theoretical literature forward include the growing use of quantitative trade models, incorporating rules of origin and global value chains, modeling the issues surrounding “mega-regional” agreements, and modeling the possibility of exit from PTAs. Empirical evidence in the literature is mixed regarding whether PTAs lead to trade diversion or trade creation, whether PTAs have significant adverse effects on non-member terms-of-trade, whether PTAs lead members to lower external tariffs on non-members, and the role of PTAs in facilitating deep integration among members.
Matteo Lippi Bruni, Irene Mammi, and Rossella Verzulli
In developed countries, the role of public authorities as financing bodies and regulators of the long-term care sector is pervasive and calls for well-planned and informed policy actions. Poor quality in nursing homes has been a recurrent concern at least since the 1980s and has triggered a heated policy and scholarly debate. The economic literature on nursing home quality has thoroughly investigated the impact of regulatory interventions and of market characteristics on an array of input-, process-, and outcome-based quality measures. Most existing studies refer to the U.S. context, even though important insights can be drawn also from the smaller set of works that covers European countries.
The major contribution of health economics to the empirical analysis of the nursing home industry is the introduction of important methodological advances, applying rigorous policy evaluation techniques with the purpose of properly identifying the causal effects of interest. In addition, the increased availability of rich datasets covering either process or outcome measures has made it possible to investigate changes in nursing home quality while properly accounting for its multidimensional features.
The use of up-to-date econometric methods that, in most cases, exploit policy shocks and longitudinal data has enabled researchers to achieve causal identification and an accurate quantification of the impact of a wide range of policy initiatives, including the introduction of nurse staffing thresholds, price regulation, and public reporting of quality indicators. This has helped to counteract part of the contradictory evidence highlighted by the strand of work based on more descriptive evidence. Possible lines for future research include further exploration of the consequences of policy interventions in terms of equity and accessibility to nursing home care.
Iñigo Hernandez-Arenaz and Nagore Iriberri
Gender differences, both in entering negotiations and when negotiating, have been shown to exist: men are usually more likely than women to enter into negotiation, and when negotiating they obtain better deals than women do. These gender differences help to explain the gender gap in wages, as starting salaries and wage increases or promotions throughout an individual’s career are often the result of bilateral negotiations.
This article presents an overview of the literature on gender differences in negotiation. The article is organized in five main parts. The first section reviews the findings with respect to gender differences in the likelihood of engaging in a negotiation, that is, in deciding to start a negotiation. The second section discusses research on gender differences during negotiations, that is, while bargaining. The third section looks at the relevant psychological literature and discusses meta-analyses, looking for factors that trigger or moderate gender differences in negotiation, such as structural ambiguity and cultural traits. The fourth section presents a brief overview of research on gender differences in non-cognitive traits, such as risk and social preferences, confidence, and taste for competition, and their impact in explaining gender differences in bargaining. Finally, the fifth section discusses some policy implications.
An understanding of when gender differences are likely to arise on entering into negotiations and when negotiating will enable policies to be created that can mitigate current gender differences in negotiations. This is an active, promising research line.
Ana Balsa and Carlos Díaz
Health behaviors are a major source of morbidity and mortality in the developed and much of the developing world. The social nature of many of these behaviors, such as eating or using alcohol, and the normative connotations that accompany others (i.e., sexual behavior, illegal drug use) make them quite susceptible to peer influence. This article assesses the role of social interactions in the determination of health behaviors. It highlights the methodological progress of the past two decades in addressing the multiple challenges inherent in the estimation of peer effects, and notes methodological issues that still need to be confronted. A comprehensive review of the economics empirical literature—mostly for developed countries—shows strong and robust peer effects across a wide set of health behaviors, including alcohol use, body weight, food intake, body fitness, teen pregnancy, and sexual behaviors. The evidence is mixed when assessing tobacco use, illicit drug use, and mental health. The article also explores the as yet incipient literature on the mechanisms behind peer influence and on new developments in the study of social networks that are shedding light on the dynamics of social influence. There is suggestive evidence that social norms and social conformism lie behind peer effects in substance use, obesity, and teen pregnancy, while social learning has been pointed out as a channel behind fertility decisions, mental health utilization, and uptake of medication. Future research needs to deepen the understanding of the mechanisms behind peer influence in health behaviors in order to design more targeted welfare-enhancing policies.
Anna Dreber and Magnus Johannesson
The recent “replication crisis” in the social sciences has led to increased attention on what statistically significant results entail. There are many reasons why false positive results may be published in the scientific literature, such as low statistical power and “researcher degrees of freedom” in the analysis (where researchers, when testing a hypothesis, more or less actively seek results with p < .05). The results from three large replication projects in psychology, experimental economics, and the social sciences are discussed, with most of the focus on the last project, where the statistical power in the replications was substantially higher than in the other projects. The results suggest that a substantial share of published results in top journals do not replicate. While several replication indicators have been proposed, the main indicator of whether a result replicates is whether the replication study, using the same statistical test, finds a statistically significant effect (p < .05 in a two-sided test). For the project with very high statistical power, the various replication indicators agree to a larger extent than for the other replication projects, most likely because of the higher statistical power. While the replications discussed are mainly experiments, there is no reason to believe that replicability would be higher in other parts of economics and finance; if anything, the opposite, owing to more researcher degrees of freedom. There is also a discussion of solutions to the often-observed low replicability, including lowering the p value threshold to .005 for statistical significance and increasing the use of preanalysis plans and registered reports for new studies as well as replications, followed by a discussion of measures of peer beliefs.
Recent attempts to understand to what extent the academic community is aware of the limited reproducibility and can predict replication outcomes using prediction markets and surveys suggest that peer beliefs may be viewed as an additional reproducibility indicator.