21-40 of 46 Results for: Econometrics, Experimental and Quantitative Methods

Article

General equilibrium theories of spatial agglomeration are closed models of agent location that explain the formation and growth of cities. There are several types of such theories: conventional Arrow-Debreu competitive equilibrium models and monopolistic competition models, as well as game theoretic models including search and matching setups. Three types of spatial agglomeration forces often come into play: trade, production, and knowledge transmission, under which cities are formed in equilibrium as marketplaces, factory towns, and idea laboratories, respectively. Agglomeration dynamics are linked to urban growth in the long run.

Article

Brant Abbott and Giovanni Gallipoli

This article focuses on the distribution of human capital and its implications for the accrual of economic resources to individuals and households. Human capital inequality can be thought of as measuring disparity in the ownership of labor factors of production, which are usually compensated in the form of wage income. Earnings inequality is tightly related to human capital inequality. However, it only measures disparity in payments to labor rather than dispersion in the market value of the underlying stocks of human capital. Hence, measures of earnings dispersion provide a partial and incomplete view of the underlying distribution of productive skills and of the income generated from them. Despite its shortcomings, a fairly common way to gauge the distributional implications of human capital inequality is to examine the distribution of labor income. While it is not always obvious what accounts for returns to human capital, an established approach in the empirical literature is to decompose measured earnings into permanent and transitory components. A second approach focuses on the lifetime present value of earnings. Lifetime earnings are, by definition, an ex post measure, observable only at the end of an individual’s working lifetime. One limitation of this approach is that it assigns a value based on one of the many possible realizations of human capital returns. Arguably, this ignores the option value associated with alternative, but unobserved, potential earning paths that may be valuable ex ante. Hence, ex post lifetime earnings reflect both the genuine value of human capital and the impact of the particular realization of unpredictable shocks (luck). A different but related measure focuses on the ex ante value of expected lifetime earnings, which differs from ex post (realized) lifetime earnings insofar as it accounts for the value of yet-to-be-realized payoffs along different potential earning paths. Ex ante expectations reflect how much an individual reasonably anticipates earning over the rest of their life based on their current stock of human capital, averaging over possible realizations of luck and other income shifters that may arise. The discounted value of different potential paths of future earnings can be computed using riskless or state-dependent discount factors.
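The two approaches mentioned above are often written, in illustrative notation (not taken from the article itself), as a permanent-transitory decomposition of log earnings and as an ex ante expected discounted value of future earnings:

```latex
% Illustrative notation only; the symbols below are assumptions, not the authors'.
% (1) Permanent-transitory decomposition of (log) earnings:
\[
  y_{it} = x_{it}'\beta + p_{it} + \tau_{it}, \qquad p_{it} = p_{i,t-1} + \eta_{it},
\]
% with p_{it} a permanent (random-walk) component and \tau_{it} a transitory shock.
% (2) Ex ante value of lifetime earnings, conditional on information at age t and
%     discounted at a riskless or state-dependent rate r:
\[
  V_{it} = \mathbb{E}_t \sum_{s=0}^{T-t} \Bigg( \prod_{k=1}^{s} \frac{1}{1+r_{t+k}} \Bigg)\, y_{i,t+s}.
\]
```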

Article

Punishment has been regarded as an important instrument for sustaining human cooperation. A great deal of experimental research has been conducted to understand human punishment behavior, in particular informal peer punishment. What drives individuals to incur costs to punish others? How does punishment influence human behavior? Punishment behavior has been observed even when the individual does not expect to meet the wrongdoers again in the future and thus has no monetary incentive to punish. Several reasons for such retributive punishment have been proposed and studied. Punishment can be used to express certain values, attitudes, or emotions. Egalitarianism triggers punishment when the transgression leads to inequality. A norm of punishing wrongdoers may also lead people to incur costs to punish even when it is not what they intrinsically want to do. Individuals sometimes punish wrongdoers even when they are not the victim. The motivation underlying third-party punishment can differ from that underlying second-party punishment. In addition, restricting punishment power to a third party can be important for mitigating antisocial punishment when unrestricted second-party peer punishment leads to antisocial sanctions and escalating retaliation. It is important to note that punishment does not always promote cooperation. Imposing fines can crowd out the intrinsic motivation to cooperate when it changes people’s perception of social interactions from a generous, non-market activity to a market commodity and leads to more selfish profit-maximizing behavior. To avoid this crowding-out effect, it is important to implement punishment in a way that sends a clear signal that the punished behavior violates social norms.

Article

Jacob K. Goeree, Philippos Louis, and Jingjing Zhang

Majority voting is the predominant mechanism for collective decision making. It is used in a broad range of applications, spanning from national referenda to small group decision making. It is simple, transparent, and induces voters to vote sincerely. However, it is increasingly recognized that it has some weaknesses. First of all, majority voting may lead to inefficient outcomes. This happens because it does not allow voters to express the intensity of their preferences. As a result, an indifferent majority may win over an intense minority. In addition, majority voting suffers from the “tyranny of the majority,” i.e., the risk of repeatedly excluding minority groups from representation. A final drawback is the “winner-take-all” nature of majority voting, i.e., it offers no compensation for losing voters. Economists have recently proposed various alternative mechanisms that aim to produce more efficient and more equitable outcomes. These can be classified into three different approaches. With storable votes, voters allocate a budget of votes across several issues. Under vote trading, voters can exchange votes for money. Under linear voting or quadratic voting, voters can buy votes at a linear or quadratic cost, respectively. The properties of different alternative mechanisms can be characterized using theoretical modeling and game-theoretic analysis. Lab experiments are used to test theoretical predictions and evaluate their fitness for actual use in applications. Overall, these alternative mechanisms hold the promise of improving on majority voting but have their own shortcomings. Additional theoretical analysis and empirical testing are needed to produce a mechanism that robustly delivers efficient and equitable outcomes.
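As a stylized sketch of why a quadratic cost elicits preference intensity (textbook-style reasoning, not the authors' model; the symbols θ, κ, and the vote cap are assumed notation):

```latex
% A voter who values each vote at \theta and buys v votes at cost c(v), with at
% most \bar v votes allowed, chooses v to maximize \theta v - c(v).
\[
  c(v) = \kappa v \;\Rightarrow\; v^{*} \in \{0, \bar v\} \text{ (a corner solution)},
  \qquad
  c(v) = \kappa v^{2} \;\Rightarrow\; v^{*} = \frac{\theta}{2\kappa},
\]
% so under quadratic voting the number of votes purchased is proportional to the
% intensity of preference, which underlies its efficiency appeal.
```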

Article

Economists have long regarded healthcare as a unique and challenging area of economic activity on account of the specialized knowledge of healthcare professionals (HCPs) and the relatively weak market mechanisms that operate. This places a consideration of how motivation and incentives might influence performance at the center of research. As in other domains, economists have tended to focus on financial mechanisms and, when considering HCPs, have therefore examined how existing payment systems and potential alternatives might affect behavior. There has long been a concern that simple arrangements such as fee-for-service, capitation, and salary payments might induce poor performance, and that has led to extensive investigation, both theoretical and empirical, into the linkage between payment and performance. An extensive and rapidly expanding field of economics, contract theory and mechanism design, has been applied to study these issues. The theory has highlighted both the potential benefits and the risks of incentive schemes for dealing with the information asymmetries that abound in healthcare. There has been some expansion of such schemes in practice, but these are often limited in application and the evidence for their effectiveness is mixed. Understanding why there is this relatively large gap between concept and application gives a guide to where future research can most productively be focused.

Article

Peter Robinson

Long memory models are statistical models that describe strong correlation or dependence across time series data. This kind of phenomenon is often referred to as “long memory” or “long-range dependence” and refers to persistent correlation between distant observations in a time series. For scalar time series observed at equal intervals of time that are covariance stationary, so that the mean, variance, and autocovariances (between observations separated by a lag j) do not vary over time, it typically implies that the autocovariances decay so slowly, as j increases, as not to be absolutely summable. However, it can also refer to certain nonstationary time series, including ones with an autoregressive unit root, that exhibit even stronger correlation at long lags. Evidence of long memory has often been found in economic and financial time series, where the noted extension to possible nonstationarity can cover many macroeconomic time series, as well as in such fields as astronomy, agriculture, geophysics, and chemistry. As long memory is now a technically well-developed topic, formal definitions are needed. But by way of partial motivation, long memory models can be thought of as complementary to the very well known and widely applied stationary and invertible autoregressive and moving average (ARMA) models, whose autocovariances are not only summable but decay exponentially fast as a function of lag j. Such models are often referred to as “short memory” models, because there is negligible correlation across distant time intervals. Short memory models are often combined with the most basic long memory ones, however, because together they offer the ability to describe both short and long memory features in many time series.
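In the notation suggested by the description above (an illustrative sketch rather than the article's own definitions), the contrast can be written as:

```latex
% Long memory: hyperbolically decaying, non-summable autocovariances,
\[
  \gamma(j) \sim C\, j^{\,2d-1} \ \text{as } j \to \infty, \quad 0 < d < \tfrac{1}{2},
  \qquad \sum_{j=0}^{\infty} |\gamma(j)| = \infty .
\]
% Short memory (stationary, invertible ARMA): exponentially decaying autocovariances,
\[
  |\gamma(j)| \le C\, r^{\,j}, \quad 0 < r < 1 .
\]
% A standard model combining the two features is the ARFIMA(p,d,q) process,
\[
  \phi(L)\,(1-L)^{d}\, y_t = \theta(L)\, \varepsilon_t .
\]
```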

Article

Dimitris Korobilis and Davide Pettenuzzo

Bayesian inference in economics is primarily perceived as a methodology for cases where the data are short, that is, not informative enough to obtain reliable econometric estimates of the quantities of interest. In these cases, prior beliefs, such as the experience of the decision-maker or results from economic theory, can be explicitly incorporated into the econometric estimation problem and enhance the resulting solution. In contrast, in fields such as computing science and signal processing, Bayesian inference and computation have long been used for tackling challenges associated with ultra-high-dimensional data. Such fields have developed several novel Bayesian algorithms that have gradually been established in mainstream statistics, and they now have a prominent position in machine learning applications in numerous disciplines. While traditional Bayesian algorithms are powerful enough to allow for estimation of very complex problems (for instance, nonlinear dynamic stochastic general equilibrium models), they are not able to cope computationally with the demands of rapidly growing economic data sets. Bayesian machine learning algorithms are able to provide rigorous and computationally feasible solutions to various high-dimensional econometric problems, thus supporting modern decision-making in a timely manner.
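A minimal sketch of the basic idea, assuming a Gaussian shrinkage prior and illustrative data (the function name, parameters, and numbers are hypothetical, not from the article):

```python
# Bayesian linear regression with prior beta ~ N(0, tau2 * I) and noise variance
# sigma2: the posterior is available in closed form and remains well defined even
# when the number of predictors exceeds the number of observations.
import numpy as np

def bayes_ridge_posterior(X, y, sigma2=1.0, tau2=1.0):
    p = X.shape[1]
    precision = X.T @ X / sigma2 + np.eye(p) / tau2   # posterior precision matrix
    cov = np.linalg.inv(precision)                    # posterior covariance
    mean = cov @ X.T @ y / sigma2                     # posterior mean, shrunk toward zero
    return mean, cov

# Illustrative use with more predictors than observations (p > n)
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))
beta_true = np.zeros(200)
beta_true[:5] = 2.0
y = X @ beta_true + rng.standard_normal(50)
post_mean, _ = bayes_ridge_posterior(X, y, sigma2=1.0, tau2=0.5)
print(post_mean[:5].round(2))  # rough recovery of the few non-zero coefficients
```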

Article

While machine learning (ML) methods have received a lot of attention in recent years, these methods are primarily designed for prediction. Empirical researchers conducting policy evaluations are, on the other hand, preoccupied with causal problems, trying to answer counterfactual questions: what would have happened in the absence of a policy? Because these counterfactuals can never be directly observed (described as the “fundamental problem of causal inference”), prediction tools from the ML literature cannot be readily used for causal inference. In the last decade, major innovations have taken place incorporating supervised ML tools into estimators for causal parameters such as the average treatment effect (ATE). This holds the promise of attenuating model misspecification issues and increasing transparency in model selection. One particularly mature strand of the literature includes approaches that incorporate supervised ML in the estimation of the ATE of a binary treatment, under the unconfoundedness and positivity assumptions (also known as the exchangeability and overlap assumptions). This article begins by reviewing popular supervised machine learning algorithms, including tree-based methods and the lasso, as well as ensembles, with a focus on the Super Learner. Then, some specific uses of machine learning for treatment effect estimation are introduced and illustrated, namely (1) to create balance among treated and control groups, (2) to estimate so-called nuisance models (e.g., the propensity score, or conditional expectations of the outcome) in semi-parametric estimators that target causal parameters (e.g., targeted maximum likelihood estimation or the double ML estimator), and (3) to select variables in situations with a high number of covariates. Since there is no universal best estimator, whether parametric or data-adaptive, it is best practice to adopt a semi-automated approach that can select the models best supported by the observed data, thus attenuating the reliance on subjective choices.
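A rough sketch of use (2), a cross-fitted augmented inverse-probability-weighted (AIPW) estimator of the ATE under unconfoundedness and positivity, with off-the-shelf scikit-learn learners as placeholder nuisance models (the function name and learner choices are illustrative, not those of the article):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

def aipw_ate(X, d, y, n_splits=5, seed=0):
    """Cross-fitted AIPW estimate of the ATE of a binary treatment d on outcome y."""
    X, d, y = map(np.asarray, (X, d, y))
    n = len(y)
    psi = np.zeros(n)  # per-observation influence-function contributions
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        # Nuisance models are fit on the training folds only (cross-fitting).
        ps = RandomForestClassifier(random_state=seed).fit(X[train], d[train])
        m1 = RandomForestRegressor(random_state=seed).fit(X[train][d[train] == 1], y[train][d[train] == 1])
        m0 = RandomForestRegressor(random_state=seed).fit(X[train][d[train] == 0], y[train][d[train] == 0])
        e = np.clip(ps.predict_proba(X[test])[:, 1], 0.01, 0.99)  # propensity score
        mu1, mu0 = m1.predict(X[test]), m0.predict(X[test])       # outcome regressions
        dt, yt = d[test], y[test]
        psi[test] = (mu1 - mu0
                     + dt * (yt - mu1) / e
                     - (1 - dt) * (yt - mu0) / (1 - e))
    return psi.mean(), psi.std(ddof=1) / np.sqrt(n)  # ATE estimate and standard error
```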

Article

Charles Ka Yui Leung and Cho Yiu Joe Ng

This article summarizes research on the macroeconomic aspects of the housing market. In terms of the macroeconomic stylized facts, this article demonstrates that with respect to business cycle frequency, there was a general decrease in the association between macroeconomic variables (MV), such as the real GDP and inflation rate, and housing market variables (HMV), such as the housing price and the vacancy rate, following the global financial crisis (GFC). However, there are macro-finance variables, such as different interest rate spreads, that exhibited a strong association with the HMV following the GFC. For the medium-term business cycle frequency, some but not all patterns prevail. These “new stylized facts” suggest that a reconsideration and refinement of existing “macro-housing” theories would be appropriate. This article also provides a review of the corresponding academic literature, which may enhance our understanding of the evolving macro-housing–finance linkage.

Article

Most applied researchers in macroeconomics who work with official macroeconomic statistics (such as those found in the National Accounts, the Balance of Payments, national government budgets, labor force statistics, etc.) treat the data as immutable rather than subject to measurement error and revision. Some of this error may be caused by disagreement or confusion about what should be measured. Some may be due to the practical challenges of producing timely, accurate, and precise estimates. The economic importance of measurement error may be accentuated by simple arithmetic transformations of the data, or by more complex but still common transformations to remove seasonal or other fluctuations. As a result, measurement error is seemingly omnipresent in macroeconomics. Even the most widely used measures, such as Gross Domestic Product (GDP), are acknowledged to be poor measures of aggregate welfare, as they omit leisure and non-market production activity and fail to consider intertemporal issues related to the sustainability of economic activity. But even modest attempts to improve GDP estimates can generate considerable controversy in practice. Common statistical approaches to allow for measurement errors, including most factor models, rely on assumptions that are at odds with common economic assumptions, which imply that measurement errors in published aggregate series should behave much like forecast errors. Fortunately, recent research has shown how multiple data releases may be combined in a flexible way to give improved estimates of the underlying quantities. Increasingly, the challenge for macroeconomists is to recognize the impact that measurement error may have on their analysis and to condition their policy advice on a realistic assessment of the quality of their available information.
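The forecast-error view of measurement error mentioned above is often contrasted with the classical "noise" view; in assumed notation (P_t for the published estimate, A_t for the true aggregate, neither taken from the article), the distinction is:

```latex
% Classical (noise) view: the error is orthogonal to the true value,
\[
  P_t = A_t + \varepsilon_t, \qquad \operatorname{Cov}(A_t, \varepsilon_t) = 0 .
\]
% News view: the published figure is an efficient estimate given the statistical
% agency's information set \Omega_t, so the error is orthogonal to the estimate,
\[
  P_t = \mathbb{E}[A_t \mid \Omega_t] \;\Longrightarrow\; A_t = P_t + \varepsilon_t,
  \qquad \operatorname{Cov}(P_t, \varepsilon_t) = 0 ,
\]
% which is the sense in which measurement errors behave like forecast errors.
```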

Article

Despite the aggregate value of M&A market transactions amounting to several trillion dollars annually, acquiring firms often underperform relative to non-acquiring firms, especially in public takeovers. Although hundreds of academic studies have investigated the deal- and firm-level factors associated with M&A announcement returns, many factors that increase M&A performance in the short run fail to translate into sustained long-run returns. In order to understand value creation in M&As, it is key to identify the firm and deal characteristics that can reliably predict long-run performance. Broadly speaking, long-run underperformance in M&A deals results from poor acquirer governance (reflected by CEO overconfidence and a lack of (institutional) shareholder monitoring) as well as from poor merger execution and integration (as captured by the degree of acquirer-target relatedness in the post-merger integration process). Although many more dimensions affect immediate deal transaction success, their effect on long-run performance is non-existent, or mixed at best.

Article

Eric Ghysels

The majority of econometric models ignore the fact that many economic time series are sampled at different frequencies. A burgeoning literature pertains to econometric methods explicitly designed to handle data sampled at different frequencies. Broadly speaking these methods fall into two categories: (a) parameter driven, typically involving a state space representation, and (b) data driven, usually based on a mixed-data sampling (MIDAS)-type regression setting or related methods. The realm of applications of the class of mixed frequency models includes nowcasting—which is defined as the prediction of the present—as well as forecasting—typically the very near future—taking advantage of mixed frequency data structures. For multiple horizon forecasting, the topic of MIDAS regressions also relates to research regarding direct versus iterated forecasting.
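A basic MIDAS regression can be written as follows (illustrative notation, assuming exponential Almon weights, rather than a formula reproduced from the article):

```latex
% y_t is observed at the low frequency; x is observed m times per low-frequency
% period; w_j(\theta) is a parsimonious weighting function over J high-frequency lags.
\[
  y_t = \beta_0 + \beta_1 \sum_{j=0}^{J} w_j(\theta)\, x^{(m)}_{t-j/m} + \varepsilon_t,
  \qquad
  w_j(\theta) = \frac{\exp(\theta_1 j + \theta_2 j^{2})}{\sum_{k=0}^{J} \exp(\theta_1 k + \theta_2 k^{2})},
\]
% so that many high-frequency lags enter the regression through only a few parameters.
```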

Article

Pieter van Baal and Hendriek Boshuizen

In most countries, non-communicable diseases have overtaken infectious diseases as the most important causes of death. Many non-communicable diseases that were previously lethal have become chronic, and this has changed the healthcare landscape in terms of treatment and prevention options. Currently, a large part of healthcare spending is targeted at curing and caring for the elderly, who have multiple chronic diseases. In this context prevention plays an important role, as there are many risk factors amenable to prevention policies that are related to multiple chronic diseases. This article discusses the use of simulation modeling to better understand the relations between chronic diseases and their risk factors, with the aim of informing health policy. Simulation modeling sheds light on important policy questions related to population aging and priority setting. The focus is on the modeling of multiple chronic diseases in the general population and how to consistently model the relations between chronic diseases and their risk factors by combining various data sources. Methodological issues in chronic disease modeling and how these relate to the availability of data are discussed. Here, a distinction is made between (a) issues related to the construction of the epidemiological simulation model and (b) issues related to linking outcomes of the epidemiological simulation model to economically relevant outcomes such as quality of life, healthcare spending, and labor market participation. Based on this distinction, several simulation models that link risk factors to multiple chronic diseases are discussed in order to explore how these issues are handled in practice. Recommendations for future research are provided.
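By way of illustration only, a deliberately simplified discrete-time microsimulation in which one risk factor shifts the incidence of two chronic diseases, which in turn raise mortality (all prevalences, incidences, and relative risks below are made-up placeholders, not estimates from the models discussed):

```python
import numpy as np

rng = np.random.default_rng(1)
N, YEARS = 10_000, 30
smoker = rng.random(N) < 0.25            # assumed risk-factor prevalence
disease = np.zeros((N, 2), dtype=bool)   # columns: two chronic diseases
alive = np.ones(N, dtype=bool)

P_INC = np.array([0.005, 0.004])         # assumed baseline annual incidence per disease
RR_SMOKE = np.array([1.5, 2.0])          # assumed relative risks for smokers
P_DIE_BASE, P_DIE_PER_DISEASE = 0.002, 0.010  # assumed annual mortality parameters

for _ in range(YEARS):
    inc = P_INC * np.where(smoker[:, None], RR_SMOKE, 1.0)   # risk-factor-specific incidence
    disease |= (rng.random((N, 2)) < inc) & alive[:, None]   # new cases among the living
    p_die = P_DIE_BASE + P_DIE_PER_DISEASE * disease.sum(axis=1)
    alive &= rng.random(N) >= p_die                          # mortality rises with comorbidity

print("alive:", alive.sum(), "| alive with >=1 disease:", (disease.any(axis=1) & alive).sum())
```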

Article

Karla DiazOrdaz and Richard Grieve

Health economic evaluations face the issues of noncompliance and missing data. Here, noncompliance is defined as non-adherence to a specific treatment, and occurs within randomized controlled trials (RCTs) when participants depart from their random assignment. Missing data arise if, for example, there is loss to follow-up, survey non-response, or the information available from routine data sources is incomplete. Appropriate statistical methods for handling noncompliance and missing data have been developed, but they have rarely been applied in health economics studies. Here, we illustrate the issues and outline some of the appropriate methods for handling them, with application to a health economic evaluation that uses data from an RCT. In an RCT the random assignment can be used as an instrument for treatment receipt, to obtain consistent estimates of the complier average causal effect, provided the underlying assumptions are met. Instrumental variable methods can accommodate essential features of the health economic context, such as the correlation between individuals’ costs and outcomes in cost-effectiveness studies. Methodological guidance for handling missing data encourages approaches such as multiple imputation or inverse probability weighting, which assume the data are Missing At Random, but also sensitivity analyses that recognize the data may be missing according to the true, unobserved values, that is, Missing Not At Random. Future studies should subject the assumptions behind methods for handling noncompliance and missing data to thorough sensitivity analyses. Modern machine-learning methods can help reduce reliance on correct model specification. Further research is required to develop flexible methods for handling more complex forms of noncompliance and missing data.
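As a minimal sketch of the instrumental variable idea described above (illustrative code, assuming binary all-or-nothing noncompliance and the usual exclusion and monotonicity conditions; the function name is hypothetical):

```python
import numpy as np

def wald_cace(z, d, y):
    """Wald estimator of the complier average causal effect (CACE).

    z: 0/1 random assignment, d: 0/1 treatment actually received, y: outcome.
    The CACE is the ITT effect on the outcome divided by the ITT effect on receipt.
    """
    z, d, y = map(np.asarray, (z, d, y))
    itt_y = y[z == 1].mean() - y[z == 0].mean()   # intention-to-treat effect on outcome
    itt_d = d[z == 1].mean() - d[z == 0].mean()   # effect of assignment on treatment receipt
    return itt_y / itt_d
```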

Article

Many nonlinear time series models have been around for a long time and originated outside of time series econometrics. The popular stochastic models (univariate, dynamic single-equation, and vector autoregressive) are presented and their properties considered. Deterministic nonlinear models are not reviewed. The use of nonlinear vector autoregressive models in macroeconometrics seems to be increasing, and because this may be viewed as a rather recent development, they receive somewhat more attention than their univariate counterparts. Vector threshold autoregressive, smooth transition autoregressive, Markov-switching, and random coefficient autoregressive models are covered, along with nonlinear generalizations of vector autoregressive models with cointegrated variables. Two nonlinear panel models, although not typically macroeconometric models, have also been frequently applied to macroeconomic data. The use of all these models in macroeconomics is highlighted with applications in which model selection, an often difficult issue in nonlinear models, has received due attention. Given the large number of nonlinear time series models, no unique best method of choosing between them seems to be available.
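As an example of one model class mentioned above, a two-regime logistic smooth transition autoregression can be written in illustrative notation (not reproduced from the article) as:

```latex
% LSTAR model of order p with transition variable s_t (notation assumed):
\[
  y_t = \phi_1' w_t \,\bigl(1 - G(s_t;\gamma,c)\bigr) + \phi_2' w_t \, G(s_t;\gamma,c) + \varepsilon_t,
  \qquad
  G(s;\gamma,c) = \bigl(1 + \exp\{-\gamma (s - c)\}\bigr)^{-1},
\]
% where w_t = (1, y_{t-1}, \ldots, y_{t-p})'. As \gamma grows large the transition
% becomes abrupt and the model approaches a threshold autoregression; replacing
% G with a latent regime process gives Markov-switching models, and stacking
% equations gives the vector versions discussed above.
```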

Article

Bent Nielsen

This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Economics and Finance. Detection of outliers is an important explorative step in empirical analysis. Once detected, the investigator will have to decide how to model the outliers depending on the context. Indeed, the outliers may represent noisy observations that are best left out of the analysis, or they may be very informative observations that would have a particularly important role in the analysis. For regression analysis in time series, a number of outlier algorithms are available, including impulse indicator saturation and methods from robust statistics. The algorithms are complex and their statistical properties are not fully understood. Extensive simulation studies have been made, but the formal theory is lacking. Some progress has been made toward an asymptotic theory of the algorithms. A number of asymptotic results are already available, building on empirical process theory.
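A rough sketch of the split-half idea behind impulse indicator saturation (an illustrative simplification, not one of the algorithms studied in the article; the block scheme, significance level, and function name are assumptions):

```python
import numpy as np
import statsmodels.api as sm

def split_half_iis(y, X, alpha=0.01):
    """y: response, X: regressors (including a constant). Returns detected outlier
    indices and the final OLS fit with the retained impulse dummies."""
    n = len(y)
    keep = []
    for block in (np.arange(n // 2), np.arange(n // 2, n)):
        D = np.zeros((n, len(block)))
        D[block, np.arange(len(block))] = 1.0                # one impulse dummy per obs in the block
        res = sm.OLS(y, np.column_stack([X, D])).fit()
        pvals = res.pvalues[X.shape[1]:]                     # p-values of the dummies only
        keep.extend(block[pvals < alpha])                    # retain significant dummies
    D_keep = np.zeros((n, len(keep)))
    D_keep[keep, np.arange(len(keep))] = 1.0
    final = sm.OLS(y, np.column_stack([X, D_keep]) if keep else X).fit()
    return sorted(int(i) for i in keep), final
```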

Article

Jesús Gonzalo and Jean-Yves Pitarakis

Predictive regressions are a widely used econometric environment for assessing the predictability of economic and financial variables using past values of one or more predictors. The applications considered by practitioners often involve predictors with highly persistent, smoothly varying dynamics, in contrast to the much noisier nature of the variable being predicted. This imbalance tends to affect the accuracy of the estimates of the model parameters and the validity of inferences about them when one uses standard methods that do not explicitly recognize this and related complications. A growing literature has ensued, aimed at introducing novel techniques specifically designed to produce accurate inferences in such environments. The frequent use of these predictive regressions in applied work has also led practitioners to question the validity of viewing predictability within a linear setting that ignores the possibility that predictability may occasionally be switched off. This in turn has generated a new stream of research aimed at introducing regime-specific behavior within predictive regressions in order to capture explicitly phenomena such as episodic predictability.
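In assumed notation (not the authors' exact specification), the setting and the regime-switching extension look like:

```latex
% A predictive regression with a highly persistent predictor,
\[
  y_{t+1} = \alpha + \beta x_t + u_{t+1}, \qquad x_t = \mu + \rho x_{t-1} + v_t, \quad \rho \approx 1,
\]
% where correlation between u_{t+1} and v_t, together with the persistence of x_t,
% distorts standard inference on \beta. Episodic predictability can be captured by
% letting the slope switch with an observed threshold variable q_t,
\[
  y_{t+1} = \alpha + \beta_1 x_t \,\mathbf{1}\{q_t \le \gamma\}
          + \beta_2 x_t \,\mathbf{1}\{q_t > \gamma\} + u_{t+1}.
\]
```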

Article

In recent decades, there has been a dramatic proliferation of preferential trade agreements (PTAs) between countries, agreements that, while legal, contradict the non-discrimination principle of the world trade system. This raises various issues, both theoretical and empirical, regarding the evolution of trade policy within the world trade system and the welfare implications for PTA members and non-members. The survey starts with the Kemp-Wan-Ohyama and Panagariya-Krishna analyses in the literature, which theoretically show that PTAs can always be constructed so that they (weakly) increase the welfare of members and non-members. Considerable attention is then devoted to recent developments on the interaction between PTAs and multilateral trade liberalization, focusing on two key incentives: an “exclusion incentive” of PTA members and a “free riding incentive” of PTA non-members. While the baseline presumption one should have in mind is that these incentives lead PTAs to inhibit the ultimate degree of global trade liberalization, this presumption can be overturned when dynamic considerations are taken into account or when countries can negotiate the degree of multilateral liberalization rather than facing a binary choice over global free trade. Promising areas for pushing this theoretical literature forward include the growing use of quantitative trade models, incorporating rules of origin and global value chains, modeling the issues surrounding “mega-regional” agreements, and modeling the possibility of exit from PTAs. Empirical evidence in the literature is mixed regarding whether PTAs lead to trade diversion or trade creation, whether PTAs have significant adverse effects on non-member terms-of-trade, whether PTAs lead members to lower external tariffs on non-members, and the role of PTAs in facilitating deep integration among members.

Article

Matteo Lippi Bruni, Irene Mammi, and Rossella Verzulli

In developed countries, the role of public authorities as financing bodies and regulators of the long-term care sector is pervasive and calls for well-planned and informed policy actions. Poor quality in nursing homes has been a recurrent concern at least since the 1980s and has triggered a heated policy and scholarly debate. The economic literature on nursing home quality has thoroughly investigated the impact of regulatory interventions and of market characteristics on an array of input-, process-, and outcome-based quality measures. Most existing studies refer to the U.S. context, even though important insights can also be drawn from the smaller set of works that covers European countries. The major contribution of health economics to the empirical analysis of the nursing home industry has been the introduction of important methodological advances that apply rigorous policy evaluation techniques to properly identify the causal effects of interest. In addition, the increased availability of rich datasets covering either process or outcome measures has made it possible to investigate changes in nursing home quality while properly accounting for its multidimensional features. The use of up-to-date econometric methods that, in most cases, exploit policy shocks and longitudinal data has enabled researchers to achieve causal identification and an accurate quantification of the impact of a wide range of policy initiatives, including the introduction of nurse staffing thresholds, price regulation, and public reporting of quality indicators. This has helped to counteract part of the contradictory evidence highlighted by the strand of work based on more descriptive approaches. Possible lines for future research include further exploration of the consequences of policy interventions in terms of equity and accessibility to nursing home care.

Article

Iñigo Hernandez-Arenaz and Nagore Iriberri

Gender differences, both in entering negotiations and when negotiating, have been shown to exist: men are usually more likely than women to enter into negotiation, and when negotiating they obtain better deals than women do. These gender differences help to explain the gender gap in wages, as starting salaries and wage increases or promotions throughout an individual’s career are often the result of bilateral negotiations. This article presents an overview of the literature on gender differences in negotiation. The article is organized in five main parts. The first section reviews the findings with respect to gender differences in the likelihood of engaging in a negotiation, that is, in deciding to start a negotiation. The second section discusses research on gender differences during negotiations, that is, while bargaining. The third section looks at the relevant psychological literature and discusses meta-analyses, looking for factors that trigger or moderate gender differences in negotiation, such as structural ambiguity and cultural traits. The fourth section presents a brief overview of research on gender differences in non-cognitive traits, such as risk and social preferences, confidence, and taste for competition, and their role in explaining gender differences in bargaining. Finally, the fifth section discusses some policy implications. An understanding of when gender differences are likely to arise, both on entering into negotiations and when negotiating, will enable the design of policies that can mitigate current gender differences in negotiations. This is an active and promising line of research.