31-40 of 74 Results for: Econometrics, Experimental and Quantitative Methods

Article

Bootstrapping in Macroeconometrics  

Helmut Herwartz and Alexander Lange

Unlike traditional first-order asymptotic approximations, the bootstrap is a simulation method that addresses inferential issues in statistics and econometrics conditional on the available sample information (e.g., constructing confidence intervals, generating critical values for test statistics). Even though econometric theory already provides sophisticated central limit theory covering various data characteristics, bootstrap approaches are of particular appeal if establishing asymptotic pivotalness of (econometric) diagnostics is infeasible or requires rather complex assessments of estimation uncertainty. Moreover, empirical macroeconomic analysis is typically constrained by short- to medium-sized time windows of sample information, and convergence of macroeconometric model estimates toward their asymptotic limits is often slow. Consistent bootstrap schemes have the potential to improve empirical significance levels in macroeconometric analysis and, moreover, can avoid explicit assessments of estimation uncertainty. In addition, as time-varying (co)variance structures and unmodeled serial correlation patterns are frequently diagnosed in macroeconometric analysis, more advanced bootstrap techniques (e.g., the wild bootstrap, the moving-block bootstrap) have been developed to account for nonpivotalness as a result of such data characteristics.
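As a minimal illustration of the moving-block bootstrap mentioned in the abstract, the sketch below resamples overlapping blocks of a serially correlated series to build a percentile confidence interval for its mean. Function names, block length, and the AR(1) example are illustrative choices, not drawn from the article.

```python
import numpy as np

def moving_block_bootstrap(x, block_len, n_boot, seed=None):
    """Resample a time series by drawing overlapping blocks with replacement,
    preserving short-run serial dependence within each block."""
    rng = np.random.default_rng(seed)
    n = len(x)
    # all overlapping blocks of length block_len
    blocks = np.array([x[i:i + block_len] for i in range(n - block_len + 1)])
    n_blocks = int(np.ceil(n / block_len))
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(blocks), size=n_blocks)
        sample = np.concatenate(blocks[idx])[:n]  # glue blocks, trim to n
        stats[b] = sample.mean()
    return stats

# serially correlated AR(1) data, where an i.i.d. bootstrap would be invalid
rng = np.random.default_rng(0)
x = np.zeros(200)
for t in range(1, 200):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()

boot_means = moving_block_bootstrap(x, block_len=10, n_boot=999, seed=1)
ci = np.percentile(boot_means, [2.5, 97.5])  # percentile confidence interval
```

The block length trades off bias (too short breaks the dependence structure) against variance (too long leaves few distinct blocks); in practice it is often chosen to grow slowly with the sample size.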

Article

Human Capital Inequality: Empirical Evidence  

Brant Abbott and Giovanni Gallipoli

This article focuses on the distribution of human capital and its implications for the accrual of economic resources to individuals and households. Human capital inequality can be thought of as measuring disparity in the ownership of labor factors of production, which are usually compensated in the form of wage income. Earnings inequality is tightly related to human capital inequality. However, it only measures disparity in payments to labor rather than dispersion in the market value of the underlying stocks of human capital. Hence, measures of earnings dispersion provide a partial and incomplete view of the underlying distribution of productive skills and of the income generated by way of them. Despite these shortcomings, a fairly common way to gauge the distributional implications of human capital inequality is to examine the distribution of labor income. While it is not always obvious what accounts for returns to human capital, an established approach in the empirical literature is to decompose measured earnings into permanent and transitory components. A second approach focuses on the lifetime present value of earnings. Lifetime earnings are, by definition, an ex post measure only observable at the end of an individual’s working lifetime. One limitation of this approach is that it assigns a value based on one of the many possible realizations of human capital returns. Arguably, this ignores the option value associated with alternative, but unobserved, potential earning paths that may be valuable ex ante. Hence, ex post lifetime earnings reflect both the genuine value of human capital and the impact of the particular realization of unpredictable shocks (luck). A different but related measure focuses on the ex ante value of expected lifetime earnings, which differs from ex post (realized) lifetime earnings insofar as it accounts for the value of yet-to-be-realized payoffs along different potential earning paths.
Ex ante expectations reflect how much an individual reasonably anticipates earning over the rest of their life based on their current stock of human capital, averaging over possible realizations of luck and other income shifters that may arise. The discounted value of different potential paths of future earnings can be computed using risk-less or state-dependent discount factors.
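The distinction between ex post and ex ante lifetime earnings can be sketched in a few lines: simulate many possible future earning paths, discount each, and average. This is a stylized illustration under an assumed log-earnings random walk; the function name, the process, and all parameter values are mine, not the authors'.

```python
import numpy as np

def ex_ante_lifetime_value(current_earnings, years, growth, sigma, r,
                           n_paths=10000, seed=0):
    """Ex ante value of human capital: expected discounted lifetime earnings,
    averaging over simulated realizations of luck (stylized illustration)."""
    rng = np.random.default_rng(seed)
    # log-earnings random walk with drift; permanent shocks each year
    shocks = rng.normal(growth - 0.5 * sigma**2, sigma, size=(n_paths, years))
    log_paths = np.cumsum(shocks, axis=1)
    earnings = current_earnings * np.exp(log_paths)
    discounts = (1 + r) ** -np.arange(1, years + 1)  # risk-less discounting
    pv = (earnings * discounts).sum(axis=1)  # each row: one ex post realization
    return pv.mean()                         # ex ante: average over paths

value = ex_ante_lifetime_value(current_earnings=50_000, years=30,
                               growth=0.02, sigma=0.10, r=0.04)
```

Any single row of `pv` corresponds to an ex post lifetime-earnings realization; the mean across rows is the ex ante value. A state-dependent discount factor would replace the constant `r` with path-specific discounting.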

Article

Improving on Simple Majority Voting by Alternative Voting Mechanisms  

Jacob K. Goeree, Philippos Louis, and Jingjing Zhang

Majority voting is the predominant mechanism for collective decision making. It is used in a broad range of applications, spanning from national referenda to small group decision making. It is simple, transparent, and induces voters to vote sincerely. However, it is increasingly recognized that it has some weaknesses. First of all, majority voting may lead to inefficient outcomes. This happens because it does not allow voters to express the intensity of their preferences. As a result, an indifferent majority may win over an intense minority. In addition, majority voting suffers from the “tyranny of the majority,” i.e., the risk of repeatedly excluding minority groups from representation. A final drawback is the “winner-take-all” nature of majority voting, i.e., it offers no compensation for losing voters. Economists have recently proposed various alternative mechanisms that aim to produce more efficient and more equitable outcomes. These can be classified into three different approaches. With storable votes, voters allocate a budget of votes across several issues. Under vote trading, voters can exchange votes for money. Under linear voting or quadratic voting, voters can buy votes at a linear or quadratic cost, respectively. The properties of different alternative mechanisms can be characterized using theoretical modeling and game theoretic analysis. Lab experiments are used to test theoretical predictions and evaluate their fitness for actual use in applications. Overall, these alternative mechanisms hold the promise of improving on majority voting but have their own shortcomings. Additional theoretical analysis and empirical testing are needed to produce a mechanism that robustly delivers efficient and equitable outcomes.
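The difference between linear and quadratic vote pricing comes down to marginal cost. A small sketch (function names and the pricing schedules are a stylized illustration, not the article's formal model):

```python
def vote_cost(v, scheme="quadratic"):
    """Total cost of acquiring v votes under linear vs. quadratic pricing."""
    return v if scheme == "linear" else v ** 2

def marginal_cost(v, scheme="quadratic"):
    """Incremental cost of the v-th vote."""
    return vote_cost(v, scheme) - vote_cost(v - 1, scheme)

# Under quadratic pricing the v-th vote costs 2v - 1, so expressing a more
# intense preference becomes progressively more expensive; under linear
# pricing every vote costs the same and intensity is not screened.
quadratic_mcs = [marginal_cost(v) for v in range(1, 5)]          # 1, 3, 5, 7
linear_mcs = [marginal_cost(v, "linear") for v in range(1, 5)]   # 1, 1, 1, 1
```

The rising marginal cost under quadratic pricing is what lets an intense minority buy extra influence while making it costly to dominate an issue outright.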

Article

Measurement Error: A Primer for Macroeconomists  

Simon van Norden

Most applied researchers in macroeconomics who work with official macroeconomic statistics (such as those found in the National Accounts, the Balance of Payments, national government budgets, labor force statistics, etc.) treat data as immutable rather than subject to measurement error and revision. Some of this error may be caused by disagreement or confusion about what should be measured. Some may be due to the practical challenges of producing timely, accurate, and precise estimates. The economic importance of measurement error may be accentuated by simple arithmetic transformations of the data, or by more complex but still common transformations to remove seasonal or other fluctuations. As a result, measurement error is seemingly omnipresent in macroeconomics. Even the most widely used measures such as Gross Domestic Product (GDP) are acknowledged to be poor measures of aggregate welfare, as they omit leisure and non-market production activity and fail to consider intertemporal issues related to the sustainability of economic activity. But even modest attempts to improve GDP estimates can generate considerable controversy in practice. Common statistical approaches to allow for measurement errors, including most factor models, rely on assumptions that are at odds with common economic assumptions, which imply that measurement errors in published aggregate series should behave much like forecast errors. Fortunately, recent research has shown how multiple data releases may be combined in a flexible way to give improved estimates of the underlying quantities. Increasingly, the challenge for macroeconomists is to recognize the impact that measurement error may have on their analysis and to condition their policy advice on a realistic assessment of the quality of their available information.
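The simplest version of combining data releases is precision weighting: give each release a weight proportional to the inverse of its error variance. This is only a stylized sketch of the idea (the article's methods are more flexible), and the function name and example numbers are mine.

```python
import numpy as np

def combine_releases(estimates, variances):
    """Precision-weighted combination of successive data releases.
    Returns the combined estimate and its (smaller) variance."""
    prec = 1.0 / np.asarray(variances, dtype=float)
    weights = prec / prec.sum()               # weight ∝ inverse variance
    est = float(weights @ np.asarray(estimates, dtype=float))
    var = float(1.0 / prec.sum())             # combined precision adds up
    return est, var

# first release of a growth rate (noisier) and a later revision (tighter)
est, var = combine_releases([2.0, 2.4], [0.5, 0.25])
```

The combined variance is always below that of the best single release, which is the sense in which pooling releases "improves" the estimate; treating releases as correlated, as the forecast-error view suggests, requires a richer model than this.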

Article

Age-Period-Cohort Models  

Zoë Fannon and Bent Nielsen

Outcomes of interest often depend on the age, period, or cohort of the individual observed, where cohort and age add up to period. An example is consumption: consumption patterns change over the lifecycle (age) but are also affected by the availability of products at different times (period) and by birth-cohort-specific habits and preferences (cohort). Age-period-cohort (APC) models are additive models where the predictor is a sum of three time effects, which are functions of age, period, and cohort, respectively. Variations of these models are available for data aggregated over age, period, and cohort, and for data drawn from repeated cross-sections, where the time effects can be combined with individual covariates. The age, period, and cohort time effects are intertwined. Inclusion of an indicator variable for each level of age, period, and cohort results in perfect collinearity, which is referred to as “the age-period-cohort identification problem.” Estimation can be done by dropping some indicator variables. However, dropping indicators has adverse consequences: the time effects are not individually interpretable, and inference becomes complicated. These consequences are avoided by instead decomposing the time effects into linear and non-linear components and noting that the identification problem relates to the linear components, whereas the non-linear components are identifiable. Thus, confusion is avoided by keeping the identifiable non-linear components of the time effects and the unidentifiable linear components apart. A variety of hypotheses of practical interest can be expressed in terms of the non-linear components.
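The identification problem is easy to exhibit numerically: build the full dummy design matrix over a rectangular age-cohort array and check its rank. The helper below is an illustration I wrote for this purpose, not code from the article.

```python
import numpy as np

def apc_design(n_age, n_cohort):
    """Full dummy design matrix (intercept + age + period + cohort indicators)
    for a rectangular age-cohort array, with period index = age + cohort."""
    n_period = n_age + n_cohort - 1
    rows = []
    for a in range(n_age):
        for c in range(n_cohort):
            p = a + c
            row = [1.0]                                            # intercept
            row += [1.0 if i == a else 0.0 for i in range(n_age)]
            row += [1.0 if i == p else 0.0 for i in range(n_period)]
            row += [1.0 if i == c else 0.0 for i in range(n_cohort)]
            rows.append(row)
    return np.array(rows)

X = apc_design(4, 4)
deficiency = X.shape[1] - np.linalg.matrix_rank(X)
# 3 redundancies from each dummy set summing to the intercept, plus 1 from
# the exact linear relation period = age + cohort
```

The three dummy-sum redundancies are the usual ones resolved by dropping a reference category per factor; the fourth, from the linear relation, is the genuinely APC-specific part of the problem, and it is this linear direction that the decomposition into linear and non-linear components isolates.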

Article

Human Punishment Behavior  

Erte Xiao

Punishment has been regarded as an important instrument to sustain human cooperation. A great deal of experimental research has been conducted to understand human punishment behavior, in particular, informal peer punishment. What drives individuals to incur costs to punish others? How does punishment influence human behavior? Punishment behavior has been observed when the individual does not expect to meet the wrongdoers again in the future and thus has no monetary incentive to punish. Several reasons for such retributive punishment have been proposed and studied. Punishment can be used to express certain values, attitudes, or emotions. Egalitarianism triggers punishment when the transgression leads to inequality. The norm to punish wrongdoers may also lead people to incur costs to punish even when it is not what they intrinsically want to do. Individuals sometimes punish wrongdoers even when they are not the victim. The motivation underlying third-party punishment can differ from that underlying second-party punishment. In addition, restricting the punishment power to a third party can be important to mitigate antisocial punishment when unrestricted second-party peer punishment leads to antisocial punishments and escalating retaliation. It is important to note that punishment does not always promote cooperation. Imposing fines can crowd out intrinsic motivation to cooperate when it changes people’s perception of social interactions from a generous, non-market activity to a market commodity and leads to more selfish profit-maximizing behavior. To avoid the crowding-out effect, it is important to implement punishment in a way that sends a clear signal that the punished behavior violates social norms.

Article

A Review of Gender Differences in Negotiation  

Iñigo Hernandez-Arenaz and Nagore Iriberri

Gender differences, both in entering negotiations and when negotiating, have been shown to exist: Men are usually more likely to enter into negotiation than women, and when negotiating they obtain better deals than women. These gender differences help to explain the gender gap in wages, as starting salaries and wage increases or promotions throughout an individual’s career are often the result of bilateral negotiations. This article presents an overview of the literature on gender differences in negotiation. The article is organized in five main parts. The first section reviews the findings with respect to gender differences in the likelihood of engaging in a negotiation, that is, in deciding to start a negotiation. The second section discusses research on gender differences during negotiations, that is, while bargaining. The third section looks at the relevant psychological literature and discusses meta-analyses, looking for factors that trigger or moderate gender differences in negotiation, such as structural ambiguity and cultural traits. The fourth section presents a brief overview of research on gender differences in non-cognitive traits, such as risk and social preferences, confidence, and taste for competition, and their role in explaining gender differences in bargaining. Finally, the fifth section discusses some policy implications. An understanding of when gender differences are likely to arise on entering into negotiations and when negotiating will enable policies to be created that can mitigate current gender differences in negotiations. This is an active, promising research line.

Article

Experimental Economics and Experimental Sociology  

Johanna Gereke and Klarita Gërxhani

Experimental economics has moved beyond the traditional focus on market mechanisms and the “invisible hand” by applying sociological and socio-psychological knowledge in the study of rationality, markets, and efficiency. This knowledge includes social preferences, social norms, and cross-cultural variation in motivations. In turn, the renewed interest in causation, social mechanisms, and middle-range theories in sociology has led to a renaissance of research employing experimental methods. This includes laboratory experiments but also a wide range of field experiments with diverse samples and settings. By focusing on a set of research topics that have proven to be of substantive interest to both disciplines—cooperation in social dilemmas, trust and trustworthiness, and social norms—this article highlights innovative interdisciplinary research that connects experimental economics with experimental sociology. Experimental economics and experimental sociology can still learn much from each other, providing economists and sociologists with an opportunity to collaborate and advance knowledge on a range of underexplored topics of interest to both disciplines.

Article

General Equilibrium Theories of Spatial Agglomeration  

Marcus Berliant and Ping Wang

General equilibrium theories of spatial agglomeration are closed models of agent location that explain the formation and growth of cities. There are several types of such theories: conventional Arrow-Debreu competitive equilibrium models and monopolistic competition models, as well as game theoretic models including search and matching setups. Three types of spatial agglomeration forces often come into play: trade, production, and knowledge transmission, under which cities are formed in equilibrium as marketplaces, factory towns, and idea laboratories, respectively. Agglomeration dynamics are linked to urban growth in the long run.

Article

Predictive Regressions  

Jesús Gonzalo and Jean-Yves Pitarakis

Predictive regressions are a widely used econometric environment for assessing the predictability of economic and financial variables using past values of one or more predictors. The applications considered by practitioners often involve predictors that have highly persistent, smoothly varying dynamics, as opposed to the much noisier nature of the variable being predicted. This imbalance tends to affect the accuracy of the estimates of the model parameters and the validity of inferences about them when one uses standard methods that do not explicitly recognize this and related complications. A growing literature has ensued, aimed at introducing novel techniques specifically designed to produce accurate inferences in such environments. The frequent use of these predictive regressions in applied work has also led practitioners to question the validity of viewing predictability within a linear setting that ignores the possibility that predictability may occasionally be switched off. This in turn has generated a new stream of research aiming at introducing regime-specific behavior within predictive regressions in order to explicitly capture phenomena such as episodic predictability.
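The small-sample problem the abstract alludes to can be reproduced in a few lines: when the predictor is highly persistent and its innovations are correlated with the outcome shock, the OLS slope is biased even when true predictability is zero. The Monte Carlo below is an illustrative sketch (function name and parameter values are mine); the design corresponds to the standard persistent-predictor setup.

```python
import numpy as np

def predictive_regression_bias(rho=0.98, delta=-0.9, n=100, beta=0.0,
                               n_sim=2000, seed=0):
    """Average small-sample bias of the OLS slope in y_{t+1} = beta*x_t + u,
    where x is a persistent AR(1) whose innovations v have corr(u, v) = delta."""
    rng = np.random.default_rng(seed)
    betas = np.empty(n_sim)
    for s in range(n_sim):
        z = rng.standard_normal((n + 1, 2))
        u = z[:, 0]
        v = delta * z[:, 0] + np.sqrt(1 - delta**2) * z[:, 1]
        x = np.zeros(n + 1)
        for t in range(1, n + 1):
            x[t] = rho * x[t - 1] + v[t]     # persistent predictor
        y = beta * x[:-1] + u[1:]            # y_{t+1} = beta * x_t + u_{t+1}
        xc = x[:-1] - x[:-1].mean()          # OLS with intercept
        betas[s] = (xc * (y - y.mean())).sum() / (xc ** 2).sum()
    return betas.mean() - beta

bias = predictive_regression_bias()          # positive despite beta = 0
```

With `delta < 0` (the typical sign for, say, dividend-yield predictors of returns), the slope is biased upward, so naive t-tests over-reject the no-predictability null; the corrective techniques the abstract refers to are designed for exactly this environment.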