1-10 of 50 results for: Econometrics, Experimental and Quantitative Methods

Article

Yong Song and Tomasz Woźniak

Markov switching models are a family of models that introduce time variation in the parameters in the form of state-specific, or regime-specific, values. This time variation is governed by a latent, discrete-valued stochastic process with limited memory. More specifically, the current value of the state indicator is determined only by the value of the state indicator from the previous period, which is the Markov property. A transition matrix characterizes the Markov process by specifying the probability with which each of the states can be visited next period, conditional on the state in the current period. This setup gives rise to the two main advantages of Markov switching models: the estimation of the probability of state occurrence in each of the sample periods using filtering and smoothing methods, and the estimation of the state-specific parameters. These two features open the possibility of interpreting the parameters associated with specific regimes together with the corresponding regime probabilities. The most commonly applied models from this family are those that presume a finite number of regimes and the exogeneity of the Markov process, defined as its independence from the model’s unpredictable innovations. In many such applications, the desired properties of the Markov switching model have been obtained either by imposing appropriate restrictions on the transition probabilities or by making these probabilities time dependent through explanatory variables or functions of the state indicator. One extension of this basic specification is the infinite hidden Markov model, which provides great flexibility and excellent forecasting performance by allowing the number of states to go to infinity. Another extension, the endogenous Markov switching model, explicitly relates the state indicator to the model’s innovations, making it more interpretable and offering promising avenues for development.
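The transition mechanism described above is straightforward to illustrate. Below is a minimal sketch, assuming a two-state switching-mean process and the MarkovRegression class from statsmodels (an implementation choice made here for illustration, not taken from the article), that simulates data from a given transition matrix and recovers the smoothed regime probabilities.

```python
# Hedged sketch: simulate a two-state Markov switching mean and recover
# smoothed regime probabilities. The package and model choice (statsmodels'
# MarkovRegression) are illustrative assumptions, not the article's code.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Transition matrix P[i, j] = Pr(next state = j | current state = i)
P = np.array([[0.95, 0.05],
              [0.10, 0.90]])
mu = np.array([-1.0, 2.0])          # state-specific means
T = 500

states = np.zeros(T, dtype=int)
for t in range(1, T):
    states[t] = rng.choice(2, p=P[states[t - 1]])
y = mu[states] + rng.normal(scale=0.8, size=T)

# Fit a two-regime switching-mean model and inspect smoothed probabilities
model = sm.tsa.MarkovRegression(y, k_regimes=2)
res = model.fit()
print(res.summary())
print(res.smoothed_marginal_probabilities[:5])  # Pr(state | full sample)
```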

Article

The current discontent with the dominant macroeconomic theory paradigm, known as Dynamic Stochastic General Equilibrium (DSGE) models, calls for an appraisal of the methods and strategies employed in studying and modeling macroeconomic phenomena using aggregate time series data. The appraisal pertains to the effectiveness of these methods and strategies in accomplishing the primary objective of empirical modeling: to learn from data about phenomena of interest. The co-occurring developments in macroeconomics and econometrics since the 1930s provide the backdrop for the appraisal, with the Keynes vs. Tinbergen controversy at center stage. The overall appraisal is that the DSGE paradigm gives rise to estimated structural models that are both statistically and substantively misspecified, yielding untrustworthy evidence that contributes very little, if anything, to real learning from data about macroeconomic phenomena. A primary contributor to the untrustworthiness of evidence is the traditional econometric perspective of viewing empirical modeling as curve-fitting (structural models), guided by impromptu error term assumptions and evaluated on goodness-of-fit grounds. Regrettably, excellent fit is neither necessary nor sufficient for the reliability of inference and the trustworthiness of the ensuing evidence. Recommendations on how to improve the trustworthiness of empirical evidence revolve around a broader, model-based (non-curve-fitting) modeling framework that attributes cardinal roles to both theory and data without undermining the credibility of either source of information. Two crucial distinctions hold the key to securing the trustworthiness of evidence. The first distinguishes between modeling (specification, misspecification testing, and respecification) and inference, and the second between a substantive (structural) model and a statistical model (the probabilistic assumptions imposed on the particular data). This enables one to establish statistical adequacy (the validity of these assumptions) before relating the statistical model to the structural model and posing questions of interest to the data. The greatest enemy of learning from data about macroeconomic phenomena is not the absence of an alternative and more coherent empirical modeling framework, but the illusion that foisting highly formal structural models on the data can give rise to such learning just because their construction and curve-fitting rely on seemingly sophisticated tools. Regrettably, applying sophisticated tools to a statistically and substantively misspecified DSGE model does nothing to restore the trustworthiness of the evidence stemming from it.
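As a concrete illustration of checking statistical adequacy, the following sketch runs a few standard misspecification tests on the residuals of a simple estimated equation. The toy AR(1) model and the particular battery of diagnostics are assumptions made purely for illustration; they are not the article's own testing protocol.

```python
# Hedged sketch: simple misspecification (statistical adequacy) checks on the
# residuals of an estimated equation. The AR(1) toy model and the chosen
# diagnostics are illustrative assumptions only.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_ljungbox, het_breuschpagan
from statsmodels.stats.stattools import jarque_bera

rng = np.random.default_rng(1)
T = 300
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.7 * y[t - 1] + rng.normal()

X = sm.add_constant(y[:-1])          # regress y_t on a constant and y_{t-1}
fit = sm.OLS(y[1:], X).fit()
resid = fit.resid

print(acorr_ljungbox(resid, lags=[12], return_df=True))   # serial correlation
print(jarque_bera(resid)[:2])                             # normality (stat, p-value)
print(het_breuschpagan(resid, X)[2:])                     # heteroskedasticity (F, p-value)
```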

Article

Integrated assessment models (IAMs) of the climate and economy are used to analyze the impact and efficacy of policies that aim to control climate change, such as carbon taxes and subsidies. A major characteristic of IAMs is that their geophysical sector determines the mean surface temperature increase over the preindustrial level, which in turn determines damages through the damage function. Most existing IAMs assume that all future information is known. However, there are significant uncertainties in the climate and economic system, including parameter uncertainty, model uncertainty, climate tipping risks, and economic risks. For example, climate sensitivity, a well-known parameter that measures how much the equilibrium temperature will change if the atmospheric carbon concentration doubles, ranges in the literature from below 1 degree Celsius to more than 10 degrees Celsius. Climate damages are also uncertain: some researchers assume that climate damages are proportional to instantaneous output, while others assume that they have a more persistent impact on economic growth. The spatial distribution of climate damages is also uncertain. Climate tipping risks represent (nearly) irreversible climate events that may lead to significant changes in the climate system, such as the collapse of the Greenland ice sheet, while the conditions, probability of tipping, duration, and associated damage are also uncertain. Technological progress in carbon capture and storage, adaptation, renewable energy, and energy efficiency is uncertain as well. Future international cooperation and the implementation of international agreements to control climate change may vary over time, possibly due to economic risks, natural disasters, or social conflict. In the face of these uncertainties, policy makers have to reach decisions that weigh important factors such as risk aversion, inequality aversion, and the sustainability of the economy and ecosystem. Solving this problem may require richer and more realistic models than standard IAMs, as well as advanced computational methods. The recent literature has shown that these uncertainties can be incorporated into IAMs and may change optimal climate policies significantly.
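To make the role of parameter uncertainty concrete, the following toy Monte Carlo propagates an uncertain climate sensitivity through a quadratic damage function. The distribution, the assumed number of CO2 doublings, and the damage coefficient are illustrative assumptions, not values taken from any particular IAM.

```python
# Hedged sketch: a toy Monte Carlo showing how parameter uncertainty (here,
# equilibrium climate sensitivity) propagates to damages through a quadratic
# damage function. Functional forms and numbers are illustrative only.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Equilibrium climate sensitivity (deg C per CO2 doubling), drawn from a
# right-skewed lognormal calibrated loosely to a wide literature range.
sensitivity = rng.lognormal(mean=np.log(3.0), sigma=0.4, size=n)

doublings = 1.2                      # assumed CO2 doublings above preindustrial
temp_rise = sensitivity * doublings  # warming over the preindustrial level

# Damages as a fraction of output: D(T) = a * T^2 (a common reduced form)
a = 0.0023
damage_share = a * temp_rise ** 2

print(f"median damage share: {np.median(damage_share):.3f}")
print(f"95th percentile:     {np.quantile(damage_share, 0.95):.3f}")
```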

Article

Giuseppe Cavaliere, Heino Bohn Nielsen, and Anders Rahbek

While often simple to implement in practice, application of the bootstrap in econometric modeling of economic and financial time series requires establishing the validity of the bootstrap. Establishing bootstrap asymptotic validity relies on verifying often nonstandard regularity conditions. In particular, bootstrap versions of classic convergence in probability and in distribution, and hence of laws of large numbers and central limit theorems, are critical ingredients. Crucially, these depend on the type of bootstrap applied (e.g., the wild or the independently and identically distributed (i.i.d.) bootstrap) and on the underlying econometric model and data. Regularity conditions, and their implications for possible improvements in (empirical) size and power of bootstrap-based tests, differ from those of standard asymptotic testing, as can be illustrated by simulation.
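A minimal simulation sketch of the point about bootstrap type and data characteristics is given below: it compares i.i.d. and wild bootstrap p-values for a regression t-statistic when the errors are heteroskedastic. The data-generating process and test are illustrative assumptions, not the authors' simulation design.

```python
# Hedged sketch: i.i.d. versus wild bootstrap p-values for a regression
# t-statistic under heteroskedastic errors. A toy illustration of how the
# choice of bootstrap scheme interacts with the data-generating process.
import numpy as np

rng = np.random.default_rng(7)
n, B = 200, 999

x = rng.normal(size=n)
u = rng.normal(size=n) * (1 + np.abs(x))     # heteroskedastic errors
y = 0.0 * x + u                              # true coefficient is zero

def tstat(y, x):
    b = np.sum(x * y) / np.sum(x * x)
    resid = y - b * x
    se = np.sqrt(np.sum(x**2 * resid**2)) / np.sum(x * x)   # HC0 standard error
    return b / se, resid

t_obs, resid = tstat(y, x)

t_iid, t_wild = np.empty(B), np.empty(B)
for b in range(B):
    # i.i.d. bootstrap: resample residuals with replacement, imposing the null
    u_iid = rng.choice(resid, size=n, replace=True)
    t_iid[b], _ = tstat(u_iid, x)
    # wild bootstrap: keep each residual, flip its sign at random (Rademacher)
    u_wild = resid * rng.choice([-1.0, 1.0], size=n)
    t_wild[b], _ = tstat(u_wild, x)

print("i.i.d. bootstrap p-value:", np.mean(np.abs(t_iid) >= np.abs(t_obs)))
print("wild bootstrap p-value:  ", np.mean(np.abs(t_wild) >= np.abs(t_obs)))
```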

Article

Dimitris Korobilis and Davide Pettenuzzo

Bayesian inference in economics is primarily perceived as a methodology for cases where the data are short, that is, not informative enough to obtain reliable econometric estimates of the quantities of interest. In these cases, prior beliefs, such as the experience of the decision-maker or results from economic theory, can be explicitly incorporated into the econometric estimation problem and improve the resulting estimates. In contrast, in fields such as computing science and signal processing, Bayesian inference and computation have long been used for tackling challenges associated with ultra-high-dimensional data. Such fields have developed several novel Bayesian algorithms that have gradually been established in mainstream statistics and now hold a prominent position in machine learning applications across numerous disciplines. While traditional Bayesian algorithms are powerful enough to allow for the estimation of very complex problems (for instance, nonlinear dynamic stochastic general equilibrium models), they cannot cope computationally with the demands of rapidly growing economic data sets. Bayesian machine learning algorithms are able to provide rigorous and computationally feasible solutions to various high-dimensional econometric problems, thus supporting modern decision-making in a timely manner.
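As a small illustration of the kind of algorithm involved, the sketch below implements a Gibbs sampler for a linear regression with a ridge-type shrinkage prior in a setting with many regressors. The prior, dimensions, and sampler are illustrative assumptions rather than any of the specific algorithms surveyed in the article.

```python
# Hedged sketch: a minimal Gibbs sampler for Bayesian linear regression with a
# normal shrinkage (ridge-type) prior, a building block of high-dimensional
# Bayesian econometrics. Priors and dimensions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n, p = 100, 50                                  # more regressors than OLS handles comfortably
X = rng.normal(size=(n, p))
beta_true = np.r_[np.ones(5), np.zeros(p - 5)]  # sparse truth
y = X @ beta_true + rng.normal(size=n)

tau2, a0, b0 = 0.5, 2.0, 2.0                    # prior: beta ~ N(0, tau2*I), sigma2 ~ IG(a0, b0)
sigma2, draws = 1.0, []
XtX, Xty = X.T @ X, X.T @ y

for it in range(2000):
    # beta | sigma2, y  ~  N(m, V)
    V = np.linalg.inv(XtX / sigma2 + np.eye(p) / tau2)
    m = V @ (Xty / sigma2)
    beta = rng.multivariate_normal(m, V)
    # sigma2 | beta, y  ~  IG(a0 + n/2, b0 + SSR/2)
    ssr = np.sum((y - X @ beta) ** 2)
    sigma2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + ssr / 2))
    if it >= 500:                               # discard burn-in
        draws.append(beta)

post_mean = np.mean(draws, axis=0)
print("posterior means of first 8 coefficients:", np.round(post_mean[:8], 2))
```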

Article

Despite the aggregate value of M&A market transactions amounting to several trillion dollars annually, acquiring firms often underperform relative to non-acquiring firms, especially in public takeovers. Although hundreds of academic studies have investigated the deal- and firm-level factors associated with M&A announcement returns, many factors that increase M&A performance in the short run fail to translate into sustained long-run returns. In order to understand value creation in M&As, it is key to identify the firm and deal characteristics that can reliably predict long-run performance. Broadly speaking, long-run underperformance in M&A deals results from poor acquirer governance (reflected by CEO overconfidence and a lack of (institutional) shareholder monitoring) as well as from poor merger execution and integration (as captured by the degree of acquirer-target relatedness in the post-merger integration process). Although many more dimensions affect immediate deal transaction success, their effect on long-run performance is non-existent, or mixed at best.

Article

Helmut Herwartz and Alexander Lange

Unlike traditional first-order asymptotic approximations, the bootstrap is a simulation method that addresses inferential issues in statistics and econometrics conditional on the available sample information (e.g., constructing confidence intervals or generating critical values for test statistics). Even though econometric theory by now provides sophisticated central limit theory covering various data characteristics, bootstrap approaches are of particular appeal when establishing asymptotic pivotalness of (econometric) diagnostics is infeasible or requires rather complex assessments of estimation uncertainty. Moreover, empirical macroeconomic analysis is typically constrained by short- to medium-sized windows of sample information, and convergence of macroeconometric model estimates toward their asymptotic limits is often slow. Consistent bootstrap schemes have the potential to improve empirical significance levels in macroeconometric analysis and, moreover, can avoid explicit assessments of estimation uncertainty. In addition, as time-varying (co)variance structures and unmodeled serial correlation patterns are frequently diagnosed in macroeconometric analysis, more advanced bootstrap techniques (e.g., the wild bootstrap and the moving-block bootstrap) have been developed to account for nonpivotalness resulting from such data characteristics.
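The following sketch illustrates one such technique, a moving-block bootstrap that preserves serial dependence when resampling a persistent series to build a confidence interval for its mean. The AR(1) toy process and the block length are illustrative assumptions.

```python
# Hedged sketch: a moving-block bootstrap for a serially correlated series,
# used here to approximate the sampling distribution of the mean. Block
# length and the AR(1) toy process are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(11)
T, block_len, B = 240, 12, 2000

# Toy macro-style series with persistence
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.6 * y[t - 1] + rng.normal()

blocks = np.array([y[i:i + block_len] for i in range(T - block_len + 1)])
n_blocks = int(np.ceil(T / block_len))

boot_means = np.empty(B)
for b in range(B):
    idx = rng.integers(0, len(blocks), size=n_blocks)
    resample = np.concatenate(blocks[idx])[:T]   # glue blocks, trim to length T
    boot_means[b] = resample.mean()

# Percentile confidence interval for the mean that respects serial dependence
lo, hi = np.quantile(boot_means, [0.025, 0.975])
print(f"sample mean: {y.mean():.3f}, 95% block-bootstrap CI: [{lo:.3f}, {hi:.3f}]")
```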

Article

Brant Abbott and Giovanni Gallipoli

This article focuses on the distribution of human capital and its implications for the accrual of economic resources to individuals and households. Human capital inequality can be thought of as measuring disparity in the ownership of labor factors of production, which are usually compensated in the form of wage income. Earnings inequality is tightly related to human capital inequality. However, it only measures disparity in payments to labor rather than dispersion in the market value of the underlying stocks of human capital. Hence, measures of earnings dispersion provide a partial and incomplete view of the underlying distribution of productive skills and of the income generated by way of them. Despite its shortcomings, a fairly common way to gauge the distributional implications of human capital inequality is to examine the distribution of labor income. While it is not always obvious what accounts for returns to human capital, an established approach in the empirical literature is to decompose measured earnings into permanent and transitory components. A second approach focuses on the lifetime present value of earnings. Lifetime earnings are, by definition, an ex post measure, only observable at the end of an individual’s working lifetime. One limitation of this approach is that it assigns a value based on one of the many possible realizations of human capital returns. Arguably, this ignores the option value associated with alternative, but unobserved, potential earning paths that may be valuable ex ante. Hence, ex post lifetime earnings reflect both the genuine value of human capital and the impact of the particular realization of unpredictable shocks (luck). A different but related measure focuses on the ex ante value of expected lifetime earnings, which differs from ex post (realized) lifetime earnings insofar as it accounts for the value of yet-to-be-realized payoffs along different potential earning paths. Ex ante expectations reflect how much an individual reasonably anticipates earning over the rest of their life based on their current stock of human capital, averaging over possible realizations of luck and other income shifters that may arise. The discounted value of different potential paths of future earnings can be computed using riskless or state-dependent discount factors.
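A minimal sketch of the ex ante calculation is given below: it simulates earnings paths with permanent and transitory shocks and averages their discounted values using a riskless discount factor. The stochastic process and parameter values are illustrative assumptions, not estimates from the literature.

```python
# Hedged sketch: ex ante expected lifetime earnings as the discounted value of
# simulated earnings paths with permanent and transitory shocks. Process,
# parameter values, and riskless discounting are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
n_paths, years = 20_000, 40
r = 0.03                                      # riskless discount rate
sigma_perm, sigma_trans = 0.10, 0.15

log_y0 = np.log(40_000.0)                     # current earnings level
discounts = (1 + r) ** -np.arange(years)

perm = np.zeros(n_paths)
pv = np.zeros(n_paths)
for t in range(years):
    perm += rng.normal(0, sigma_perm, n_paths)            # random-walk permanent component
    trans = rng.normal(0, sigma_trans, n_paths)           # i.i.d. transitory component
    earnings = np.exp(log_y0 + perm + trans)
    pv += discounts[t] * earnings

print(f"ex ante expected lifetime earnings:  {pv.mean():,.0f}")
print(f"dispersion across realizations (sd): {pv.std():,.0f}")
```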

Article

Jacob K. Goeree, Philippos Louis, and Jingjing Zhang

Majority voting is the predominant mechanism for collective decision making. It is used in a broad range of applications, ranging from national referenda to small-group decision making. It is simple, transparent, and induces voters to vote sincerely. However, it is increasingly recognized that it has some weaknesses. First of all, majority voting may lead to inefficient outcomes because it does not allow voters to express the intensity of their preferences; as a result, an indifferent majority may win over an intense minority. In addition, majority voting suffers from the “tyranny of the majority,” that is, the risk of repeatedly excluding minority groups from representation. A final drawback is the “winner-take-all” nature of majority voting: it offers no compensation to losing voters. Economists have recently proposed various alternative mechanisms that aim to produce more efficient and more equitable outcomes. These can be classified into three different approaches. With storable votes, voters allocate a budget of votes across several issues. Under vote trading, voters can exchange votes for money. Under linear voting or quadratic voting, voters can buy votes at a linear or quadratic cost, respectively. The properties of these alternative mechanisms can be characterized using theoretical modeling and game-theoretic analysis, and lab experiments are used to test theoretical predictions and evaluate their fitness for actual use in applications. Overall, these alternative mechanisms hold the promise of improving on majority voting but have their own shortcomings. Additional theoretical analysis and empirical testing are needed to produce a mechanism that robustly delivers efficient and equitable outcomes.
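The inefficiency argument can be illustrated with a toy simulation, shown below, that contrasts simple majority voting with a stylized quadratic voting rule in which voters buy votes in proportion to their preference intensity. The preference distribution and the behavioral rule are illustrative assumptions, not results from the experimental literature.

```python
# Hedged sketch: a toy comparison of simple majority voting with a stylized
# quadratic voting rule in which each voter buys votes proportional to the
# intensity of their preference (the optimum under a quadratic cost).
# The preference draws and the behavioral assumption are illustrative only.
import numpy as np

rng = np.random.default_rng(9)
n = 101

# Signed preference intensities: a large, mildly opposed majority and a
# small, intensely supportive minority.
values = np.concatenate([rng.uniform(-1.0, 0.0, size=70),    # lukewarm "against"
                         rng.uniform(5.0, 10.0, size=31)])   # intense "for"

majority_outcome = "pass" if np.sum(values > 0) > n / 2 else "fail"

# Under quadratic cost c(v) = v^2, a voter valuing the outcome at |u| buys
# roughly v proportional to |u|; net purchased votes decide the outcome.
qv_outcome = "pass" if values.sum() > 0 else "fail"

print("majority voting:  ", majority_outcome)   # indifferent majority prevails
print("quadratic voting: ", qv_outcome)         # intense minority can prevail
```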

Article

Most applied researchers in macroeconomics who work with official macroeconomic statistics (such as those found in the National Accounts, the Balance of Payments, national government budgets, labor force statistics, etc.) treat the data as immutable rather than subject to measurement error and revision. Some of this error may be caused by disagreement or confusion about what should be measured. Some may be due to the practical challenges of producing timely, accurate, and precise estimates. The economic importance of measurement error may be accentuated by simple arithmetic transformations of the data, or by more complex but still common transformations to remove seasonal or other fluctuations. As a result, measurement error is seemingly omnipresent in macroeconomics. Even the most widely used measures, such as Gross Domestic Product (GDP), are acknowledged to be poor measures of aggregate welfare, as they omit leisure and non-market production activity and fail to consider intertemporal issues related to the sustainability of economic activity. Yet even modest attempts to improve GDP estimates can generate considerable controversy in practice. Common statistical approaches to allowing for measurement error, including most factor models, rely on assumptions that are at odds with common economic assumptions, which imply that measurement errors in published aggregate series should behave much like forecast errors. Fortunately, recent research has shown how multiple data releases may be combined in a flexible way to give improved estimates of the underlying quantities. Increasingly, the challenge for macroeconomists is to recognize the impact that measurement error may have on their analysis and to condition their policy advice on a realistic assessment of the quality of the available information.
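As a simple illustration of combining multiple releases, the sketch below precision-weights an early and a revised estimate of the same series under a classical "noise" view of measurement error. The variances are illustrative assumptions; under the "news" (forecast-error) view discussed above, the optimal weights would differ.

```python
# Hedged sketch: combining two releases of the same quantity (e.g., an early
# and a later estimate of quarterly GDP growth) by precision weighting under
# a classical "noise" view of measurement error. Variances are illustrative
# assumptions; the "news" view would put full weight on the later release.
import numpy as np

rng = np.random.default_rng(13)
n = 200
truth = rng.normal(0.5, 1.0, size=n)            # unobserved true growth rates

early = truth + rng.normal(0, 0.60, size=n)     # noisy early release
late = truth + rng.normal(0, 0.30, size=n)      # less noisy revised release

w_early, w_late = 1 / 0.60**2, 1 / 0.30**2      # inverse-variance weights
combined = (w_early * early + w_late * late) / (w_early + w_late)

for name, est in [("early", early), ("late", late), ("combined", combined)]:
    rmse = np.sqrt(np.mean((est - truth) ** 2))
    print(f"{name:9s} RMSE vs truth: {rmse:.3f}")
```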