21-40 of 75 Results for: Econometrics, Experimental and Quantitative Methods

Article

Estimation and Inference for Cointegrating Regressions  

Martin Wagner

Widely used modified least squares estimators for estimation and inference in cointegrating regressions are discussed. The standard case with cointegration in the I(1) setting is examined and some relevant extensions are sketched. These include cointegration analysis with panel data as well as nonlinear cointegrating relationships. Extensions to higher order (co)integration, seasonal (co)integration and fractional (co)integration are very briefly mentioned. Recent developments and some avenues for future research are discussed.
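
As a minimal illustration of the setting (not taken from the article), the sketch below simulates a bivariate cointegrated system and estimates the cointegrating coefficient by plain OLS; the modified estimators discussed above, such as fully modified or dynamic OLS, add corrections for endogeneity and serial correlation to this baseline. All numbers and variable names are illustrative assumptions.

```python
# Minimal sketch: simulate a cointegrated pair and estimate the relation by OLS.
import numpy as np

rng = np.random.default_rng(0)
T = 500
x = np.cumsum(rng.normal(size=T))          # I(1) regressor (random walk)
u = rng.normal(scale=0.5, size=T)          # stationary equilibrium error
y = 1.0 + 2.0 * x + u                      # cointegrating relation, slope = 2

X = np.column_stack([np.ones(T), x])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print("OLS estimate of (intercept, slope):", beta_hat)
```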

Article

Estimation Error in Optimal Portfolio Allocation Problems  

Jose Olmo

Markowitz showed that an investor who cares only about the mean and variance of portfolio returns should hold a portfolio on the efficient frontier. The application of this investment strategy proceeds in two steps. First, the statistical moments of asset returns are estimated from historical time series, and second, the mean-variance portfolio selection problem is solved separately, as if the estimates were the true parameters. The literature on portfolio decisions acknowledges the difficulty in estimating means and covariances in many instances. This is particularly the case in high-dimensional settings. Merton notes that it is more difficult to estimate means than covariances and that errors in estimates of means have a larger impact on portfolio weights than errors in covariance estimates. Recent developments in high-dimensional settings have stressed the importance of correcting the estimation error of traditional sample covariance estimators for portfolio allocation. The literature has proposed shrinkage estimators of the sample covariance matrix and regularization methods founded on the principle of sparsity. Both methodologies are nested in a more general framework that constructs optimal portfolios under constraints on different norms of the portfolio weights, including short-sale restrictions. On the one hand, shrinkage methods use a target covariance matrix and trade off bias and variance between the standard sample covariance matrix and the target. More prominence has been given to low-dimensional factor models that incorporate theoretical insights from asset pricing models. In these cases, one has to trade off estimation risk for model risk. Alternatively, the literature on regularization of the sample covariance matrix uses different penalty functions to reduce the number of parameters to be estimated. Recent methods extend the idea of regularization to a conditional setting based on factor models, in which the number of factors can grow with the number of assets, and apply regularization methods to the residual covariance matrix.
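
A rough sketch of the shrinkage idea mentioned above, under assumed data and a fixed shrinkage intensity (real applications choose the intensity from the data, for example Ledoit-Wolf style):

```python
# Linear shrinkage of a sample covariance matrix toward a scaled identity target,
# then global minimum-variance portfolio weights. Illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(1)
T, N = 60, 20                              # few observations, many assets
returns = rng.normal(0.0, 0.02, size=(T, N))

S = np.cov(returns, rowvar=False)          # sample covariance (noisy when N ~ T)
target = np.trace(S) / N * np.eye(N)       # scaled identity target
delta = 0.5                                # fixed shrinkage intensity (assumption)
S_shrunk = (1 - delta) * S + delta * target

ones = np.ones(N)
w = np.linalg.solve(S_shrunk, ones)
w /= w.sum()                               # minimum-variance weights
print("weights sum to", w.sum().round(6))
```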

Article

The Evolution of Forecast Density Combinations in Economics  

Knut Are Aastveit, James Mitchell, Francesco Ravazzolo, and Herman K. van Dijk

Increasingly, professional forecasters and academic researchers in economics present model-based and subjective or judgment-based forecasts that are accompanied by some measure of uncertainty. In its most complete form this measure is a probability density function for future values of the variable or variables of interest. At the same time, combinations of forecast densities are being used in order to integrate information coming from multiple sources such as experts, models, and large micro-data sets. Given the increased relevance of forecast density combinations, this article explores their genesis and evolution both inside and outside economics. A fundamental density combination equation is specified, which shows that various frequentist as well as Bayesian approaches give different specific contents to this density. In its simplest case, it is a restricted finite mixture, giving fixed equal weights to the various individual densities. The specification of the fundamental density combination equation has been made more flexible in recent literature. It has evolved from using simple average weights to optimized weights to “richer” procedures that allow for time variation, learning features, and model incompleteness. The recent history and evolution of forecast density combination methods, together with their potential and benefits, are illustrated in the policymaking environment of central banks.
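
A toy sketch of the simplest case described above, a finite mixture of two forecast densities with fixed equal weights (the densities and grid are assumptions for illustration):

```python
# Equal-weight finite mixture of two Gaussian forecast densities.
# Richer schemes replace the fixed weights with optimized or time-varying ones.
import numpy as np
from scipy.stats import norm

y_grid = np.linspace(-5, 5, 501)
p1 = norm.pdf(y_grid, loc=0.5, scale=1.0)   # density from model/expert 1
p2 = norm.pdf(y_grid, loc=-0.3, scale=1.5)  # density from model/expert 2
weights = np.array([0.5, 0.5])              # fixed equal weights

combined = weights[0] * p1 + weights[1] * p2
approx_integral = combined.sum() * (y_grid[1] - y_grid[0])
print("combined density integrates to approximately", round(float(approx_integral), 3))
```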

Article

Experimental Economics and Experimental Sociology  

Johanna Gereke and Klarita Gërxhani

Experimental economics has moved beyond the traditional focus on market mechanisms and the “invisible hand” by applying sociological and socio-psychological knowledge in the study of rationality, markets, and efficiency. This knowledge includes social preferences, social norms, and cross-cultural variation in motivations. In turn, the renewed interest in causation, social mechanisms, and middle-range theories in sociology has led to a renaissance of research employing experimental methods. This includes laboratory experiments but also a wide range of field experiments with diverse samples and settings. By focusing on a set of research topics that have proven to be of substantive interest to both disciplines—cooperation in social dilemmas, trust and trustworthiness, and social norms—this article highlights innovative interdisciplinary research that connects experimental economics with experimental sociology. Experimental economics and experimental sociology can still learn much from each other, providing economists and sociologists with an opportunity to collaborate and advance knowledge on a range of underexplored topics of interest to both disciplines.

Article

Financial Frictions in Macroeconomic Models  

Alfred Duncan and Charles Nolan

In recent decades, macroeconomic researchers have looked to incorporate financial intermediaries explicitly into business-cycle models. These modeling developments have helped us to understand the role of the financial sector in the transmission of policy and external shocks into macroeconomic dynamics. They also have helped us to understand better the consequences of financial instability for the macroeconomy. Large gaps remain in our knowledge of the interactions between the financial sector and macroeconomic outcomes. Specifically, the effects of financial stability and macroprudential policies are not well understood.

Article

Fractional Integration and Cointegration  

Javier Hualde and Morten Ørregaard Nielsen

Fractionally integrated and fractionally cointegrated time series are classes of models that generalize standard notions of integrated and cointegrated time series. The fractional models are characterized by a small number of memory parameters that control the degree of fractional integration and/or cointegration. In classical work, the memory parameters are assumed known and equal to 0, 1, or 2. In the fractional integration and fractional cointegration context, however, these parameters are real-valued and are typically assumed unknown and estimated. Thus, fractionally integrated and fractionally cointegrated time series can display very general types of stationary and nonstationary behavior, including long memory, and this more general framework entails important additional challenges compared to the traditional setting. Modeling, estimation, and testing in the context of fractional integration and fractional cointegration have been developed in time and frequency domains. Related to both alternative approaches, theory has been derived under parametric or semiparametric assumptions, and as expected, the obtained results illustrate the well-known trade-off between efficiency and robustness against misspecification. These different developments form a large and mature literature with applications in a wide variety of disciplines.
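
A small sketch, under assumed parameter values, of how a fractional memory parameter enters the model: the coefficients of the fractional filter (1 − L)^(−d) are built recursively and used to simulate a long-memory series.

```python
# Simulate an ARFIMA(0, d, 0) series with memory parameter d = 0.3 via the
# (truncated) moving-average representation of (1 - L)^(-d).
import numpy as np

def frac_weights(d, n):
    # pi_0 = 1, pi_k = pi_{k-1} * (k - 1 + d) / k
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 + d) / k
    return w

rng = np.random.default_rng(2)
T, d = 1000, 0.3
eps = rng.normal(size=T)
w = frac_weights(d, T)
x = np.array([w[:t + 1][::-1] @ eps[:t + 1] for t in range(T)])
print("sample autocorrelation at lag 50:",
      np.corrcoef(x[50:], x[:-50])[0, 1].round(3))
```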

Article

Frequency-Domain Approach in High-Dimensional Dynamic Factor Models  

Marco Lippi

High-Dimensional Dynamic Factor Models have their origin in macroeconomics, precisely in empirical research on Business Cycles. The central idea, going back to the work of Burns and Mitchell in the 1940s, is that the fluctuations of all the macro and sectoral variables in the economy are driven by a “reference cycle,” that is, a one-dimensional latent cause of variation. After a fairly long process of generalization and formalization, the literature settled at the beginning of the 2000s on a model in which (1) both n, the number of variables in the dataset, and T, the number of observations for each variable, may be large, and (2) all the variables in the dataset depend dynamically on a fixed number, independent of n, of “common factors,” plus variable-specific, usually called “idiosyncratic,” components. The structure of the model can be exemplified as follows:

x_it = α_i u_t + β_i u_{t−1} + ξ_it,  i = 1, …, n,  t = 1, …, T,  (*)

where the observable variables x_it are driven by the white noise u_t, which is common to all the variables (the common factor), and by the idiosyncratic component ξ_it. The common factor u_t is orthogonal to the idiosyncratic components ξ_it, and the idiosyncratic components are mutually orthogonal (or weakly correlated). Lastly, the variations of the common factor u_t affect the variable x_it dynamically, that is, through the lag polynomial α_i + β_i L. Asymptotic results for High-Dimensional Factor Models, particularly consistency of estimators of the common factors, are obtained for both n and T tending to infinity. Model (*), generalized to allow for more than one common factor and a rich dynamic loading of the factors, has been studied in a fairly vast literature, with many applications based on macroeconomic datasets: (a) forecasting of inflation, industrial production, and unemployment; (b) structural macroeconomic analysis; and (c) construction of indicators of the Business Cycle. This literature can be broadly classified as belonging to the time-domain or the frequency-domain approach. The works based on the latter are the subject of the present chapter. We start with a brief description of early work on Dynamic Factor Models. Formal definitions and the main Representation Theorem follow. The latter determines the number of common factors in the model by means of the spectral density matrix of the vector (x_1t, x_2t, …, x_nt). Dynamic principal components, based on the spectral density of the x’s, are then used to construct estimators of the common factors. These results, obtained in the early 2000s, are compared to the literature based on the time-domain approach, in which the covariance matrix of the x’s and its (static) principal components are used instead of the spectral density and dynamic principal components. Dynamic principal components produce two-sided estimators, which are good within the sample but unfit for forecasting. The estimators based on the time-domain approach are simple and one-sided. However, they require the restriction that the space spanned by the factors has finite dimension. Recent papers have constructed one-sided estimators based on the frequency-domain method for the unrestricted model. These results exploit properties of stochastic processes of dimension n that are driven by a q-dimensional white noise, with q < n, that is, singular vector stochastic processes. The main features of this literature are described in some detail. Lastly, we report and comment on the results of an empirical paper, the last in a long list, comparing predictions obtained with time- and frequency-domain methods. The paper uses a large monthly U.S. dataset covering the Great Moderation and the Great Recession.
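
The following sketch (an assumed simulation, not taken from the article) generates data from the one-factor model (*) and recovers the factor space with static principal components; the dynamic, frequency-domain principal components discussed in the article work with the spectral density matrix instead.

```python
# Simulate x_it = alpha_i * u_t + beta_i * u_{t-1} + xi_it and recover the
# space spanned by (u_t, u_{t-1}) with static principal components.
import numpy as np

rng = np.random.default_rng(3)
n, T = 100, 300
u = rng.normal(size=T + 1)                        # common white-noise factor
alpha = rng.uniform(0.5, 1.5, size=n)
beta = rng.uniform(-0.5, 0.5, size=n)
xi = rng.normal(size=(T, n))                      # idiosyncratic components

common = np.outer(u[1:], alpha) + np.outer(u[:-1], beta)
x = common + xi                                   # T x n panel

x_c = x - x.mean(axis=0)
_, _, vt = np.linalg.svd(x_c, full_matrices=False)
factors = x_c @ vt[:2].T                          # first two static PCs

coef = np.linalg.lstsq(factors, u[1:], rcond=None)[0]
fit = factors @ coef
print("R^2 of u_t on the two estimated factors:",
      round(1 - np.var(u[1:] - fit) / np.var(u[1:]), 3))
```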

Article

From Clinical Outcomes to Health Utilities: The Role of Mapping to Bridge the Evidence Gap  

Mónica Hernández Alava

The assessment of health-related quality of life is crucially important in the evaluation of healthcare technologies and services. In many countries, economic evaluation plays a prominent role in informing decision making, often requiring preference-based measures (PBMs) to assess quality of life. These measures comprise two aspects: a descriptive system where patients can indicate the impact of ill health, and a value set based on the preferences of individuals for each of the health states that can be described. These values are required for the calculation of quality adjusted life years (QALYs), the measure for health benefit used in the vast majority of economic evaluations. The National Institute for Health and Care Excellence (NICE) has used cost per QALY as its preferred framework for economic evaluation of healthcare technologies since its inception in 1999. However, there is often an evidence gap between the clinical measures that are available from clinical studies on the effect of a specific health technology and the PBMs needed to construct QALY measures. Instruments such as the EQ-5D have preference-based scoring systems and are favored by organizations such as NICE but are frequently absent from clinical studies of treatment effect. Even where a PBM is included, it may still be insufficient for the needs of the economic evaluation. Trials may have insufficient follow-up, be underpowered to detect relevant events, or include the wrong PBM for the decision-making body. Often this gap is bridged by “mapping”—estimating a relationship between observed clinical outcomes and PBMs, using data from a reference dataset containing both types of information. The estimated statistical model can then be used to predict what the PBM would have been in the clinical study given the available information. There are two approaches to mapping, linked to the structure of a PBM. The indirect approach (or response mapping) models the responses to the descriptive system using discrete data models. The expected health utility is calculated as a subsequent step using the estimated probability distribution of health states. The second approach (the direct approach) models the health state utility values directly. Statistical models routinely used in the past for mapping are unable to consider the idiosyncrasies of health utility data. Often they do not work well in practice and can give seriously biased estimates of the value of treatments. Although the bias could, in principle, go in any direction, in practice it tends to result in underestimation of cost effectiveness and consequently distorted funding decisions. This has real effects on patients, clinicians, industry, and the general public. These problems have led some analysts to mistakenly conclude that mapping always induces biases and should be avoided. However, the development and use of more appropriate models has refuted this claim. The need to improve the quality of mapping studies led to the formation of the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) Mapping to Estimate Health State Utility values from Non-Preference-Based Outcome Measures Task Force to develop good practice guidance in mapping.
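
A heavily simplified sketch of the “direct” mapping approach, using hypothetical variable names and plain least squares; as the article notes, models tailored to the bounded, irregular distribution of utility data should be preferred in practice.

```python
# Fit a direct mapping from a hypothetical clinical score to EQ-5D-style
# utilities in a reference dataset, then predict utilities for trial scores.
import numpy as np

rng = np.random.default_rng(4)
n_ref = 400
clinical = rng.uniform(0, 100, size=n_ref)             # hypothetical clinical score
utility = np.clip(0.2 + 0.007 * clinical + rng.normal(0, 0.1, n_ref), -0.2, 1.0)

X = np.column_stack([np.ones(n_ref), clinical])
coef = np.linalg.lstsq(X, utility, rcond=None)[0]      # fitted direct mapping

trial_scores = np.array([30.0, 55.0, 80.0])            # scores observed in a trial
predicted_utility = coef[0] + coef[1] * trial_scores
print("predicted utilities:", predicted_utility.round(3))
```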

Article

General Equilibrium Theories of Spatial Agglomeration  

Marcus Berliant and Ping Wang

General equilibrium theories of spatial agglomeration are closed models of agent location that explain the formation and growth of cities. There are several types of such theories: conventional Arrow-Debreu competitive equilibrium models and monopolistic competition models, as well as game theoretic models including search and matching setups. Three types of spatial agglomeration forces often come into play: trade, production, and knowledge transmission, under which cities are formed in equilibrium as marketplaces, factory towns, and idea laboratories, respectively. Agglomeration dynamics are linked to urban growth in the long run.

Article

Growth Econometrics  

Jonathan R. W. Temple

Growth econometrics is the application of statistical methods to the study of economic growth and levels of national output or income per head. Researchers often seek to understand why growth rates differ across countries. The field developed rapidly in the 1980s and 1990s, but the early work often proved fragile. Cross-section analyses are limited by the relatively small number of countries in the world and problems of endogeneity, parameter heterogeneity, model uncertainty, and cross-section error dependence. The long-term prospects look better for approaches using panel data. Overall, the quality of the evidence has improved over time, due to better measurement, more data, and new methods. As longer spans of data become available, the methods of growth econometrics will shed light on fundamental questions that are hard to answer any other way.

Article

Happiness and Productivity in the Workplace  

Mahnaz Nazneen and Daniel Sgroi

Happiness has become an important concept in economics as a target for government policy at the national level. This is mirrored in an increasing understanding of the microeconomic effects of increased happiness. While correlational studies have for many years documented a relationship between individual-level happiness and productivity, more recent work provides causal evidence that a positive shock to happiness can boost productivity significantly. These studies include three strands of research. The first provides a number of longitudinal surveys that have generated evidence linking happiness to productivity but run the risk of confounding happiness with other related variables that may be driving the relationship. The second includes laboratory experiments that simulate a workplace under tightly controlled conditions, and this strand has established a clear relationship between positive happiness shocks and rises in productivity. The third involves examining experimental field data, which sacrifices the control of laboratory experiments but offers greater realism. However, there is still work to be done generalizing these findings to more complex work environments, especially those that involve cooperative and team-based tasks where increases in happiness may have other consequences.

Article

Human Capital Inequality: Empirical Evidence  

Brant Abbott and Giovanni Gallipoli

This article focuses on the distribution of human capital and its implications for the accrual of economic resources to individuals and households. Human capital inequality can be thought of as measuring disparity in the ownership of labor factors of production, which are usually compensated in the form of wage income. Earnings inequality is tightly related to human capital inequality. However, it only measures disparity in payments to labor rather than dispersion in the market value of the underlying stocks of human capital. Hence, measures of earnings dispersion provide a partial and incomplete view of the underlying distribution of productive skills and of the income generated by way of them. Despite its shortcomings, a fairly common way to gauge the distributional implications of human capital inequality is to examine the distribution of labor income. While it is not always obvious what accounts for returns to human capital, an established approach in the empirical literature is to decompose measured earnings into permanent and transitory components. A second approach focuses on the lifetime present value of earnings. Lifetime earnings are, by definition, an ex post measure only observable at the end of an individual’s working lifetime. One limitation of this approach is that it assigns a value based on one of the many possible realizations of human capital returns. Arguably, this ignores the option value associated with alternative, but unobserved, potential earning paths that may be valuable ex ante. Hence, ex post lifetime earnings reflect both the genuine value of human capital and the impact of the particular realization of unpredictable shocks (luck). A different but related measure focuses on the ex ante value of expected lifetime earnings, which differs from ex post (realized) lifetime earnings insofar as they account for the value of yet-to-be-realized payoffs along different potential earning paths. Ex ante expectations reflect how much an individual reasonably anticipates earning over the rest of their life based on their current stock of human capital, averaging over possible realizations of luck and other income shifters that may arise. The discounted value of different potential paths of future earnings can be computed using risk-less or state-dependent discount factors.
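
A small numerical sketch, with hypothetical earnings numbers, of the distinction drawn above between ex post realized lifetime earnings and an ex ante expected value averaged over possible paths:

```python
# Discounted value of one realized earnings path (ex post) versus the average
# discounted value over many simulated paths (ex ante). Illustrative only.
import numpy as np

rng = np.random.default_rng(5)
years, r = 40, 0.03
discount = (1 + r) ** -np.arange(years)

realized_path = 40000 * (1.02 ** np.arange(years))        # one realized path
ex_post_value = discount @ realized_path

n_paths = 5000
growth = rng.normal(0.02, 0.02, size=(n_paths, years)).cumsum(axis=1)
paths = 40000 * np.exp(growth)                            # possible future paths
ex_ante_value = (paths @ discount).mean()
print("ex post:", round(ex_post_value), " ex ante:", round(ex_ante_value))
```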

Article

Human Punishment Behavior  

Erte Xiao

Punishment has been regarded as an important instrument to sustain human cooperation. A great deal of experimental research has been conducted to understand human punishment behavior, in particular, informal peer punishment. What drives individuals to incur costs to punish others? How does punishment influence human behavior? Punishment behavior has been observed when the individual does not expect to meet the wrongdoers again in the future and thus has no monetary incentive to punish. Several reasons for such retributive punishment have been proposed and studied. Punishment can be used to express certain values, attitudes, or emotions. Egalitarianism triggers punishment when the transgression leads to inequality. The norm to punish wrongdoers may also lead people to incur costs to punish even when it is not what they intrinsically want to do. Individuals sometimes punish wrongdoers even when they are not the victim. The motivation underlying third-party punishment can differ from that underlying second-party punishment. In addition, restricting the punishment power to a third party can be important to mitigate antisocial punishment when unrestricted second-party peer punishment leads to antisocial punishments and escalating retaliation. It is important to note that punishment does not always promote cooperation. Imposing fines can crowd out intrinsic motivation to cooperate when it changes people’s perception of social interactions from a generous, non-market activity to a market commodity and leads to more selfish profit-maximizing behavior. To avoid the crowding-out effect, it is important to implement punishment in a way that sends a clear signal that the punished behavior violates social norms.

Article

The Implications of Pupil Rank for Achievement  

Richard Murphy and Felix Weinhardt

The significance of social interaction has become an increasingly important part of economic thought and models through the work on peer effects, social norms, and networks. Within this literature, a novel focus on ranking within groups has emerged. The rank of an individual is usually defined as the ordinal position within a specific group. This could be the work environment or a classroom, and much of this literature focuses on rank effects in education settings. The literature studies rank effects for various age groups. There is evidence that a rank position even during early life phases, such as in elementary education, has lasting effects on education outcomes such as test scores or subject specializations, choices during college, and wages. A first-order challenge in the study of rank effects is to separate them from other highly correlated effects. For example, individuals with a high academic rank in a group will likely have high academic ability in absolute terms. Papers in this field directly account for measured ability, and so rely on the variation in rank that exists across groups for any given ability measure; that is, a score of 80 might place a student at the top of one group but near the bottom of another. The comparability of achievement measures across settings is key; one commonly employed solution is to account for level differences across settings. While the literature has now established the importance of rank, there are several—potentially non-competing—ideas for the precise behavioral mechanisms of why rank matters so much. Future work will most likely focus on integrating rank effects into the literature on social interactions to discuss implications for optimal group formation.
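
A minimal sketch with hypothetical data of the group-specific nature of rank described above: the same score maps to different ordinal positions in different groups.

```python
# Within-group percentile rank: score 80 is bottom of class A but top of class B.
import pandas as pd

df = pd.DataFrame({
    "classroom": ["A", "A", "A", "B", "B", "B"],
    "student":   ["a1", "a2", "a3", "b1", "b2", "b3"],
    "score":     [80, 92, 95, 80, 65, 60],
})
df["rank_in_class"] = df.groupby("classroom")["score"].rank(pct=True)
print(df)
```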

Article

The Implications of School Assignment Mechanisms for Efficiency and Equity  

Atila Abdulkadiroğlu

Parental choice over public schools has become a major policy tool to combat inequality in access to schools. Traditional neighborhood-based assignment is being replaced by school choice programs, broadening families’ access to schools beyond their residential location. Demand and supply in school choice programs are cleared via centralized admissions algorithms. Heterogeneous parental preferences and admissions policies create trade-offs between efficiency and equity. The data from centralized admissions algorithms can be used effectively for credible research design toward a better understanding of school effectiveness, which in turn can be used for school portfolio planning and student assignment based on match quality between students and schools.
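
For concreteness, the sketch below implements one widely used clearing rule, student-proposing deferred acceptance, on a tiny hypothetical market; the article does not single out a specific algorithm, so this is only an illustration of how such mechanisms clear demand and supply.

```python
# Student-proposing deferred acceptance on a toy market.
def deferred_acceptance(student_prefs, school_prefs, capacities):
    """student_prefs/school_prefs: dicts of ranked lists; capacities: dict."""
    free = list(student_prefs)                     # students not currently held
    next_choice = {s: 0 for s in student_prefs}    # next school to propose to
    held = {c: [] for c in school_prefs}           # tentatively admitted students
    while free:
        s = free.pop(0)
        if next_choice[s] >= len(student_prefs[s]):
            continue                               # student exhausted their list
        c = student_prefs[s][next_choice[s]]
        next_choice[s] += 1
        held[c].append(s)
        held[c].sort(key=school_prefs[c].index)    # keep highest-ranked applicants
        rejected = held[c][capacities[c]:]
        held[c] = held[c][:capacities[c]]
        free.extend(rejected)
    return held

students = {"s1": ["A", "B"], "s2": ["A", "B"], "s3": ["B", "A"]}
schools = {"A": ["s2", "s1", "s3"], "B": ["s1", "s3", "s2"]}
print(deferred_acceptance(students, schools, {"A": 1, "B": 2}))
```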

Article

Improving on Simple Majority Voting by Alternative Voting Mechanisms  

Jacob K. Goeree, Philippos Louis, and Jingjing Zhang

Majority voting is the predominant mechanism for collective decision making. It is used in a broad range of applications, spanning from national referenda to small group decision making. It is simple, transparent, and induces voters to vote sincerely. However, it is increasingly recognized that it has some weaknesses. First of all, majority voting may lead to inefficient outcomes. This happens because it does not allow voters to express the intensity of their preferences. As a result, an indifferent majority may win over an intense minority. In addition, majority voting suffers from the “tyranny of the majority,” i.e., the risk of repeatedly excluding minority groups from representation. A final drawback is the “winner-take-all” nature of majority voting, i.e., it offers no compensation for losing voters. Economists have recently proposed various alternative mechanisms that aim to produce more efficient and more equitable outcomes. These can be classified into three different approaches. With storable votes, voters allocate a budget of votes across several issues. Under vote trading, voters can exchange votes for money. Under linear voting or quadratic voting, voters can buy votes at a linear or quadratic cost respectively. The properties of different alternative mechanisms can be characterized using theoretical modeling and game theoretic analysis. Lab experiments are used to test theoretical predictions and evaluate their fitness for actual use in applications. Overall, these alternative mechanisms hold the promise to improve on majority voting but have their own shortcomings. Additional theoretical analysis and empirical testing is needed to produce a mechanism that robustly delivers efficient and equitable outcomes.
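
A toy illustration, with made-up numbers, of how the cost schedules differ under linear and quadratic voting:

```python
# Under quadratic voting, buying v votes costs v**2, so expressing intense
# preferences is possible but increasingly expensive; under linear voting the
# marginal cost of an extra vote is constant.
def vote_cost(votes, scheme="quadratic"):
    return votes ** 2 if scheme == "quadratic" else votes

for v in (1, 2, 5, 10):
    print(v, "votes cost", vote_cost(v, "linear"), "under linear voting and",
          vote_cost(v, "quadratic"), "under quadratic voting")
```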

Article

Incentives and Performance of Healthcare Professionals  

Martin Chalkley

Economists have long regarded healthcare as a unique and challenging area of economic activity on account of the specialized knowledge of healthcare professionals (HCPs) and the relatively weak market mechanisms that operate. This places a consideration of how motivation and incentives might influence performance at the center of research. As in other domains, economists have tended to focus on financial mechanisms and, when considering HCPs, have therefore examined how existing payment systems and potential alternatives might affect behavior. There has long been a concern that simple arrangements such as fee-for-service, capitation, and salary payments might induce poor performance, and that has led to extensive investigation, both theoretical and empirical, of the linkage between payment and performance. An extensive and rapidly expanding field in economics, contract theory and mechanism design, has been applied to study these issues. The theory has highlighted both the potential benefits and the risks of incentive schemes to deal with the information asymmetries that abound in healthcare. There has been some expansion of such schemes in practice, but these are often limited in application and the evidence for their effectiveness is mixed. Understanding why there is this relatively large gap between concept and application gives a guide to where future research can most productively be focused.

Article

An Introduction to Bootstrap Theory in Time Series Econometrics  

Giuseppe Cavaliere, Heino Bohn Nielsen, and Anders Rahbek

While often simple to implement in practice, application of the bootstrap in econometric modeling of economic and financial time series requires establishing validity of the bootstrap. Establishing bootstrap asymptotic validity relies on verifying often nonstandard regularity conditions. In particular, bootstrap versions of classic convergence in probability and distribution, and hence of laws of large numbers and central limit theorems, are critical ingredients. Crucially, these depend on the type of bootstrap applied (e.g., wild or independently and identically distributed (i.i.d.) bootstrap) and on the underlying econometric model and data. Regularity conditions and their implications for possible improvements in terms of (empirical) size and power for bootstrap-based testing differ from standard asymptotic testing, which can be illustrated by simulations.
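
A schematic sketch, under an assumed regression setup, of where the i.i.d. and wild bootstrap schemes mentioned above differ, namely in how bootstrap errors are generated from the residuals:

```python
# i.i.d. versus wild (Rademacher) residual bootstrap for a regression slope.
import numpy as np

rng = np.random.default_rng(6)
n = 200
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(size=n) * (1 + 0.5 * np.abs(x))  # heteroskedastic

X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

B, slopes_iid, slopes_wild = 999, [], []
for _ in range(B):
    e_iid = rng.choice(resid, size=n, replace=True)       # resample residuals
    e_wild = resid * rng.choice([-1.0, 1.0], size=n)      # flip residual signs
    for errors, store in ((e_iid, slopes_iid), (e_wild, slopes_wild)):
        y_star = X @ beta + errors
        store.append(np.linalg.lstsq(X, y_star, rcond=None)[0][1])

print("bootstrap standard errors (iid, wild):",
      round(np.std(slopes_iid), 4), round(np.std(slopes_wild), 4))
```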

Article

Limited Dependent Variables and Discrete Choice Modelling  

Badi H. Baltagi

Limited dependent variable models are regression models in which the dependent variable takes limited values, such as zero and one for binary choice models, or a small number of alternatives in multinomial models, like modes of transportation, for example, bus, train, or car. Binary choice examples in economics include a woman’s decision to participate in the labor force, or a worker’s decision to join a union. Other examples include whether a consumer defaults on a loan or a credit card, or whether they purchase a house or a car. This qualitative variable is recoded as one if the female participates in the labor force (or the consumer defaults on a loan) and zero if she does not participate (or the consumer does not default on the loan). Least squares using a binary choice model is inferior to logit or probit regressions. When the dependent variable is a fraction or proportion, inverse logit regressions are appropriate, as well as fractional logit quasi-maximum likelihood. An example of the inverse logit regression is the effect of a beer tax on reducing motor vehicle fatality rates from drunken driving. The fractional logit quasi-maximum likelihood is illustrated using an equation explaining the proportion of participants in a pension plan using firm data. The probit regression is illustrated with an empirical fertility example, showing that parental preferences for a mixed sibling-sex composition in developed countries have a significant and positive effect on the probability of having an additional child. Multinomial choice models, where the number of choices is more than two, like bond ratings in finance, may have a natural ordering. Another example is the response to an opinion survey, which could vary from strongly agree to strongly disagree. Alternatively, the choices may not have a natural ordering, like the choice of occupation or mode of transportation. The censored regression model is motivated by estimating expenditures on cars or the amount of mortgage lending. In this case, the observations are censored because we observe the expenditure on a car (or the mortgage amount) only if the car is bought or the mortgage approved. In studying poverty, we exclude the rich from our sample. In this case, the sample is not random. Applying least squares to the truncated sample leads to biased and inconsistent results. This differs from censoring. In the latter case, no data are excluded. In fact, we observe the characteristics of all mortgage applicants, even those that do not actually get their mortgage approved. Selection bias occurs when the sample is not randomly drawn. This is illustrated with a labor participation equation (the selection equation) and an earnings equation, where earnings are observed only if the worker participates in the labor force, and are zero otherwise. Extensions to panel data limited dependent variable models are also discussed and empirical examples given.
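
A minimal sketch on simulated data (not one of the article’s applications) of binary choice estimated by logit and probit, assuming the statsmodels library is available:

```python
# Binary choice (0/1 outcome) estimated by logit and probit on simulated data,
# the standard alternatives to least squares for limited dependent variables.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 1000
educ = rng.normal(12, 2, size=n)                     # hypothetical covariate
latent = -6 + 0.5 * educ + rng.logistic(size=n)
participate = (latent > 0).astype(int)               # observed 0/1 outcome

X = sm.add_constant(educ)
logit_fit = sm.Logit(participate, X).fit(disp=0)
probit_fit = sm.Probit(participate, X).fit(disp=0)
print("logit:", logit_fit.params.round(3), " probit:", probit_fit.params.round(3))
```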

Article

Long Memory Models  

Peter Robinson

Long memory models are statistical models that describe strong correlation or dependence across time series data. This kind of phenomenon is often referred to as “long memory” or “long-range dependence.” It refers to persisting correlation between distant observations in a time series. For scalar time series observed at equal intervals of time that are covariance stationary, so that the mean, variance, and autocovariances (between observations separated by a lag j) do not vary over time, it typically implies that the autocovariances decay so slowly, as j increases, as not to be absolutely summable. However, it can also refer to certain nonstationary time series, including ones with an autoregressive unit root, that exhibit even stronger correlation at long lags. Evidence of long memory has often been found in economic and financial time series, where the noted extension to possible nonstationarity can cover many macroeconomic time series, as well as in such fields as astronomy, agriculture, geophysics, and chemistry. As long memory is now a technically well-developed topic, formal definitions are needed. But by way of partial motivation, long memory models can be thought of as complementary to the very well known and widely applied stationary and invertible autoregressive and moving average (ARMA) models, whose autocovariances are not only summable but decay exponentially fast as a function of lag j. Such models are often referred to as “short memory” models, because there is negligible correlation across distant time intervals. These models are often combined with the most basic long memory ones, however, because together they offer the ability to describe both short and long memory features in many time series.
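
A simulated sketch, with assumed parameter values, contrasting the exponential autocorrelation decay of a short-memory AR(1) process with the much slower decay of a fractionally integrated series:

```python
# Compare autocorrelation decay: AR(1) (short memory) versus a fractionally
# integrated series with d = 0.4 (long memory), both driven by the same noise.
import numpy as np

rng = np.random.default_rng(8)
T = 5000
eps = rng.normal(size=T)

# short memory: AR(1) with coefficient 0.5
ar = np.zeros(T)
for t in range(1, T):
    ar[t] = 0.5 * ar[t - 1] + eps[t]

# long memory: truncated MA(inf) form of (1 - L)^(-d) eps with d = 0.4
d, K = 0.4, 200
psi = np.ones(K)
for k in range(1, K):
    psi[k] = psi[k - 1] * (k - 1 + d) / k
lm = np.convolve(eps, psi)[:T]

def acf(x, lag):
    return np.corrcoef(x[lag:], x[:-lag])[0, 1]

for lag in (1, 10, 50):
    print("lag", lag, "AR(1):", round(acf(ar, lag), 3), "long memory:", round(acf(lm, lag), 3))
```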