1-12 of 12 Results

  • Keywords: time

Article

Stochastic Volatility in Bayesian Vector Autoregressions  

Todd E. Clark and Elmar Mertens

Vector autoregressions with stochastic volatility (SV) are widely used in macroeconomic forecasting and structural inference. The SV component of the model conveniently allows for time variation in the variance-covariance matrix of the model’s forecast errors. In turn, that feature of the model generates time variation in predictive densities. The models are most commonly estimated with Bayesian methods, typically Markov chain Monte Carlo methods such as Gibbs sampling. Equation-by-equation methods developed since 2018 enable the estimation of models with large variable sets at much lower computational cost than the standard approach of estimating the model as a system of equations. The Bayesian framework also facilitates the accommodation of mixed frequency data, non-Gaussian error distributions, and nonparametric specifications. With advances made in the 21st century, researchers are also addressing some of the framework’s outstanding challenges, particularly the dependence of estimates on the ordering of variables in the model and reliable estimation of the marginal likelihood, which is the fundamental measure of model fit in Bayesian methods.
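
A minimal sketch, in LaTeX, of one common VAR-SV specification of the kind described here; the notation (B, A, \Lambda_t) is assumed for illustration and is not taken from the article:

    y_t = B x_t + v_t, \qquad v_t \sim N(0, \Sigma_t), \qquad \Sigma_t = A^{-1} \Lambda_t (A^{-1})',
    \log \lambda_{i,t} = \log \lambda_{i,t-1} + \nu_{i,t}, \qquad \nu_{i,t} \sim N(0, \phi_i),

where x_t stacks an intercept and lags of y_t, A is lower triangular with a unit diagonal, and \Lambda_t = \mathrm{diag}(\lambda_{1,t}, \ldots, \lambda_{n,t}). The random-walk drift in the log volatilities is what makes \Sigma_t, and hence the predictive density, time varying.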

Article

Score-Driven Models: Methods and Applications  

Mariia Artemova, Francisco Blasques, Janneke van Brummelen, and Siem Jan Koopman

The flexibility, generality, and feasibility of score-driven models have contributed much to their impact in both research and policy. Score-driven models provide a unified framework for modeling the time-varying features in parametric models for time series. The predictive likelihood function is used as the driving mechanism for updating the time-varying parameters. It leads to a flexible, general, and intuitive way of modeling the dynamic features in the time series while the estimation and inference remain relatively simple. These properties remain valid when models rely on non-Gaussian densities and nonlinear dynamic structures. The class of score-driven models has become even more appealing as developments in theory and methodology have progressed rapidly. Furthermore, new formulations of empirical dynamic models in this class have shown their relevance in economics and finance. In the context of macroeconomic studies, the key examples are nonlinear autoregressive, dynamic factor, dynamic spatial, and Markov-switching models. In the context of finance studies, the major examples are models for integer-valued time series, multivariate scale models, and dynamic copula models. In finance applications, score-driven models are especially important because they provide particular updating mechanisms for time-varying parameters that limit the effect of influential observations and outliers that are often present in financial time series.
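
For readers new to the class, a minimal sketch of the score-driven updating recursion in LaTeX; the symbols (f_t, \omega, \alpha, \beta, S_t) are illustrative notation, not the article's:

    f_{t+1} = \omega + \beta f_t + \alpha s_t, \qquad s_t = S_t \nabla_t, \qquad \nabla_t = \frac{\partial \log p(y_t \mid f_t)}{\partial f_t},

where p(y_t \mid f_t) is the predictive density, f_t the time-varying parameter, and S_t a scaling matrix, often based on the inverse Fisher information. The appearance of the score \nabla_t is what makes the predictive likelihood the driving mechanism of the update.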

Article

Health Status Measurement  

John Mullahy

Health status measurement issues arise across a wide spectrum of applications in empirical health economics research as well as in public policy, clinical, and regulatory contexts. It is fitting that economists and other researchers working in these domains devote scientific attention to the measurement of those phenomena most central to their investigations. While often accepted and used uncritically, the particular measures of health status used in empirical investigations can have sometimes subtle but nonetheless important implications for research findings and policy action. How health is characterized and measured at the individual level and how such individual-level measures are summarized to characterize the health of groups and populations are entwined considerations. Such measurement issues have become increasingly salient given the wealth of health data available from population surveys, administrative sources, and clinical records in which researchers may be confronted with competing options for how they go about characterizing and measuring health. While recent work in health economics has seen significant advances in the econometric methods used to estimate and interpret quantities like treatment effects, the literature has given less attention to some of the central measurement issues necessarily involved in such exercises. As such, increased attention ought to be devoted to measuring and understanding health status concepts that are relevant to decision makers’ objectives as opposed to those that are merely statistically convenient.

Article

Age-Period-Cohort Models  

Zoë Fannon and Bent Nielsen

Outcomes of interest often depend on the age, period, or cohort of the individual observed, where cohort and age add up to period. An example is consumption: consumption patterns change over the lifecycle (age) but are also affected by the availability of products at different times (period) and by birth-cohort-specific habits and preferences (cohort). Age-period-cohort (APC) models are additive models where the predictor is a sum of three time effects, which are functions of age, period, and cohort, respectively. Variations of these models are available for data aggregated over age, period, and cohort, and for data drawn from repeated cross-sections, where the time effects can be combined with individual covariates. The age, period, and cohort time effects are intertwined. Inclusion of an indicator variable for each level of age, period, and cohort results in perfect collinearity, which is referred to as “the age-period-cohort identification problem.” Estimation can be done by dropping some indicator variables. However, dropping indicators has adverse consequences: the time effects are no longer individually interpretable, and inference becomes complicated. These consequences are avoided by instead decomposing the time effects into linear and non-linear components and noting that the identification problem relates to the linear components, whereas the non-linear components are identifiable. Thus, confusion is avoided by keeping the identifiable non-linear components of the time effects and the unidentifiable linear components apart. A variety of hypotheses of practical interest can be expressed in terms of the non-linear components.
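
A short LaTeX illustration of the identification problem, under assumed notation: for an observation of age a in period p, with cohort c = p - a, the APC predictor is

    \mu_{a,p} = \alpha_a + \beta_p + \gamma_c ,

and because a - p + c = 0, adding linear trends leaves it unchanged: for any constant t,

    (\alpha_a + t a) + (\beta_p - t p) + (\gamma_c + t c) = \alpha_a + \beta_p + \gamma_c .

Hence the linear components are unidentified, while non-linear components such as the second differences \Delta^2 \alpha_a = \alpha_a - 2 \alpha_{a-1} + \alpha_{a-2} are identified.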

Article

Score-Driven Models: Methodology and Theory  

Mariia Artemova, Francisco Blasques, Janneke van Brummelen, and Siem Jan Koopman

Score-driven models belong to a wider class of observation-driven time series models that are used intensively in empirical studies in economics and finance. A defining feature of the score-driven model is its mechanism of updating time-varying parameters by means of the score function of the predictive likelihood function. The class of score-driven models contains many other well-known observation-driven models as special cases, and many new models have been developed based on the score-driven principle. Score-driven models provide a general way of parameter updating, or filtering, in which all relevant features of the observation density function are considered. In models with fat-tailed observation densities, the score-driven updates are robust to large observations in time series. This kind of robustness is a convenient feature of score-driven models and makes them suitable for applications in finance and economics, where noisy data sets are regularly encountered. Parameter estimation for score-driven models is straightforward when the method of maximum likelihood is used. In many cases, theoretical results are available under rather general conditions.
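
As a sketch of the robustness property under fat tails, consider a score-driven volatility model with a Student's t observation density; the notation is assumed here for illustration. With y_t = f_t^{1/2} \varepsilon_t and \varepsilon_t drawn from a t distribution with \nu degrees of freedom, one convenient scaling of the score gives the update

    f_{t+1} = \omega + \beta f_t + \alpha \bigl( (1 + \nu^{-1}) w_t y_t^2 - f_t \bigr), \qquad w_t = \frac{1}{1 + y_t^2 / (\nu f_t)} ,

where the weight w_t shrinks toward zero for outlying observations, bounding their effect on the filtered volatility; as \nu \to \infty the innovation approaches the GARCH-type response y_t^2 - f_t.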

Article

The Biological Foundations of Economic Preferences  

Nikolaus Robalino and Arthur Robson

Modern economic theory rests on the basic assumption that agents’ choices are guided by preferences. The question of where such preferences might have come from has traditionally been ignored or viewed agnostically. The biological approach to economic behavior addresses the issue of the origins of economic preferences explicitly. This approach assumes that economic preferences are shaped by the forces of natural selection. For example, an important theoretical insight delivered thus far by this approach is that individuals ought to be more risk averse to aggregate than to idiosyncratic risk. Additionally, the approach has delivered an evolutionary basis for hedonic and adaptive utility and an evolutionary rationale for “theory of mind.” Related empirical work has studied the evolution of time preferences and loss aversion, and has explored the deep evolutionary determinants of long-run economic development.

Article

Time Preferences for Health  

Marjon van der Pol and Alastair Irvine

The interest in eliciting time preferences for health has increased rapidly since the early 1990s. It has two main sources: a concern over the appropriate methods for taking timing into account in economic evaluations, and a desire to obtain a better understanding of individual health and healthcare behaviors. The literature on empirical time preferences for health has developed innovative elicitation methods in response to specific challenges that are due to the special nature of health. The health domain has also shown a willingness to explore a wider range of underlying models compared to the monetary domain. Consideration of time preferences for health raises a number of questions. Are time preferences for health similar to those for money? What are the additional challenges when measuring time preferences for health? How do individuals in time-preference-for-health experiments make decisions? Is it possible or necessary to incentivize time-preference-for-health experiments?
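
Two discount functions illustrate the wider range of underlying models mentioned above; the parameterization is a standard one assumed here for illustration. A health outcome received after delay t is weighted by

    d(t) = \delta^{t} \quad \text{(exponential, constant-rate)}, \qquad d(t) = (1 + \alpha t)^{-\beta/\alpha} \quad \text{(generalized hyperbolic)},

where the hyperbolic form implies discount rates that decline with delay, a pattern frequently examined in health-related elicitation studies.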

Article

Measuring Health Utility in Economics  

José Luis Pinto-Prades, Arthur Attema, and Fernando Ignacio Sánchez-Martínez

Quality-adjusted life years (QALYs) are one of the main health outcome measures used to make health policy decisions. It is assumed that the objective of policymakers is to maximize QALYs. Since the QALY weights life years according to their health-related quality of life, it is necessary to calculate those weights (also called utilities) in order to estimate the number of QALYs produced by a medical treatment. The methodology most commonly used to estimate utilities is to present standard gamble (SG) or time trade-off (TTO) questions to a representative sample of the general population. It is assumed that, in this way, utilities reflect public preferences. Two different assumptions should hold for utilities to be a valid representation of public preferences. One is that the standard (linear) QALY model has to be a good model of how subjects value health. The second is that subjects should have consistent preferences over health states. The evidence indicates that most of the main assumptions of the popular linear QALY model do not hold. A modification of the linear model can be a tractable improvement. This suggests that utilities elicited under the assumption that the linear QALY model holds may be biased. In addition, the second assumption, namely that subjects have consistent preferences that are estimated by asking SG or TTO questions, does not seem to hold. Subjects are sensitive to features of the elicitation process (like the order of questions or the type of task) that should not matter in order to estimate utilities. The evidence suggests that the TTO and SG questions researchers put to members of the general population produce response patterns inconsistent with the assumption that subjects hold well-defined preferences over health states. Two approaches can deal with this problem. One is based on the assumption that subjects have true but biased preferences. True preferences can be recovered from biased ones. This approach is valid as long as the theory used to debias is correct. The second approach is based on the idea that preferences are imprecise. In practice, national bodies use utilities elicited using TTO or SG under the assumptions that the linear QALY model is a good enough representation of public preferences and that subjects’ responses to preference elicitation methods are coherent.
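
For concreteness, the standard linear QALY model referred to above can be written, in assumed notation, as valuing a health state with quality weight q \in [0, 1] experienced for t life years at

    U(q, t) = q \cdot t ,

so that value is linear in duration. One example of a tractable modification replaces duration with a power transform, U(q, t) = q \cdot t^{r} with r > 0, which relaxes linearity while preserving the separation of quality and duration.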

Article

Time Consistent Policies and Quasi-Hyperbolic Discounting  

Łukasz Balbus, Kevin Reffett, and Łukasz Woźny

In dynamic choice models, dynamic inconsistency of preferences is a situation in which a decision-maker’s preferences change over time. Optimal plans under such preferences are time inconsistent if a decision-maker has no incentive to follow, in the future, the previously chosen optimal plan. A typical example of dynamic inconsistency is the case of present-bias preferences, where there is a repeated preference toward smaller present rewards versus larger future rewards. The study of dynamic choice of decision-makers who possess dynamically inconsistent preferences has long been the focal point of much work in behavioral economics. Experimental and empirical literatures both point to the importance of various forms of present bias. The canonical model of dynamically inconsistent preferences exhibiting present bias is a model of quasi-hyperbolic discounting. A quasi-hyperbolic discounting model is a dynamic choice model in which standard exponential discounting is modified by adding an impatience parameter that applies an extra discount between the current period and the immediately succeeding one. A central problem with the analytical study of decision-makers who possess dynamically inconsistent preferences is how to model their choices in sequential decision problems. One general answer to this problem is to characterize and compute (if they exist) constrained optimal plans that are optimal among the set of time consistent sequential plans. Time consistent plans, or policies (TCPs), are those among the set of feasible plans that will actually be followed, and not reoptimized, by agents whose preferences change over time. Many results on the existence, uniqueness, and characterization of stationary, or time-invariant, TCPs in a class of consumption-savings problems with quasi-hyperbolic discounting are presented, together with some discussion of how to compute TCPs in extensions of the model, and the role of the generalized Bellman equation operator approach is central. This approach provides sufficient conditions for the existence of time consistent solutions and facilitates their computation. Importantly, the generalized Bellman approach can also be related to a common first-order approach in the literature known as the generalized Euler equation approach. By constructing conditions on the primitives of the model that guarantee continuously differentiable TCPs, sufficient conditions under which a generalized Euler equation approach is valid can be provided. There are other important facets of TCPs, including sufficient conditions for the existence of monotone comparative statics in interesting parameters of the decision environment, as well as generalizations of the generalized Bellman approach that allow for unbounded returns and general certainty equivalents. In addition, the case of a multidimensional state space is considered, as is a general self-generation method for characterizing nonstationary TCPs.
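
A minimal LaTeX statement of the quasi-hyperbolic (beta-delta) model described above, in standard notation assumed here: at date t the decision-maker evaluates a consumption stream as

    U_t = u(c_t) + \beta \sum_{s=1}^{\infty} \delta^{s} u(c_{t+s}), \qquad 0 < \beta \le 1, \quad 0 < \delta < 1 .

With \beta < 1, the extra discount applies between the current period and the next, so selves at different dates rank the same plan differently, which is precisely the source of time inconsistency; \beta = 1 recovers standard exponential discounting.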

Article

Nonlinear Models in Macroeconometrics  

Timo Teräsvirta

Many nonlinear time series models have been around for a long time and have originated outside of time series econometrics. The popular stochastic models, univariate, dynamic single-equation, and vector autoregressive, are presented and their properties considered. Deterministic nonlinear models are not reviewed. The use of nonlinear vector autoregressive models in macroeconometrics seems to be increasing, and because this may be viewed as a rather recent development, they receive somewhat more attention than their univariate counterparts. Vector threshold autoregressive, smooth transition autoregressive, Markov-switching, and random coefficient autoregressive models are covered, along with nonlinear generalizations of vector autoregressive models with cointegrated variables. Two nonlinear panel models are also included; although they cannot be argued to be typical macroeconometric models, they have frequently been applied to macroeconomic data. The use of all these models in macroeconomics is highlighted with applications in which model selection, an often difficult issue in nonlinear models, has received due attention. Given the large number of nonlinear time series models, no unique best method of choosing between them seems to be available.
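
As an illustration of one model class listed above, a two-regime logistic smooth transition autoregression can be sketched in LaTeX, with assumed notation, as

    y_t = \phi_1' x_t \bigl(1 - G(s_t)\bigr) + \phi_2' x_t G(s_t) + \varepsilon_t, \qquad G(s_t) = \bigl(1 + e^{-\gamma (s_t - c)}\bigr)^{-1},

where x_t = (1, y_{t-1}, \ldots, y_{t-p})' and s_t is the transition variable; as \gamma \to \infty, G approaches a step function and the model tends to a threshold autoregression.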

Article

Corporate Credit Derivatives  

George Batta and Fan Yu

Corporate credit derivatives are over-the-counter (OTC) contracts whose payoffs are determined by a single corporate credit event or a portfolio of such events. Credit derivatives became popular in the late 1990s and early 2000s as a way for financial institutions to reduce their regulatory capital requirement, and early research treated them as redundant securities whose pricing is tied to the underlying corporate bonds and equities, with liquidity and counterparty risk factors playing supplementary roles. Research in the 2010s and beyond, however, increasingly focused on the effects of market frictions on the pricing of credit default swaps (CDSs), how CDS trading has impacted corporate behaviors and outcomes as well as the price efficiency and liquidity of other related markets, and the microstructure of the CDS market itself. This was made possible by the availability of market statistics and more granular trade and quote data as a result of the broad movement of the OTC derivatives market toward central clearing.
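
A standard back-of-the-envelope relation, sometimes called the credit triangle, conveys the sense in which CDS pricing is tied to default risk; it is a first-order approximation, not a substitute for the models in this literature. With a constant risk-neutral default intensity \lambda and recovery rate R, equating expected premium and protection payments gives the approximate fair CDS premium

    s \approx (1 - R) \lambda .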

Article

Sparse Grids for Dynamic Economic Models  

Johannes Brumm, Christopher Krause, Andreas Schaab, and Simon Scheidegger

Solving dynamic economic models that capture salient real-world heterogeneity and nonlinearity requires the approximation of high-dimensional functions. As their dimensionality increases, compute time and storage requirements grow exponentially. Sparse grids alleviate this curse of dimensionality by substantially reducing the number of interpolation nodes, that is, grid points needed to achieve a desired level of accuracy. The construction principle of sparse grids is to extend univariate interpolation formulae to the multivariate case by choosing linear combinations of tensor products in a way that reduces the number of grid points by orders of magnitude relative to a full tensor-product grid, without substantially increasing interpolation errors. The most popular versions of sparse grids used in economics are (dimension-adaptive) Smolyak sparse grids that use global polynomial basis functions, and (spatially adaptive) sparse grids with local basis functions. The former can economize on the number of interpolation nodes for sufficiently smooth functions, while the latter can also handle non-smooth functions with locally distinct behavior such as kinks. In economics, sparse grids are particularly useful for interpolating the policy and value functions of dynamic models with state spaces between two and several dozen dimensions, depending on the application. In discrete-time models, sparse grid interpolation can be embedded in standard time iteration or value function iteration algorithms. In continuous-time models, sparse grids can be embedded in finite-difference methods for solving partial differential equations like Hamilton-Jacobi-Bellman equations. In both cases, dimension adaptivity, as well as spatial adaptivity, can add a second layer of sparsity to the fundamental sparse-grid construction. Beyond these salient use-cases in economics, sparse grids can also accelerate other computational tasks that arise in high-dimensional settings, including regression, classification, density estimation, quadrature, and uncertainty quantification.
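
As a concrete illustration of the construction principle described above, the following self-contained Python sketch (our own illustration, not code from the article) enumerates the points of a Smolyak sparse grid built from nested Clenshaw-Curtis nodes and compares its size with a full tensor-product grid:

    import math
    from itertools import product

    def cc_nodes(level):
        """Nested Clenshaw-Curtis nodes on [-1, 1]: one node at level 1,
        2**(level - 1) + 1 nodes at higher levels."""
        if level == 1:
            return [0.0]
        m = 2 ** (level - 1) + 1
        return [math.cos(math.pi * j / (m - 1)) for j in range(m)]

    def smolyak_grid(d, mu):
        """Points of the level-mu Smolyak grid in d dimensions: the union of
        tensor grids over all multi-indices i (each i_k >= 1) with sum(i) <= d + mu."""
        points = set()
        for levels in product(range(1, mu + 2), repeat=d):
            if sum(levels) <= d + mu:
                for pt in product(*(cc_nodes(l) for l in levels)):
                    # rounding merges nodes shared across nested levels
                    points.add(tuple(round(x, 12) for x in pt))
        return points

    # In d = 5 dimensions at level mu = 2, the sparse grid has 61 points,
    # versus 5**5 = 3,125 for a full tensor grid with five nodes per dimension.
    print(len(smolyak_grid(5, 2)))

The gap between the two counts widens rapidly with dimension, which is the curse-of-dimensionality relief the abstract describes.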