Bootstrapping in Macroeconometrics

  • Helmut Herwartz, Economic Science, Georg-August-Universität Göttingen
  • Alexander Lange, Economic Science, Georg-August-Universität Göttingen

Summary

Unlike traditional first order asymptotic approximations, the bootstrap is a simulation method that addresses inferential issues in statistics and econometrics conditional on the available sample information (e.g., constructing confidence intervals or generating critical values for test statistics). Even though econometric theory by now provides sophisticated central limit theory covering various data characteristics, bootstrap approaches are of particular appeal if establishing asymptotic pivotalness of (econometric) diagnostics is infeasible or requires rather complex assessments of estimation uncertainty. Moreover, empirical macroeconomic analysis is typically constrained by short- to medium-sized time windows of sample information, and convergence of macroeconometric model estimates toward their asymptotic limits is often slow. Consistent bootstrap schemes have the potential to improve empirical significance levels in macroeconometric analysis and, moreover, can avoid explicit assessments of estimation uncertainty. In addition, as time-varying (co)variance structures and unmodeled serial correlation patterns are frequently diagnosed in macroeconometric analysis, more advanced bootstrap techniques (e.g., the wild bootstrap and the moving-block bootstrap) have been developed to account for nonpivotalness as a result of such data characteristics.

The Bootstrap as a Conditional Sampling Scheme in Multiple Dynamic Models

Regression models with time indexation of observations and/or explicit time series models are common frameworks for subjecting theoretical macroeconomic models to econometric analysis. In particular, vector autoregressive (VAR) models have become widely applied workhorse models in macroeconometrics to analyze the interaction between multiple jointly endogenous variables in small- to medium-sized systems. Owing to the dynamic and complex nature of data generation on the one hand and stylized features of macroeconomic data on the other, macroeconometric model estimation and inference often has to rely on complicated laws of large numbers or central limit theory (White, 2001). As empirical analysis is typically conditional on short time windows of sample information, parameter estimates might be subject to sizeable finite sample biases. Moreover, inferential analysis might suffer from complicated analytical assessments of estimation uncertainty or slow convergence of diagnostic statistics to their asymptotic distributions. With the emergence and availability of powerful computational facilities, so-called bootstrap techniques have shown their potential for purposes of bias reduction, for an improved matching of empirical and nominal levels in hypothesis testing and interval estimation, as well as for coping with nonpivotalness of test statistics in a feasible manner.

Originating with Efron (1979), the bootstrap approach consists of using an estimated model for repetitive sampling. If applied properly, the resampling scheme allows approximating the distribution of a given statistical object of interest (e.g., a parameter estimate, a test statistic, or an impulse response estimate) conditional on the sample information. Consistent bootstrap schemes establish an analogy between two distributions: (1) the unknown distribution of the difference between the statistic of interest and its population counterpart; and (2) the feasible distribution of the difference between the statistic of interest and its bootstrap counterparts. It is important to notice, however, that this analogy is of an asymptotic nature. Hence, the application of bootstrap schemes deserves rigorous underpinning in terms of statistical theory on the one hand. On the other hand, the performance of bootstrap-based bias reduction or inference is not exact in finite samples. For a multitude of macroeconometric models or tests, however, resampling schemes have been shown to outperform unadjusted ordinary least squares (OLS) or maximum likelihood (ML) parameter estimates, or inference based on first order asymptotic approximations of the distribution of a test statistic under scrutiny.

One purpose of this article is to provide a condensed review of bootstrap methods that have been suggested in the context of (multiple) time series models as empirical counterparts of macroeconomic theory. A further objective is to sketch stylized bootstrap approaches and selected applications in macroeconometrics to provide the reader with some guidance for choosing and implementing suitable bootstrap schemes in a data-based manner. Formal derivations of bootstrap consistency and bootstrap central limit theory are typically specific to particular statistics of interest and conditional on distributional assumptions. As such, the statistical foundations of bootstrap techniques in macroeconometrics are beyond the scope of this article and are left for further reading (e.g., Hall, 1992; Horowitz, 2001). Throughout, the procedures and models described in this article aim at a better understanding of conditional mean dynamics. Hence, resampling techniques that are applied in financial econometrics for purposes of volatility modeling are not of central interest in the present context, and the reader may consult the appropriate literature (e.g., Ruiz & Pascual, 2002). Apart from covering the most frequently used bootstrap methods in macroeconometrics for common purposes like diagnosing “Granger Noncausality,” “Unit Roots,” “Nonlinearity,” and “Cointegration,” this article contains a brief sketch of more complicated and specialized cases of bootstrap-based inference (“Panel Unit Roots” and structural “Impulse Response Functions”). Complementing the stylized bootstrap variants considered (“Independent Identically Distributed (iid) Resampling,” “Wild Bootstrap,” “Moving-Block Resampling”), the article also provides some guidance for further reading on more specialized sampling designs (“Pairwise Bootstrap,” “Sieve Bootstrap,” and “Factor-Based Bootstrap”).

Bootstrap Methods

When introducing the design of bootstrap schemes for macroeconometric analysis, it is instructive to illustrate their implementation for stylized VARs generated from homoskedastic model residuals. Apart from their analytical convenience, VARs have several theoretical merits which explain their widespread use in classical and bootstrap-based macroeconometric analysis. Firstly, with a suitable choice of their dynamic order, VARs can approximate the richer class of (stationary) vector autoregressive moving average models. Secondly, model augmentation with exogenous information is straightforward. Thirdly, factor-augmented VARs (FAVARs) (Bernanke, Boivin, & Eliasz, 2005) can embed rich sample information without violating principles of model parsimony. Fourthly, the VAR model allows a reformulation in vector error correction model (VECM) form, which provides a dynamic representation for trending time series subject to stationary equilibrium relationships with macroeconomic interpretation. Finally, augmented with assumptions on structural (i.e., contemporaneous) relations among the considered variables, VARs allow for a generalization toward structural VARs (SVARs). For more details on VARs and SVARs, the reader may consult Lütkepohl (2005) and Kilian and Lütkepohl (2017), respectively. After considering stylized homoskedastic VARs, more complicated and realistic data structures (heteroskedasticity or serial correlation) will be addressed in the context of corresponding robust bootstrap schemes, as well as bootstrap analysis in VECMs and SVARs to discuss specific modeling purposes (tests for cointegration, structural modeling).

Let $y_t=(y_{1t},\ldots,y_{Kt})'$ denote a $(K\times 1)$-dimensional vector comprising jointly endogenous time series observations at time $t$. The VAR model of order $p$ (VAR($p$)) reads as

$$y_t = v + A_1 y_{t-1} + \cdots + A_p y_{t-p} + u_t, \qquad t = 0, \pm 1, \pm 2, \pm 3, \ldots, \tag{1}$$

with $(K\times K)$ coefficient matrices $A_i$, $i=1,\ldots,p$, and $v=(v_1,\ldots,v_K)'$ denoting a $(K\times 1)$ vector of intercept terms. Moreover, $u_t=(u_{1t},\ldots,u_{Kt})'$ is a $(K\times 1)$ vector of random innovations, where $E(u_t)=0$, $E(u_tu_t')=\Sigma$, and $E(u_tu_s')=0$ for $s\neq t$. The covariance matrix $\Sigma$ is nonsingular by assumption. The VAR model in equation (1) is assumed to be stable, i.e., $\det(I_K-A_1z-\cdots-A_pz^p)\neq 0$ for $|z|\leq 1$. By assumption, all fourth order moments are bounded such that, for some finite constant $c$, $E[u_{it}u_{jt}u_{kt}u_{lt}]\leq c$ for $1\leq i,j,k,l\leq K$ and all $t$. For notational convenience, the $(K\times(pK+1))$ matrix $A=[v,A_1,\ldots,A_p]$ collects the model parameters.

Conditional on a sequence of time series observations denoted $Y=\{y_t\}_{t=1-p}^{T}$, the model in (1) can be estimated, for instance, by means of OLS. The estimated VAR($p$) model reads as

$$y_t = \hat v + \hat A_1 y_{t-1} + \cdots + \hat A_p y_{t-p} + \hat u_t, \tag{2}$$

where $\hat A=[\hat v,\hat A_1,\ldots,\hat A_p]$ collects the model parameter estimates conditional on the sample. Apart from parameter estimation, the sample information could also be used to extract a test statistic of interest (or an impulse response estimate), which is denoted henceforth as $\hat\theta$. Accordingly, the corresponding population quantity characterizing the model in (1) is $\theta$.

Empirical estimates retrieved from dynamic models, such as $\hat A$ or $\hat\theta$, are subject to estimation uncertainty and finite sample biases. Bootstrap schemes have become popular to control the stochastic properties of such estimators in a computational manner by means of simulation techniques. In the context of time series models, Li and Maddala (1997) have shown that it is more convenient to resample residuals $\hat u_t$ instead of observed data $y_t$. The authors argue that information about the dynamic structure of the model should be exploited in the generation of bootstrap samples, which can hardly be accomplished by resampling the data directly. Assume that the model disturbances $u_t$ exhibit a distribution $F$, $u_t\sim F$. Given that the empirical estimates $\hat u_t$ are often consistent approximations of the true errors $u_t$, the intuition behind bootstrap methods is to approximate the unknown distribution of $u_t$ by the bootstrap counterpart $u_t^*\sim\hat F_T$ and, accordingly, imitate the dynamic model with sufficiently many repetitions as

$$y_t^* = \hat v + \hat A_1 y_{t-1}^* + \cdots + \hat A_p y_{t-p}^* + u_t^*, \qquad t=1,\ldots,T. \tag{3}$$

Realizations from the bootstrap data-generating process (DGP) are marked with the superscript *. Conditional on the data, consistent bootstrap schemes mimic the deviation of test statistics (or parameter estimates) from their population counterparts. Without reference to explicit probability measures, bootstrap consistency might be stated informally as

$$\mathcal{L}(\hat\theta-\theta) \approx \mathcal{L}(\hat\theta^*-\hat\theta \,|\, Y) \quad \text{as } T\to\infty. \tag{4}$$

Depending on the matter of interest (e.g., bias adjustment or hypothesis testing), the "$\approx$" notation in (4) signifies convergence in distribution or probability.1 From the result in (4) it is possible to determine critical values for the distribution of the test statistic $\hat\theta$ conditional on the data ($Y$). It is worth pointing out that for many inferential problems in macroeconometrics powerful central limit theory is available to establish asymptotic pivotalness of the distribution on the left-hand side of (4). Depending on the complexity of data generation and/or the test statistic of interest, this might, however, come at the cost of rather complicated assessments of estimation uncertainty. Hence, bootstrap consistency as stated in (4) is interesting from two perspectives. First, it provides a strong motivation for feasible bootstrap inference applied to intrinsically nonpivotal statistics. Second, in light of equation (4) empirical model building could avoid overly complicated assessments of estimation uncertainty. Weakening the latter merit, however, powerful theoretical results show that bootstrap approximations of asymptotically pivotal distributions are often more accurate than those of their unscaled and nonpivotal counterparts (Horowitz, 2001). In analogy to the approximation in (4), bootstrap schemes allow estimating finite sample biases of parameter estimates as $\widehat{\mathrm{Bias}}(\hat A)=\bar A^*-\hat A$, where $\bar A^*$ denotes parameter estimates averaged over the number of bootstrap iterations. Accordingly, bias corrected parameter estimates read as $\hat A_{\mathrm{adj}}=2\hat A-\bar A^*$.

Having outlined the potential merits and the scope of bootstrap schemes, the next step is to describe their explicit implementation. Due to the quite heterogeneous data properties that characterize macroeconomic time series, it is instructive to first provide a general bootstrap algorithm, which leaves the concrete and often data-dependent determination of $u_t^*$ open. More specific depictions of three stylized sampling designs, namely, “iid Resampling,” the “Wild Bootstrap,” and “Moving-Block” schemes, follow subsequently.

A General Bootstrap Algorithm

Bootstrap samples can be generated according to the following general algorithm (B):

1. Estimate the VAR parameters to obtain $[\hat v,\hat A_1,\ldots,\hat A_p]$ and $\hat\theta$. Calculate the residuals as $\hat u_t = y_t - \hat v - \hat A_1 y_{t-1} - \cdots - \hat A_p y_{t-p}$.

2. Generate bootstrap errors $u_t^*$. To establish consistency of the bootstrap scheme as stated in (4), it is essential that the bootstrap sample $\{u_t^*\}_{t=1}^{T}$ provides a sufficiently close approximation of its population counterpart $\{u_t\}_{t=1}^{T}$ (e.g., in terms of distributional or moment characteristics). As broad categories one might distinguish the following stylized resampling schemes to generate $u_t^*$:

   a. residual-based iid bootstrap,

   b. wild bootstrap,

   c. moving-block bootstrap.

3. Construct the bootstrap sample.

   a. Recursive design: Bootstrap samples are generated recursively as formalized in (3), where initial values for $y_t^*$ are either taken as the first $p$ observations from the data or drawn randomly from blocks of $p$ consecutive observations in $Y$.

   b. Fixed design: Bootstrap samples are determined conditional on the original regressors as $y_t^* = \hat v + \hat A_1 y_{t-1} + \cdots + \hat A_p y_{t-p} + u_t^*$, $t=1,\ldots,T$.

4. Bootstrap estimation.

   a. Recursive design: Bootstrap parameter estimates, denoted as $A^*$, or statistics, $\theta^*$, are determined from the bootstrap sample $\{y_t^*\}_{t=p+1}^{T}$ and the bootstrap regressors $(1,y_{t-1}^{*\prime},\ldots,y_{t-p}^{*\prime})'$.

   b. Fixed design: Bootstrap parameter estimates $A^*$ or statistics $\theta^*$ are determined from the bootstrap sample $\{y_t^*\}_{t=p+1}^{T}$ and the original regressors $(1,y_{t-1}',\ldots,y_{t-p}')'$.

5. Repeat steps 2 to 4 independently to obtain $S$ random bootstrap realizations, denoted $\{A_s^*\}_{s=1}^{S}=\{A_1^*,A_2^*,\ldots,A_S^*\}$ and $\{\theta_s^*\}_{s=1}^{S}=\{\theta_1^*,\theta_2^*,\ldots,\theta_S^*\}$, and pursue the inferential analysis conditional on the sampled distribution. For instance, if $\hat\theta$ is used to test a specific null hypothesis and large values of $\hat\theta$ are considered to be in favor of the alternative hypothesis, the null hypothesis is rejected at significance level $\delta$ if $\hat\theta$ exceeds the $(1-\delta)$-quantile of the bootstrap distribution $\{\theta_s^*\}_{s=1}^{S}$.
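To make the algorithm concrete, the following minimal Python sketch implements steps 1 to 5 for the recursive design and a scalar statistic of interest. It is a stylized illustration, not code from the literature: the function names (ols_var, bootstrap_var) are ours, only numpy is assumed, and the error-generation step 2 is left as a pluggable argument draw_errors, to be filled by one of the schemes 2a to 2c sketched in the following sections.

```python
# Minimal sketch of the general bootstrap algorithm B (recursive design)
# for a VAR(p). Illustrative only; numpy is the single dependency.
import numpy as np

def ols_var(y, p):
    """Step 1: OLS estimation of a VAR(p) for a (T x K) data matrix y.
    Returns A_hat = [v, A_1, ..., A_p] as a (K x (pK+1)) matrix and the
    ((T-p) x K) residual matrix."""
    T, K = y.shape
    Z = np.hstack([np.ones((T - p, 1))] +
                  [y[p - i - 1:T - i - 1] for i in range(p)])  # (1, y_{t-1}', ..., y_{t-p}')
    Y = y[p:]
    A_hat = np.linalg.lstsq(Z, Y, rcond=None)[0].T
    return A_hat, Y - Z @ A_hat.T

def bootstrap_var(y, p, stat, draw_errors, S=1000, rng=None):
    """Steps 2-5: stat(data) maps a (T x K) sample to the statistic of
    interest; draw_errors(u_hat, rng) generates u*_t (schemes 2a-2c)."""
    rng = np.random.default_rng() if rng is None else rng
    T, K = y.shape
    A_hat, u_hat = ols_var(y, p)
    theta_star = np.empty(S)
    for s in range(S):
        u_star = draw_errors(u_hat, rng)              # step 2
        y_star = np.empty((T, K))
        y_star[:p] = y[:p]                            # initial values from the data
        for t in range(p, T):                         # step 3a: recursion (3)
            lags = np.concatenate([[1.0]] + [y_star[t - i] for i in range(1, p + 1)])
            y_star[t] = A_hat @ lags + u_star[t - p]
        theta_star[s] = stat(y_star)                  # step 4
    return stat(y), theta_star                        # step 5: compare quantiles
```

For a right-tailed test at level $\delta$, one would reject if the first return value exceeds np.quantile(theta_star, 1 - delta).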

Some remarks on the implementation of step 3 and the choice of the size parameter $S$ in step 5 are worth mentioning. Firstly, the implementation in step 3 distinguishes recursive and fixed design generations of bootstrap samples $\{y_t^*\}_{t=p+1}^{T}$. The fixed design is throughout conditional on the original sequence of regressors, whereas each repetition within the recursive scheme obtains a distinct regression design. While the recursive design appears as a natural and original means of imitating the dynamic data structure, the fixed design, which was first mentioned by Wu (1986), has the merit of requiring less restrictive assumptions on the second and fourth order moments of the errors $u_t$ (see, e.g., Gonçalves & Kilian, 2004; Hafner & Herwartz, 2009). Based on simulation studies, Gonçalves and Kilian (2004) find that the fixed design is slightly less accurate than the recursive design in finite order autoregressive models. Results in Ahlgren and Catani (2016) point to performance leads of fixed designs in small samples and higher dimensional VARs ($K\geq 5$). According to Gonçalves and Kilian (2007), building on the less restrictive theoretical underpinning of the fixed design is particularly useful under more complicated distributional settings as, for instance, conditional heteroskedasticity combined with infinite autoregressive model orders.

Secondly, if the bootstrap is used to obtain critical values for a test statistic $\hat\theta$, it is important that the data generation in step 3 respects the null hypothesis subjected to testing. Specifically, the null hypothesis of interest might correspond to restrictions applying to the parameters in $A$, which deserve consideration in the resampling step 3. While bootstrap sampling under the null hypothesis is essential for hypothesis testing, it is optional to draw $u_t^*$ from empirical counterparts $\hat u_t$ which are obtained in step 1 from model estimation under the null hypothesis (see, e.g., Li & Maddala, 1996; Paparoditis & Politis, 2005). Generating $u_t^*$ from restricted model residuals promises a close approximation of the DGP if the null hypothesis is actually true and could benefit the empirical size of testing. With sampling from restricted residuals, however, the bootstrap test could suffer from power deteriorations, since restricted model residuals might be far away from their true values under the alternative hypothesis.

Thirdly, the choice of the number of bootstrap iterations $S$ may depend on the statistic of interest and the purposes of the analyst. For instance, bias adjustments in small order models might work well for small-sized bootstrap samples ($S=100$, say). However, inferential analysis at significance levels as small as 1% or 5% likely requires more repetitions to obtain accurate approximations of the relevant tail probabilities of the test statistic of interest ($S=500$, say). To calculate confidence intervals of impulse response functions with controlled pointwise or global nominal coverage, economists often use between $S=1000$ and $S=2000$ bootstrap replications.

As outlined, the general bootstrap algorithm B is unspecific with regard to the choice of the resampling scheme in step 2. In fact, this choice is crucial for bootstrap consistency, since ignoring data characteristics like heteroskedasticity or (unmodeled) serial correlation of model disturbances likely invokes bootstrap biases and uncontrolled effects on the empirical size of bootstrap inference. The next sections provide outlines of three stylized resampling schemes.

Residual-Based Bootstrap in IID Settings

Providing a most restrictive framework for both central limit theory and bootstrap resampling, the VAR representation in (1) has been outlined for the case of serially uncorrelated, homoskedastic disturbances $u_t\sim(0,\Sigma)$. To establish a feasible resampling scheme, assume that the errors are iid random variables with unknown distribution ($u_t\sim \mathrm{iid}\ F$).2 Runkle (1987) shows how to adapt the iid residual-based bootstrap derived in Efron (1979) for the classical linear regression model within the context of VAR models. In an iid setting, step 2 of the general algorithm B reads as

2a. Bootstrap errors $u_1^*,\ldots,u_T^*$ are drawn with replacement from the centered regression residuals $\{\hat u_t-\bar u\}_{t=1}^{T}$, where $\bar u=\frac{1}{T}\sum_{t=1}^{T}\hat u_t$.

The centering ensures that the bootstrap innovations $u_t^*$ have zero expectation. Resampled model innovations $\{u_t^*\}_{t=1}^{T}$ are iid conditional on the data. Hence, bootstrap errors asymptotically have the same distribution as the actual model disturbances (Bose, 1988). However, the assumption of iid innovations might be overly restrictive, as it precludes any form of second order heterogeneity or (remaining) serial correlation of the model residuals.
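Under the setup above, step 2a amounts to a few lines; draw_errors_iid is a hypothetical helper name that plugs into the bootstrap_var sketch given earlier.

```python
# Step 2a: iid resampling with replacement from centered residuals
# (plug-in for bootstrap_var() from the earlier sketch).
import numpy as np

def draw_errors_iid(u_hat, rng):
    u_c = u_hat - u_hat.mean(axis=0)           # centering: zero-mean u*_t
    idx = rng.integers(0, len(u_c), len(u_c))  # draw time indices with replacement
    return u_c[idx]                            # rows drawn jointly: preserves Sigma
```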

Wild Bootstrap

Assuming iid innovations to shape the stochastic properties of macroeconometric time series limits the practical scope both of the respective central limit theory and of resampling $u_t^*$ with replacement from residual estimates. A prominent deviation from the iid assumption toward more informative distributions is heteroskedasticity. Sensier and van Dijk (2004) document that about 80% of 214 investigated U.S. macroeconomic time series on real and price variables exhibited shifts in variances during the period 1959–1999. Moreover, it is well known that business cycle variations were significantly lower during the so-called Great Moderation (1984–2007) in comparison with earlier periods (Stock & Watson, 2003). Similarly, models of conditional heteroskedasticity are well established in financial econometrics to take account of stylized patterns of volatility clustering (Gonçalves & Kilian, 2004). Given unstructured or unconditional variance changes in macroeconomic data, it becomes crucial for robust inference in macroeconometrics to account for heteroskedasticity of unknown form. The so-called wild bootstrap scheme allows for inference in VARs with heteroskedasticity of unknown form and contemporaneous correlation.

Wu (1986) introduced the wild bootstrap in the context of the classical regression model. Gonçalves and Kilian (2004) show the asymptotic validity of the wild bootstrap in the context of univariate autoregressions, and Hafner and Herwartz (2009) generalize the validity of wild bootstrap schemes to the case of VARs. As a stylized approach to mimic second order heterogeneities, the wild bootstrap in step 2 of algorithm B reads as

2b. Bootstrap errors are determined as $u_t^*=\hat u_t\eta_t$, $t=1,\ldots,T$, where $\eta_t$ is a random variable with zero mean and unit variance ($\eta_t\sim(0,1)$), drawn independently of the observed data.

A prominent distribution for sampling $\eta_t$ is the Gaussian. Two other frequently considered methods are to draw $\eta_t$ (i) from the so-called Rademacher distribution, with $\eta_t$ being either $-1$ or $+1$ with probability 0.5 (Liu, 1988), or (ii) from the distribution suggested by Mammen (1993), where $\eta_t=-(\sqrt{5}-1)/2$ with probability $(\sqrt{5}+1)/(2\sqrt{5})$ or $\eta_t=(\sqrt{5}+1)/2$ with probability $(\sqrt{5}-1)/(2\sqrt{5})$. All suggested distributions mimic the first and second order characteristics of $u_t$. Giving rise to potential performance differentials in practice, it is worth noticing that $E(\eta_t^3)=1$ and $E(\eta_t^4)=1$ hold, respectively, for the distribution suggested by Mammen (1993) and the Rademacher distribution. Davidson and Flachaire (2008) show by means of simulation studies that the resampling scheme of Mammen (1993) allows imitating unconditional skewness of $u_t$, while the Rademacher distribution mimics fourth order properties and gives most favorable results across a variety of scenarios (Herwartz & Walle, 2018). Even though the wild bootstrap might be overly flexible under an iid distribution of $u_t$, the efficiency loss of using wild bootstrap schemes is small in such cases (Davidson & Flachaire, 2008).
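A possible implementation of step 2b covering the three multiplier distributions just described might look as follows (again a plug-in for the earlier sketch; the function name and the Rademacher default are illustrative choices).

```python
# Step 2b: wild bootstrap errors u*_t = u_hat_t * eta_t with a scalar
# multiplier eta_t of zero mean and unit variance.
import numpy as np

def draw_errors_wild(u_hat, rng, dist="rademacher"):
    T = len(u_hat)
    if dist == "gaussian":
        eta = rng.standard_normal(T)
    elif dist == "rademacher":                  # -1 or +1, probability 0.5 each
        eta = rng.choice([-1.0, 1.0], size=T)
    elif dist == "mammen":                      # Mammen (1993) two-point law
        s5 = np.sqrt(5.0)
        p = (s5 + 1) / (2 * s5)
        eta = np.where(rng.random(T) < p, -(s5 - 1) / 2, (s5 + 1) / 2)
    return eta[:, None] * u_hat                 # same eta_t across all equations
```

Using the same scalar $\eta_t$ for all $K$ equations is what preserves the contemporaneous correlation of $u_t$, as formalized in (5) below.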

The wild bootstrap is also robust under parametric forms of conditional heteroskedasticity, such as univariate GARCH3 (Bollerslev, 1986) or multivariate GARCH (MGARCH; Bollerslev, Engle, & Wooldridge, 1988; see Bauwens, Laurent, & Rombouts, 2006, for a review). Providing a parametric alternative to the wild bootstrap, the formalization of (M)GARCH residual dynamics typically requires strong ad hoc assumptions. Hence, the nonparametric wild bootstrap appears preferable for conditional mean modeling (Kilian & Lütkepohl, 2017). It is worth noticing, however, that parametric volatility models might also be approximated by means of iid resampling or the wild bootstrap to mimic the underlying (M)GARCH model innovations (Gonçalves & Kilian, 2004).

In the stylized wild bootstrap setup, the $\eta_t\sim(0,1)$ are drawn independently at each time instance. Since the $\eta_t$ are scalar random variables, the wild bootstrap also mimics contemporaneous cross equation correlation among the elements of $u_t$. Formally one can show that

$$\mathrm{Cov}(u_t^*\,|\,Y)=E(\eta_t^2\hat u_t\hat u_t'\,|\,Y)=E(\hat u_t\hat u_t')\to \mathrm{Cov}(u_t)\quad\text{as } T\to\infty \tag{5}$$

(Herwartz & Neumann, 2005). Noticing that the $\eta_t$ are drawn without any time structuring, however, the stylized form of the wild bootstrap does not account for serially correlated error terms or remaining (unmodeled) serial correlation in $u_t$. Cavaliere and Taylor (2008) and Smeekes and Urbain (2014) provide several modifications of the stylized wild bootstrap scheme to capture serial correlation structures.

Moving-Block Bootstrap

Moving-block bootstrap methods have been developed to allow for consistent resampling in case of unknown or misspecified serial correlation structures in the model disturbances (Li & Maddala, 1996). The first versions of moving-block bootstrap procedures, as suggested by Carlstein (1986), Künsch (1989), and Liu and Singh (1992), aimed at resampling the observations $y_1,y_2,\ldots,y_T$ directly instead of resampling model residuals $\hat u_1,\hat u_2,\ldots,\hat u_T$. A particular disadvantage of these original moving-block variants is that the resulting bootstrap samples $\{y_t^*\}_{t=p+1}^{T}$ are not necessarily stationary even if the observed data samples $\{y_t\}_{t=p+1}^{T}$ stem from a stationary process. Li and Maddala (1997) provide a version of the block bootstrap in the context of time series modeling that builds upon blocks of residuals rather than blocks of observations. Paparoditis and Politis (2003) show its consistency for univariate time series models, while Jentsch, Paparoditis, and Politis (2014) provide asymptotic theory for residual-based moving-block schemes in multivariate integrated and cointegrated models. Moreover, Brüggemann, Jentsch, and Trenkler (2016) provide theoretical results for inference in stationary VAR models characterized by conditional heteroskedasticity.

The implementation of moving-block bootstrap samples $\{y_t^*\}_{t=p+1}^{T}$ requires the choice of a block length $l<T$; let $m$ denote the minimum number of blocks such that $lm\geq T$. As a stylized approach to mimic serial dependence structures, moving-block schemes in step 2 of algorithm B read as

2c. Define $(K\times l)$-dimensional blocks $B_{i,l}=(\hat u_{i+1},\ldots,\hat u_{i+l})$, $i=0,\ldots,T-l$, and draw with replacement from these blocks to lay them end-to-end. Discard the last $ml-T$ values to obtain $(\tilde u_1^*,\ldots,\tilde u_T^*)$. The generated errors deserve centering according to the rule $u_{jl+g}^*=\tilde u_{jl+g}^*-E(\tilde u_{jl+g}^*)=\tilde u_{jl+g}^*-\frac{1}{T-l+1}\sum_{r=0}^{T-l}\hat u_{g+r}$ for $g=1,2,\ldots,l$ and $j=0,1,\ldots,m-1$ to ensure $E(u_t^*)=0$.
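A minimal sketch of step 2c, including the centering rule, could read as follows; the default block length is only a placeholder motivated by the rule-of-thumb discussion below.

```python
# Step 2c: moving-block resampling of the (T x K) residual matrix with the
# centering rule from the text (plug-in for bootstrap_var(); l = block length).
import numpy as np

def draw_errors_block(u_hat, rng, l=None):
    T = len(u_hat)
    l = max(2, round(T ** (1 / 3))) if l is None else l  # illustrative default
    m = int(np.ceil(T / l))                    # minimum number of blocks, lm >= T
    starts = rng.integers(0, T - l + 1, m)     # block B_{i,l} starts at index i
    u_tilde = np.vstack([u_hat[i:i + l] for i in starts])[:T]  # join, drop tail
    # centering: subtract (1/(T-l+1)) * sum_r u_hat[g+r] at block position g
    centers = np.vstack([u_hat[g:g + T - l + 1].mean(axis=0) for g in range(l)])
    return u_tilde - np.tile(centers, (m, 1))[:T]
```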

As a global parameter, the moving-block scheme builds upon a suitably chosen block length $l$. The asymptotic validity of moving-block bootstrap schemes is typically derived under the assumption that the block length grows with the sample size (i.e., $l\to\infty$ as $T\to\infty$). In particular, the chosen block length should ensure that innovations more than $l$ time instances apart from each other are uncorrelated. In general, there is a trade-off between too-short blocks, which cannot mimic the original structure of the serially correlated data, and overly long blocks, which might result in excessive variability of the bootstrap-based statistics of interest. Unfortunately, there is no consensus yet in the literature on the choice of $l$ in finite samples. Lahiri (2003) discusses the optimal block length for direct versions of moving-block approaches and concludes that $l$ should be around $T^{1/3}$. In the context of resampling impulse response functions in SVARs, Brüggemann, Jentsch, and Trenkler (2016) rely on blocks of model residuals of length $0.1T$.

Bootstrap Approaches to Prominent Inferential Issues in Macroeconometrics

To illustrate the practical use of bootstrap schemes in macroeconometrics, the next step is to consider in some detail prominent issues in empirical analysis, namely, “Testing on Granger Noncausality,” “Testing on Unit Roots,” and “Testing for Cointegration.” Moreover, this section contains a brief sketch of further bootstrap procedures for more specialized issues in macroeconometrics, namely, “Testing for Linearity,” “Panel Unit Root Testing,” “Impulse Response Analysis in SVARs,” and “Inference in Factor-Augmented VARs.”

Testing on Granger Noncausality

Testing the null hypothesis of Granger noncausality is a typical step of empirical VAR analysis. If the history of one variable does not affect other variables within a VAR system, it is considered as Granger noncausal for these variables (Granger, 1969). For diagnosing Granger noncausality, one has to test the joint insignificance of off-diagonal elements/blocks in the autoregressive parameter matrices $A_i$, $i=1,\ldots,p$.

To test parameter restrictions in VAR models, Hafner and Herwartz (2009) have suggested the wild bootstrap, which is robust under heteroskedasticity of unknown form. Specifically, for testing the null hypothesis of Granger noncausality against the alternative of Granger causality, they advocate resampling the statistic

$$\hat\theta_{GC}=T\,\hat\gamma' R'\left[R\left(\Xi_T^{-1}\otimes\hat\Sigma\right)R'\right]^{-1}R\hat\gamma. \tag{6}$$

In (6), $\hat\Sigma=\frac{1}{T}\sum_{t=1}^{T}\hat u_t\hat u_t'$ and $\Xi_T=\frac{1}{T}\sum_{t=1}^{T}Z_{t-1}Z_{t-1}'$ with $Z_t=(1,y_t',\ldots,y_{t-p+1}')'$, $\hat\gamma$ is the vectorized LS estimator of the VAR parameters, $\hat\gamma=\mathrm{vec}(\hat A)$, and $R$ is an $(N\times(pK^2+K))$ matrix of rank $N$ formalizing the parameter restrictions under the null hypothesis ($H_0:R\gamma=0$). The authors show that the test statistic $\hat\theta_{GC}$ has a nonstandard limit distribution in the presence of heteroskedasticity, which implies that critical values for the asymptotic distribution of $\hat\theta_{GC}$ are difficult to obtain in practice. Hence, bootstrap methods are convenient to approximate the distribution of $\hat\theta_{GC}$ if the resampling scheme allows for conditional heteroskedasticity. To quantify the restricted VAR under $H_0$, it is worth noticing that OLS estimation of restricted VARs lacks an ML interpretation, such that feasible generalized LS procedures (combined with bootstrap-based bias correction) might be suggested for efficient estimation of the restricted model. For the practical implementation of the general bootstrap algorithm B, Hafner and Herwartz (2009) use the Gaussian distribution $\eta_t\sim N(0,1)$ in step 2b.
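For illustration, the statistic in (6) might be computed along the following lines. The sketch is a stylized reading of (6), not the authors' code: it uses $T-p$ as the effective sample size and leaves the construction of the restriction matrix R and the resampling loop (e.g., bootstrap_var combined with draw_errors_wild) to the user.

```python
# Wald-type statistic (6) for H0: R gamma = 0 with gamma = vec(A_hat).
import numpy as np

def wald_stat(y, p, R):
    T, K = y.shape
    Z = np.hstack([np.ones((T - p, 1))] +
                  [y[p - i - 1:T - i - 1] for i in range(p)])
    Y = y[p:]
    B = np.linalg.lstsq(Z, Y, rcond=None)[0]      # ((pK+1) x K) coefficients
    u_hat = Y - Z @ B
    Te = T - p                                    # effective sample size
    Sigma = u_hat.T @ u_hat / Te                  # residual covariance
    Xi = Z.T @ Z / Te                             # regressor moment matrix
    gamma = B.reshape(-1)                         # vec(A_hat), length pK^2 + K
    V = np.kron(np.linalg.inv(Xi), Sigma)         # asymptotic covariance matrix
    Rg = R @ gamma
    return Te * Rg @ np.linalg.solve(R @ V @ R.T, Rg)
```

In a bivariate VAR(1), for instance, Granger noncausality of the second variable for the first corresponds to a single zero restriction on $\hat A_1$, so R is a (1 x 6) selection vector.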

Hafner and Herwartz (2009) derive the asymptotic properties of their approach and compare it with conventional OLS-based test statistics, weighted least squares (WLS; White, 1980), and quasi-ML (QML) inference assuming an MGARCH model of conditional heteroskedasticity. By means of a Monte Carlo study, the authors show that the bootstrap approach outperforms competing test strategies in terms of empirical size. Under conditional volatility, WLS or QML are slightly superior in terms of size-adjusted power but face risks of covariance misspecification. As a result, Hafner and Herwartz (2009) suggest using bootstrap versions of OLS-based statistics in small and medium samples if there is evidence for heteroskedasticity in the data.

Hafner and Herwartz (2009) investigate empirically if so-called break-even inflation rates are characterized by cross market patterns of Granger causality. The break-even inflation rate is an implied future inflation rate equalizing ex ante the real returns of bonds generating nominal and inflation indexed cash flows. Formally, it is defined as $\pi_t^e=i_t-r_t$, where $i_t$ and $r_t$ are the nominal and the ex-ante real interest rate, respectively. After applying a set of alternative test procedures to bivariate systems of break-even inflation rates, the null hypothesis that inflation changes in the United Kingdom are Granger noncausal for the corresponding rates prevailing in France cannot be rejected. Subjecting the reverse relation to Granger causality testing, however, provides evidence for potential spillovers from France, as a representative of the Euro Area, to the United Kingdom.

Testing for the Presence of Unit Roots

The question of whether macroeconomic time series exhibit stochastic trends is highly relevant for economic and econometric theory and for policy implications. Characterizing, for instance, GDP per capita as a nonstationary process implies that shocks to GDP per capita will have permanent effects. In contrast, if real GDP per capita is a stationary process around a deterministic path, shocks to it will only have transitory effects (Nelson & Plosser, 1982). Policy-wise, a nonstationary real GDP per capita might imply that stabilization policies could lead to substantial social welfare improvements (Durlauf, 1989).

To illustrate bootstrap approaches to diagnosing nonstationary trending, it is instructive to consider the univariate autoregressive (AR) model of order one as a special case of the multivariate VAR model in (1),

$$y_t=\phi y_{t-1}+u_t, \qquad t=1,\ldots,T, \tag{7}$$

where $y_t$ and $u_t$ are scalars ($K=1$). For simplicity it is assumed that $y_t$ does not depend on deterministic components. The characteristic polynomial of the model in (7) is $(1-\phi z)$ with a root at $z=1/\phi$. Hence, for $|\phi|<1$ the process $y_t$ is stationary. Setting $\phi=1$, $y_t$ exhibits a stochastic trend formalized as a so-called unit root, i.e., $(1-\phi z)=0$ for $z=1$.4

The risks of using standard univariate unit root tests for time series with volatility shifts have been documented by Hamori and Tokihisa (1997), who show that Dickey–Fuller (DF; Dickey & Fuller, 1979) and augmented DF (ADF) tests suffer from substantial size distortions under variance breaks. Bootstrap-based unit root testing that is robust under heteroskedastic innovations might follow two alternative strategies. On the one hand, an analyst could aim to evaluate the conditional distribution of the ADF statistic by means of the stylized wild bootstrap scheme (step 2b of algorithm B). On the other hand, being less explicit with regard to the dynamic model specification, one could resample the DF statistic directly. Owing to unmodeled serial correlation, however, resampling the DF test is likely subject to additional nuisance. Accordingly, resampling this statistic has to rely on moving-block designs (Paparoditis & Politis, 2003) or on suitably modified wild bootstrap sampling schemes.5

Smeekes and Urbain (2014) provide several modifications of the wild bootstrap which account for serial correlation. In particular, they consider a stylized DF test of $H_0:\phi=1$ against $H_1:\phi<1$ with the corresponding test statistic

$$\hat\theta_{DF}=\frac{\sum_{t=2}^{T}y_{t-1}\Delta y_t}{\sum_{t=2}^{T}y_{t-1}^2}, \tag{8}$$

where $\Delta y_t=y_t-y_{t-1}$. After estimating the AR coefficient $\phi$ in (7) by means of OLS and obtaining the residuals as $\hat u_t=y_t-\hat\phi y_{t-1}$ for $t=2,\ldots,T$, and $\hat u_1=y_1$, bootstrap errors are obtained as $u_t^*=\eta_t\hat u_t$ and the bootstrap samples are derived under the null hypothesis as $y_t^*=y_{t-1}^*+u_t^*$. In light of the restrictive dynamic formalization, the residuals $\hat u_t$ are likely to exhibit remaining serial correlation, which is not captured by means of independent sampling of $\eta_t$. Surveying alternative approaches for handling serial correlation, Smeekes and Urbain (2014) compare the moving-block bootstrap approach of Paparoditis and Politis (2003) with various modified versions of the wild bootstrap, i.e., the block wild bootstrap (Shao, 2011), the dependent wild bootstrap (Shao, 2010), the so-called sieve wild bootstrap (Cavaliere & Taylor, 2008; see “Sieve Bootstrap” for a brief characterization), and the autoregressive wild bootstrap (Smeekes & Urbain, 2014), which all differ with respect to the construction of $\eta_t$. For the sake of brevity, consider the autoregressive wild bootstrap as a representative for these alternative approaches. For the autoregressive wild bootstrap the $\eta_t$ are drawn as

$$\eta_t=\gamma\eta_{t-1}+\nu_t, \qquad t=2,\ldots,T, \qquad \eta_1\sim N(0,1), \tag{9}$$

where $\gamma=\gamma_T\in[0,1)$ is a tuning parameter that controls the persistence of $\eta_t$ and $\nu_t\sim \mathrm{iid}\ N(0,1-\gamma^2)$. The parameter $\gamma$ has to increase with the sample size to achieve bootstrap consistency. Smeekes and Urbain (2014) document that all modified wild bootstrap methods perform accurately in various scenarios of heteroskedasticity paired with serial correlation. The autoregressive wild bootstrap delivers the most favorable results jointly with the sieve wild bootstrap, while the moving-block bootstrap suffers from size distortions under volatility shifts.
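A compact sketch of the DF statistic (8) combined with the autoregressive wild bootstrap (9) could look as follows; the fixed choice gamma = 0.9 is purely illustrative, whereas Smeekes and Urbain (2014) tie the tuning parameter to the sample size.

```python
# DF statistic (8) with autoregressive wild bootstrap multipliers (9).
import numpy as np

def df_stat(y):
    dy = np.diff(y)
    return np.sum(y[:-1] * dy) / np.sum(y[:-1] ** 2)        # statistic (8)

def arwb_df_pvalue(y, S=1999, gamma=0.9, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    T = len(y)
    phi = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)      # OLS estimate in (7)
    u_hat = np.concatenate([[y[0]], y[1:] - phi * y[:-1]])  # residuals, u_hat_1 = y_1
    theta = df_stat(y)
    count = 0
    for s in range(S):
        eta = np.empty(T)
        eta[0] = rng.standard_normal()                      # eta_1 ~ N(0, 1)
        nu = rng.standard_normal(T) * np.sqrt(1 - gamma ** 2)
        for t in range(1, T):
            eta[t] = gamma * eta[t - 1] + nu[t]             # AR(1) multipliers (9)
        y_star = np.cumsum(eta * u_hat)                     # random walk under H0
        count += df_stat(y_star) <= theta                   # left-tailed test
    return (1 + count) / (1 + S)                            # bootstrap p-value
```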

Testing for Cointegration

Cointegrated variables are characterized by common stochastic trends and long-run equilibrium relationships, which are often at the core of economic analysis. Formally, a process is cointegrated with cointegration rank $r$, denoted $y_t\sim CI(1,r)$, if the characteristic polynomial $\det(I_K-A_1z-\cdots-A_pz^p)$ has $K-r$ roots equal to unity and all other roots outside the unit circle.

Of particular interest in macroeconometric analysis is the diagnosis of a specific cointegration rank. Under the assumption of Gaussian innovations, the so-called likelihood ratio trace statistic of Johansen (1996) is widely applied for this purpose. To outline the testing problem and the trace statistic, the VAR($p$) model in (1) can be rewritten in VECM($p-1$) form,

$$\Delta y_t=\alpha\beta'y_{t-1}+\Gamma_1\Delta y_{t-1}+\cdots+\Gamma_{p-1}\Delta y_{t-p+1}+u_t, \qquad \alpha\beta'=-(I_K-A_1-\cdots-A_p), \tag{10}$$

where $\alpha$ and $\beta$ are $(K\times r)$ matrices, and $\Gamma_i=-(A_{i+1}+\cdots+A_p)$ for $i=1,\ldots,p-1$.6 The matrix $\beta$ is often called the cointegration matrix and $\alpha$ the loading matrix. Given the cointegration property of $y_t$ ($y_t\sim CI(1,r)$), and noticing that $\alpha$ and $\beta$ have full column rank, the cointegration relations $\beta'y_t-E(\beta'y_t)$ are stationary. For testing $H_0:\operatorname{rank}(\alpha\beta')=r_0$ against the alternative hypothesis $H_1:r_0<\operatorname{rank}(\alpha\beta')\leq K$, the trace statistic reads as

$$\hat\theta_{J}=-T\sum_{i=r_0+1}^{K}\log(1-\hat\lambda_i),$$

where $\hat\lambda_1>\cdots>\hat\lambda_K$ are the $K$ ordered solutions of a generalized eigenvalue problem. The test suffers from size distortions in finite samples if critical values are taken from the asymptotic limit distribution (Johansen, 2002). Various authors have proposed bootstrap procedures to improve the empirical size properties of the trace test (e.g., van Giersbergen, 1996; Swensen, 2006; Trenkler, 2009). A key concern for bootstrapping the trace statistic is, however, that the generated bootstrap data need to satisfy the $CI(1,r_0)$ condition under the null hypothesis.

Cavaliere, Rahbek, and Taylor (2012) suggest an approach which relies on estimating (10) under $H_0$. Bootstrap samples are constructed recursively by means of iid resampling (step 2a of algorithm B). The authors prove that (for large samples) their bootstrap method results in generated data which are stochastically trending with cointegration rank $r_0$. This ensures that the bootstrap algorithm is asymptotically correctly sized. By means of a simulation study they further show that this approach obtains favorable empirical size properties in comparison with using either the asymptotic limit distribution or alternative bootstrap algorithms that involve the estimation of the model in (10) under both hypotheses $H_0$ and $H_1$ (Swensen, 2006). Robustifying bootstrap inference on cointegration toward heteroskedastic residual sequences is straightforward, for instance, by means of adopting the wild bootstrap (step 2b of algorithm B; Cavaliere, Rahbek, & Taylor, 2010, 2014). As a complement to diagnosing cointegration, a further macroeconometric modeling step consists of subjecting the equilibrium relationships in $\beta$ to inferential analysis. Surveying different bootstrap methods for systems of cointegrated time series variables, Li and Maddala (1997) show by means of a Monte Carlo investigation that the empirical size properties of likelihood ratio tests on long-run relationships (Johansen, 1996) can be substantially improved by means of bootstrap schemes.
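The following sketch illustrates the spirit of this algorithm; the refinements of Cavaliere, Rahbek, and Taylor (2012) (e.g., checks on the roots of the estimated model) are omitted, and a statsmodels version providing VECM and coint_johansen is assumed.

```python
# Stylized bootstrap of the trace statistic under H0: rank = r0, in the
# spirit of Cavaliere, Rahbek, and Taylor (2012); simplified sketch.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM, coint_johansen

def bootstrap_trace_test(y, p, r0, S=999, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    T, K = y.shape
    trace = lambda d: coint_johansen(d, det_order=-1, k_ar_diff=p - 1).lr1[r0]
    theta = trace(y)                                 # trace statistic on the data
    res = VECM(y, k_ar_diff=p - 1, coint_rank=r0, deterministic="n").fit()
    Pi = res.alpha @ res.beta.T                      # rank-r0 estimate of alpha beta'
    u = res.resid - res.resid.mean(axis=0)           # centered residuals
    count = 0
    for s in range(S):
        u_star = u[rng.integers(0, len(u), len(u))]  # step 2a: iid resampling
        y_star = y.copy()
        for t in range(p, T):                        # recursion under H0, cf. (10)
            dy = Pi @ y_star[t - 1] + u_star[t - p]
            if p > 1:                                # add Gamma_i * dy*_{t-i} terms
                dlags = np.concatenate([y_star[t - i] - y_star[t - i - 1]
                                        for i in range(1, p)])
                dy += res.gamma @ dlags
            y_star[t] = y_star[t - 1] + dy
        count += trace(y_star) >= theta              # right-tailed test
    return theta, (1 + count) / (1 + S)
```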

The bootstrap implementation of the Johansen trace test has been frequently used in macroeconomic research. For instance, Benati (2015) analyzes the long-run Phillips curve (LRPC) for several industrialized economies, covering Western Europe, North America, and Japan. The LRPC describes alternative equilibrium combinations of inflation and the unemployment rate that an economy can achieve. A cointegration relation linking inflation and unemployment rates provides evidence for a strong-form LRPC. Benati (2015) finds evidence for cointegration in bivariate systems of inflation and unemployment for the Euro Area and the United Kingdom.

Testing for Linearity

The macroeconometric analysis of conditional expectations typically proceeds under the assumption that the structure of a dynamic system can be described by linear models. Nevertheless, there are occasions when economic theory or data characteristics suggest that linear models are insufficient for modeling the dynamics in the considered system. Since opting for nonlinear model approaches often complicates parameter estimation and inference, most economists would agree that empirical analysis should rely on linear models unless there is convincing evidence to support a specific nonlinear model approach (Hansen, 2011). Against this background, a statistical issue of primary importance is testing the null hypothesis of linearity against the alternative of a nonlinear model variant.

Among a broad set of nonlinear econometric models (see, e.g., Teräsvirta, Tjøstheim, & Granger, 2010, for a thorough overview), the threshold autoregressive (TAR) model proposed by Tong (1977) and Tong and Lim (1980), and its self-exciting variant, the SETAR model (Tong, 1983), are among the earliest nonlinear time series models developed. (SE)TARs have been frequently applied in empirical economics and finance (see, e.g., Hansen, 2011; Tong, 2011, for surveys). Furthermore, these models can be viewed as parsimonious approximations to general nonlinear autoregressions. Starting from the AR model in (7), the simplest version of a TAR model is

$$y_t=\phi y_{t-1}+\psi y_{t-1}I(y_{t-1}\leq\xi)+u_t, \tag{11}$$

where $I(\cdot)$ is the indicator function and $\xi$ is a threshold parameter. The objective in examining linearity is to evaluate if the term $\psi y_{t-1}I(y_{t-1}\leq\xi)$ enters the autoregression, i.e., testing $H_0:\psi=0$ against the alternative hypothesis $H_1:\psi\neq 0$. As linear models are nested in (SE)TARs, the procedure is based on classical $F$ statistics,

$$\hat\theta_{NL}=T\,\frac{\hat u_0'\hat u_0-\hat u_1'\hat u_1}{\hat u_1'\hat u_1}, \tag{12}$$

where $\hat u_0$ and $\hat u_1$ are the vectors of residuals from the model under $H_0$ and $H_1$, respectively. The testing problem for linearity in (SE)TAR models falls in the class of tests in the presence of nuisance parameters which are identified only under the alternative hypothesis (Davies, 1977). More specifically, when $\psi=0$ the threshold parameter $\xi$ is not identified and the distribution of $\hat\theta_{NL}$ is nonpivotal.

Hansen (1996) first suggested a model-based simulation procedure to approximate the asymptotic distribution of the test statistic under the null hypothesis. Benefiting from the widespread availability of powerful computational facilities, Hansen (1999) further develops a bootstrap technique which leads to a better approximation of the distribution of the test statistic in comparison with the model-based simulation approach. At the implementation side, Hansen (1999) suggests a recursive-design bootstrap. The errors are generated by iid sampling (step 2a of algorithm B) from OLS residuals obtained from the model under the null hypothesis, as sketched below. Hansen (1999) further suggests an augmented version of this procedure which is robust to conditional heteroskedasticity.
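A simplified sketch of this bootstrap is given below: the threshold is profiled over a grid of sample quantiles, so the resampled statistic is of the sup-F type. The grid limits and the restriction to the minimal TAR model (11) are illustrative choices rather than Hansen's exact setup.

```python
# Simplified recursive-design bootstrap of a sup-F linearity test for the
# minimal TAR model (11), in the spirit of Hansen (1999).
import numpy as np

def sup_f(y):
    y0, y1 = y[1:], y[:-1]
    phi = np.sum(y0 * y1) / np.sum(y1 ** 2)                  # linear AR(1) under H0
    u0 = y0 - phi * y1
    ssr0 = np.sum(u0 ** 2)
    stats = []
    for xi in np.quantile(y1, np.linspace(0.15, 0.85, 50)):  # threshold grid
        X = np.column_stack([y1, y1 * (y1 <= xi)])           # TAR regressors in (11)
        b = np.linalg.lstsq(X, y0, rcond=None)[0]
        ssr1 = np.sum((y0 - X @ b) ** 2)
        stats.append(len(y0) * (ssr0 - ssr1) / ssr1)         # statistic (12) at xi
    return max(stats), phi, u0

def linearity_pvalue(y, S=999, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    theta, phi, u_hat = sup_f(y)
    u_c = u_hat - u_hat.mean()                               # centered H0 residuals
    count = 0
    for s in range(S):
        u_star = u_c[rng.integers(0, len(u_c), len(u_c))]    # step 2a: iid draws
        y_star = np.empty(len(y))
        y_star[0] = y[0]
        for t in range(1, len(y)):                           # recursion under linear H0
            y_star[t] = phi * y_star[t - 1] + u_star[t - 1]
        count += sup_f(y_star)[0] >= theta
    return (1 + count) / (1 + S)                             # bootstrap p-value
```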

Hansen (1999) uses this testing scheme to analyze annual sunspot means and monthly U.S. industrial production. He comes to the conclusion that both series should be modeled as nonlinear SETAR rather than linear AR models. It is worth noticing that the described bootstrap-based testing procedure can easily be adapted to smooth transition and Markov-switching models, which suffer from unidentified nuisance parameters under the null hypothesis in a similar manner (Teräsvirta, 2006).

Panel Unit Root and Cointegration Tests

Econometric research on diagnosing stochastic trends in univariate autoregressive models has by now produced a set of diagnostics featuring sizeable power enhancements beyond classical (A)DF tests (Elliott, Rothenberg, & Stock, 1996). In empirical practice, however, univariate unit root testing suffers from the low power of available diagnostics in close stationary neighborhoods of the null hypothesis of nonstationarity ($H_0:\phi=1$ in (7)). Panel variants of unit root tests have been suggested to improve diagnostic power, since panel data analysis typically comes with marked enhancements of available sample information.

The issues of heteroskedasticity or unmodeled serial residual correlation described for testing the presence of unit roots in univariate autoregressions apply as well to panel unit root tests (PURTs). For instance, Demetrescu and Hanck (2012a, 2012b) have shown that frequently used PURTs such as those proposed by Levin, Lin, and Chu (2002) are not robust to unconditional volatility shifts. Addressing panel unit root detection under variance breaks, a few robust PURTs have been suggested, e.g., by Demetrescu and Hanck (2012a, 2012b) and Herwartz, Maxand, and Walle (2019). With the exception of the test in Herwartz, Maxand, and Walle (2019), a particular disadvantage of the available heteroskedasticity robust PURTs is that their application is restricted to time series which do not show any form of deterministic trending. If these tests are applied to detrended data, their limiting distributions will be affected by nuisance parameters (Herwartz, Siedenburg, & Walle, 2016). Hence, the practical relevance of these tests is rather limited, noticing that many macroeconomic data (e.g., GDP, money supply, or consumer prices) exhibit trending behavior and unconditional shifts of residual variances.

Bootstrap approaches promise consistent panel unit root testing for macroeconomic variables featuring volatility shifts and linear trends. For the implementation of PURTs, bootstrap corrections have originally been used to deal with cross-sectional correlation (Herwartz & Siedenburg, 2008). Smeekes and Urbain (2014) provide general invariance principles to establish consistent bootstrap variants of several PURTs under unconditional heteroskedasticity. At the implementation side, they suggest a multivariate version of the autoregressive wild bootstrap in (9). Herwartz and Walle (2018) provide simulation evidence that recursive detrending followed by a stylized wild bootstrap correction leads to very good size precision for a variety of PURTs designed for testing against panel stationarity formalized with a cross-sectionally homogeneous autoregressive parameter (Levin, Lin, & Chu, 2002).

As an empirical illustration, Herwartz and Walle (2018) examine the order of integration of GDP per capita (Nelson & Plosser, 1982) using data from 107 economies spanning the period from 1960 to 2011. Conditional on a panel of detrended time series, they find that GDP per capita is better characterized as a unit root process than as a trend stationary process.

Compared with the literature on bootstrap versions of prominent PURTs under nonstandard data characteristics and cross-sectional correlation, the macroeconometric literature on bootstrap-based panel cointegration analysis is scant. As notable exceptions, Jacobsen, Lyhagen, Larsson, and Nessén (2008) suggest a parametric bootstrap to mimic a panel counterpart of the trace statistic of Johansen (1996). Westerlund (2007) employs iid resampling to immunize an error correction based test against nuisance entering through cross-sectional correlation. Westerlund and Edgerton (2007) propose a combination of sieve bootstraps and iid resampling to obtain critical values for the test of panel cointegration suggested by McCoskey and Kao (1998) under cross-sectional correlation. For purposes of residual based cointegration testing (Pedroni, 1999), Di Iorio and Fachin (2014) show consistency of moving-block resampling adapted to the panel framework. Herwartz and Neumann (2005) derive a wild bootstrap approximation for testing parameter restrictions in panels of single equation error correction models.

Impulse Response Functions in SVARs

Apart from the determination of critical values for a given test statistic, bootstrap schemes are also widely used in structural VAR analysis. SVARs have by now become a popular tool for unraveling the effects of isolated economic shocks (e.g., monetary policy shocks, aggregate supply and demand shocks) in a system of variables. In particular, bootstrap methods are common practice for the construction of confidence bands to account for the estimation uncertainty inherent in impulse response functions (IRFs) (Chapter 12 of Kilian & Lütkepohl, 2017). Unlike the available analytical results on asymptotic covariances of estimated IRFs, bootstrap approximations are feasible under non-Gaussian or heteroskedastic SVAR innovations. Brüggemann, Jentsch, and Trenkler (2016) show consistency of moving-block resampling for the assessment of estimation uncertainty of estimated IRFs under weak assumptions on the residual distribution. After comparing several bootstrap methods and first order asymptotic approximations for the construction of confidence bands for IRFs analytically, Brüggemann and colleagues (2016) provide Monte Carlo evidence on finite sample performances. As it turns out, wild bootstrap schemes tend to overestimate the asymptotic variance in an iid setup, which leads to wider confidence bands. In the presence of conditional heteroskedasticity, however, the wild bootstrap tends to build upon underestimated variances, yielding too narrow confidence bands. Moreover, the theoretical merits of the moving-block bootstrap show up in very large samples, while small sample performances of the wild bootstrap and moving-block sampling are comparable. Hence, both approaches have sufficient justification for applied work (e.g., Lütkepohl & Netsunajev, 2017; Lange, Dalheimer, Herwartz, & Maxand, 2019). Complementing the wild and moving-block bootstrap schemes, the macroeconometric literature by now provides numerous alternative approaches to determine confidence bands for IRFs (see, e.g., Chapter 12 of Kilian & Lütkepohl, 2017).

Beyond the question of how to generate bootstrap samples of estimated IRFs, another issue is how to construct the actual confidence intervals. Let $\hat\theta$ signify a particular IRF estimate, and let $\theta_{\delta/2}^*$ and $\theta_{1-\delta/2}^*$ denote the $\delta/2$ and $(1-\delta/2)$ quantiles of the respective bootstrap distribution $\{\theta_s^*\}_{s=1}^{S}$. Then, the so-called percentile confidence intervals suggested by Efron (1979) and Hall (1992) read as $CI=[\theta_{\delta/2}^*,\theta_{1-\delta/2}^*]$ and $CI=[2\hat\theta-\theta_{1-\delta/2}^*,2\hat\theta-\theta_{\delta/2}^*]$, respectively. Unlike the former, the latter interval builds implicitly upon bias adjusted point estimates, $\hat\theta_{\mathrm{adj}}=2\hat\theta-\bar\theta^*$. As a potential caveat, both percentile confidence intervals might fail to cover the data-based estimate $\hat\theta$. Instead of using percentile intervals, an analyst might construct pointwise confidence intervals around the estimated statistic $\hat\theta$ using either bootstrap standard errors or suitable quantiles of the distribution of bootstrap statistics centered around the bootstrap mean, i.e., $\{\theta_s^*-\bar\theta^*\}_{s=1}^{S}$. Kilian (1999) provides simulation based comparisons of alternative approaches to confidence interval construction (see also DiCiccio & Efron, 1996, for a survey of construction techniques). Noticing that IRFs provide response patterns to one isolated shock over consecutive time instances, an analyst might be interested in assessing the joint significance of a set of displayed responses. Lütkepohl, Staszewska-Bystrova, and Winker (2015) compare alternative methods to construct bootstrap based joint confidence bands and suggest adjusted Bonferroni bands.
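Given the bootstrap replicates, both percentile intervals are immediate; the following helper (a hypothetical name) returns the Efron and Hall versions side by side.

```python
# Efron and Hall percentile intervals from bootstrap replicates theta_star.
import numpy as np

def percentile_intervals(theta_hat, theta_star, delta=0.05):
    lo, hi = np.quantile(theta_star, [delta / 2, 1 - delta / 2])
    efron = (lo, hi)                                   # Efron (1979) percentile
    hall = (2 * theta_hat - hi, 2 * theta_hat - lo)    # Hall (1992) percentile
    return efron, hall
```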

Inference in Factor-Augmented VARs

Typical macroeconomic series are relatively short, covering, for instance, 20 to 40 years of quarterly data. Nevertheless, it is often necessary to consider a large set of economic time series variables to account for all important information. A specific issue in standard VARs (as defined in (1)) is that the number of parameters increases quadratically with the number of variables included, which might result in a lack of degrees of freedom in situations of small sample size $T$ and large dimension $K$.

Bernanke, Boivin, and Eliasz (2005) and Stock and Watson (2005) have established FAVAR models.7 The FAVAR is characterized by a small number of unobservable factors ($J$) which govern a large number ($L$) of response variables. Typically, FAVARs are estimated in two steps. In the first step, the factors are determined, for instance, by means of principal component estimation. In the second step, the estimated factors are included in a VAR model. A particular difficulty related to inferential analysis in FAVARs is to account for the uncertainty in the factor estimation. Bai and Ng (2006) show that if $\sqrt{T}/L\to 0$, i.e., $L$ is large relative to $T$, the uncertainty in the factor estimates can be ignored. However, in applied work $L$ might be too small relative to $T$ and the assumption $\sqrt{T}/L\to 0$ might not hold. Gonçalves and Perron (2014) and Shintani and Guo (2015) show the first order asymptotic validity of general residual-based bootstrap procedures to construct confidence intervals in reduced-form FAVARs under the less restrictive assumption $\sqrt{T}/L\to c$ with $0\leq c<\infty$. In order to obtain valid inference in this more general framework, the factors have to be reestimated within the bootstrap algorithm and, hence, are not treated as observed regressors in the second step of the estimation procedure.

Yamamoto (2019) introduces a bootstrap method with factor estimation in the context of structural IRFs and studies its asymptotic validity under the condition $\sqrt{T}/L\to c$, $0\leq c<\infty$. Moreover, he replicates the study of Bernanke, Boivin, and Eliasz (2005), who do not account for uncertainty in factor estimation. As a remarkable difference from Bernanke and colleagues (2005), he finds no significant reduction of the price level in response to a contractionary monetary policy shock.

Further Bootstrap Methods

Being specific about the implementation of step 2 of the general bootstrap algorithm B, three prominent and widely used bootstrap methods have been illustrated in detail. However, the econometric literature provides a variety of further methods, which may be useful for specific purposes of macroeconometric analysis. The following paragraphs provide brief outlines of three such specialized bootstrap schemes.

Pairwise Bootstrap

Unlike residual-based bootstrap methods, the idea of the pairwise bootstrap (Freedman, 1981) is to resample the data directly. Specifically, the pairwise bootstrap consists of sampling with replacement from tuples of dependent and explanatory variables. Similar to the wild and moving-block approaches, the pairwise bootstrap can accommodate heteroskedasticity. Although the pairwise bootstrap has been used in various fields, several simulation studies have revealed that the wild and moving-block bootstrap schemes often provide more favorable results (e.g., Gonçalves & Kilian, 2004; Davidson & Flachaire, 2008; Brüggemann, Jentsch, & Trenkler, 2016).

Sieve Bootstrap

The sieve bootstrap is a purely nonparametric procedure, which is used in time series econometrics in the case of an underlying autoregressive process of infinite order. Based on Grenander’s method of sieves (Grenander, 1981), Bühlmann (1995, 1997) suggested the sieve bootstrap for univariate autoregressive models; Paparoditis (1996) and Inoue and Kilian (2002) extended it to the multivariate case. The sieve bootstrap consists of randomly resampling the residuals of an estimated truncated autoregression, the order of which grows with the sample size. Bootstrap data are generated recursively from the estimated autoregressive model. Noticing that sieve bootstraps rely on the assumption of iid innovations, they are not valid for (V)AR($\infty$) models with conditional heteroskedasticity (Gonçalves & Kilian, 2007). Cavaliere and Taylor (2008) suggest a bootstrap which is robust to heteroskedasticity and remaining serial correlation and combines the merits of the sieve and wild bootstrap approaches.

Factor-Based Bootstrap

A natural specification issue in the context of so-called functional coefficient models (Cai, Fan, & Yao, 2000) is to infer if the functional coefficients are constant. For a test statistic comparing the residual sums of squares from parametric and semiparametric functional regressions, Cai, Fan, and Yao (2000) advocate an iid residual-based bootstrap approach. Herwartz and Xu (2009) propose the factor-based bootstrap, which copes with various forms of heteroskedasticity, as it preserves the relationship between the error term variance and the corresponding regressors. In the framework of semiparametric regressions, the factor-based bootstrap seems preferable to wild, pairwise, or residual-based bootstrap inference, as it is likely better immunized against adverse effects of under- or oversmoothing in nonparametric regressions.

Final Comments

Several stylized bootstrap approaches for macroeconometric models have been briefly illustrated in this article. Depending on the model and the statistic of interest, the choice of the appropriate bootstrap method and its implementation is crucial for bootstrap consistency. The Further Reading section lists some books and articles discussing the methodological foundations of the bootstrap and general resampling issues in more depth.

Further Reading

  • Efron, B., & Tibshirani, R. (1998). An introduction to the bootstrap. Boca Raton, FL: Chapman & Hall.
  • Hall, P. (1992). The bootstrap and Edgeworth expansion. New York, NY: Springer.
  • Horowitz, J. L. (2001). The bootstrap. In J. J. Heckman & E. Leamer (Eds.), Handbook of Econometrics (Vol. 5, pp. 3160–3223). Amsterdam, The Netherlands: North Holland.
  • Kilian, L., & Lütkepohl, H. (2017). Structural vector autoregressive analysis. Cambridge, UK: Cambridge University Press.
  • Lahiri, S. N. (2003). Resampling methods for dependent data. New York, NY: Springer.
  • Li, H., & Maddala, G. S. (1996). Bootstrapping time series models. Econometric Reviews, 15(2), 115–158.
  • Ruiz, E., & Pascual, L. (2002). Bootstrapping financial time series. Journal of Economic Surveys, 16(3), 271–300.
  • Shao, J., & Tu, D. (1995). The jackknife and bootstrap. New York, NY: Springer
  • Teräsvirta, T. (2006). Forecasting economic variables with nonlinear models. In G. Elliott, C. Granger, & A. Timmermann (Eds.), Handbook of Economic Forecasting (pp. 413–457). Amsterdam, The Netherlands: North Holland.

Notes

  • 1. For a rigorous theoretical discussion of bootstrap consistency we refer the reader to Beran & Ducharme (1991), Hall (1992), and Horowitz (2001).

  • 2. Here we follow the convention to avoid parametric assumptions for F. As an alternative, one might assume a specific distribution and estimate the underlying distributional parameters. Subsequently, bootstrap errors are sampled from this distribution. Minor efficiency gains could be expected from such a parametric bootstrap in case of a correct distributional assumption. However, as a consequence of imposing a false distributional model, efficiency losses or biases may accrue; hence, parametric bootstrap schemes are rarely employed (Kilian, 1998).

  • 3. Indicating a class of time series processes that cope with variance clustering, GARCH is short for generalized autoregressive conditional heteroskedasticity.

  • 4. For more details see Chapter 15 of Hamilton (1994).

  • 5. Either choice, the ADF statistic coupled with the stylized wild bootstrap or the DF statistic coupled with a modified wild bootstrap, enjoys similar merits for testing in univariate autoregressive models. However, as a particular advantage in panel unit root testing, the DF approach does not require a parametric handling of higher-order dynamics if the modification of the wild bootstrap takes account of the panel dimension of the data.

  • 6. To simplify the exposition of the VECM and the testing problem, we assume a purely stochastic model, $v=0$. For more details see Chapter 6 of Lütkepohl (2005).

  • 7. Comprehensive overviews about recent developments in FAVAR models can be found in Stock and Watson (2016) and Chapter 16 of Kilian and Lütkepohl (2017).