# Structural Breaks in Time Series

## Summary and Keywords

This article covers methodological issues related to estimation, testing, and computation for models involving structural changes. Our aim is to review developments as they relate to econometric applications based on linear models. Substantial advances have been made to cover models at a level of generality that allows a host of interesting practical applications. These include models with general stationary regressors and errors that can exhibit temporal dependence and heteroskedasticity, models with trending variables and possible unit roots, and cointegrated models, among others. Advances have been made pertaining to computational aspects of constructing estimates, their limit distributions, tests for structural changes, and methods to determine the number of changes present. A variety of topics are covered, including recent developments: testing for common breaks, models with endogenous regressors (emphasizing that simply using least-squares is preferable over instrumental variables methods), quantile regressions, methods based on Lasso, panel data models, testing for changes in forecast accuracy, factor models, and methods of inference based on a continuous records asymptotic framework. Our focus is on the so-called off-line methods, whereby one wants to retrospectively test for breaks in a given sample of data and form confidence intervals about the break dates. The aim is to provide readers with an overview of methods that are of direct use in practice as opposed to issues mostly of theoretical interest.

Keywords: change-point, linear models, testing, confidence intervals, trends, stationary and integrated regressors, factor models, Lasso, forecasts

This chapter covers methodological issues related to estimation, testing, and computation for models involving structural changes. The amount of work on this subject is truly voluminous and any survey is bound to focus on specific aspects. Our aim is to review developments as they relate to econometric applications based on linear models. Substantial advances have been made to cover models at a level of generality that allows a host of interesting practical applications. These include models with general stationary regressors and errors that can exhibit temporal dependence and heteroskedasticity, models with trending variables and possible unit roots, and cointegrated models, among others. Advances have been made pertaining to computational aspects of constructing estimates, their limit distributions, testing, and methods to determine the number of changes present. The first part summarizes and updates developments described in an earlier review, Perron (2006), with the exposition drawing heavily on Perron (2008). Additions are included for recent developments: testing for common breaks, models with endogenous regressors (emphasizing that simply using least-squares is preferable over instrumental variables methods), quantile regressions, methods based on Lasso, panel data models, testing for changes in forecast accuracy, factor models, and methods of inference based on a continuous records asymptotic framework. Our focus is solely on linear models and deals with so-called off-line methods, whereby one wants to retrospectively test for breaks in a given sample of data and form confidence intervals about the break dates. Given the space constraint, our review is obviously selective. The aim is to provide an overview of methods that are of direct usefulness in practice as opposed to issues that are mostly of theoretical interest.

## The Basic Setup

We consider the linear regression with $m$ breaks (or $m+1$ regimes):

$${y}_{t}={x}_{t}^{\prime}\beta +{z}_{t}^{\prime}{\delta}_{j}+{u}_{t},\qquad t={T}_{j-1}+1,\ldots,{T}_{j},\qquad (1)$$

for $j=1,...,m+1$, following Bai and Perron (1998) (henceforth BP). In this model, ${y}_{t}$ is the observed dependent variable; ${x}_{t}$ ($p\times 1$) and ${z}_{t}$ ($q\times 1$) are vectors of covariates and $\beta$ and ${\delta}_{j}$ ($j=1,...,m+1$) are the corresponding vectors of coefficients; ${u}_{t}$ is the disturbance. The break dates $({T}_{1},...,{T}_{m})$ are unknown (the convention that ${T}_{0}=0$ and ${T}_{m+1}=T$ is used). The aim is to estimate the regression coefficients and the break points when $T$ observations on $({y}_{t},{x}_{t},{z}_{t})$ are available. This is a partial structural change model since the parameter vector $\beta$ is not subject to shifts. When $p=0$, we obtain a pure structural change model with all coefficients subject to change. A partial structural change model can be beneficial in terms of obtaining more precise estimates and more powerful tests. The method of estimation is standard least-squares (OLS), i.e., minimizing the overall sum of squared residuals (SSR) ${\sum}_{i=1}^{m+1}{\sum}_{t={T}_{i-1}+1}^{{T}_{i}}{[{y}_{t}-{x}_{t}^{\prime}\beta -{z}_{t}^{\prime}{\delta}_{i}]}^{2}$. Let $\widehat{\beta}(\{{T}_{j}\})$ and $\widehat{\delta}(\{{T}_{j}\})$ denote the estimates based on a partition $({T}_{1},...,{T}_{m})$, denoted $\{{T}_{j}\}$. Substituting these in the objective function and denoting the resulting SSR as ${S}_{T}({T}_{1},...,{T}_{m})$, the estimated break points are

$$({\widehat{T}}_{1},\ldots,{\widehat{T}}_{m})=\underset{({T}_{1},\ldots,{T}_{m})}{\operatorname{argmin}}\ {S}_{T}({T}_{1},\ldots,{T}_{m}),\qquad (2)$$

with the minimization taken over a set of admissible partitions (see below). The parameter estimates are those associated with the partition $\{{\widehat{T}}_{j}\}$, i.e., $\widehat{\beta}=\widehat{\beta}(\{{\widehat{T}}_{j}\})$, $\widehat{\delta}=\widehat{\delta}(\{{\widehat{T}}_{j}\})$. Since estimation is based on OLS, even if changes in the variance of ${u}_{t}$ are allowed, provided they occur at the same dates $({T}_{1},...,{T}_{m})$, they are not exploited to increase the precision of the break date estimators unless a quasi-likelihood framework is adopted; see below.
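To fix ideas, the estimation principle can be illustrated in the simplest pure structural change model, a single change in mean ($p=0$, ${z}_{t}=1$, $m=1$): compute the SSR for every admissible break date and pick the minimizer. Below is a minimal sketch in Python with NumPy; the function name, trimming level, and simulated data are illustrative choices, not from BP:

```python
import numpy as np

def estimate_single_break(y, eps=0.15):
    """Single change in mean: for each admissible break date T1, fit a
    separate mean on y[:T1] and y[T1:], and return the T1 (number of
    observations in the first regime) minimizing the overall SSR."""
    T = len(y)
    trim = int(np.floor(eps * T))        # admissible set: eps-trimming
    best_ssr, best_T1 = np.inf, None
    for T1 in range(trim, T - trim + 1):
        ssr = np.sum((y[:T1] - y[:T1].mean()) ** 2) \
            + np.sum((y[T1:] - y[T1:].mean()) ** 2)
        if ssr < best_ssr:
            best_ssr, best_T1 = ssr, T1
    return best_T1, best_ssr

# Simulated example: mean shifts from 0 to 2 at t = 60 in a T = 100 sample.
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(2.0, 1.0, 40)])
T1_hat, ssr_hat = estimate_single_break(y)
```

The same principle extends to general regressors by running OLS within each segment; a plain grid search of this kind is only practical for one or two breaks, which motivates the dynamic programming approach discussed later.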

## The Theoretical Framework, the Assumptions and Their Relevance

To obtain theoretical results about the consistency and limit distribution of the estimates of the break dates, some conditions need to be imposed on the asymptotic framework, the regressors, the errors, the set of admissible partitions, and the break dates. By far the most common asymptotic framework is one whereby, as $T$ increases, the total span of the data increases such that the length of the regimes increases proportionately, which implies that the break dates are asymptotically distinct, that is, ${T}_{i}^{0}=[T{\lambda}_{i}^{0}]$, where $0<{\lambda}_{1}^{0}<...<{\lambda}_{m}^{0}<1$ (a recent alternative framework is to keep the span fixed and increase the number of observations by letting the sampling interval decrease; see section “Continuous Record Asymptotics” below). To our knowledge, the most general set of assumptions in the case of weakly stationary, or mixing, regressors and errors are those in Perron and Qu (2006). Along with the asymptotic framework, this implies that what is relevant for inference about a break date is only the neighborhood around the break date considered. Some conditions are technical while others restrict the potential applicability of the results. The assumptions on the regressors specify that, for ${w}_{t}=({x}_{t}^{\prime},{z}_{t}^{\prime}{)}^{\prime}$, $(1/{l}_{i}){\sum}_{t={T}_{i}^{0}+1}^{{T}_{i}^{0}+[{l}_{i}v]}{w}_{t}{w}_{t}^{\prime}{\to}_{p}{Q}_{i}(v)$, a non-random positive definite matrix, uniformly in $v\in [0,1]$. This allows the distribution of the regressors to vary across regimes but requires the data to be weakly stationary stochastic processes. This can be relaxed on a case-by-case basis, although the proofs then depend on the nature of the relaxation. For instance, the scaling used forbids trending regressors, unless they are of the form $\{1,(t/T),...,(t/T{)}^{p}\}$, say, for a polynomial trend of order $p$. Casting trend functions in this form can deliver useful results in many cases.
However, there are instances where specifying trends in unscaled form such as $\{1,t,...,{t}^{p}\}$ can deliver much better results, especially if level and trend slope changes occur jointly. Results using unscaled trends with $p=1$ are presented in Perron and Zhu (2005). A comparison of their results with other trend specifications is presented in Deng and Perron (2006). A generalization with fractionally integrated errors can be found in Chang and Perron (2016). Another important restriction is implied by the requirement that the limit be a fixed, as opposed to stochastic, matrix. This precludes integrated processes as regressors (i.e., unit roots). In the single break case, this has been relaxed by Bai et al. (1998), who considered structural changes in cointegrated relationships in a system of equations. Kejriwal and Perron (2008) provided results for multiple structural changes in a single cointegrating vector. Consistency still applies, but the rates of convergence and limit distributions are different.

The assumptions on ${u}_{t}$ and $\{{w}_{t}{u}_{t}\}$ impose mild restrictions and permit a wide class of potential correlation and heterogeneity (including conditional heteroskedasticity) and lagged dependent variables. They rule out errors with unit roots, which can nevertheless be of interest; for example, when testing for a change in the deterministic component of the trend function for an integrated series (Perron & Zhu, 2005). The set of conditions is not the weakest possible. For example, Lavielle and Moulines (2000) allowed the errors to be long memory processes but considered only the case of multiple changes in the mean. It is also assumed that the minimization problem is taken over all partitions such that ${T}_{i}-{T}_{i-1}\ge \epsilon T$ for some $\epsilon >0$. This is not restrictive in practice since $\epsilon$ can be small. Under these conditions, the break fractions ${\lambda}_{i}^{0}$ are consistently estimated, that is, ${\widehat{\lambda}}_{i}\equiv ({\widehat{T}}_{i}/T){\to}_{p}{\lambda}_{i}^{0}$, with a rate of convergence of $T$. The estimates of the break dates are not consistent themselves. The estimates of the other parameters have the same distribution as would prevail if the break dates were known. Kejriwal and Perron (2008) obtained similar results with $I(1)$ regressors for a cointegrated model subject to multiple changes, using the static regression or a dynamic regression augmented with leads and lags of the first differences of the $I(1)$ regressors.

## Allowing for Restrictions on the Parameters

Perron and Qu (2006) considered a broader framework whereby linear restrictions on the parameters can be imposed. The class of models considered is ${y}_{t}={z}_{t}^{\prime}{\delta}_{j}+{u}_{t}$ ($t={T}_{j-1}+1,...,{T}_{j}$) where $R\delta =r$, with $R$ a $k$ by $(m+1)q$ matrix with rank $k$ and $r$, a $k$ dimensional vector of constants. The assumptions are the same as discussed above. There is no need for a distinction between variables whose coefficients are allowed to change and those whose coefficients are not allowed to change. Restricting some coefficients to be identical across regimes can yield a partial structural change model. This is a useful generalization since it permits a wider class of models of practical interest; for example, a model with a specific number of states less than the number of regimes, or one where a subset of coefficients may be allowed to change over only a limited number of regimes. They show that the same consistency and rate of convergence results hold. Moreover, the limit distributions of the estimates of the break dates are unaffected by the imposition of valid restrictions, but improvements can be obtained in finite samples. The main advantages of imposing restrictions are that more powerful tests and more precise estimates are obtained.
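For known break dates, imposing linear restrictions $R\delta =r$ amounts to restricted least squares. The sketch below uses the textbook restricted OLS formula rather than Perron and Qu's (2006) estimation algorithm; the names and the simulated design (an intercept change with a slope restricted to be common across regimes, i.e., a partial structural change model) are illustrative:

```python
import numpy as np

def restricted_ols(Z, y, R, r):
    """Restricted least squares: minimize the SSR subject to R @ d = r,
    via d_r = d_ols - (Z'Z)^{-1} R' [R (Z'Z)^{-1} R']^{-1} (R d_ols - r)."""
    ZtZ_inv = np.linalg.inv(Z.T @ Z)
    d_ols = ZtZ_inv @ (Z.T @ y)
    A = R @ ZtZ_inv @ R.T
    return d_ols - ZtZ_inv @ R.T @ np.linalg.solve(A, R @ d_ols - r)

# Two regimes with a known break at t = 50: the intercept changes while
# the slope is restricted to be common across regimes.
rng = np.random.default_rng(1)
T = 100
x = rng.normal(size=T)
y = np.where(np.arange(T) < 50, 1.0, 3.0) + 0.5 * x + 0.1 * rng.normal(size=T)
Z = np.zeros((T, 4))
Z[:50, 0], Z[:50, 1] = 1.0, x[:50]      # regime-1 intercept and slope
Z[50:, 2], Z[50:, 3] = 1.0, x[50:]      # regime-2 intercept and slope
R = np.array([[0.0, 1.0, 0.0, -1.0]])   # restriction: slope_1 - slope_2 = 0
r = np.array([0.0])
d = restricted_ols(Z, y, R, r)
```

The restricted estimate satisfies the constraint exactly, so the two slope coefficients coincide while the two intercepts are estimated freely.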

## Method to Compute Global Minimizers

To estimate the model, we need global minimizers of the objective function (2). A standard grid search requires least squares operations of order $O({T}^{m})$, which is prohibitive when $m>2$. Bai and Perron (2003a) discussed a method based on a dynamic programming algorithm that is efficient (see also Hawkins, 1976). Indeed, the additional computing time needed to estimate more than two break dates is marginal compared to the time needed to estimate a two-break model. Consider the case of a pure structural change model. The basic idea is that the total number of possible segments is at most $T(T+1)/2$ and is therefore of order $O({T}^{2})$. One then needs a method to select which combination of segments yields a minimal value of the objective function. This is achieved efficiently using a dynamic programming algorithm. For models with restrictions (including a partial structural change model), an iterative procedure is available, which in most cases requires few iterations. Hence, even with large samples, the computing cost is small. If the sample is large, various methods have been proposed that are of order $O(T)$ in computation time. Given that they are of less importance in economics, such procedures are not reviewed. Also, in the context of large data sets, methods using Lasso, discussed later, appear more promising.
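A minimal sketch of the dynamic programming idea for a pure change in mean, assuming Python with NumPy: first compute the SSR of every admissible segment in $O({T}^{2})$, then use a Bellman recursion so that the best partition with $k$ breaks is built from the best partitions with $k-1$ breaks. This follows the spirit of the Bai and Perron (2003a) algorithm but is a simplified illustration, not their implementation:

```python
import numpy as np

def all_segment_ssr(y, h):
    """SSR of a constant (mean) fit on every segment y[i..j] of length
    at least h, computed in O(T^2) with running sums."""
    T = len(y)
    ssr = np.full((T, T), np.inf)
    for i in range(T):
        s = sq = 0.0
        for j in range(i, T):
            s += y[j]
            sq += y[j] * y[j]
            n = j - i + 1
            if n >= h:
                ssr[i, j] = sq - s * s / n
    return ssr

def optimal_partition(y, m, h):
    """Global SSR minimizers with m breaks via dynamic programming:
    cost[k, t] is the minimal SSR of splitting y[0..t] into k + 1
    segments, built recursively from cost[k - 1, .]."""
    T = len(y)
    seg = all_segment_ssr(y, h)
    cost = np.full((m + 1, T), np.inf)
    prev = np.zeros((m + 1, T), dtype=int)
    cost[0] = seg[0]
    for k in range(1, m + 1):
        for t in range(T):
            cand = cost[k - 1, :t] + seg[1:t + 1, t]  # last segment starts at j + 1
            if cand.size:
                j = int(np.argmin(cand))
                cost[k, t], prev[k, t] = cand[j], j
    # Back-track: each stored index is the last observation of the
    # preceding part, so the next regime starts at index + 1.
    breaks, t = [], T - 1
    for k in range(m, 0, -1):
        t = prev[k, t]
        breaks.append(t + 1)
    return sorted(breaks), cost[m, T - 1]

# Two breaks (regime means 0, 4, 8) at observations 40 and 70, T = 100.
rng = np.random.default_rng(3)
y = np.concatenate([rng.normal(0, 1, 40), rng.normal(4, 1, 30), rng.normal(8, 1, 30)])
breaks, min_ssr = optimal_partition(y, m=2, h=10)
```

The segment table costs $O({T}^{2})$ and the recursion $O(m{T}^{2})$, which is why adding further breaks is computationally cheap once the table is built.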

## The Limit Distribution of the Estimates of the Break Dates

With the assumptions on the regressors and errors, and given the asymptotic framework, the limit distributions of the estimates of the break dates are independent so that the analysis is the same as for a single break. This holds because the distance between each break increases at rate $T$, and the mixing conditions on the regressors and errors impose a short memory property so that distant events are independent. The main results for this case are those of Bai (1997a) for the single break case and the extension of BP (1998) for multiple breaks. The limit distribution depends on: (a) the magnitude of the change in coefficients (larger changes leading to higher precision), (b) the limit sample moment matrices of the regressors for the pre- and post-break segments; (c) the so-called long-run variance of $\left\{{w}_{t}{u}_{t}\right\}$, which accounts for serial correlation; and (d) whether the regressors are trending or not. In all cases, the nuisance parameters can be consistently estimated and confidence intervals constructed. For a change of fixed magnitude the limit distribution depends on the finite sample distribution of the errors. To get rid of this dependence, the asymptotic framework is modified with the change in parameters getting smaller as $T$ increases but slowly enough for the estimated break fraction to remain consistent. The limit distribution obtained in Bai (1997a) and BP (1998) applied to the case with no trending regressors. With trending regressors, a similar result is still possible (assuming trends of the form $(t/T)$); see Bai (1997a) when ${z}_{t}$ is a polynomial time trend. For an unscaled linear trend, see Perron and Zhu (2005). Deng and Perron (2006) showed that the shrinking shift asymptotic framework leads to a poor approximation for a change in a linear trend and that the limit distribution based on a fixed magnitude of shift is preferable. 
In a cointegrating regression with $I(1)$ variables, Kejriwal and Perron (2008) showed that the estimated break fractions are asymptotically dependent so that confidence intervals need to be constructed jointly. If only the intercept and/or the coefficients of the stationary regressors are allowed to change, the estimates of the break dates are asymptotically independent.

Besides the original asymptotic arguments used by Bai (1997a) and BP (1998), Elliott and Müller (2007) proposed to invert Nyblom’s (1989) statistic to construct confidence sets, while Eo and Morley (2015) generalized Siegmund’s (1988) method by inverting the likelihood-ratio statistic of Qu and Perron (2007); henceforth ILR. The latter methods were mainly motivated by the fact that the empirical coverage rates of the confidence intervals obtained from Bai’s (1997a) method are below the nominal level with small breaks. The method of Elliott and Müller (2007) delivered the most accurate coverage rates, though at the expense of increased average lengths of the confidence sets, especially with large breaks. The length can be large (e.g., the whole sample) even with large breaks: for example, with serially correlated errors or with lagged dependent variables. Yamamoto (2016) proposed a modification of the long-run variance estimator that alleviates this problem, though the method does not apply when lagged dependent variables are present. The ILR-based confidence sets display a coverage probability often above the nominal level, which results in an average length larger than with Bai’s (1997a) method. See Chang and Perron (2017) for a review. Kurozumi and Yamamoto (2015) proposed confidence sets obtained by inverting a test that maximizes some weighted average power function. Overall, the findings suggest the need for a method that provides, over a wide range of empirically relevant models, both good coverage probabilities and reasonable lengths of the confidence sets for all break sizes, whether large or small. See below for a recent alternative using a continuous time asymptotic framework and the concept of Highest Density Regions (Casini & Perron, 2017a).

## Estimating Breaks One at a Time

Bai (1997b) and BP (1998) showed that one can consistently estimate all break fractions sequentially. When estimating a single-break model in the presence of multiple breaks, the estimate of the break fraction will converge to one of the true break fractions, the one that allows the greatest reduction in the SSR. Then, allowing for a break at the estimated value, a second one-break model can be applied that will consistently estimate the second dominating break, and so on. Interestingly, Yang (2017) showed that this result fails to hold for breaks in a linear trend model. Bai (1997b) derived the limit distributions of the estimates and showed that they are not the same as those obtained when estimating all break dates simultaneously. Except for the last break, the limit distributions depend on the parameters in all segments. He suggested a repartition procedure, which reestimates each break date conditional on the adjacent break dates. The limit distribution is then the same as when the break dates are estimated simultaneously.
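The sequential logic can be sketched as follows for a change in mean, assuming Python with NumPy; this is a simplified illustration of the one-at-a-time idea, without the repartition step:

```python
import numpy as np

def best_split(y):
    """Best single break (by SSR) for a change in mean within a segment;
    returns the split point and the SSR reduction it achieves."""
    T = len(y)
    ssr0 = np.sum((y - y.mean()) ** 2)
    best, arg = np.inf, None
    for T1 in range(5, T - 4):            # keep at least 5 obs per side
        ssr = np.sum((y[:T1] - y[:T1].mean()) ** 2) \
            + np.sum((y[T1:] - y[T1:].mean()) ** 2)
        if ssr < best:
            best, arg = ssr, T1
    return arg, ssr0 - best

def sequential_breaks(y, m):
    """One-at-a-time estimation: repeatedly apply the single-break
    estimator within each current regime and keep the split giving the
    greatest reduction in the overall SSR, until m breaks are found."""
    breaks = []
    for _ in range(m):
        bounds = [0] + sorted(breaks) + [len(y)]
        best_gain, best_b = -np.inf, None
        for lo, hi in zip(bounds[:-1], bounds[1:]):
            if hi - lo < 10:              # skip segments too short to split
                continue
            b, gain = best_split(y[lo:hi])
            if gain > best_gain:
                best_gain, best_b = gain, lo + b
        breaks.append(best_b)
    return sorted(breaks)

# Two mean breaks (regime means 0, 4, 8) at observations 40 and 70.
rng = np.random.default_rng(4)
y = np.concatenate([rng.normal(0, 1, 40), rng.normal(4, 1, 30), rng.normal(8, 1, 30)])
seq_breaks = sequential_breaks(y, m=2)
```

The first pass picks up the dominating break; the second pass, applied within the resulting regimes, picks up the remaining one.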

## Estimation in a System of Regressions

Substantial efficiency gains can be obtained by casting the analysis in a system of regressions. Bai et al. (1998) considered estimating a single break date in multivariate time series allowing stationary or integrated regressors as well as trends. Bai (2000) considered a segmented stationary VAR model with breaks occurring in the parameters of the conditional mean, the covariance matrix of the error term, or both. The most general framework is that of Qu and Perron (2007), who consider models of the form ${y}_{t}=(I\otimes {z}_{t}^{\prime})S{\beta}_{j}+{u}_{t}$, for ${T}_{j-1}+1\le t\le {T}_{j}$ ($j=1,...,m+1$), where ${y}_{t}$ is an $n$-vector of dependent variables and ${z}_{t}$ is a $q$-vector that includes the regressors from all equations, and ${u}_{t}\sim (0,{\Sigma}_{j})$. The matrix $S$ is of dimension $nq$ by $p$ with full column rank (usually a selection matrix that specifies which regressors appear in each equation). They also allow for the imposition of a set of $r$ restrictions of the form $g(\beta ,vec(\Sigma ))=0$, where $\beta =({\beta}_{1}^{\prime},...,{\beta}_{m+1}^{\prime}{)}^{\prime}$, $\Sigma =({\Sigma}_{1},...,{\Sigma}_{m+1})$ and $g(\cdot )$ is an $r$ dimensional vector. Both within- and cross-equation restrictions are allowed, and in each case within or across regimes. The assumptions on the regressors and errors ${u}_{t}$ are similar to those discussed above. Hence, the framework permits a wide class of models including VAR, SUR, linear panel data, changes in the means of a vector of stationary processes, etc. Models with integrated regressors are not permitted.
Allowing for restrictions on ${\beta}_{j}$ and ${\Sigma}_{j}$ permits a wide range of cases of practical interest such as partial structural change models and block partial structural change models where only a subset of the equations are subject to change; changes in only some element of the covariance matrix ${\Sigma}_{j}$; changes in only the covariance matrix ${\Sigma}_{j}$, while ${\beta}_{j}$ is the same for all segments; and models where the breaks occur in a particular order across subsets of equations.

The method of estimation is again QML (based on normal errors) subject to the restrictions. They derive the consistency, rate of convergence and the limit distributions of the estimated break dates. Though only root-$T$ consistent estimates of $(\beta \mathrm{,}\Sigma )$ are needed to construct asymptotically valid confidence intervals, more precise estimates will lead to better finite sample coverage rates. Hence, it is recommended to use the estimates obtained imposing the restrictions even though imposing restrictions does not have a first-order effect on the limiting distributions of the estimates of the break dates. To make estimation possible in practice, they present an algorithm which extends the one discussed in BP (2003a) using an iterative GLS procedure to construct the likelihood function for all possible segments.

Qu and Perron (2007) also considered “locally ordered breaks.” This applies when the breaks across different equations are “ordered” based on prior knowledge and are “local” since the time span between them is short. Hence, the breaks cannot be viewed as occurring simultaneously nor as asymptotically distinct. An estimation algorithm is presented and a framework to analyze the limit distribution of the estimates is introduced. Unlike the case with asymptotically distinct breaks, the distributions of the estimates of the break dates need to be considered jointly. Their analysis has been considerably extended to cover models with trends and integrated regressors in Li and Perron (2017).

## Tests that Allow for a Single Break

To test for a structural change at an unknown date, Quandt (1960) suggested the likelihood ratio test evaluated at the break date that maximizes it. This problem was treated under various degrees of specificity, culminating in the general treatment of Andrews (1993). The basic method is to use the maximum of the likelihood ratio test over all possible values of the parameter in some pre-specified set. For a single change, the statistic is ${\mathrm{sup}}_{{\lambda}_{1}\in {\Lambda}_{\epsilon}}L{R}_{T}({\lambda}_{1})$, where $L{R}_{T}({\lambda}_{1})$ denotes the likelihood ratio evaluated at ${T}_{1}=[T{\lambda}_{1}]$ and the maximization is restricted to break fractions in the set ${\Lambda}_{\epsilon}=[{\epsilon}_{1},1-{\epsilon}_{2}]$ with ${\epsilon}_{1},{\epsilon}_{2}>0$. The limit distribution depends on ${\Lambda}_{\epsilon}$. Andrews (1993) also considered tests based on the maximal value of the Wald and LM tests and showed that they are asymptotically equivalent under a sequence of local alternatives. This does not mean, however, that they all have the same properties in finite samples. The simulations of Vogelsang (1999), for a change in mean with serially correlated errors, showed the $\mathrm{sup}L{M}_{T}$ test to be seriously affected by the problem of non-monotonic power, in the sense that, for a fixed $T$, the power of the test can decrease to zero as the change in mean increases.

For Model (1) with $i\mathrm{.}i\mathrm{.}d\mathrm{.}$ errors, the LR and Wald tests have similar properties, so we shall discuss the Wald test. For a single change, it is defined by (up to a scaling by $q$):

$${W}_{T}({\lambda}_{1};q)=\left(\frac{T-2q-p}{q}\right)\frac{{\widehat{\delta}}^{\prime}{H}^{\prime}{\left(H{({\overline{Z}}^{\prime}{M}_{X}\overline{Z})}^{-1}{H}^{\prime}\right)}^{-1}H\widehat{\delta}}{SS{R}_{k}},\qquad (3)$$

where $\overline{Z}=diag({Z}_{1},...,{Z}_{m+1})$ with ${Z}_{i}=({z}_{{T}_{i-1}+1},...,{z}_{{T}_{i}}{)}^{\prime}$, $H$ is such that $(H\delta {)}^{\prime}=({\delta}_{1}^{\prime}-{\delta}_{2}^{\prime})$ and ${M}_{X}=I-X{({X}^{\prime}X)}^{-1}{X}^{\prime}$. Here $SS{R}_{k}$ is the SSR under the alternative hypothesis. The break point that maximizes the Wald test is the same as the estimate obtained by minimizing the SSR provided the minimization problem (2) is restricted to the set ${\Lambda}_{\epsilon}$, that is, ${\mathrm{sup}}_{{\lambda}_{1}\in {\Lambda}_{\epsilon}}{W}_{T}({\lambda}_{1};q)={W}_{T}({\widehat{\lambda}}_{1};q)$. When serial correlation and/or heteroskedasticity in the errors is permitted, the Wald test must be adjusted to account for this, that is,

$${W}_{T}^{\ast}({\lambda}_{1};q)=T{\widehat{\delta}}^{\prime}{H}^{\prime}{\left(H\widehat{V}(\widehat{\delta}){H}^{\prime}\right)}^{-1}H\widehat{\delta},\qquad (4)$$

where $\widehat{V}(\widehat{\delta})$ is robust to serial correlation and heteroskedasticity; a consistent estimate of

$$V(\widehat{\delta})=\underset{T\to \infty}{\mathrm{plim}}\ T{({\overline{Z}}^{\prime}{M}_{X}\overline{Z})}^{-1}{\overline{Z}}^{\prime}\Omega \overline{Z}{({\overline{Z}}^{\prime}{M}_{X}\overline{Z})}^{-1},\qquad (5)$$

where $\Omega$ denotes the covariance matrix of the vector of errors.
Note that it can be constructed allowing identical or different distributions for the regressors and the errors across segments. This is important because, if an unaccounted variance shift occurs at the same time, inference can be distorted (Pitarakis, 2004). The computation of the robust version of the Wald test (4) can be involved. Since the estimate of ${\lambda}_{1}$ is $T$-consistent even with correlated errors, an asymptotically equivalent version is to first take the supremum of the original Wald test (3) to obtain the break points, that is, imposing $\Omega ={\sigma}^{2}I$. The robust version is obtained by evaluating (4) and (5) at these estimates: that is, using ${W}_{T}^{\ast}({\widehat{\lambda}}_{1};q)$ instead of ${\mathrm{sup}}_{{\lambda}_{1}\in {\Lambda}_{\epsilon}}{W}_{T}^{\ast}({\lambda}_{1};q)$. An issue of concern for such tests is the adequacy of the asymptotic distribution as an approximation to the finite sample distribution. The tests can exhibit size distortions, especially when the regressors and/or the errors are strongly serially correlated. A potential solution is to use a bootstrap method (e.g., Prodan, 2008; Chang & Perron, 2017). Alternatively, given that the estimation of the long-run variance and, consequently, the choice of the bandwidth play an essential role, Cho and Vogelsang (2017) proposed a fixed-bandwidth theory. It is shown to improve upon the standard asymptotic distribution theory, whereby the bandwidth is negligible relative to $T$. However, while their results are convincing for a given choice of the bandwidth, when the latter is chosen endogenously, for example using Andrews’s (1991) method, the improvements are not as important (see also Casini, 2018b, for additional results on long-run variance estimation in the context of nonstationarity).
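For the change-in-mean case with i.i.d. errors, the unadjusted sup-Wald statistic can be computed directly from restricted and unrestricted SSRs, using the standard equivalence between the Wald quadratic form and the SSR reduction. A sketch assuming Python with NumPy; the trimming level and names are illustrative:

```python
import numpy as np

def sup_wald_mean(y, eps=0.15):
    """sup-Wald test for a single change in mean with i.i.d. errors,
    in SSR form: W(T1) = (T - 2) * (SSR0 - SSR1) / SSR1 (here q = 1,
    p = 0), maximized over eps-trimmed break dates."""
    T = len(y)
    ssr0 = np.sum((y - y.mean()) ** 2)    # SSR under the null of no break
    trim = int(np.floor(eps * T))
    sup_w, arg = -np.inf, None
    for T1 in range(trim, T - trim + 1):
        ssr1 = np.sum((y[:T1] - y[:T1].mean()) ** 2) \
             + np.sum((y[T1:] - y[T1:].mean()) ** 2)
        w = (T - 2) * (ssr0 - ssr1) / ssr1
        if w > sup_w:
            sup_w, arg = w, T1
    return sup_w, arg

# Under a 2-standard-deviation mean shift the statistic is very large.
rng = np.random.default_rng(2)
y = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(2.0, 1.0, 40)])
sup_w, T1_hat = sup_wald_mean(y)
```

The maximizer coincides with the least-squares break date estimate; in practice the statistic is compared to the critical values tabulated by Andrews (1993), which depend on the trimming.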

The vast majority of tests considered in the econometrics literature imposes some trimming, thereby ruling out the possibility of a break occurring near the beginning or end of the sample. This is not so in the statistics literature. Tests without trimming lead to a different limiting distribution, typically involving a log-log rate of divergence that needs to be accounted for (e.g., Csörgö & Horváth, 1997). These tests usually have poor finite sample properties. An application to general models in econometrics is Hidalgo and Seo (2013). Their results are, however, restricted to LM tests in order to have decent size properties in finite samples.

## Optimal Tests

Andrews and Ploberger (1994) considered a class of tests that maximize a weighted average local asymptotic power function. They are weighted functions of the standard Wald, LM, or LR statistics over all permissible break dates. Using any of the three basic statistics leads to tests that are asymptotically equivalent, and we proceed with the Wald test. Assuming equal weights are given to all break fractions in some interval $[{\epsilon}_{1},1-{\epsilon}_{2}]$, the optimal test for distant alternatives is the so-called Exp-type test: $Exp\text{-}{W}_{T}=\mathrm{log}({T}^{-1}{\sum}_{{T}_{1}=[T{\epsilon}_{1}]+1}^{T-[T{\epsilon}_{2}]}\mathrm{exp}((1/2){W}_{T}({T}_{1}/T)))$. For alternatives close to the null value of no change, the optimal test is the $Mean\text{-}{W}_{T}$ test: $Mean\text{-}{W}_{T}={T}^{-1}{\sum}_{{T}_{1}=[T{\epsilon}_{1}]+1}^{T-[T{\epsilon}_{2}]}{W}_{T}({T}_{1}/T)$. Kim and Perron (2009) approached the optimality issue from a different perspective using the approximate Bahadur measure of efficiency. They showed that tests based on the Mean functional are inferior to those based on the Sup and Exp functionals (which are equally efficient) when using the same base statistic. When considering tests that incorporate a correction for potential serial correlation in the errors: (a) for a given functional, using the LM statistic leads to tests with zero asymptotic relative efficiency compared to using the Wald statistic; (b) for a given statistic, the Mean-type tests have zero relative efficiency compared to the Sup and Exp versions, which are equally efficient. Hence, the preferred tests are the Sup- or Exp-Wald tests. Any test based on the LM statistic should be avoided. Such results, and more discussed below, call into question the usefulness of a local asymptotic criterion to evaluate the properties of testing procedures; on this issue, see also Deng and Perron (2008).
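Given the sequence of Wald statistics over the admissible break dates, the Mean and Exp functionals are simple transformations. A sketch assuming Python with NumPy; the log-sum-exp rearrangement is for numerical stability and the function name is illustrative:

```python
import numpy as np

def mean_exp_wald(wald_seq, T):
    """Andrews-Ploberger functionals of the Wald sequence over the
    admissible break dates: Mean-W = T^{-1} sum W(T1/T) and
    Exp-W = log(T^{-1} sum exp(W(T1/T) / 2)), the latter computed via
    a log-sum-exp rearrangement to avoid overflow for large W."""
    w = np.asarray(wald_seq, dtype=float)
    mean_w = w.sum() / T
    a = w / 2.0
    exp_w = a.max() + np.log(np.sum(np.exp(a - a.max()))) - np.log(T)
    return mean_w, exp_w

# Deterministic check: 70 admissible dates, all with W = 4, T = 100.
mean_w, exp_w = mean_exp_wald(np.full(70, 4.0), T=100)
```

For a constant sequence $W=4$ this gives $Mean\text{-}W = 70\cdot 4/100 = 2.8$ and $Exp\text{-}W = 2 + \log(0.7)$.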

## Non-monotonicity in Power

The issue of non-monotonicity of the power function of structural change tests was first analyzed in Perron (1991) for changes in a trend function. In more general contexts, the Sup-Wald and Exp-Wald tests have monotonic power when only one break occurs under the alternative. As shown in Vogelsang (1999), the Mean-Wald test can exhibit a non-monotonic power function, though the problem has not been shown to be severe. All of these, however, suffer from important power problems when the alternative involves two breaks (Vogelsang, 1997). This suggests that a test will exhibit a non-monotonic power function if the number of breaks present is greater than the number accounted for. Hence, though a single break test is consistent against multiple breaks, power gains can result from using tests for multiple structural changes (e.g., the $UDmax$ test of BP [1998]). Crainiceanu and Vogelsang (2007) also showed how the problem is exacerbated when using estimates of the long-run variance that allow for correlation. Accordingly, there are problems with tests based on some local asymptotic arguments (e.g., LM or CUSUM) or that do not try to model the breaks explicitly. An example is the test of Elliott and Müller (2006), which is deemed optimal for a class of models involving “small” non-constant coefficients. The suggested procedure does not explicitly model breaks, and the test is then akin to a “partial sums type” test. Perron and Yamamoto (2016) showed that their test suffers from severe non-monotonic power, while offering only modest gains for small breaks. Methods to overcome non-monotonic power problems have been suggested by Altissimo and Corradi (2003) and Juhl and Xiao (2009). They suggest using non-parametric methods for the estimation of the mean. The resulting estimates and tests are, however, sensitive to the bandwidth used. There is currently no reliable method to appropriately choose this parameter in the context of structural changes.
Kejriwal (2009) proposed using the residuals under the alternative to select the bandwidth and those under the null to compute the long-run variance for the case of a change in mean. Yang and Vogelsang (2011) showed that this can be viewed as an LM test with a long-run variance constructed with a constrained small bandwidth. They provide asymptotic critical values based on the fixed-bandwidth asymptotics of Kiefer and Vogelsang (2005). None of these remedies works when lagged dependent variables are present.

## Tests for Multiple Structural Changes

A problem with the $Mean\text{-}{W}_{T}$ and $Exp\text{-}{W}_{T}$ tests is that they require computations of order $O({T}^{m})$. Consider instead the Sup-Wald test. With $i\mathrm{.}i\mathrm{.}d\mathrm{.}$ errors, maximizing the Wald statistic is equivalent to minimizing the SSR, which can be solved efficiently and the Wald test for $k$ changes is:

with $H$ such that $(H\delta {)}^{\prime}=({\delta}_{1}^{\prime}-{\delta}_{2}^{\prime}\mathrm{,...,}{\delta}_{k}^{\prime}-{\delta}_{k+1}^{\prime})$. The sup-Wald test is ${{\displaystyle \mathrm{sup}}}_{({\lambda}_{1}\mathrm{,...,}{\lambda}_{k})\in {\text{\Lambda}}_{k\mathrm{,}\epsilon}}$${W}_{T}({\lambda}_{1}\mathrm{,...,}{\lambda}_{k};q)={W}_{T}({\widehat{\lambda}}_{1}\mathrm{,...,}{\widehat{\lambda}}_{k};q)$, where ${\text{\Lambda}}_{k\mathrm{,}\epsilon}=\{({\lambda}_{1}\mathrm{,...,}{\lambda}_{k});{\lambda}_{i+1}-{\lambda}_{i}\ge \epsilon \mathrm{,}{\lambda}_{1}\ge \epsilon \mathrm{,}{\lambda}_{k}\le 1-\epsilon \}$ and $({\widehat{\lambda}}_{1}\mathrm{,...,}{\widehat{\lambda}}_{k})=({\widehat{T}}_{1}/T\mathrm{,...,}{\widehat{T}}_{k}/T)$, with $({\widehat{T}}_{1}\mathrm{,...,}{\widehat{T}}_{k})$ obtained by minimizing the SSR over ${\text{\Lambda}}_{k\mathrm{,}\epsilon}$. With serial correlation and heteroskedasticity in the errors, the test is

with $\widehat{V}(\widehat{\delta})$ as defined by (5). Again, the asymptotically equivalent version with the Wald test evaluated at the estimates $({\widehat{\lambda}}_{1}\mathrm{,...,}{\widehat{\lambda}}_{k})$ is used to make the problem tractable. Critical values are presented in BP (1998) and Bai and Perron (2003b). The importance of the choice of $\epsilon $ for the size and power of the test is discussed in BP (2003a) and Bai and Perron (2006). Often, one may not wish to pre-specify a particular number of breaks. Then a test of the null hypothesis of no structural break against an unknown number of breaks given some upper bound $M$ can be used. These are called “double maximum tests.” The first is an equal-weight version defined by $UD\mathrm{max}{W}_{T}(M\mathrm{,}q)={{\displaystyle \mathrm{max}}}_{1\le m\le M}{W}_{T}({\widehat{\lambda}}_{1}\mathrm{,...,}{\widehat{\lambda}}_{m};q)$. The second test applies weights to the individual tests such that the marginal p-values are equal across values of $m$, denoted $WD\mathrm{max}{W}_{T}(M\mathrm{,}q)$ (see BP, 1998). The choice $M=5$ should be sufficient for most applications. In any event, the critical values vary little as $M$ is increased beyond 5. The double maximum tests are arguably the most useful to determine if structural changes are present: (1) there are types of multiple structural changes that are difficult to detect with a single break test (e.g., two breaks with the first and third regimes the same); (2) the non-monotonic power problem when the number of changes is greater than specified is alleviated; (3) the power of the double maximum tests is almost as high as the best power that can be achieved using a test with the correct number of breaks (BP, 2006).
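To fix ideas, the following minimal sketch (ours, for illustration only) computes the sup-Wald statistic for a single change in mean with i.i.d. errors by scanning the trimmed candidate dates; the function name and the 15% trimming default are our choices, not part of BP's procedures:

```python
import numpy as np

def sup_wald_mean_break(y, eps=0.15):
    """Sup-Wald test for a single change in mean with i.i.d. errors.

    Scans candidate break dates in the trimmed range [eps*T, (1-eps)*T]
    and returns the supremum of the Wald statistics together with the
    maximizing date, which is also the SSR-minimizing break date."""
    T = len(y)
    ssr0 = np.sum((y - y.mean()) ** 2)          # SSR under the null
    best_w, best_t = -np.inf, None
    for t1 in range(int(eps * T), int((1 - eps) * T)):
        ssr1 = np.sum((y[:t1] - y[:t1].mean()) ** 2) \
             + np.sum((y[t1:] - y[t1:].mean()) ** 2)
        w = (ssr0 - ssr1) / (ssr1 / (T - 2))    # Wald statistic for the shift
        if w > best_w:
            best_w, best_t = w, t1
    return best_w, best_t
```

For multiple breaks the same search is performed efficiently by dynamic programming, and the statistic is compared to the critical values tabulated in BP (1998).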

Sequential Tests

BP (1998) also discuss a test of $\ell$ versus $\ell+1$ breaks, which can be used to estimate the number of breaks using a sequential testing procedure. For the model with $\ell$ breaks, the estimated break points denoted by $({\widehat{T}}_{1}\mathrm{,...,}{\widehat{T}}_{\ell})$ are obtained by a global minimization of the SSR. The strategy proceeds by testing for the presence of an additional break in each of the $(\ell+1)$ segments obtained using the partition ${\widehat{T}}_{1}\mathrm{,...,}{\widehat{T}}_{\ell}$. We reject in favor of a model with $(\ell+1)$ breaks if the minimal value of the SSR over all segments where an additional break is included is sufficiently smaller than that from the $\ell$ breaks model. The break date selected is the one associated with this overall minimum. The limit distribution of the test is related to that of a test for a single change. Bai (1999) considered the same problem allowing the breaks to be global minimizers of the SSR under both the null and alternative hypotheses. The limit distribution of the test is different. A method to compute the asymptotic critical values is discussed and the results extended to the case of trending regressors. These tests can form the basis of a sequential testing procedure by applying them successively starting from $\ell=0$, until a non-rejection occurs. The simulation results of BP (2006) showed that such an estimate of the number of breaks is better than those obtained using information criteria as suggested by, for example, Liu et al. (1997) (see also Perron, 1997). But this sequential procedure should not be applied mechanically. In several cases, it stops too early. The recommendation is to first use a double maximum test to ascertain if any break is at all present. The sequential tests can then be used starting at some value greater than 0. 
Kurozumi and Tuvaandorj (2011) considered information criteria explicitly tailored to structural change problems, which can usefully complement a sequential testing procedure.
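The logic of the sequential procedure can be sketched as follows for a change in mean. All names and defaults are illustrative; in particular, the critical value is passed in by the user since, in practice, it depends on the trimming, the number of regressors, and the significance level (BP, 1998):

```python
import numpy as np

def seg_ssr(y):
    """SSR of a segment around its own mean."""
    return np.sum((y - y.mean()) ** 2)

def best_split(y, eps=0.15):
    """SSR-minimizing single split of a segment (None if too short)."""
    T = len(y)
    lo, hi = max(int(eps * T), 1), min(int((1 - eps) * T), T - 1)
    if hi <= lo:
        return None, np.inf
    ssr, t = min((seg_ssr(y[:t]) + seg_ssr(y[t:]), t) for t in range(lo, hi))
    return t, ssr

def sequential_breaks(y, crit, max_breaks=5, eps=0.15):
    """Sequential l vs l+1 procedure for changes in mean (a sketch).

    Starting from zero breaks, adds the split (in whichever current
    segment) that most reduces the SSR, as long as the scaled gain
    exceeds `crit`; stops at the first non-rejection."""
    breaks = []
    for _ in range(max_breaks):
        bounds = [0] + sorted(breaks) + [len(y)]
        best = None
        for a, b in zip(bounds[:-1], bounds[1:]):
            t, ssr_new = best_split(y[a:b], eps)
            if t is None:
                continue
            gain = seg_ssr(y[a:b]) - ssr_new
            if best is None or gain > best[0]:
                best = (gain, a + t)
        if best is None:
            break
        sigma2 = sum(seg_ssr(y[a:b]) for a, b in zip(bounds[:-1], bounds[1:])) / len(y)
        if best[0] / sigma2 < crit:
            break  # non-rejection of l vs l+1: stop
        breaks.append(best[1])
    return sorted(breaks)
```

Following the recommendation above, one would first apply a double maximum test and only then run the sequential loop.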

Tests for Restricted Structural Changes

Consider testing the null hypothesis of no break versus an alternative with $k$ breaks in a model that imposes the restrictions $R\delta =r$. In this case, the limit distribution of the Sup-Wald test depends on the nature of the restrictions so that it is not possible to tabulate critical values valid in general. Perron and Qu (2006) discussed a simulation algorithm to compute the relevant critical values given some restrictions. Imposing valid restrictions results in tests with much improved power.

Tests for Structural Changes in Multivariate Systems

Bai et al. (1998) considered a Sup-Wald test for a single common change in a multivariate system. Qu and Perron (2007) extend the analysis to multiple structural changes. They consider the case where only a subset of the coefficients is allowed to change, whether in the parameters of the conditional mean, the covariance matrix of the errors, or both. The tests are based on the maximized likelihood ratio over permissible partitions assuming $i\mathrm{.}i\mathrm{.}d\mathrm{.}$ errors. They can be corrected for serial correlation and heteroskedasticity when testing for changes in the parameters of the conditional mean assuming no change in the covariance matrix of the errors.

An advantage of the framework of Qu and Perron (2007) is that it allows studying changes in the variance of the errors in the presence of simultaneous changes in the parameters of the conditional mean, thereby avoiding inference problems when changes in variance are studied in isolation. Also, it allows the two types of changes to occur at different dates, thereby avoiding problems related to tests for changes in the parameters when a change in variance occurs at some other date. Their results are, however, only valid in the case of normally distributed errors when testing for changes in variances (or covariances). This problem was remedied by Perron and Zhou (2008), who propose tests for changes in the variances of the errors allowing for changes in the parameters of the regression in the context of a single equation model. They also consider various extensions, including testing for changes in the parameters allowing for changes in variance and testing for joint changes. These tests are especially important in light of Hansen’s (2000) analysis. First note that the limit distributions of the tests in a single equation system are valid under the assumption that the regressors and the variance of the errors have distributions that are stable across the sample. Hansen shows that when this condition is not satisfied the limit distribution changes and the test can be distorted. If the errors are homoskedastic, the size distortions are quite mild, but they can be severe when a change in variance occurs. Both problems of changes in the distribution of the regressors and the variance of the errors can be handled using the frameworks of Qu and Perron (2007) and Perron and Zhou (2008). If a change in the variance of the residuals is a concern, one can perform a test for no change in some parameters of the conditional model allowing for a change in variance. If changes in the marginal distribution of some regressors are a concern, one can use a multi-equation system with equations for these regressors.

Tests Valid With I(1) Regressors

With $I(1)$ regressors, a case of interest is a system of cointegrated variables. For testing, Hansen (1992) considered the null hypothesis of no change in both coefficients and proposed Sup and Mean-LM tests for a one-time change. He also considers a version of the LM test directed against the alternative that the coefficients are random walk processes. Kejriwal and Perron (2010a) provided a comprehensive treatment of issues related to testing for multiple structural changes at unknown dates in cointegrated regression models using the Sup-Wald test. They allow both $I(0)$ and $I(1)$ variables and derive the limiting distribution of the Sup-Wald test for a given number of cointegrating regimes. They also consider the double maximum tests and provide critical values for a wide variety of models that are expected to be relevant in practice. The asymptotic results have important implications for inference. It is shown that in models involving both $I(1)$ and $I(0)$ variables, inference is possible as long as the intercept is allowed to change across regimes. Otherwise, the limiting distributions of the tests depend on nuisance parameters. They propose a modified Sup-Wald test that has good size and power properties. Note, however, that the Sup and Mean-Wald tests will also reject when no structural change is present and the system is not cointegrated. Hence, the application of such tests should be interpreted with caution. No test is available for the null hypothesis of no change in the coefficients allowing the errors to be $I(0)$ or $I(1)$. This is because when the errors are $I(1)$, we have a spurious regression and the parameters are not identified. To be able to properly interpret the tests, they should be used in conjunction with tests for the presence or absence of cointegration allowing shifts in the coefficients (see Perron, 2006).

Tests Valid Whether the Errors are I(1) or I(0)

The issue of testing for structural changes in a linear model with errors that are either $I(0)$ or $I(1)$ is of interest when the regression involves a polynomial time trend (e.g., testing for a change in the slope of a linear trend). The problem is to devise a procedure that has the same limit distribution in both the $I(0)$ and $I(1)$ cases. The first to provide such a solution was Vogelsang (2001). He accounts for correlation with an autoregressive approximation so that the Wald test has a non-degenerate limit distribution in both the $I(0)$ and $I(1)$ cases. The novelty is that he weights the statistic by a unit root test scaled by some parameter. For any given significance level, a value of this scaling parameter can be chosen so that the asymptotic critical values will be the same. His simulations show, however, that the test has little power in the $I(1)$ case, so that he resorts to advocating the joint use of that test and a normalized Wald test that has good properties in the $I(1)$ case but otherwise very little power in the $I(0)$ case. Perron and Yabu (2009b) and Harvey et al. (2009) independently proposed procedures that achieve the same goal and were shown to have better size and power than those of Vogelsang (2001). The approach of Harvey et al. (2009) builds on the work of Harvey et al. (2007). It is based on a weighted average of the regression t-statistics for a change in the slope of the trend appropriate for the cases of $I(0)$ and $I(1)$ errors. In the former case a regression in levels is used, while in the latter a regression in first differences is used. With an unknown break date, the supremum over a range of possible break dates is taken. As in Vogelsang (2001), a correction is required to ensure that, for a given significance level, the weighted test has the same asymptotic critical value in both the $I(0)$ and $I(1)$ cases.

Perron and Yabu (2009b) built on Perron and Yabu (2009a), which analyzed hypothesis testing on the slope coefficient of a linear trend model. The method is based on a Feasible Quasi Generalized Least Squares approach that uses a superefficient estimate of the sum of the autoregressive parameters $\alpha $ when $\alpha =1$. The estimate of $\alpha $ is the OLS estimate from an autoregression applied to detrended data and is truncated to take a value 1 whenever it is in a ${T}^{-\delta}$ neighborhood of 1. This makes the estimate “super-efficient” when $\alpha =1$. Theoretical arguments and simulation evidence show that $\delta =1/2$ is the appropriate choice. Perron and Yabu (2009b) analyzed the case of testing for changes in level or slope of the trend function of a univariate time series. When the break dates are unknown, the limit distribution is nearly the same in the $I(0)$ and $I(1)$ cases using the Exp-Wald test. Hence, it is possible to have tests with nearly the same size in both cases. To improve the finite sample properties, use is made of a bias-corrected version of the OLS estimate. The Perron-Yabu test has greater power overall; see Chun and Perron (2013). Kejriwal and Perron (2010b) extend the results to show that the test of Perron and Yabu (2009b) can be applied in a sequential manner using the same critical values. An alternative perspective was provided by Sayginsoy and Vogelsang (2011), who use a fixed-bandwidth asymptotic theory. Extensions that allow the errors to be fractionally integrated have been considered by Iacone et al. (2013a, 2013b).
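The superefficient truncation step can be sketched as follows; this is a simplified illustration assuming an AR(1) error and omitting the bias correction and longer autoregression that the actual procedure employs:

```python
import numpy as np

def superefficient_alpha(y, delta=0.5):
    """Truncated AR(1) estimate in the spirit of Perron and Yabu.

    Detrends y by OLS on a linear trend, runs an AR(1) regression on
    the residuals, and snaps the estimate to 1 whenever it lies in a
    T^(-delta) neighborhood of 1."""
    T = len(y)
    Z = np.column_stack([np.ones(T), np.arange(T)])
    resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    alpha = np.dot(resid[1:], resid[:-1]) / np.dot(resid[:-1], resid[:-1])
    return 1.0 if abs(alpha - 1.0) < T ** (-delta) else alpha
```

With an $I(1)$ series the estimate collapses to exactly 1, which is what makes the subsequent quasi-differencing behave the same way in the $I(0)$ and $I(1)$ cases.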

Testing for Common Breaks

Oka and Perron (2017) considered testing for common breaks across or within equations in a multivariate system. The framework is general and allows stationary or integrated regressors and trends. The null hypothesis is that breaks in different parameters occur at common locations or are separated by some positive fraction of the sample size. Under the alternative hypothesis, the break dates are not the same and need not be separated by a positive fraction of the sample size across parameters. A quasi-likelihood ratio test assuming normal errors is used. The quantiles of the limit distribution need to be simulated, and an efficient algorithm is provided. Kim, Oka, Estrada, and Perron (2017) extend this work to cover the case of testing for common breaks in a system of equations involving joint-segmented trends. The motivation was spurred by the need to test whether the breaks in the slope of the trend functions of temperatures and radiative forcing, occurring in 1960, are common (see Estrada, Perron, & Martinez-Lopez, 2013).

Band Spectral Regressions and Low Frequency Changes

Perron and Yamamoto (2013) considered the issue of testing for structural change using a band-spectral analysis. They allow changes over time within some frequency bands, permitting the coefficients to be different across frequency bands. Under standard assumptions, the limit distributions obtained are similar to those of the time domain counterparts. They show that when the coefficients change only within some frequency band (e.g., the business cycle), we can have increased efficiency of the estimates of the break dates and increased power for the tests, provided, of course, that the user-chosen band contains the band at which the changes occur. They also discuss a useful application in which the data are contaminated by some low-frequency process and the researcher is interested in whether the original non-contaminated model is stable. For example, the dependent variable may be affected by some random level shift process (a low-frequency contamination), but at the business cycle frequency the model of interest is otherwise stable. They show that all that is needed to obtain estimates of the break dates and tests for structural changes that are not affected by such low-frequency contaminations is to truncate a low-frequency band that shrinks to zero at rate $\mathrm{log}(T)/T$. Simulations show that the tests have good size for a wide range of truncations. The exact truncation does not really matter, as long as some of the low frequencies are excluded. Hence, the method is quite robust. They also show that the method delivers more precise estimates of the break dates and tests with better power compared to using series filtered via a band-pass filter or the Hodrick-Prescott (1997) filter. This work is related to a recent strand in the literature that attempts to deliver methods robust to low-frequency contaminations. One example pertains to estimation of the long-memory parameter. 
It is by now well known that spurious long-memory can be induced by level shifts or various kinds of low frequency contaminations. Perron and Qu (2007, 2010), Iacone (2010), McCloskey and Perron (2013), and Hou and Perron (2014) exploit the fact that low-frequency contaminations will produce peaks in the periodograms at a few low frequencies and suggest robust procedures eliminating such low frequencies. Tests for spurious versus genuine long memory have been proposed by Qu (2011). McCloskey and Hill (2017) provided a method applicable to various time-series models, such as ARMA, GARCH, and stochastic volatility models.
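The frequency-domain truncation underlying the robustness result can be sketched as follows; the cutoff scale and the function name are our illustrative choices:

```python
import numpy as np

def trim_low_frequencies(y, c=1.0):
    """Drop a shrinking low-frequency band before break analysis.

    Zeroes the Fourier coefficients at the frequencies 2*pi*j/T with
    j <= c*log(T) (and the sample mean), then transforms back. The
    scale c is a user choice; the exact cutoff should matter little as
    long as some low frequencies are excluded."""
    T = len(y)
    fy = np.fft.rfft(y - y.mean())
    cutoff = int(np.ceil(c * np.log(T)))
    fy[:cutoff + 1] = 0.0                  # remove the contaminated band
    return np.fft.irfft(fy, n=T)
```

Applied to a series with a level shift, most of the shift's (low-frequency) energy is removed, while a short-memory component passes through largely intact.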

Endogenous Regressors

Consider a model with errors correlated with the regressors:

where $\overline{X}=diag({X}_{1}\mathrm{,...,}{X}_{m+1})$, a $T$ by $(m+1)p$ matrix with ${X}_{i}=({x}_{{T}_{i-1}+1}\mathrm{,...,}{x}_{{T}_{i}}{)}^{\prime}$ ($i=1,...,m+1$). This includes partial structural change models imposing $R\delta =r$ with $R$ a $k$ by $(m+1)p$ matrix. When the regressors are correlated with the errors, we assume a set of $q$ variables ${z}_{t}$ that can serve as instruments, and define the $T$ by $q$ matrix $Z=({z}_{1}\mathrm{,...,}{z}_{T}{)}^{\prime}$. We consider a reduced form linking $Z$ and $X$ that itself exhibits ${m}_{z}$ changes, so that $X={\overline{W}}^{0}{\theta}^{0}+v$, with ${\overline{W}}^{0}=diag({W}_{1}^{0}\mathrm{,...,}{W}_{{m}_{z}+1}^{0})$, the diagonal partition of $W$ at the break dates $({T}_{1}^{z0}\mathrm{,...,}{T}_{{m}_{z}}^{z0})$ and ${\theta}^{0}=({\theta}_{1}^{0}\mathrm{,...,}{\theta}_{{m}_{z}+1}^{0})$. Also, $v=({v}_{1}\mathrm{,...,}{v}_{T}{)}^{\prime}$ is a $T$ by $q$ matrix, which can be correlated with ${u}_{t}$ but not with ${z}_{t}$. Given estimates $({\widehat{T}}_{1}^{z}\mathrm{,...,}{\widehat{T}}_{{m}_{z}}^{z})$ obtained using the method of BP (2003a), one constructs $\widehat{W}=diag({\widehat{W}}_{1}\mathrm{,...,}{\widehat{W}}_{{m}_{z}+1})$, a $T$ by $({m}_{z}+1)q$ matrix with ${\widehat{W}}_{l}=({w}_{{\widehat{T}}_{l-1}^{z}+1}\mathrm{,...,}{w}_{{\widehat{T}}_{l}^{z}}{)}^{\prime}$ for $l=1,...,{m}_{z}+1$. Let $\widehat{\theta}$ be the OLS estimate in a regression of $X$ on $\widehat{W}$. The instruments are $\widehat{X}=\widehat{W}\widehat{\theta}=diag({\widehat{X}}_{1}^{\prime}\mathrm{,...,}{\widehat{X}}_{{m}_{z}+1}^{\prime}{)}^{\prime}$ where ${\widehat{X}}_{l}={\widehat{W}}_{l}{({\widehat{W}}_{l}^{\prime}{\widehat{W}}_{l})}^{-1}{\widehat{W}}_{l}^{\prime}{\stackrel{~}{X}}_{l}$ with ${\stackrel{~}{X}}_{l}=({x}_{{\widehat{T}}_{l-1}^{z}+1}\mathrm{,...,}{x}_{{\widehat{T}}_{l}^{z}}{)}^{\prime}$. The instrumental variable (IV) regression is

subject to the restrictions $R\delta =r$, where ${\overline{X}}^{\ast}=diag({\widehat{X}}_{1}\mathrm{,...,}{\widehat{X}}_{m+1})$, a $T$ by $(m+1)p$ matrix with ${\widehat{X}}_{j}=({\widehat{x}}_{{T}_{j-1}+1}\mathrm{,...,}{\widehat{x}}_{{T}_{j}}{)}^{\prime}$ ($j=1,...,m+1$). Also, $\stackrel{~}{u}=({\stackrel{~}{u}}_{1}\mathrm{,...,}{\stackrel{~}{u}}_{T}{)}^{\prime}$ with ${\stackrel{~}{u}}_{t}={u}_{t}+{\eta}_{t}$ where ${\eta}_{t}=({x}_{t}^{\prime}-{\widehat{x}}_{t}^{\prime}){\delta}_{j}$ for ${T}_{j-1}^{0}+1\le t\le {T}_{j}^{0}$. The estimates of the break dates are $({\widehat{T}}_{1}\mathrm{,...,}{\widehat{T}}_{m})=\mathrm{arg}{{\displaystyle \mathrm{min}}}_{{T}_{1}\mathrm{,...,}{T}_{m}}SS{R}_{T}^{R}({T}_{1}\mathrm{,...,}{T}_{m})$, where $SS{R}_{T}^{R}({T}_{1}\mathrm{,...,}{T}_{m})$ is the SSR from (7) evaluated at $\left\{{T}_{1}\mathrm{,...,}{T}_{m}\right\}$. Perron and Yamamoto (2014) provided a simple proof of the consistency and limit distribution of the estimates of the break dates showing that using generated (or second stage) regressors implies that the assumptions of Perron and Qu (2006) are satisfied. For an earlier, more elaborate though less comprehensive treatment, see Hall, Han, and Boldea (2012) and Boldea, Hall, and Han (2012). Hence, all results of BP (1998) carry through, but care must be exercised when the structural and reduced forms contain non-common breaks.

Of substantive interest is that the IV approach is not necessary, as discussed in Perron and Yamamoto (2015); one can simply use OLS applied to (6). First, except for a knife-edge case, changes in the true parameters imply a change in the probability limits of the OLS estimates; in the leading case of regressors and errors having a homogeneous distribution across segments, the two changes are equivalent. Second, one can reformulate the model with those limits as the basic parameters so that the regressors and errors are uncorrelated. We are then back to the standard framework. Importantly, OLS involves the original regressors, while IV uses the second-stage regressors, which have less quadratic variation since $\left|\right|{P}_{Z}X\left|\right|\le \left|\right|X\left|\right|$. Hence, in most cases, a given change in the parameters will cause a larger change in the conditional mean of the dependent variable using OLS compared with IV. It follows that using OLS delivers consistent estimates of the break fractions and tests with the usual limit distributions, and also improves on the efficiency of the estimates and the power of the tests in most cases. Also, OLS avoids the weak identification problems inherent in IV methods. Some care must, however, be exercised. Upon a rejection, one should verify that the change in the probability limit of the OLS parameter estimates is not due to a change in the bias terms. In most applications, there will be no change in the bias, but still one should be careful to assess the source of the rejection. This is easily done since, after obtaining the OLS-based estimates of the break dates, one would estimate the structural model based on such estimates. The relevant quantities needed to compute the change in bias across segments are then directly available. 
To elaborate, assume known break dates and let $p\mathrm{lim}_{T\to \infty}{(\Delta {T}_{i}^{0})}^{-1}{\sum}_{t={T}_{i-1}^{0}+1}^{{T}_{i}^{0}}{x}_{t}{x}_{t}^{\prime}={Q}_{XX}^{i}$ and ${\mathrm{lim}}_{T\to \infty}{(\Delta {T}_{i}^{0})}^{-1}E({X}_{i}^{\prime}u)={\varphi}_{i}$ ($i=1,...,m+1$), with $\Delta {T}_{i}^{0}={T}_{i}^{0}-{T}_{i-1}^{0}$; the probability limit of the OLS estimate is ${\delta}^{\ast}={\delta}^{0}+[({Q}_{XX}^{1}{)}^{-1}{\varphi}_{1},...,({Q}_{XX}^{m+1}{)}^{-1}{\varphi}_{m+1}{]}^{\prime}$. Any change in ${\delta}^{0}$ implies a change in ${\delta}^{*}$, except for a knife-edge case in which the change in the bias exactly offsets the change in ${\delta}^{0}$. Hence, one can still identify parameter changes using OLS, and a change in ${\delta}^{0}$ will, in general, cause a larger change in the conditional mean of ${y}_{t}$. Consider writing (6) as

where ${u}^{\ast}=(I-{P}_{{\overline{X}}_{0}})u$ and ${\delta}_{T}^{\ast}=[{\delta}^{0}+{({\overline{X}}_{0}^{\prime}{\overline{X}}_{0})}^{-1}{\overline{X}}_{0}^{\prime}u]$, for which ${\delta}_{T}^{\ast}{\to}_{p}{\delta}^{\ast}$. So we can consider a regression in terms of the population values of the parameters. Now, ${\overline{X}}_{0}$ is uncorrelated with ${u}^{\ast}$, so that the OLS estimate, say ${\widehat{\delta}}^{\ast}$, is consistent for ${\delta}^{\ast}$. This suggests estimating the break dates by minimizing the SSR from the regression $y=\overline{X}{\delta}^{\ast}+{u}^{\ast}$. OLS dominates IV except for a narrow case unlikely to occur in practice. The loss in efficiency when using 2SLS can be especially pronounced when the instruments are weak, as is often the case. Of course, since the ultimate goal is not to obtain estimates of the break dates per se but of the parameters within each regime, one should then use an IV regression conditioning on the estimates of the break dates obtained using the OLS-based procedure. Their limit distributions will, as usual, be the same as if the break dates were known. Using the same logic, tests for structural change are more powerful when based on OLS rather than IV. This idea was used by Kurozumi (2017) to show that, with endogenous regressors, using OLS is better when monitoring online for a structural change using a CUSUM-type method.
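A small simulation sketch illustrates the point: even with a regressor correlated with the error, minimizing the OLS SSR over candidate dates locates the break, because the parameter change shifts the probability limit of the OLS estimate. The data-generating process and names are ours, for illustration:

```python
import numpy as np

def ols_break_date(y, x, trim=0.15):
    """Date a single slope break by minimizing the OLS SSR over
    candidate dates, deliberately ignoring the endogeneity of x."""
    T = len(y)

    def ssr(yy, xx):
        b = np.dot(xx, yy) / np.dot(xx, xx)   # OLS slope, no intercept
        e = yy - b * xx
        return np.dot(e, e)

    dates = range(int(trim * T), int((1 - trim) * T))
    return min(dates, key=lambda t1: ssr(y[:t1], x[:t1]) + ssr(y[t1:], x[t1:]))
```

Since the bias term is homogeneous across regimes in this design, the change in the OLS probability limit equals the change in the true slope, and the break date is recovered accurately.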

Quantile Regressions

Following Oka and Qu (2011), assume that the $\tau $th conditional quantile function of ${y}_{t}$ given ${z}_{t}$ is linear in the parameters and given by ${Q}_{{y}_{t}}(\tau |{z}_{t})={F}_{{y}_{t}|{z}_{t}}^{-1}(\tau |{z}_{t})={z}_{t}^{\prime}\beta (\tau )$. The population coefficient $\beta (\tau )$ is known to be the minimizer of the criterion function ${Q}_{\tau}(\beta )=E[{\rho}_{\tau}(y-{z}^{\prime}\beta )]$, where ${\rho}_{\tau}(a)=a(\tau -1(a<0))$ is the check function; see Koenker (2005). Without structural changes, the quantile regression estimator of $\beta (\tau )$ is the minimizer of the empirical analog ${\widehat{Q}}_{\tau}(\beta )={\displaystyle {\sum}_{t=1}^{T}}{\rho}_{\tau}({y}_{t}-{z}_{t}^{\prime}\beta )$. Suppose that the $\tau $th quantile has $m$ structural changes, occurring at unknown dates $({T}_{1}\mathrm{,...,}{T}_{m})$, such that

for $j=1,...,m+1$. The vectors ${\theta}_{j}(\tau )$ are the quantile-dependent unknown parameters, with possible restrictions to allow for partial structural changes. Qu (2008) proposed a fluctuation-type statistic based on the subgradient and a Wald-type statistic based on comparing parameter estimates from different subsamples. They can be used to test for changes occurring in a prespecified quantile, or across quantiles. The limiting distributions under the null are nuisance-parameter free and can be simulated. Oka and Qu (2011) consider the estimation of multiple structural changes at unknown dates in one or multiple conditional quantile functions. A procedure to determine the number of breaks is also discussed. The method can deliver more informative results than the analysis of the conditional mean function alone.
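The role of the check function can be illustrated with a location-only sketch (ours); with regressors and breaks, the same criterion is minimized segment by segment over candidate partitions:

```python
import numpy as np

def check_loss(a, tau):
    """Koenker-Bassett check function rho_tau(a) = a*(tau - 1(a < 0))."""
    return a * (tau - (a < 0))

def quantile_fit(y, tau):
    """Location-only quantile regression: minimize the check criterion.

    The objective is piecewise linear, so its minimum is attained at a
    data point; evaluating it at each observation and taking the argmin
    returns the tau-th sample quantile."""
    losses = [np.sum(check_loss(y - b, tau)) for b in y]
    return y[int(np.argmin(losses))]
```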

Lasso

A growing literature uses Lasso-type methods to address structural change problems, which can estimate the location and number of breaks simultaneously. Estimating structural changes can be viewed as a variable selection problem and Lasso estimates the regression coefficients by minimizing the usual SSR with a penalty for model complexity through the sum of the absolute values of the coefficients. Assume, for simplicity, $p=0$ and $q=1$, that is, a pure structural change model with a single regressor. When $m=1$, the model is:

When there is a break at $t={T}_{1}$, then ${\delta}_{2}\ne {\delta}_{1}\ne 0$, otherwise ${\delta}_{2}={\delta}_{1}$. Using a set of regressors $g({z}_{t})=\{{z}_{t}1(t\ge \stackrel{~}{t}),\forall \stackrel{~}{t}\in [\underset{\_}{t}\mathrm{,}\overline{t}],1<\underset{\_}{t}<\overline{t}<T\}$, we can express the model as:

with ${b}_{i}=0$ for all $i=1,...,\overline{t}-\underset{\_}{t}+1$ except ${b}_{{T}_{1}-\underset{\_}{t}+2}={\delta}_{2}-{\delta}_{1}\ne 0$. Denote by $g({w}_{\mathrm{0,}t})=\left\{{w}_{\mathrm{1,}t}\mathrm{,...,}{w}_{(\overline{t}-\underset{\_}{t}+1),t}\right\}$ the transformed regressors generated from ${w}_{\mathrm{0,}t}={z}_{t}$, whose coefficient is subject to change. Then ${w}_{t}=\{{w}_{\mathrm{0,}t}\mathrm{,}g({w}_{\mathrm{0,}t})\}$ is the complete set of regressors. If we can consistently identify which of the coefficients associated with $g({w}_{\mathrm{0,}t})$ are non-zero, we can date the break point. OLS would not provide consistent estimates because the number of regressors is too large. The method is flexible; for example, if one has some prior knowledge that a change has occurred at some date, a dummy variable can be added without the associated generated regressors. A model with multiple structural changes in many regressors can be obtained as an extension. One simply lets ${w}_{\mathrm{0,}t}$ be the vector of regressors whose coefficients are subject to change and $g({w}_{\mathrm{0,}t})$ be the vector of artificially constructed regressors obtained from the original ones. In general, the number of regressors is $n=p+q(\ddot{r}+1)$, where $\ddot{r}=\overline{t}-\underset{\_}{t}+1$ is the number of transformed regressors associated with each original regressor whose coefficient is allowed to change (fewer are possible if there is prior information about where the breaks cannot occur). Hence, $n$ can be very large, much larger than $T$. The structural break model has a sparse pattern since few coefficients are non-zero, namely $s=p+q(m+1)$. 
The Lasso estimator for sparse models is $\widehat{\theta}=\mathrm{arg}{\mathrm{min}}_{\theta \in {\mathbb{R}}^{n}}\widehat{Q}(\theta )+(\lambda /T){\Vert \widehat{\Upsilon}\theta \Vert}_{1}$, where $\lambda $ is the penalty level, $\widehat{Q}(\theta )={\displaystyle {\sum}_{t=1}^{T}{({y}_{t}-{w}_{t}^{\prime}\theta )}^{2}}$, $\widehat{\Upsilon}=diag({\widehat{\gamma}}_{1},\mathrm{...},{\widehat{\gamma}}_{n})$ is a diagonal matrix with the penalty loadings and $\parallel \widehat{\Upsilon}\theta {\parallel}_{1}={\displaystyle {\sum}_{j=1}^{n}\left|{\widehat{\gamma}}_{j}{\theta}_{j}\right|}$ is the ${\ell}_{1}$-norm. Ideally, the penalty loadings are adapted to information about the error term, which is not feasible since ${u}_{t}$ is not observed. In practice, we can use the estimated residuals and proceed via iterations, or simply assume homoskedastic Gaussian errors, in which case $\widehat{\Upsilon}$ is the identity matrix. Often, additional thresholding is applied to remove regressors with small estimated coefficients that may have been included due to estimation error. The thresholded Lasso estimator is then $\widehat{\theta}({t}_{L})=({\widehat{\theta}}_{j}1\{\left|{\widehat{\theta}}_{j}\right|>{t}_{L}\},j=1,...,n)$, where ${t}_{L}\ge 0$ is the threshold level. The difficulty lies in the choice of $\lambda $ and ${t}_{L}$. Results are well established for a random sample of data; with serially correlated series, things are more complex.
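To make the design concrete, the following sketch dates a single break in mean using the step-dummy design and a Lasso solved by proximal gradient descent (ISTA), with identity penalty loadings; the penalty level and iteration count are illustrative choices, not the data-driven tuned values a real application would require:

```python
import numpy as np

def soft_threshold(x, s):
    return np.sign(x) * np.maximum(np.abs(x) - s, 0.0)

def lasso_break_in_mean(y, trim=0.15, lam=60.0, n_iter=10000):
    """Lasso dating of a single break in mean via the step-dummy design.

    Builds the regressors 1(t >= t~) over the trimmed range, demeans
    everything (an unpenalized intercept), and solves the Lasso by
    proximal gradient descent (ISTA)."""
    T = len(y)
    cands = np.arange(int(trim * T), int((1 - trim) * T))
    X = (np.arange(T)[:, None] >= cands[None, :]).astype(float)
    X -= X.mean(axis=0)                        # absorb the intercept
    yc = y - y.mean()
    L = np.linalg.eigvalsh(X.T @ X).max()      # Lipschitz constant of the gradient
    theta = np.zeros(len(cands))
    for _ in range(n_iter):
        grad = X.T @ (X @ theta - yc)
        theta = soft_threshold(theta - grad / L, lam / L)
    return cands, theta, X @ theta
```

In this design the largest coefficients appear near the true break date; thresholding the small coefficients would then deliver the selected break(s).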

The following is a partial list of relevant papers using Lasso for models with structural changes. Harchaoui and Lévy-Leduc (2010) proposed a total variation penalty to estimate changes in the mean of a sequence of independent and identically distributed (*i.i.d.*) Gaussian random variables. Bleakley and Vert (2011) proposed a group-fused Lasso method for changes in the mean of a vector of *i.i.d.* Gaussian random variables assumed to share common break points. Chan, Yau, and Zhang (2014) considered using a group Lasso method for changes in an autoregressive model with heteroskedastic errors. They suggest a two-step method involving the use of an information criterion to select the number of break points. Ciuperca (2014) considered multiple changes in a linear regression model with *i.i.d.* errors using Lasso with an information criterion or adaptive Lasso. Rojas and Wahlberg (2014) used a penalized maximum likelihood estimator for changes in the mean of a sequence of *i.i.d.* Gaussian random variables. Aue, Cheung, Lee, and Zhong (2014) considered structural breaks in conditional quantiles using the minimum description length principle. While the framework is quite general, the method cannot consistently estimate the number of breaks jointly with their location. Qian and Su (2016) considered estimation and inference of common breaks in panel data models with endogenous regressors. The regressors and errors are, however, restricted to be *i.i.d.* processes, a common feature in this literature up to now. Allowing for general mixing regressors and errors has not, to our knowledge, been achieved. Work is needed to achieve the level of generality available using standard procedures. Nevertheless, it remains a promising approach, especially in the context of large datasets.

Factors

The issue of structural breaks in factor models has received considerable attention in the 21st century; for a more detailed survey see Bai and Han (2016). We will first discuss in some detail the methods of Baltagi, Kao, and Wang (2017) and Bai, Han, and Shi (2017) and then briefly mention other works. The high dimensional factor model with $m$ changes in the factor loadings considered by Baltagi, Kao, and Wang (2017) is given by

for $j=1,\dots ,m+1$, $i=1,\dots ,N$, and $t=1,\dots ,T$, where ${f}_{t}$ and ${f}_{b,t}$ are vectors of factors without and with changes in the loadings, respectively, ${\varphi}_{i}$ and ${\varphi}_{i,j}$ are the factor loadings of unit $i$ corresponding to ${f}_{t}$ and ${f}_{b,t}$ in the $j$-th regime, respectively, and ${u}_{it}$ is a disturbance term that can have serial and cross-sectional dependence as well as heteroskedasticity. The problem is to estimate the break points, determine the number of factors, and estimate the factors and loadings in each regime. We first discuss a procedure when the number of breaks is known. It first estimates the break points using a simultaneous or sequential method, which leads to consistent estimates for ${\lambda}_{b}^{0}={T}_{b}^{0}/T$, though not for ${T}_{b}^{0}$. Second, it involves plugging in the break point estimates and estimating the number of factors and the factor space in each regime. Since the factors are latent, one has to determine the number of pseudo factors, which is akin to selecting moment conditions, whereas in BP (1998) the model is parametric and the moment conditions are known a priori. Baltagi, Kao, and Wang (2017) proposed converting the statistical problem from estimating multiple changes in the loadings to estimating changes in the pseudo factors. This relies on the fact that the mean of the second moment matrix of the pseudo factors has changes at the same dates as the loadings. After this conversion, the data become fixed dimensional with observable regressors, and conceptually the problem becomes similar to that of Qu and Perron (2007).

Estimation of the break points involves three steps: (1) ignoring breaks, use any consistent estimator $\stackrel{~}{r}$ of the number of factors; (2) estimate the first $\stackrel{~}{r}$ factors ${\stackrel{~}{g}}_{t}$ using the Principal Component (PC) method; (3) for any partition $\left({T}_{1},\dots ,{T}_{m}\right)$, split the sample into $m+1$ subsamples, estimate ${\stackrel{~}{\Sigma}}_{j}={\left({T}_{j}-{T}_{j-1}\right)}^{-1}{\displaystyle {\sum}_{t={T}_{j-1}+1}^{{T}_{j}}{\stackrel{~}{g}}_{t}{\stackrel{~}{g}}_{t}^{\prime}}$, and calculate the SSR,

The estimates of the break points are the minimizers of $\stackrel{~}{S}\left({T}_{1},\dots ,{T}_{m}\right)$. The motivation is that the second moment matrix of ${g}_{t}$ has breaks at the same dates as the factor loadings. The results are valid under general high-level conditions on the errors; the factors are allowed to be dynamic, to include lags, and to be correlated with the errors. The limiting distributions of ${\widehat{T}}_{b}-{T}_{b}^{0}$ $\left(b=1,\dots ,m\right)$ have the same form as the one for the single break case derived in Baltagi, Kao, and Wang (2016).
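To fix ideas, the search over candidate partitions can be illustrated for a single break in the second moment of a scalar estimated factor. This is a stylized sketch with simulated data and our own function names; the actual procedure uses the vech of the second-moment matrix of the estimated pseudo factors over all regimes.

```python
import numpy as np

def factor_break_ssr(G, T1):
    """SSR when the mean of vech(g_t g_t') shifts at T1:
    fit a separate mean in each regime and sum squared residuals."""
    T, r = G.shape
    iu = np.triu_indices(r)
    X = np.array([np.outer(g, g)[iu] for g in G])  # vech(g_t g_t') for each t
    ssr = 0.0
    for seg in (X[:T1], X[T1:]):
        ssr += ((seg - seg.mean(axis=0)) ** 2).sum()
    return ssr

def estimate_factor_break(G, trim=10):
    """Single break date minimizing the SSR over trimmed candidates."""
    T = G.shape[0]
    return min(range(trim, T - trim), key=lambda T1: factor_break_ssr(G, T1))
```

With a sizable shift in the second moment of the factor, the minimizer of the SSR locates the break date closely, which is the essence of the conversion argument.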

Consider now testing for multiple changes in the factor loadings. Following BP (1998), Baltagi, Kao, and Wang (2017) proposed two different tests: no change versus a fixed number of changes, and $l$ versus $l+1$ changes. The limit distributions follow those in BP (1998). The first test loses power when the number of changes is misspecified. They proposed adapting the UDmax and WDmax tests of BP (1998), allowing for an unknown number of changes (up to some upper bound). For the test of $m=l$ versus $m=l+1$, one first estimates the break points and, once they are plugged in, testing for $m=l$ versus $m=l+1$ changes becomes equivalent to testing no change versus a single change in each regime jointly. The null limiting distribution is obtained by simulations and depends on the number of factors in each regime. When this number is stable, it is similar to that in BP (1998).

Bai, Han, and Shi (2017) studied the properties of the least-squares estimator of the single break point in a high dimensional factor model. The model is given by

for $i=1,\dots \mathrm{,}\phantom{\rule{0.1em}{0ex}}N$ where ${T}_{b}^{0}=T{\lambda}_{b}^{0}$ is the unknown common break point. The estimation of ${T}_{b}^{0}$ involves the estimated latent factors. The model in matrix format is:

Model (9) is an observationally equivalent factor model, in which the number of factors is doubled and the factor loadings are time invariant. Under (9), the factor process has structural breaks, which was the basis for the framework of Baltagi, Kao, and Wang (2017). The estimate of the break point is the minimizer of

over $T{\lambda}_{1}\le {T}_{b}\le T{\lambda}_{2}$, where ${\stackrel{~}{f}}_{t}$ are estimates of ${f}_{t}$ obtained using the principal component (PC) method. Since the factors are estimated rather than observed, the estimator is not efficient. However, the efficiency loss relative to the maximum likelihood estimator vanishes as $N,T\to \infty $. The errors are assumed to be independent of the factors and loadings; however, ${u}_{it}$ can be weakly correlated in both the cross-sectional and time dimensions. There can be dependence between $\left\{{f}_{t}\right\}$ and $\left\{{\varphi}_{i1},{\varphi}_{i2}\right\}$, and the break magnitudes ${\varphi}_{i2}-{\varphi}_{i1}$ can be dependent on the factors. Theoretical results are provided under both large and small breaks, where the latter are modeled in two ways: (1) the magnitude of the change in each factor loading is of order ${N}^{\frac{\nu -1}{2}}$ for some $0<\nu \le 1$; (2) the magnitude of the change is fixed but only $O\left({N}^{\nu}\right)$ units have a break for some $0<\nu \le 1$. The case $\nu =1$ corresponds to large breaks studied by Chen, Dolado, and Gonzalo (2014); Han and Inoue (2015); Cheng, Liao, and Schorfheide (2016); and Baltagi, Kao, and Wang (2015, 2017). Discussing large and small breaks separately is useful given the asymptotic properties of the PC estimator of the factors. For small breaks, ${\stackrel{~}{T}}_{b}$ is consistent for ${T}_{b}^{0}$ as $N,T\to \infty $ provided conditions on the ratio $N/T$ are satisfied. The consistency result for ${\stackrel{~}{T}}_{b}$ is strong and different from the univariate case, for which only the break fraction is consistently estimated. For large breaks, ${T}_{b}^{0}$ is not consistently estimable.
The framework under small breaks allows a non-degenerate asymptotic distribution similar to that of the OLS break point estimator in panel models (see Bai, 2010, and the section “Panels” below). It does not depend on the exact distribution of the errors but on the distribution of the factors ${f}_{t}$, and it is the same whether ${f}_{t}$ is observable or not. However, evaluating the limit distribution using the plug-in approach, replacing population quantities with consistent estimates, is not applicable because the factors are estimated only up to a rotation. Bai, Han, and Shi (2017) proposed a bootstrap method, which, however, lacks robustness to cross-sectional correlation.

Additional remarks on factor models follow. First, the analysis of Bai et al. (2017) assumes a known number of factors. This is a strong restriction that future research should relax. Su and Wang (2017) developed an innovative adaptive group Lasso estimator that can determine the number of factors and the break fraction simultaneously, but it is valid only under large breaks in the loading matrix. Therefore, joint estimation of the break points and the number of factors remains an open issue, even from a computational perspective. It is also necessary to consider alternative inference methods for the break points because the bootstrap procedure does not work well for small breaks. Other authors have proposed alternative methods for estimating factor models with breaks. Cheng, Liao, and Schorfheide (2016) developed a shrinkage method that can consistently estimate the break fraction. Chen (2015) considered a least-squares estimator of the break points and proved the consistency of the estimated break fractions, while Massacci (2015) studied the least-squares estimation of structural changes in factor loadings under a threshold model setup. Additional tests for structural changes in factor models have been proposed; see Chen, Dolado, and Gonzalo (2014); Corradi and Swanson (2014); Han and Inoue (2015); Su and Wang (2017); and Cheng, Liao, and Schorfheide (2016). Breitung and Eickmeier (2011) proposed a test for dynamic factor models. Yamamoto and Tanaka (2015) showed that the latter test has nonmonotonic power and proposed a modified version that solves the problem. The major restriction for most studies is their focus on testing for a common break date in the factor loadings. While a common break date is sometimes relevant, one cannot exclude the possibility that some of the loadings have breaks at different dates. Additional work is needed in that direction.

Panels

Panel data studies have become increasingly popular, including inference about breaks. The literature on estimating panel structural breaks can be categorized by whether or not the parameters of interest are allowed to be heterogeneous across units. We focus on heterogeneous panels since they are more relevant in practice and refer the reader to De Wachter and Tzavalis (2012) and Qian and Su (2016) for corresponding methods for homogeneous panels. Bai (2010) considered the problem of estimating a common break point in a panel with $N$ units and $T$ observations for each unit. The model takes the form: ${y}_{it}={\mu}_{ij}+{u}_{it}$, for $t=1,\dots ,{T}_{b}^{0}$ if $j=1$ and $t={T}_{b}^{0}+1,\dots ,T$ if $j=2$, where $i=1,\dots ,N$ and ${u}_{it}$ is a disturbance term. The common break specification means that each unit has a break point at ${T}_{b}^{0}$. The model allows for heterogeneous means and break magnitudes ${\mu}_{i2}-{\mu}_{i1}$. Bai (2010) provided results for both fixed $T$ and $T$ going to infinity. Unlike in the univariate model, where the break point is assumed to correspond to a positive fraction of the total sample size, the panel setup also allows one to consider the case where ${T}_{b}^{0}$ can take any value in $\left[1,T-1\right]$. The latter case can be studied under the asymptotic framework with $N\to \infty $ and $T$ fixed, or $N,T\to \infty $ such that $T/N\to 0$. Importantly, only consistency can be established under the latter scenario. For the derivation of the limiting distribution, one needs the standard assumption ${T}_{b}^{0}=T{\lambda}_{0}$.
Turning to the assumptions on the errors, Bai (2010) required stationarity of $\left\{{u}_{it}\right\}$ in the time dimension and independence over $i$. It is argued that independence can be relaxed without affecting the consistency result, though for the asymptotic distribution one requires the cross-sectional dependence to be not too strong. The assumption on the break sizes is ${{\displaystyle \mathrm{lim}}}_{N\to \infty}{N}^{-1/2}{\displaystyle {\sum}_{i=1}^{N}}{\left({\mu}_{i2}-{\mu}_{i1}\right)}^{2}=\infty $. To understand this condition, note that if ${\mu}_{i2}-{\mu}_{i1}$ were $i.i.d.$ random variables with positive variance, then the above limit with ${N}^{-1}$ replacing ${N}^{-1/2}$ would converge to a positive constant. Thus, the condition does not require every unit to have a break. The estimation method involves least-squares. For a given $1\le {T}_{b}\le T-1$, define ${\overline{y}}_{i1}={T}_{b}^{-1}{\displaystyle {\sum}_{t=1}^{{T}_{b}}}{y}_{it}$ and ${\overline{y}}_{i2}={\left(T-{T}_{b}\right)}^{-1}{\displaystyle {\sum}_{t={T}_{b}+1}^{T}}{y}_{it}$, which are the estimates of ${\mu}_{i1}$ and ${\mu}_{i2}$, respectively. Define the sum of squared residuals for the ${i}^{th}$ equation as ${S}_{iT}\left({T}_{b}\right)={\displaystyle {\sum}_{t=1}^{{T}_{b}}}{\left({y}_{it}-{\overline{y}}_{i1}\right)}^{2}+{\displaystyle {\sum}_{t={T}_{b}+1}^{T}}{\left({y}_{it}-{\overline{y}}_{i2}\right)}^{2}$ for ${T}_{b}=1,\dots ,T-1$ and ${S}_{iT}\left({T}_{b}\right)={\displaystyle {\sum}_{t=1}^{T}}{\left({y}_{it}-{\overline{y}}_{i}\right)}^{2}$ for ${T}_{b}=T$, where ${\overline{y}}_{i}$ is the whole sample mean. The least-squares estimator of the break point is defined as ${\widehat{T}}_{b}={\text{argmin}}_{1\le {T}_{b}\le T-1}SSR\left({T}_{b}\right)$, where $SSR\left({T}_{b}\right)={\displaystyle {\sum}_{i=1}^{N}}{S}_{iT}\left({T}_{b}\right)$. The availability of panel data leads to a stronger result about the rate of convergence.
In the univariate case, we have ${\widehat{T}}_{b}={T}_{b}^{0}+{O}_{p}\left(1\right)$. For panel data, under either fixed $T$ or $T\to \infty $, ${\widehat{T}}_{b}={T}_{b}^{0}+{o}_{p}\left(1\right)$, and the results show that with $N\to \infty $ the common break point can be estimated precisely even when a regime contains a single observation. The limiting distribution is derived under a small-shifts assumption with ${\mu}_{i2}-{\mu}_{i1}={N}^{-1/2}{\Delta}_{i}$, ${\Delta}_{i}>0$. Construction of the confidence intervals requires simulating the derived limiting distribution. Unlike in the univariate case, a change of variables expressing the distribution in terms of quantities that can be consistently estimated cannot be carried out directly. Nonetheless, noting that a Gaussian random walk and a standard Wiener process have the same distribution at integer times, one can apply the change of variables argument, which now holds only approximately, leading to the same inference procedure as in the univariate case. Finally, Bai (2010) further considered the case of a (possibly) simultaneous break in mean and variance and proposed a QML method.
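As an illustration, the least-squares estimation of the common break described above can be sketched as follows; this is a minimal sketch on simulated data with our own function names, not Bai's (2010) code.

```python
import numpy as np

def panel_break_lsq(Y):
    """Common break estimator for a mean-shift panel: Y is N x T; for
    each candidate T_b, fit separate pre- and post-break means for every
    unit and minimize the SSR pooled across units."""
    N, T = Y.shape
    def ssr(Tb):
        pre, post = Y[:, :Tb], Y[:, Tb:]
        s = ((pre - pre.mean(axis=1, keepdims=True)) ** 2).sum()
        s += ((post - post.mean(axis=1, keepdims=True)) ** 2).sum()
        return s
    return min(range(1, T), key=ssr)
```

Even with a moderate $N$, pooling the per-unit SSRs typically pins down the common break date exactly, in line with the ${\widehat{T}}_{b}={T}_{b}^{0}+{o}_{p}(1)$ result.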

Kim (2011) studied least-squares estimation of a common deterministic time trend break in large panels with a break either in the intercept, slope, or both, with a general dependence structure in the errors. He modeled cross-sectional dependence with a common factor structure and allowed the errors to be serially correlated in each equation: ${u}_{it}={\gamma}_{i}^{\prime}{F}_{t}+{e}_{it}$, where ${F}_{t}$ is a vector of latent common factors, ${\gamma}_{i}$ is a factor loading, and ${e}_{it}$ is a unit-specific error. Under joint asymptotics $\left(T,N\right)\to \infty $, serial correlation and cross-sectional dependence are shown to affect the rate of convergence and the limiting distribution of the break point estimator. As in Bai (2010), ${\widehat{T}}_{b}$ can be consistent for ${T}_{b}^{0}$ even when $T$ is fixed, though this only holds when the ${u}_{it}$ are independent across both $i$ and $t$. When the ${e}_{it}$ are serially correlated and there are no common factors, the rate of convergence depends on $N$ and is faster than in the univariate case. With common factors generating strong cross-sectional dependence, the rate of convergence does not depend on $N$ and reduces to the univariate case (see Perron & Zhu, 2005). For fixed shifts, the limiting distribution depends on many elements: for example, the form of the break, the presence of common factors, and stationary versus integrated errors. For a joint broken trend, it can be normal. To obtain a limit theory not depending on the exact distribution of the errors, the break magnitudes need to converge to zero at rate ${N}^{-1/2}$, in which case asymptotically valid confidence intervals can be computed via simulations.

Results pertaining to regression models using stationary panels were obtained by Baltagi, Feng, and Kao (2016). They considered large heterogeneous panels with common correlated effects (CCE) and allowed for unknown common structural breaks in the slopes. The CCE setting takes the following form: ${y}_{it}={x}_{it}^{\prime}\beta ({T}_{b}^{0})+{u}_{it}$, where ${u}_{it}={\gamma}_{i}^{\prime}{F}_{t}+{e}_{it}$, ${x}_{it}={\text{\Gamma}}_{i}^{\prime}{F}_{t}+{v}_{it}$, ${e}_{it},{v}_{it}$ are idiosyncratic errors, and some or all components of $\beta ({T}_{b}^{0})$ differ pre- and post-break. Due to the correlation between ${x}_{it}$ and ${u}_{it}$, least-squares applied to each cross-sectional regression could be inconsistent. They used a least-squares method based on augmented data and confirmed the result in Bai (2010) that the break point ${T}_{b}^{0}$ can be consistently estimated as both $N$ and $T$ go to infinity. Common breaks in panels were also considered by Qian and Su (2016) and Li, Qian, and Su (2016), who studied estimation and inference with and without interactive fixed effects using Lasso methods. Kim (2014) generalized Kim (2011) by allowing a factor structure in the error terms. Baltagi, Feng, and Kao (2016) also studied structural breaks in a heterogeneous large panel with interactive fixed effects, showing the consistency of the estimated break fraction and break date under some conditions.

Most of the work on structural breaks in panels has focused on common breaks, in which case ${T}_{b}^{0}$ itself can be consistently estimated, not only ${\lambda}_{b}^{0}$. One may infer that simply adding a cross-sectional dimension yields more information and more precise estimates. This is misleading because the result crucially relies on the assumption that the break is common to all units. Although this may be relevant in practice, the results should be interpreted carefully.

Continuous Record Asymptotics

Casini and Perron (2017a) considered an asymptotic framework based on a continuous-time approximation, that is, $T$ observations with a sampling interval $h$ over a fixed time span $\left[0,N\right]$, where $N=Th$, with $T\to \infty $ and $h\to 0$ while $N$ is held fixed. Liang, Wang, and Xu (2016) considered a similar framework for the simple case of a change in mean but did not provide feasible versions for the construction of the confidence sets; hence, we follow the general approach of Casini and Perron (2017a), who consider the following partial structural change model with a single break point:

where ${\left\{{D}_{s}\mathrm{,}{Z}_{s}\mathrm{,}{e}_{s}\right\}}_{s\ge 0}$ are continuous-time processes, and we observe realizations at discrete points of time, namely $\left\{{Y}_{kh}\mathrm{,}{D}_{kh}\mathrm{,}{Z}_{kh};\phantom{\rule{0.1em}{0ex}}k=0,\dots \mathrm{,}\phantom{\rule{0.1em}{0ex}}T=N/h\right\}$. For any process $X$, we denote its “increments” by ${\Delta}_{h}{X}_{k}={X}_{kh}-{X}_{\left(k-1\right)h}$. For $k=1,\dots \mathrm{,}T$, let ${\Delta}_{h}{D}_{k}={\mu}_{D\mathrm{,}k}h+{\Delta}_{h}{M}_{D\mathrm{,}k}$ and ${\Delta}_{h}{Z}_{k}={\mu}_{Z\mathrm{,}k}h+{\Delta}_{h}{M}_{Z\mathrm{,}k}$, where ${\mu}_{D\mathrm{,}t}\mathrm{,}\phantom{\rule{0.1em}{0ex}}{\mu}_{Z\mathrm{,}t}$ are the “drifts” and ${M}_{D\mathrm{,}k}\mathrm{,}\phantom{\rule{0.1em}{0ex}}{M}_{Z\mathrm{,}k}$ are continuous local martingales. They consider the least-squares estimator of the break point, and the analysis is valid for general time-series regression models including errors with correlation and/or heteroskedasticity and lagged dependent variables. Under fixed shifts, ${\widehat{T}}_{b}-{T}_{b}^{0}={O}_{p}\left(1\right)$. Besides the usual small shifts assumption, the limiting distribution is derived under the additional assumption of increasing local variances around the true break date. The continuous record asymptotic distribution of the least-squares estimator is then given by

where $\langle {Z}_{\Delta},{Z}_{\Delta}\rangle \left(v\right)$ is the predictable quadratic variation process of ${Z}_{t}$, $W\left(v\right)$ is a two-sided centered Gaussian process, and ${\overline{\sigma}}^{2}$ is the limit of an estimate of the error innovation variance over $\left[0,N\right]$. The result in (11) is defined on a new “fast time scale,” which provides a better approximation to the properties of the finite-sample distribution when $h$ is actually fixed. It is shown that the continuous record asymptotic distribution provides a much better approximation to the finite-sample distribution of the least-squares estimator. The former is highly non-standard and captures the main properties of the latter, such as tri-modality, asymmetry, and skewness. Thus, basing inference on the continuous record asymptotic theory results in inference procedures about the break date that perform better than existing methods. As shown in Elliott and Müller (2007) and Chang and Perron (2017), Bai’s (1997a) method for constructing confidence intervals for the break date displays a coverage probability far from the nominal level when the magnitude of the break is small. Casini and Perron (2017a) proposed constructing confidence sets by computing the Highest Density Regions (HDR) of the density of the continuous record limiting distribution. Their confidence sets are shown to provide accurate coverage rates and relatively short lengths across different break magnitudes and break locations when compared with those of Bai (1997a), Elliott and Müller (2007), and Eo and Morley (2015). In addition, Casini and Perron (2017b) investigated a GL estimation and inference method, which involves transforming the least-squares objective function into a proper distribution (i.e., a Quasi-posterior) and minimizing the expected risk under a given loss function.
The analysis is carried out under continuous record asymptotics. This yields a new estimator of the break date shown to be more accurate than the original least-squares estimator. The proposed confidence sets, which use the HDR concept, also have coverage rates close to the nominal level and relatively small length whether the break is small or large. The GL estimator based on the least-squares estimate was also considered under large-$T$ asymptotics by Casini and Perron (2018), who also related it to the distribution theory of Bayesian change-point estimators.
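The HDR construction itself is easy to sketch: given simulated draws from the limiting distribution, keep the highest-density histogram bins until the target coverage is reached; the resulting set need not be an interval. This is a generic sketch of the HDR idea, not the authors' implementation.

```python
import numpy as np

def hdr_confidence_set(draws, coverage=0.95, n_bins=200):
    """Approximate Highest Density Region from simulated draws:
    accumulate histogram bins in decreasing order of density until
    the requested fraction of draws is covered."""
    hist, edges = np.histogram(draws, bins=n_bins)
    order = np.argsort(hist)[::-1]  # densest bins first
    total, kept = 0, []
    for b in order:
        kept.append(b)
        total += hist[b]
        if total >= coverage * len(draws):
            break
    # return the (possibly disjoint) union of kept bins
    return [(edges[b], edges[b + 1]) for b in sorted(kept)]
```

For a multi-modal limiting density, as arises here, the HDR can consist of several disjoint intervals, which is why such sets can be shorter than equal-tailed intervals.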

Forecasting

We will first discuss the concept of forecast failure (or breakdown) and describe methods proposed to detect changes in forecasting performance over time. Second, we will discuss techniques to compare the relative predictive ability of two competing forecast models in an unstable environment. It is useful to clarify the purpose of forecast breakdown tests. The aim is to assess retrospectively whether a given forecasting model provides forecasts that show evidence of changes (improvements or deteriorations) with respect to some loss function. Since the losses can change because of changes in the variance of the shocks (e.g., good luck), detection of a forecast failure does not necessarily mean that a forecast model should be abandoned. Care must be exercised to assess the source of the changes. But if a model is shown to provide stable forecasts, it can more safely be applied in real time. In practice, such forecasts are made at the time of the last available data, using a fixed, recursive, or rolling window. Hence, there is a natural separation between the in-sample and out-of-sample periods simply dictated by the last data point. Such is not the case when trying to assess retrospectively whether a given model provides stable forecasts. There is then the need for a somewhat artificial separation between the in-sample and out-of-sample periods at some date, labeled ${T}_{m}$, say. This separation date should be such that the model in the in-sample period is stable in some sense, for example, yielding stable forecasts. This can, however, create problems: one needs a truncation point ${T}_{m}$ to assess forecast failures, but the choice of this value is itself predicated on some knowledge of stability.

The forecast failure test of Giacomini and Rossi (2009), GR (2009) hereafter, is a global and retrospective test that compares the in-sample average with the out-of-sample average of the sequence of forecast losses. Adopting the same notation as in the previous section, we have $k=1,\dots ,T$ observations with a sampling interval $h$ over the time span $\left[0,N\right]$ with $N=Th$. We recover the setting of GR (2009) by setting $h=1$ in what follows. Define at time $\left(k+\tau \right)h$ a surprise loss given by the deviation between the time-$\left(k+\tau \right)h$ out-of-sample loss and the average in-sample loss: $S{L}_{\left(k+\tau \right)h}({\widehat{\beta}}_{k})={L}_{\left(k+\tau \right)h}({\widehat{\beta}}_{k})-{\overline{L}}_{kh}({\widehat{\beta}}_{k}),$ for $k={T}_{m},\dots ,T-\tau $, where ${\overline{L}}_{kh}({\widehat{\beta}}_{k})$ is the average in-sample loss computed according to the specific forecasting scheme, ${\widehat{\beta}}_{k}$ is some estimator of the model parameters, and ${T}_{m}$ is the in-sample size. One can then define the average of the out-of-sample surprise losses ${\overline{SL}}_{{N}_{0}}({\widehat{\beta}}_{k})={N}_{0}^{-1}{\displaystyle {\sum}_{k={T}_{m}}^{T-\tau}}S{L}_{\left(k+\tau \right)h}({\widehat{\beta}}_{k}),$ where ${N}_{0}=N-{N}_{in}-h$ denotes the time span of the out-of-sample window and ${N}_{\text{in}}={T}_{m}h$. GR (2009) observed that under the hypothesis of no forecast instability, ${\overline{SL}}_{{N}_{0}}$ should have zero mean (i.e., no systematic surprise losses in the out-of-sample window). Under the null hypothesis of no forecast failure, the GR (2009) test ${t}_{GR}={N}_{0}^{1/2}{\overline{SL}}_{{N}_{0}}({\widehat{\beta}}_{k})/{\widehat{\sigma}}_{{T}_{m},{T}_{n}}$ follows asymptotically a standard normal distribution.
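A stylized version of this computation can be sketched as follows, under our own simplifications: squared-error loss, a fixed scheme, $h=1$, a mean forecast, and a simple i.i.d.-style variance estimate in place of the HAC estimator ${\widehat{\sigma}}_{{T}_{m},{T}_{n}}$.

```python
import numpy as np

def gr_breakdown_test(y, Tm, tau=1):
    """Stylized forecast breakdown t-statistic in the spirit of GR (2009):
    compare out-of-sample losses with the average in-sample loss.
    Asymptotically N(0,1) under no forecast failure."""
    beta_hat = y[:Tm].mean()                          # fixed-scheme estimate
    L_bar_in = ((y[:Tm] - beta_hat) ** 2).mean()      # average in-sample loss
    out_losses = (y[Tm + tau - 1:] - beta_hat) ** 2   # out-of-sample losses
    SL = out_losses - L_bar_in                        # surprise losses
    n0 = len(SL)
    return np.sqrt(n0) * SL.mean() / SL.std(ddof=1)
```

A large positive statistic indicates systematically larger out-of-sample losses, that is, a forecast breakdown.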

Casini (2018a) extended the analysis by considering a continuous-time asymptotic framework and partitioning the out-of-sample window into ${m}_{T}={T}_{n}/{n}_{T}$ blocks, each containing ${n}_{T}$ observations. Let ${B}_{h,b}={n}_{T}^{-1}{\displaystyle {\sum}_{j=1}^{{n}_{T}}S{L}_{{T}_{m}+\tau +b{n}_{T}+j-1}(\widehat{\beta})}$ and ${\overline{B}}_{h,b}={n}_{T}^{-1}{\displaystyle {\sum}_{j=1}^{{n}_{T}}{L}_{\left({T}_{m}+\tau +b{n}_{T}+j-1\right)h}(\widehat{\beta})}$ for $b=0,\dots ,\lfloor {T}_{n}/{n}_{T}\rfloor -1$, with ${T}_{n}$ the out-of-sample size. The test statistic is ${Q}_{max,h}\left({T}_{n},\tau \right)={\nu}_{L}^{-1}{{\displaystyle \mathrm{max}}}_{b=0,\dots ,\lfloor {T}_{n}/{n}_{T}\rfloor -2}\left|{B}_{h,b+1}-{B}_{h,b}\right|$, where ${\nu}_{L}$ is the square root of the asymptotic variance of the test. The test partitions the out-of-sample window into ${m}_{T}$ blocks of asymptotically vanishing length $\left[b{n}_{T}h,\left(b+1\right){n}_{T}h\right]$, and ${B}_{h,b}$ is a local average of the surprise losses within block $b$. The statistic ${Q}_{max,h}\left({T}_{n},\tau \right)$ takes on a large value if there is a large deviation ${B}_{h,b+1}-{B}_{h,b}$, which suggests a discontinuity or non-smooth shift in the surprise losses close to time $b{n}_{T}h$, and thus provides evidence against the null. Simulations show that the tests of GR (2009) and Casini (2018b) both have good power properties when the instability is long-lasting, while the latter performs better when the instability is short-lived.

Perron and Yamamoto (2017) adapted the classical structural change tests to the forecast failure context. First, they recommend that all tests be carried out with a fixed scheme to attain the best power, since this ensures the maximum difference between the fitted in-sample and out-of-sample means of the losses; there are contamination issues under the rolling and recursive schemes that induce power losses. With a fixed scheme, the GR (2009) test is simply a Wald test for a one-time change in the mean of the total (in-sample plus out-of-sample) losses at a known break date ${T}_{m}$. This leads to important losses in power when the break in forecasting performance does not occur exactly at ${T}_{m}$. To alleviate this problem, one can follow Inoue and Rossi (2012) and maximize the GR (2009) test over all possible values of ${T}_{m}$ within a pre-specified range. This corresponds to a sup-Wald test for a single change at some date constrained to be the separation point between the in-sample and out-of-sample periods. The test is still not immune to non-monotonic power problems when multiple changes occur. Hence, Perron and Yamamoto (2017) proposed a Double sup-Wald test, which for each ${T}_{m}\in \left[{T}_{0},{T}_{1}\right]$ performs a sup-Wald test for a change in the mean of the out-of-sample losses and takes the maximum of such tests over the range ${T}_{m}\in \left[{T}_{0},{T}_{1}\right]$: $DSW={{\displaystyle \mathrm{max}}}_{{T}_{m}\in \left[{T}_{0},{T}_{1}\right]}S{W}_{{L}^{o}\left({T}_{m}\right)}$, where $S{W}_{{L}^{o}\left({T}_{m}\right)}$ is the sup-Wald test for a change in the mean of the out-of-sample loss series ${L}_{t}^{o}(\widehat{\beta})$ for $t={T}_{m}+\tau ,\dots ,T$, defined by

where $SS{R}_{{L}^{o}\left({T}_{m}\right)}$ is the unrestricted sum of squares, $SSR{\left({T}_{b}\left({T}_{m}\right)\right)}_{{L}^{o}\left({T}_{m}\right)}$ is the sum of squared residuals assuming a one-time change at time ${T}_{b}\left({T}_{m}\right)$, and ${\widehat{V}}_{{L}^{o}\left({T}_{m}\right)}$ is the long-run variance estimate of the out-of-sample loss series. In addition, Perron and Yamamoto (2017) proposed working directly with the total loss series $L\left({T}_{m}\right)$ to define the Total Loss sup-Wald test (TLSW) and the Total Loss UDmax test (TLUD). Using extensive simulations based on the original design of GR (2009) (which involves single and multiple changes in the regression parameters and/or the variance of the errors), they showed that with forecasting models potentially involving lagged dependent variables, the only tests having a monotonic power function for all DGPs are the Double sup-Wald and Total Loss UDmax tests constructed with a fixed forecasting window scheme (the test of Casini, 2018a, was not included in the simulations).
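The inner building block, a sup-Wald test for a one-time change in the mean of a loss series, can be sketched as follows; this is a minimal sketch with an i.i.d.-style variance in place of the long-run variance estimate, not the authors' code.

```python
import numpy as np

def sup_wald_mean(L, trim=0.15):
    """Sup-Wald statistic for a single change in the mean of the series L,
    searching over candidate break dates in the trimmed range. The DSW
    test would use a long-run variance estimate instead of L.var()."""
    T = len(L)
    lo, hi = int(trim * T), int((1 - trim) * T)
    var = L.var(ddof=1)
    best = 0.0
    for Tb in range(lo, hi):
        m1, m2 = L[:Tb].mean(), L[Tb:].mean()
        wald = (m1 - m2) ** 2 / (var * (1.0 / Tb + 1.0 / (T - Tb)))
        best = max(best, wald)
    return best
```

The Double sup-Wald statistic then maximizes this quantity, computed on the out-of-sample loss series implied by each candidate ${T}_{m}$, over the admissible range of ${T}_{m}$.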

Next, we turn to testing for forecast comparisons in unstable environments. Here, the goal is to determine the relative out-of-sample predictive ability of two competing models in the presence of possible breaks. Giacomini and Rossi (2010) proposed two tests: the Fluctuation test and the One-time Reversal test. The former tests whether the local relative forecasting performance equals zero at each point in time, whereas the One-time Reversal test examines the null hypothesis that the two models perform equally well at each point in time against the alternative that there is a break in the relative performance. Here we discuss the Fluctuation test only. Suppose we compare two $\tau $-step-ahead forecast models for the scalar ${y}_{k}$. The first model is characterized by a parameter $\theta $ and the second model by a parameter $\gamma $. The relative performance is evaluated by a sequence of out-of-sample loss differences ${\{\Delta {L}_{k}({\widehat{\theta}}_{k-\tau ,{T}_{m}},{\widehat{\gamma}}_{k-\tau ,{T}_{m}})\}}_{k={T}_{m}+\tau}^{T}$, where $\Delta {L}_{k}({\widehat{\theta}}_{k-\tau ,{T}_{m}},{\widehat{\gamma}}_{k-\tau ,{T}_{m}})={L}^{\left(1\right)}({y}_{k},{\widehat{\theta}}_{k-\tau ,{T}_{m}})-{L}^{\left(2\right)}({y}_{k},{\widehat{\gamma}}_{k-\tau ,{T}_{m}})$. The expressions for the estimators ${\widehat{\theta}}_{k-\tau ,{T}_{m}}$ and ${\widehat{\gamma}}_{k-\tau ,{T}_{m}}$ depend on the forecasting scheme.
The Fluctuation test is $\mathrm{Fluct}_{k,T_m}^{o}=\hat{\sigma}^{-1}m^{-1/2}\sum_{j=k-m/2}^{k+m/2-1}\Delta L_j(\hat{\theta}_{j-\tau,T_m},\hat{\gamma}_{j-\tau,T_m})$, for $k=T_m+\tau+m/2,\dots,T-m/2+1$, where $\hat{\sigma}^2$ is an estimate of the long-run variance of the sequence of out-of-sample losses, and $m$ is the size of the rolling window. The asymptotic null distribution is non-standard and critical values are obtained by simulation. The test has good finite-sample properties under serially uncorrelated losses when scaled by an estimate of the simple variance instead of the long-run variance. However, Martins and Perron (2016) showed that the test suffers from non-monotonic power when constructed with the long-run variance estimate (as it should be), whether or not the sequence of loss differences exhibits serial correlation. They propose using simple structural change tests such as the sup-Wald test of Andrews (1993) and the UDmax test of Bai and Perron (1998). These are preferable since they display the highest power while retaining a monotonic power function, even with a long-run variance estimate constructed with a constrained small bandwidth. Finally, Fossati (2017) noted that when the predictive ability is state dependent (e.g., recessions versus expansions), accounting for this property using a test based on Markov regime switching can be a useful alternative.
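As an illustration, the rolling-window computation behind the Fluctuation statistic can be sketched as follows. This is a hypothetical minimal version: it scales centered window sums of the loss differences by $\sqrt{m}$ times a variance estimate, and it uses the simple sample standard deviation where, as discussed above, a long-run variance estimate would be called for in practice.

```python
import numpy as np

def fluctuation_test(dL, m, sigma=None):
    """Rolling Fluctuation-type statistics over centered windows of size m
    (m assumed even), computed from a sequence dL of out-of-sample loss
    differences. Illustrative sketch of the Giacomini-Rossi (2010) idea."""
    dL = np.asarray(dL, float)
    if sigma is None:
        # simple standard deviation; a long-run (HAC) estimate is what
        # the theory calls for under serially correlated losses
        sigma = np.std(dL, ddof=1)
    h = m // 2
    # the window centered at index k covers dL[k-h], ..., dL[k+h-1]
    stats = np.array([dL[k - h:k + h].sum()
                      for k in range(h, len(dL) - h + 1)])
    return stats / (np.sqrt(m) * sigma)
```

Large absolute values of the resulting sequence, relative to the simulated critical values, indicate episodes in which one model's forecasts dominate the other's.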

Additional work and surveys that relate to testing for structural changes in forecasting and/or forecasting allowing for possible in-sample and out-of-sample changes include, among others, Clements and Hendry (1998a, 1998b, 2006); Pesaran, Pettenuzzo, and Timmermann (2006); Banerjee, Marcellino, and Masten (2008); Rossi (2013); Giacomini (2015); Giacomini and Rossi (2015); and Xu and Perron (2017).

## Acknowledgments

We wish to thank Palgrave Macmillan for granting permission to use parts of Perron (2008).

## References

Altissimo, F., & Corradi, V. (2003). Strong rules for detecting the number of breaks in a time series. *Journal of Econometrics*, *117*, 207–244.

Andrews, D. W. K. (1991). Heteroskedasticity and autocorrelation consistent covariance matrix estimation. *Econometrica*, *59*, 817–858.

Andrews, D. W. K. (1993). Tests for parameter instability and structural change with unknown change point. *Econometrica*, *61*, 821–856.

Andrews, D. W. K., & Ploberger, W. (1994). Optimal tests when a nuisance parameter is present only under the alternative. *Econometrica*, *62*, 1383–1414.

Antoch, J., Hanousek, J., Horvath, L., Huskova, M. H., & Wang, S. (2018). Structural breaks in panel data: Large number of panels and short time series. *Econometric Reviews*, 1–24.

Aue, A., Cheung, R. C. Y., Lee, T. C. M., & Zhong, M. (2014). Segmented model selection in quantile regression using the minimum description length principle. *Journal of the American Statistical Association*, *109*, 1241–1256.

Bai, J. (1997a). Estimation of a change point in multiple regression models. *Review of Economics and Statistics*, *79*, 551–563.

Bai, J. (1997b). Estimating multiple breaks one at a time. *Econometric Theory*, *13*, 315–352.

Bai, J. (1999). Likelihood ratio tests for multiple structural changes. *Journal of Econometrics*, *91*, 299–323.

Bai, J. (2000). Vector autoregressive models with structural changes in regression coefficients and in variance-covariance matrices. *Annals of Economics and Finance*, *1*, 303–339.

Bai, J. (2010). Common breaks in means and variances for panel data. *Journal of Econometrics*, *157*, 78–92.

Bai, J., & Han, X. (2016). Structural changes in high dimensional factor models. *Frontiers of Economics in China*, *11*, 9–39.

Bai, J., Han, X., & Shi, Y. (2017). *Estimation and inference of structural changes in high dimensional factor models* (Unpublished manuscript). Department of Economics, Columbia University.

Bai, J., Lumsdaine, R. L., & Stock, J. H. (1998). Testing for and dating breaks in multivariate time series. *Review of Economic Studies*, *65*, 395–432.

Bai, J., & Perron, P. (1998). Estimating and testing linear models with multiple structural changes. *Econometrica*, *66*, 47–78.

Bai, J., & Perron, P. (2003a). Computation and analysis of multiple structural change models. *Journal of Applied Econometrics*, *18*, 1–22.

Bai, J., & Perron, P. (2003b). Critical values for multiple structural change tests. *Econometrics Journal*, *6*, 72–78.

Bai, J., & Perron, P. (2006). Multiple structural change models: A simulation analysis. In D. Corbae, S. Durlauf, & B. E. Hansen (Eds.), *Econometric theory and practice: Frontiers of analysis and applied research* (pp. 212–237). Cambridge, U.K.: Cambridge University Press.

Baltagi, B., Feng, Q., & Kao, C. (2016). Estimation of heterogeneous panels with structural breaks. *Journal of Econometrics*, *191*, 176–195.

Baltagi, B., Kao, C., & Wang, F. (2015). Change point estimation in large heterogeneous panels (Unpublished manuscript). Department of Economics, Syracuse University.

Baltagi, B., Kao, C., & Wang, F. (2016). Identification and estimation of a large factor model with structural instability. *Journal of Econometrics*, *197*, 87–100.

Baltagi, B., Kao, C., & Wang, F. (2017). Estimating and testing high dimensional factor models with multiple structural changes (Unpublished manuscript). Department of Economics, Syracuse University.

Banerjee, A., Marcellino, M., & Masten, I. (2008). Forecasting macroeconomic variables using diffusion indexes in short samples with structural change. In D. E. Rapach & M. E. Wohar (Eds.), *Forecasting in the presence of structural breaks and model uncertainty* (pp. 149–194). Emerald Group.

Bleakley, K., & Vert, J.-P. (2011). The group fused Lasso for multiple change-point detection. arXiv preprint arXiv:1106.4199.

Boldea, O., Hall, A. R., & Han, S. (2012). Asymptotic distribution theory for break point estimators in models estimated via 2SLS. *Econometric Reviews*, *31*, 1–33.

Breitung, J., & Eickmeier, S. (2011). Testing for structural breaks in dynamic factor models. *Journal of Econometrics*, *163*, 71–84.

Casini, A. (2018a). Tests for forecast instability and forecast failure under a continuous record asymptotic framework (Unpublished manuscript). Department of Economics, Boston University. arXiv preprint arXiv:1803.10883.

Casini, A. (2018b). Theory of evolutionary spectra for heteroskedasticity and autocorrelation robust inference in possibly misspecified and nonstationary models (Unpublished manuscript). Department of Economics, Boston University.

Casini, A., & Perron, P. (2017a). Continuous record asymptotics for structural change models. arXiv preprint arXiv:1804.00232.

Casini, A., & Perron, P. (2017b). Continuous record Laplace-based inference about the break date in structural change models. arXiv preprint arXiv:1804.00232.

Casini, A., & Perron, P. (2018). Generalized Laplace inference in multiple structural change models. arXiv preprint arXiv:1803.10871.

Chan, N. H., Yau, C. Y., & Zhang, R.-M. (2014). Group lasso for structural break time series. *Journal of the American Statistical Association*, *109*, 590–599.

Chang, S. Y., & Perron, P. (2016). Inference on a structural break in trend with fractionally integrated errors. *Journal of Time Series Analysis*, *37*, 555–574.

Chang, S. Y., & Perron, P. (2017). A comparison of alternative methods to construct confidence intervals for the estimate of a break date in linear regression models. *Econometric Reviews*, *37*, 577–601.

Chen, L. (2015). Estimating the common break date in large factor models. *Economics Letters*, *131*, 70–74.

Chen, L., Dolado, J. J., & Gonzalo, J. (2014). Detecting big structural breaks in large factor models. *Journal of Econometrics*, *180*, 30–48.

Cheng, X., Liao, Z., & Schorfheide, F. (2016). Shrinkage estimation of high-dimensional factor models with structural instabilities. *Review of Economic Studies*, *83*(4), 1511–1543.

Cho, C. K., & Vogelsang, T. J. (2017). Fixed-b inference for testing structural change in a time series regression. *Econometrics*, *5*(1), 2.

Chun, S., & Perron, P. (2013). Comparisons of robust tests for shifts in trend with an application to trend deviations of real exchange rates in the long run. *Applied Economics*, *45*, 3512–3528.

Ciuperca, G. (2014). Model selection by LASSO methods in a change-point model. *Statistical Papers*, *55*, 349–374.

Clements, M., & Hendry, D. (1998a). *Forecasting economic time series*. Cambridge, U.K.: Cambridge University Press.

Clements, M., & Hendry, D. F. (1998b). Forecasting economic processes. *International Journal of Forecasting*, *14*, 111–131.

Clements, M., & Hendry, D. F. (2006). Forecasting with breaks. In G. Elliott, C. W. J. Granger, & A. Timmermann (Eds.), *Handbook of economic forecasting* (pp. 605–657). Amsterdam, The Netherlands: Elsevier Science.

Corradi, V., & Swanson, N. R. (2014). Testing for structural stability of factor augmented forecasting models. *Journal of Econometrics*, *182*, 100–118.

Crainiceanu, C. M., & Vogelsang, T. J. (2007). Nonmonotonic power for tests of a mean shift in a time series. *Journal of Statistical Computation and Simulation*, *77*, 457–476.

Csörgö, M., & Horváth, L. (1997). *Limit theorems in change-point analysis*. Wiley Series in Probability and Statistics. New York, NY: John Wiley.

De Wachter, S., & Tzavalis, E. (2012). Detection of structural breaks in linear dynamic panel data models. *Computational Statistics and Data Analysis*, *56*(11), 3020–3034.

Deng, A., & Perron, P. (2006). A comparison of alternative asymptotic frameworks to analyze structural change in a linear time trend. *Econometrics Journal*, *9*, 423–447.

Deng, A., & Perron, P. (2008). A non-local perspective on the power properties of the CUSUM and CUSUM of squares tests for structural change. *Journal of Econometrics*, *142*, 212–240.

Elliott, G., & Müller, U. K. (2006). Efficient tests for general persistent time variation in regression coefficients. *Review of Economic Studies*, *73*, 907–940.

Elliott, G., & Müller, U. K. (2007). Confidence sets for the date of a single break in linear time series regressions. *Journal of Econometrics*, *141*, 1196–1218.

Eo, Y., & Morley, J. (2015). Likelihood-ratio-based confidence sets for the timing of structural breaks. *Quantitative Economics*, *6*, 463–497.

Estrada, F., Perron, P., & Martinez-López, B. (2013). Statistically-derived contributions of diverse human influences to 20th century temperature changes. *Nature Geoscience*, *6*, 1050–1055.

Fossati, S. (2017). Testing for state-dependent predictive ability (Unpublished manuscript). Department of Economics, University of Alberta.

Giacomini, R. (2015). Economic theory and forecasting: Lessons from the literature. *The Econometrics Journal*, *18*, C22–C41.

Giacomini, R., & Rossi, B. (2009). Detecting and predicting forecast breakdowns. *The Review of Economic Studies*, *76*, 669–705.

Giacomini, R., & Rossi, B. (2010). Forecast comparisons in unstable environments. *Journal of Applied Econometrics*, *25*, 595–620.

Giacomini, R., & Rossi, B. (2015). Forecasting in nonstationary environments: What works and what doesn’t in reduced-form and structural models. *Annual Review of Economics*, *7*, 207–229.

Hall, A. R., Han, S., & Boldea, O. (2012). Inference regarding multiple structural changes in linear models with endogenous regressors. *Journal of Econometrics*, *170*, 281–302.

Han, X., & Inoue, A. (2015). Tests for parameter instability in dynamic factor models. *Econometric Theory*, *31*(5), 1117–1152.

Hansen, B. E. (1992). Tests for parameter instability in regressions with I(1) processes. *Journal of Business and Economic Statistics*, *10*, 321–335.

Hansen, B. E. (2000). Testing for structural change in conditional models. *Journal of Econometrics*, *97*, 93–115.

Harchaoui, Z., & Lévy-Leduc, C. (2010). Multiple change-point estimation with a total variation penalty. *Journal of the American Statistical Association*, *105*, 1480–1493.

Harvey, D. I., Leybourne, S. J., & Taylor, A. M. R. (2007). A simple, robust and powerful test of the trend hypothesis. *Journal of Econometrics*, *141*, 1302–1330.

Harvey, D. I., Leybourne, S. J., & Taylor, A. M. R. (2009). Simple, robust, and powerful tests of the breaking trend hypothesis. *Econometric Theory*, *25*, 995–1029.

Hawkins, D. M. (1976). Point estimation of the parameters of piecewise regression models. *Applied Statistics*, *25*, 51–57.

Hidalgo, J., & Seo, M. H. (2013). Testing for structural stability in the whole sample. *Journal of Econometrics*, *175*, 84–93.

Hodrick, R. J., & Prescott, E. C. (1997). Postwar U.S. business cycles: An empirical investigation. *Journal of Money, Credit, and Banking*, *29*, 1–16.

Hou, J., & Perron, P. (2014). Modified local Whittle estimator for long memory processes in the presence of low frequency (and other) contaminations. *Journal of Econometrics*, *182*, 309–328.

Iacone, F. (2010). Local Whittle estimation of the memory parameter in presence of deterministic components. *Journal of Time Series Analysis*, *31*, 37–49.

Iacone, F., Leybourne, S. J., & Taylor, A. M. R. (2013a). Testing for a break in trend when the order of integration is unknown. *Journal of Econometrics*, *176*, 30–45.

Iacone, F., Leybourne, S. J., & Taylor, A. M. R. (2013b). On the behavior of fixed-b trend break tests under fractional integration. *Econometric Theory*, *29*, 393–418.

Inoue, A., & Rossi, B. (2012). Out-of-sample forecast tests robust to the choice of window size. *Journal of Business and Economic Statistics*, *30*, 432–453.

Juhl, T., & Xiao, Z. (2009). Testing for changing mean with monotonic power. *Journal of Econometrics*, *148*, 14–24.

Kejriwal, M. (2009). Tests for a shift in mean with good size and monotonic power. *Economics Letters*, *102*, 78–82.

Kejriwal, M., & Perron, P. (2008). The limit distribution of the estimates in cointegrated regression models with multiple structural changes. *Journal of Econometrics*, *146*, 59–73.

Kejriwal, M., & Perron, P. (2010a). Testing for multiple structural changes in cointegrated regression models. *Journal of Business and Economic Statistics*, *28*, 503–522.

Kejriwal, M., & Perron, P. (2010b). A sequential procedure to determine the number of breaks in trend with an integrated or stationary noise component. *Journal of Time Series Analysis*, *31*, 305–328.

Kiefer, N. M., & Vogelsang, T. J. (2005). A new asymptotic theory for heteroskedasticity-autocorrelation robust tests. *Econometric Theory*, *21*, 1130–1164.

Kim, D. (2011). Estimating a common deterministic time trend break in large panels with cross sectional dependence. *Journal of Econometrics*, *164*, 310–330.

Kim, D. (2014). Common breaks in time trends for large panel data with a factor structure. *The Econometrics Journal*, *17*, 301–337.

Kim, D., Oka, T., Estrada, F., & Perron, P. (2017). Inference related to common breaks in a multivariate system with joined segmented trends with applications to global and hemispheric temperatures (Unpublished manuscript). Department of Economics, Boston University.

Kim, D., & Perron, P. (2009). Assessing the relative power of structural break tests using a framework based on the approximate Bahadur slope. *Journal of Econometrics*, *149*, 26–51.

Koenker, R. (2005). *Quantile regression*. Cambridge, U.K.: Cambridge University Press.

Kurozumi, E. (2017). Monitoring parameter constancy with endogenous regressors. *Journal of Time Series Analysis*, *38*, 791–805.

Kurozumi, E., & Tuvaandorj, P. (2011). Model selection criteria in multivariate models with multiple structural changes. *Journal of Econometrics*, *164*, 218–238.

Kurozumi, E., & Yamamoto, Y. (2015). Confidence sets for the break date based on optimal tests. *The Econometrics Journal*, *18*, 412–435.

Lavielle, M., & Moulines, E. (2000). Least-squares estimation of an unknown number of shifts in a time series. *Journal of Time Series Analysis*, *21*, 33–59.

Li, D., Qian, J., & Su, L. (2016). Panel data models with interactive fixed effects and multiple structural breaks. *Journal of the American Statistical Association*, *111*, 1804–1819.

Li, Y., & Perron, P. (2017). Inference on locally ordered breaks in multiple regressions. *Econometric Reviews*, *36*, 289–353.

Liang, J., Wang, X., & Yu, J. (2016). New distribution theory for the estimation of structural break point in mean. *Journal of Econometrics*, *205*, 156–176.

Liu, J., Wu, S., & Zidek, J. V. (1997). On segmented multivariate regressions. *Statistica Sinica*, *7*, 497–525.

Martins, L. P., & Perron, P. (2016). Improved tests for forecast comparisons in the presence of instabilities. *Journal of Time Series Analysis*, *37*, 650–659.

Massacci, D. (2015). Least squares estimation of large dimensional threshold factor models (Unpublished manuscript). Available at SSRN.

McCloskey, A., & Hill, J. B. (2017). Parameter estimation robust to low-frequency contamination. *Journal of Business & Economic Statistics*, *35*, 598–610.

McCloskey, A., & Perron, P. (2013). Memory parameter estimation in the presence of level shifts and deterministic trends. *Econometric Theory*, *29*, 1196–1237.

Nyblom, J. (1989). Testing for constancy of parameters over time. *Journal of the American Statistical Association*, *84*, 223–230.

Oka, T., & Perron, P. (2017). Testing for common breaks in a multiple equations system (Unpublished manuscript). Department of Economics, Boston University.

Oka, T., & Qu, Z. (2011). Estimating structural changes in regression quantiles. *Journal of Econometrics*, *162*, 248–267.

Perron, P. (1991). A test for changes in a polynomial trend function for a dynamic time series (Unpublished manuscript). Université de Montréal, Centre de recherche et développement en économique.

Perron, P. (1997). L’estimation de modèles avec changements structurels multiples. *Actualité Économique*, *73*, 457–505.

Perron, P. (2006). Dealing with structural breaks. In K. Patterson & T. C. Mills (Eds.), *Palgrave handbook of econometrics: Econometric theory* (Vol. 1, pp. 278–352). Basingstoke, U.K.: Palgrave Macmillan.

Perron, P. (2008). Structural change. In S. Durlauf & L. Blume (Eds.), *The new Palgrave dictionary of economics* (2nd ed.). Basingstoke, U.K.: Palgrave Macmillan.

Perron, P., & Qu, Z. (2006). Estimating restricted structural change models. *Journal of Econometrics*, *134*, 373–399.

Perron, P., & Qu, Z. (2007). An analytical evaluation of the log-periodogram estimate in the presence of level shifts (Unpublished manuscript). Boston University.

Perron, P., & Qu, Z. (2010). Long-memory and level shifts in the volatility of stock market return indices. *Journal of Business and Economic Statistics*, *28*, 275–290.

Perron, P., & Yabu, T. (2009a). Estimating deterministic trends with an integrated or stationary noise component. *Journal of Econometrics*, *151*, 56–69.

Perron, P., & Yabu, T. (2009b). Testing for shifts in trend with an integrated or stationary noise component. *Journal of Business and Economic Statistics*, *27*, 369–396.

Perron, P., & Yamamoto, Y. (2013). Estimating and testing multiple structural changes in linear models using band spectral regressions. *Econometrics Journal*, *16*, 400–429.

Perron, P., & Yamamoto, Y. (2014). A note on estimating and testing for multiple structural changes in models with endogenous regressors via 2SLS. *Econometric Theory*, *30*, 491–507.

Perron, P., & Yamamoto, Y. (2015). Using OLS to estimate and test for structural changes in models with endogenous regressors. *Journal of Applied Econometrics*, *30*, 119–144.

Perron, P., & Yamamoto, Y. (2016). On the usefulness or lack thereof of optimality criteria for structural change tests. *Econometric Reviews*, *35*, 782–844.

Perron, P., & Yamamoto, Y. (2017). Testing for changes in forecasting performance (Unpublished manuscript). Department of Economics, Boston University.

Perron, P., & Zhou, J. (2008). Testing jointly for structural changes in the error variance and coefficients of a linear regression model (Unpublished manuscript). Department of Economics, Boston University.

Perron, P., & Zhu, X. (2005). Structural breaks with stochastic and deterministic trends. *Journal of Econometrics*, *129*, 65–119.

Pesaran, M. H., Pettenuzzo, D., & Timmermann, A. (2006). Forecasting time series subject to multiple structural breaks. *The Review of Economic Studies*, *73*, 1057–1084.

Pitarakis, J.-Y. (2004). Least squares estimation and tests of breaks in mean and variance under misspecification. *Econometrics Journal*, *7*, 32–54.

Prodan, R. (2008). Potential pitfalls in determining multiple structural changes with an application to purchasing power parity. *Journal of Business & Economic Statistics*, *26*, 50–65.

Qian, J., & Su, L. (2016). Shrinkage estimation of common breaks in panel data models via adaptive group Lasso. *Journal of Econometrics*, *191*, 86–109.

Qu, Z. (2008). Testing for structural change in regression quantiles. *Journal of Econometrics*, *146*, 170–184.

Qu, Z. (2011). A test against spurious long memory. *Journal of Business and Economic Statistics*, *29*, 423–438.

Qu, Z., & Perron, P. (2007). Estimating and testing multiple structural changes in multivariate regressions. *Econometrica*, *75*, 459–502.

Quandt, R. E. (1960). Tests of the hypothesis that a linear regression system obeys two separate regimes. *Journal of the American Statistical Association*, *55*, 324–330.

Rojas, C. R., & Wahlberg, B. (2014). On change point detection using the fused lasso method. arXiv preprint arXiv:1401.5408.

Rossi, B. (2013). Advances in forecasting under instability. In G. Elliott & A. Timmermann (Eds.), *Handbook of economic forecasting* (pp. 1203–1324). Amsterdam, The Netherlands: Elsevier Science.

Sayginsoy, Ö., & Vogelsang, T. J. (2011). Testing for a shift in trend at an unknown date: A fixed-b analysis of heteroskedasticity autocorrelation robust OLS-based tests. *Econometric Theory*, *27*, 992–1025.

Siegmund, D. (1988). Confidence sets in change-point problems. *International Statistical Review*, *56*, 31–48.

Su, L., & Wang, X. (2017). On time-varying factor models: Estimation and inference. *Journal of Econometrics*, *198*, 84–101.

Vogelsang, T. J. (1997). Wald-type tests for detecting breaks in the trend function of a dynamic time series. *Econometric Theory*, *13*, 818–849.

Vogelsang, T. J. (1999). Sources of nonmonotonic power when testing for a shift in mean of a dynamic time series. *Journal of Econometrics*, *88*, 283–299.

Vogelsang, T. J. (2001). Testing for a shift in trend when serial correlation is of unknown form (Unpublished manuscript). Department of Economics, Cornell University.

Xu, J., & Perron, P. (2017). Forecasting in the presence of in and out of sample breaks (Unpublished manuscript). Department of Economics, Boston University.

Yamamoto, Y. (2016). A modified confidence set for the structural break date in linear regression models. *Econometric Reviews*, *37*, 974–999.

Yamamoto, Y., & Tanaka, S. (2015). Testing for factor loading structural change under common breaks. *Journal of Econometrics*, *189*, 187–206.

Yang, J. (2017). Consistency of trend break point estimator with underspecified break number. *Econometrics*, *5*(4), 1–19.

Yang, J., & Vogelsang, T. J. (2011). Fixed-b analysis of LM-type tests for a shift in mean. *Econometrics Journal*, *14*, 438–456.