# The Cointegrated VAR Methodology

- Katarina Juselius, Department of Economics, University of Copenhagen

### Summary

The cointegrated VAR approach combines differences of variables with cointegration among them and by doing so allows the user to study both long-run and short-run effects in the same model. The CVAR describes an economic system where variables have been pushed away from long-run equilibria by exogenous shocks (the pushing forces) and where short-run adjustment forces pull them back toward long-run equilibria (the pulling forces). In this model framework, basic assumptions underlying a theory model can be translated into testable hypotheses on the order of integration and cointegration of key variables and their relationships. The set of hypotheses describes the empirical regularities we would expect to see in the data if the long-run properties of a theory model are empirically relevant.

### On the Cointegrated VAR Methodology

In 1982 Clive Granger published a working paper, “Error-Correction and Cointegration,” which for the first time introduced the concept of cointegration as the mathematical counterpart to error-correction. The latter had been widely used by his followers at the London School of Economics, in particular David Hendry, since the seminal 1964 paper by Dennis Sargan. The mathematical concept of cointegration turned out to be of immense importance for time series econometrics, as it contained the key to handling nonstationarity in economic time series. In recognition of this work and their many other contributions, Clive Granger was awarded the Nobel Prize in Economics in 2003 together with Robert Engle. A few years after the working paper had appeared, Søren Johansen took up the cointegration thread and developed the probability theory for nonstationary processes and subsequently the statistical theory needed for likelihood-based inference in nonstationary vector autoregressive (VAR) processes. In 1995 Søren Johansen published his seminal book *Likelihood-Based Inference in Cointegrated Vector Autoregressive Models*, which contains the basic theory for maximum likelihood inference in cointegrated processes. In 2006 Katarina Juselius published her book *The Cointegrated VAR Model: Methodology and Applications*, which discussed how to use the cointegrated VAR (CVAR) model as a statistically well-founded empirical methodology for inference in economic models.

The CVAR approach was almost immediately received with great enthusiasm: it provides a flexible way of representing economic time series data that allows the user to study both short-run and long-run effects in the same model framework. It has been extensively applied in central banks, research institutes, universities, and the financial sector.

However, the early popularity and interest were not always a force for good: many CVAR models were applied without a firm understanding of the model's rich, and complex, structure. Many (most) applications were not subjected to careful misspecification checking and conveyed the impression of having been found by “pressing the VAR button.” The results were far from convincing, and many economists, in particular North American ones, turned against the approach.

It needs, therefore, to be emphasized that a CVAR analysis based on full information maximum likelihood presumes that all systematic aspects of the data are satisfactorily described by the model. Spanos (2009) argues that a convincing test of the empirical relevance of a theoretical model has to be carried out in the context of a fully specified statistical model that works as an adequate, though approximate, description of the data generating process (DGP) given in its entirety. When the VAR model has passed the specification tests, it is essentially a summary of the most important empirical facts over the sample period. When it has not passed these checks, the estimates can be (and often are) totally misleading, and it is difficult to know what is a true empirical fact and what is a result of untested prior assumptions.

A correctly specified VAR model represents the basic covariance information of the empirical problem, but only up to a first-order linear approximation. Second-order non-linear effects are common in economics but are often small compared to the linear effects and can therefore often be efficiently addressed in a second step.

An unrestricted VAR is highly overparametrized and often difficult to interpret. But the CVAR parameterization (combining differences and cointegration) allows us to study an economic system where variables have been pushed away from long-run equilibria by exogenous shocks (the pushing forces) and where short-run adjustment forces pull them back toward long-run equilibria (the pulling forces). It describes the economic reality as a multivariate, dynamic, and stochastic process, the probabilistic assumptions of which are testable. Thus, a correct CVAR analysis should in principle obey equally strict scientific rules as an analysis of a mathematical model in economics. In principle there is no arbitrariness in such empirical analyses, but in practice one often has to make compromises: economic data are extremely difficult to model, partly because of the inherent reflexivity of social behavior (Frydman & Goldberg, 2013; Hands, 2013; Hommes, 2013; Soros, 1987), partly because such behavior may not remain constant over time, and finally because data are often strongly affected by political reforms, interventions, natural disasters, etc.

Hence, a successful CVAR analysis has to address a large number of issues typical of most economic data: (a) a pronounced persistence to be accounted for by cointegration; (b) changes in the structure of the model due to extraordinary events often controlled for by the inclusion of institutional dummies; and (c) regime shifts that cause parameters to be non-constant over certain sample periods. Whether an estimated model is sufficiently stable to satisfactorily describe the underlying mechanisms is by necessity based on judgment to some extent.

Failure to properly account for the above topics is frequently the reason why empirical studies provide unconvincing results. The first two topics are discussed here in detail, whereas the third is so comprehensive that it deserves a treatise of its own.

### The Unrestricted VAR Model

The VAR model with $k$ lags, a constant, ${\mu}_{0},$ a trend, $t,$ and dummy variables, ${D}_{t},$ is specified as:

$$x_t = \Pi_1 x_{t-1} + \dots + \Pi_k x_{t-k} + \mu_0 + \mu_1 t + \Phi D_t + \epsilon_t, \qquad t = 1, \dots, T, \qquad (1)$$

where ${x}_{t}$ is a $p$-dimensional vector of economic variables, the starting values ${x}_{0},{x}_{-1}\mathrm{,...,}{x}_{-k+1}$ are assumed fixed, ${D}_{t}$ may contain different kinds of dummy variables, for example, step dummies and permanent or transitory impulse dummies, and ${\epsilon}_{t}\sim Niid(0,\text{\Omega})$.

The VAR model (1) with $k=2$ can be formulated in error-correction form without changing the likelihood function:

$$\Delta x_t = \Gamma_1 \Delta x_{t-1} + \Pi x_{t-1} + \mu_0 + \mu_1 t + \Phi D_t + \epsilon_t, \qquad (2)$$

where $\Pi =-(I-{\Pi}_{1}-{\Pi}_{2})$ and ${\Gamma}_{1}=-{\Pi}_{2}.$ For notational simplicity, $k=2$ in the subsequent discussions.
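To make the reparameterization concrete, here is a minimal numpy sketch (all parameter values are invented for the example) that simulates a bivariate system from the error-correction form and verifies that the implied levels-VAR coefficients produce identical one-step predictions:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical bivariate I(1) system with one cointegration relation:
# beta'x_t = x1 - x2 is stationary; alpha gives the adjustment speeds.
alpha = np.array([[-0.2], [0.1]])
beta = np.array([[1.0], [-1.0]])
Gamma1 = np.array([[0.3, 0.0], [0.0, 0.2]])

T = 500
x = np.zeros((T, 2))
for t in range(2, T):
    dx_lag = x[t - 1] - x[t - 2]
    # Error-correction form: Dx_t = Pi x_{t-1} + Gamma1 Dx_{t-1} + eps_t
    dx = (alpha @ (beta.T @ x[t - 1])) + Gamma1 @ dx_lag + rng.normal(0, 1, 2)
    x[t] = x[t - 1] + dx

# Levels VAR(2) coefficients implied by Pi = alpha beta' and Gamma1 = -Pi2:
Pi = alpha @ beta.T
Pi1 = np.eye(2) + Pi + Gamma1   # from Pi = -(I - Pi1 - Pi2)
Pi2 = -Gamma1

# Check: the two parameterizations give identical one-step predictions.
t = 10
pred_ecm = x[t - 1] + Pi @ x[t - 1] + Gamma1 @ (x[t - 1] - x[t - 2])
pred_var = Pi1 @ x[t - 1] + Pi2 @ x[t - 2]
print(np.allclose(pred_ecm, pred_var))  # True
```

The reparameterization changes nothing about the likelihood; it only regroups the same coefficients so that the long-run matrix $\Pi$ appears explicitly.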

Three caveats are needed when discussing the usefulness of (1) as a valid characterization of economic data: (a) it is derived for a particular sample window $[1,T]$, and there is no guarantee that other sample periods produce even approximately the same linear estimates; (b) the assumption of multivariate normality is seldom satisfied without controlling for extraordinary events during the sample period, such as political reforms and interventions, droughts, floods, and storms. Such events can often be controlled for by conditioning on a set of appropriately constructed dummy variables, $\text{\Phi}{D}_{t}$; (c) while the assumption of stationarity of ${x}_{t}$ is seldom tenable for economic time series, the nonstationarity of ${x}_{t}$ can be handled by subjecting the matrix $\text{\Pi}$ in (2) to a non-linear reduced rank restriction $\text{\Pi}=\alpha {\beta}^{\prime}$ where the matrices $\alpha $ and $\beta $ have rank $r<p$.

#### The Multivariate Normality Assumption and the Use of Dummy Variables

A priori there is no obvious reason why one would expect multivariate normality to hold in observed data, but one could argue that the residuals are a catch-all for everything else not included in the model and that “everything else” comprises an “enormous number of factors.” Provided these factors are independent, the central limit theorem suggests that normality could be approximately valid. But, as there is no reason to expect independence, normality is an assumption that needs to be checked, and when checked it is almost always rejected. The latter is mostly due to outlying observations as a result of extraordinary events which tend to cause residual skewness and excess kurtosis.

By conditioning on the extraordinary events using adequately designed dummy variables, it is often possible to control for such non-normality. Hence, one does not have to give up on normality as is done in many empirical applications. But there are also cases for which the multivariate normality assumption in (2) is a bad approximation. For example, asset prices tend to have long tails as well as heteroscedastic errors and are therefore inherently non-normal. Whether one can use the VAR approach nonetheless is then a question of empirical robustness. To study this, Cavaliere, Rahbek, and Taylor (2014) investigate the properties of the cointegration rank test when the error variance exhibits time-varying behavior. The paper shows that bootstrap pseudo likelihood ratio tests are asymptotically correctly sized. Boswijk, Cavaliere, Rahbek, and Taylor (2016) investigate the properties of tests on cointegration relations in a similar setting and show that asymptotic inference in this case can be misleading but that the use of bootstrap methods and Wald, rather than likelihood ratio, tests lead to significant improvements in the size of the test (the probability of rejecting a true null hypothesis). In the following the errors are assumed to be normally distributed after correcting for extraordinary events.
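To illustrate why a single extraordinary event drives a rejection of normality, and how conditioning on an impulse dummy restores it, consider this small sketch. The series and event date are invented for the example, and scipy's univariate Jarque–Bera test stands in for the multivariate residual tests used in practice:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical residual series: Gaussian noise contaminated by one
# extraordinary event (a large outlier at t = 100).
eps = rng.normal(0, 1, 500)
eps[100] += 8.0

jb_stat, jb_p = stats.jarque_bera(eps)
print(f"with outlier: p = {jb_p:.4g}")   # normality strongly rejected

# Condition on the event with an impulse dummy: regress eps on the dummy
# and test the cleaned residuals (OLS with a single regressor).
dummy = np.zeros(500)
dummy[100] = 1.0
coef = eps @ dummy / (dummy @ dummy)
cleaned = eps - coef * dummy

jb_stat2, jb_p2 = stats.jarque_bera(cleaned)
print(f"after dummy:  p = {jb_p2:.4g}")  # typically no longer rejected
```

The dummy removes only the unanticipated jump; the remaining 499 observations are untouched, which is why conditioning is preferable to deleting the observation.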

The use of dummy variables in empirical models is sometimes considered problematic by economists, who argue that outlying observations are highly informative and must not be dummied out. This argument is, however, more valid in the static regression model, in which a dummy variable effectively removes the outlying observation. In a dynamic model, like (2), a dummy variable controls for the first unanticipated effect, whereas the lagged variables ensure that the outlying observation enters the information set.

Because failure to properly control for the unanticipated effect of extraordinary events is likely to bias the parameter estimates, the correct use of dummies is often crucial for a well-specified model. For example, a non-modeled shift in the equilibrium mean and/or average growth rates (as a result of deregulation, say) is likely to cause residual autocorrelation and may (incorrectly) suggest longer lags in the VAR.
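The dummy types mentioned above can be written down in a few lines; the event date is hypothetical, and the cumulated sums show why an impulse dummy in the $\Delta x_t$ equation implies a permanent level shift in $x_t$, while a transitory pair does not:

```python
import numpy as np

T = 20
event = 10  # hypothetical date of an intervention

# Permanent impulse dummy (...,0,1,0,...): a one-off unanticipated shock
# to Dx_t, i.e., a permanent shift in the level of x_t.
impulse = np.zeros(T)
impulse[event] = 1.0

# Transitory impulse dummy (...,0,1,-1,0,...): an effect that reverses
# itself the next period, leaving no lasting level shift in x_t.
transitory = np.zeros(T)
transitory[event], transitory[event + 1] = 1.0, -1.0

# Step dummy (...,0,1,1,1,...): in the Dx_t equation it shifts the mean
# growth rate; restricted to the cointegration space it shifts the
# equilibrium mean.
step = np.zeros(T)
step[event:] = 1.0

# Cumulating each dummy shows its effect on the level of x_t:
print(np.cumsum(impulse)[-1])     # 1.0 -> permanent level shift
print(np.cumsum(transitory)[-1])  # 0.0 -> purely transitory
```

Cumulating the step dummy would instead produce a broken linear trend, which is why step dummies need to be restricted with care.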

Without the normality benchmark one would be inclined to ignore these highly informative events, which can be used to estimate the effect of, for example, changes in policy. Extraordinary events are also likely to affect the model’s forecasting performance. See, for example, Clements and Hendry (1999, 2008). To get the intuition, it is useful to consider the expected value of $\text{\Delta}{x}_{t}$ given its past values based on (2):

$$E_{t-1}\,\Delta x_t = \Gamma_1 \Delta x_{t-1} + \Pi x_{t-1} + \mu_0 + \mu_1 t,$$

where $E_{t-1}$ denotes the expectation conditional on the information available at time $t-1$.

The deviation between the expected and realized value in “normal” times is

$$\Delta x_t - E_{t-1}\,\Delta x_t = \epsilon_t,$$

implying that economic agents using the VAR to forecast the next period’s outcome are rational in the sense of not making systematic forecast errors. But extraordinary events with large unanticipated effects tend to have a large effect on the forecast error:

$$\Delta x_t - E_{t-1}\,\Delta x_t = \Phi D_t + \epsilon_t.$$

After they happen, the effects are no longer unanticipated, and, provided the events have not changed the model’s parameters $({\text{\Gamma}}_{1},\text{\Pi})$, the next period’s forecast error would again be white noise. In general, dummy variables need not enter the VAR model with lags unless the extraordinary event is not just unanticipated but represents a very important institutional event (such as joining the EMU) comparable to an exogenous variable.

To sum up: the reason for assuming multivariate normality is not because economic data are inherently likely to follow the multivariate normality rule, but because this assumption helps us to check that all important effects have been properly accounted for in the model. It is a safeguard against relying on conclusions from a model that is basically misspecified (Hoover, 2006; Hoover, Johansen, & Juselius, 2009) and ensures that the model estimates are based on full information maximum likelihood. The specification of the VAR model is successful when the chosen information set contains the most relevant economic variables and the most important institutional events, and the residuals are well approximated by a multivariate normal distribution. Therefore, one has to carefully check for a large number of things: Have there been shifts in mean growth rates or in equilibrium means? Are interventions, reforms, and changing policy properly controlled for? Is the sample period defining a constant parameter regime? Is the information set correctly chosen? The accuracy of the results depends on all this being correct in the model. Without such checking, the results can be (and often are) close to useless. A well-specified VAR model has nothing to do with pressing the VAR button.

But even when the empirical VAR model is a good description of the data, it is still not a satisfactory economic model, as it is highly overparametrized. To arrive at a more parsimonious model, one has to test and impose various restrictions on the parameters, the most important of which are the so-called reduced rank restrictions. The next section discusses reduced rank in the $I(1)$ model (see later for the $I(2)$ model). In the first case, the data contain stochastic trends of first-order persistence; in the second, both first- and second-order persistence.

### First-Order Persistence: The *I*(1) Model

The hypothesis that ${x}_{t}\sim I(1)$ is formulated as a reduced rank condition on $\Pi$:

$$\Pi = \alpha\beta', \qquad (3)$$

where $\alpha $ and $\beta $ are $p\times r$ matrices $(r<p)$ and the $r$ relations, ${\beta}^{\prime}{x}_{t},$ define stationary linear relationships among the $p$ nonstationary variables. Thus, the cointegrated VAR can be considered a submodel of the more general baseline VAR (2).

The choice of cointegration rank is likely to influence all subsequent inferences and is, therefore, a crucial step in the empirical analysis. Unfortunately, it can also be a difficult choice as the distinction between stationary and nonstationary directions of the vector process often is far from straightforward.

#### The LR Test for the Cointegration Rank

This test, which is often called the trace test or the Johansen test, is based on the VAR model in the so-called $R$-form, ${R}_{0,t}=\alpha {\beta}^{\prime}{R}_{1,t}+error,$ where ${R}_{0,t}$ and ${R}_{1,t}$ are the residuals in a regression of $\text{\Delta}{x}_{t}$ and ${x}_{t-1}$, respectively, on all short-run dynamics, dummies, and other deterministic components (Juselius, 2006). This model can be thought of as an “idealized” empirical model, $\text{\Delta}{x}_{t}=\alpha {\beta}^{\prime}{x}_{t-1}+{\epsilon}_{t},$ describing an economy where only long-run equilibrium forces are at play and no transitory short-run effects are disturbing the important long-run mechanisms. For notational simplicity it is assumed here that ${\text{\Gamma}}_{1}\text{\Delta}{x}_{t-1}=0$ in (2), making the $R$-form equivalent to the $\text{\Delta}x$-form.

The reduced rank regression estimates are obtained by solving an eigenvalue problem that delivers $p$ eigenvalues, ${\lambda}_{i},$ the corresponding eigenvectors, ${\beta}_{i},$ and their loadings, ${\alpha}_{i}.$ The eigenvalues, ${\lambda}_{i},$ can be interpreted as the squared correlations between linear combinations of the levels, ${\beta}_{i}^{\prime}{x}_{t-1},$ and linear combinations of the differences, ${\omega}_{i}^{\prime}\Delta {x}_{t}.$ Thus, the magnitude of $\sqrt{{\lambda}_{i}}$ is an indication of how strongly the linear relation ${\beta}_{i}^{\prime}{x}_{t-1}$ is correlated with the stationary part of the process $\Delta {x}_{t}.$ When ${\lambda}_{i}=0$, the correlation coefficient is zero, the linear combination ${\beta}_{i}^{\prime}{x}_{t-1}$ is nonstationary, and there is no equilibrium correction, i.e., ${\alpha}_{i}=0$.

The statistical problem is to derive a test that can discriminate between those ${\lambda}_{i},\ i=1,\dots,r,$ that correspond to stationary relations and those ${\lambda}_{i},\ i=r+1,\dots,p,$ that correspond to nonstationary relations. Because ${\lambda}_{i}=0$ does not change the likelihood function, the maximum is exclusively a function of the non-zero eigenvalues, ${L}_{\max}^{-2/T}=\left|{S}_{00}\right|\prod_{i=1}^{r}(1-{\lambda}_{i}),$ where ${S}_{00}$ is a covariance matrix defined in Johansen (1996). Using this expression, a likelihood ratio test for the determination of the cointegration rank, $r,$ can be derived for the hypothesis $\mathcal{H}(r)$: rank$(\Pi)\le r$ against $\mathcal{H}(p)$: rank$(\Pi)=p$, with the trace statistic

$$\tau_{p-r} = -T\sum_{i=r+1}^{p}\ln(1-\hat{\lambda}_{i}).$$
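The eigenvalue problem behind the trace test can be sketched in a few lines of numpy. The trivariate DGP below is invented (one cointegration relation, no short-run dynamics), so the $R$-form reduces to demeaned $\Delta x_t$ and $x_{t-1}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a trivariate system with r = 1 cointegration relation.
T, p = 400, 3
alpha = np.array([[-0.3], [0.1], [0.0]])
beta = np.array([[1.0], [-1.0], [0.0]])
x = np.zeros((T, p))
for t in range(1, T):
    x[t] = x[t - 1] + (alpha @ (beta.T @ x[t - 1])) + rng.normal(0, 1, p)

# R-form with no short-run dynamics: R0 = Dx_t, R1 = x_{t-1} (demeaned).
R0 = np.diff(x, axis=0)
R1 = x[:-1]
R0 = R0 - R0.mean(0)
R1 = R1 - R1.mean(0)

# Product-moment matrices and the Johansen eigenvalue problem:
# solve |lambda*S11 - S10 S00^{-1} S01| = 0.
S00 = R0.T @ R0 / len(R0)
S11 = R1.T @ R1 / len(R1)
S01 = R0.T @ R1 / len(R0)
M = np.linalg.solve(S11, S01.T @ np.linalg.solve(S00, S01))
eigvals = np.sort(np.linalg.eigvals(M).real)[::-1]

# Trace statistics: tau(p-r) = -T * sum_{i=r+1}^{p} ln(1 - lambda_i).
for r in range(p):
    trace = -len(R0) * np.log(1 - eigvals[r:]).sum()
    print(f"H(r={r}): trace = {trace:.1f}")
```

With this DGP the first eigenvalue is large and the remaining two are small, so the trace statistic drops sharply once the single cointegration relation has been counted.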

##### The Asymptotic Tables

The test statistic has a non-standard distribution that has been found by simulation. The so-called asymptotic tables provide simulated test statistics for the whole probability distribution given the number of unit roots. These tables can be closely approximated by a gamma distribution, and most software packages provide the p-value for the test based on this approximation. This section discusses general aspects of the asymptotic distributions and the rank tests. The next section shows how the former depend on the deterministic components in the model, in particular when there is a linear trend and/or a level shift in the data (and the model).

As the name suggests, the asymptotic tables are valid for large samples. What is “large” is often difficult to tell: it depends on the number of observations but also on how informative these observations are. Generally, one is on the safe side if the sample size is greater than 150; otherwise the asymptotic test statistics may give a poor approximation to the correct ones. For this reason, Johansen (2002a, 2002b) has derived so-called Bartlett corrections for the trace test that give a correct size. These are readily implemented in many software packages (CATS2 in RATS, Version 2, Dennis et al., 2006; CATS3 in OxMetrics, Version 3, Doornik & Juselius, 2017).

##### The Test Procedure

Usually, the value of $r={r}^{\ast}$ is determined based on a sequence of tests running from top to bottom, i.e., $\{r=0,\ p\ \text{unit roots}\},\ \{r=1,\ p-1\ \text{unit roots}\},\dots,\ \{r=p,\ 0\ \text{unit roots}\}.$ The first hypothesis that is not rejected for the chosen $p$-value delivers the value of ${r}^{\ast}$.
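If statsmodels is available, this top-down sequence is automated by `select_coint_rank`. The three-variable data below are simulated so that the true rank is one (two variables share a common trend, the third has its own); `det_order=0` includes a constant:

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import select_coint_rank

rng = np.random.default_rng(1)

# Three variables driven by two common stochastic trends, so r = 1.
T = 400
common = np.cumsum(rng.normal(size=(T, 2)), axis=0)
x = np.column_stack([
    common[:, 0] + rng.normal(0, 0.5, T),
    common[:, 0] + rng.normal(0, 0.5, T),  # cointegrated with the first
    common[:, 1] + rng.normal(0, 0.5, T),  # driven by its own trend
])

# Top-down sequence {r=0}, {r=1}, ...: first non-rejection gives r*.
res = select_coint_rank(x, det_order=0, k_ar_diff=1,
                        method="trace", signif=0.05)
print(res.rank)  # selected cointegration rank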

In the ideal case, the probability to reject a correct null hypothesis is small and the probability to accept a correct alternative hypothesis is high for relevant hypotheses. This is likely to be the case when the estimated eigenvalues are either very large or very small. The trace test is then likely to pick up the correct value ${r}^{\ast}$ with high probability and also have good power properties for relevant alternative hypotheses.^{1}

In other cases, when the estimated eigenvalues are in the region where it is hard to discriminate between significant and insignificant eigenvalues, the trace test often has low power against stationary, near unit root alternatives. For example, the trace test will often reject the stationarity of a true economic relationship if the adjustment back to equilibrium is very slow and the sample is small or moderately sized. In such a case the economic relation in question is likely to have a characteristic root close to the unit circle, a so-called near unit root. If this relation is (incorrectly) considered a stochastic trend, inference on the cointegration relations will be affected (Elliott, 1998; Franchi & Johansen, 2017).

##### Caveats Using Asymptotic Tables

Unfortunately, it is not only the size of the trace test that is important for a correct choice of rank $r$ but also the power. The latter is often low for relevant alternative hypotheses in the neighborhood of the unit circle. As a safeguard against an incorrect decision it is therefore advisable to use as much additional information as possible, such as:

- The characteristic roots of the model: if the ${(r+1)}^{th}$ cointegration vector is nonstationary and is wrongly included in the model, then the largest characteristic root will be close to the unit circle.^{2}

- The $t$-ratios of the $\alpha $-coefficients to the ${(r+1)}^{th}$ cointegration vector: if all of them are small, say less than 2.6, then one would not gain a lot by including the ${(r+1)}^{th}$ vector as a cointegrating relation in the model.

- The recursive graphs of the trace statistic for $\tilde{r}=1,2,\dots,p$: since $-{T}_{j}\,\text{ln}(1-{\lambda}_{i}),\ j={T}_{1},\dots,T,$ grows linearly over time when ${\lambda}_{i}\ne 0$, the recursively calculated components of the trace statistic should grow linearly for all $i=1,\dots,r,$ but stay constant for $i=r+1,\dots,p.$

- The graphs of the cointegrating relations: if the graph of a supposedly stationary cointegration relation reveals distinctly nonstationary behavior, one should reconsider the choice of $r,$ or find out whether the model specification is in fact incorrect. For example, data might be $I(2)$ instead of $I(1)$.

- The economic interpretability of the results: if, for example, the ${({r}^{\ast}+1)}^{th}$ cointegration relation resembles the economic relation of interest, a consumption-income relation say, and ${\alpha}_{{r}^{\ast}+1}$ shows that consumption and/or income adjusts significantly, then it does not seem reasonable to discard such a relation based on a statistical null of a unit root.
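The first of these checks, inspecting the characteristic roots, can be sketched as follows. The parameter values are invented; with rank one imposed on a bivariate system there is exactly one unit root by construction, and the deliberately slow adjustment produces a second root close to the unit circle:

```python
import numpy as np

# Illustrative ECM parameters for a candidate rank r = 1; in practice
# alpha, beta, Gamma1 would be the estimates.
alpha = np.array([[-0.05], [0.02]])   # very slow adjustment
beta = np.array([[1.0], [-1.0]])
Gamma1 = np.array([[0.2, 0.0], [0.0, 0.1]])

# Levels VAR(2) coefficients implied by the ECM (Gamma1 = -Pi2).
Pi = alpha @ beta.T
Pi1 = np.eye(2) + Pi + Gamma1
Pi2 = -Gamma1

# Companion matrix of the VAR(2); the moduli of its eigenvalues are the
# inverse characteristic roots. Moduli close to 1 signal (near) unit roots.
companion = np.block([[Pi1, Pi2],
                      [np.eye(2), np.zeros((2, 2))]])
moduli = np.sort(np.abs(np.linalg.eigvals(companion)))[::-1]
print(np.round(moduli, 3))
```

With these values the largest modulus is exactly one (the imposed unit root) and the second is roughly 0.9, illustrating the near unit root that makes the rank decision difficult.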

### The Cointegrated VAR Model

A major advantage of the CVAR type of model compared to standard regression models is that it allows a separation between short-run and long-run effects by combining differenced variables with cointegration among them. Another closely related advantage is that the distinction between ordinary and extraordinary effects (made possible by properly designed dummy variables) allows us to study the effect of institutional events (reforms, interventions) in the short and the long run. By imposing the cointegration rank $r$ on $\text{\Pi}$ and allowing for dummy variables in (2), both effects can be studied.

The CVAR for $k=2$ with reduced rank and dummy variables is formulated as:

$$\Delta x_t = \Gamma_1 \Delta x_{t-1} + \alpha\beta' x_{t-1} + \mu_0 + \mu_1 t + \Phi D_t + \epsilon_t, \qquad (4)$$

where ${\beta}^{\prime}{x}_{t-1}$ defines $r$ stationary linear combinations of the variables ${x}_{t}$ interpreted as deviations from equilibrium values, $\alpha$ describes how the system adjusts to the cointegrating relations, and ${\Gamma}_{1}\Delta{x}_{t-1}$ controls for short-run transitory effects.

It can be useful to think of the VAR model in the following way: assume one is interested in understanding why economic determinants change from one period to the next. For example, why did inflation go up compared to the previous month? Why did the nominal exchange rate drop? The model describes the change of inflation as a result of an unanticipated shock, ${\epsilon}_{t},$ possibly some policy reforms, $\text{\Phi}{D}_{t},$ an adjustment, $\alpha ,$ to a previous disequilibrium, ${\beta}^{\prime}{x}_{t-1},$ and some transitory feedback effects from previous changes in other variables, ${\Gamma}_{1}\Delta{x}_{t-1}.$ Thus, the model describes an economic system that is first pushed away from equilibrium by an exogenous shock and then starts adjusting back—how fast depends on the size of the $\alpha$ coefficients and the size of the transitory feedback effects, ${\Gamma}_{1}\Delta{x}_{t-1}$.

#### The CVAR and the Regression Model

The advantage of the CVAR formulation is that by transforming the trending variables, ${x}_{t},$ into stationary differences, $\text{\Delta}{x}_{t},$ and stationary cointegration relations, ${\beta}^{\prime}{x}_{t},$ a number of problems in the static regression model are more or less solved:

- Multicollinearity between the $x$ variables does not lead to imprecise estimates of the cointegration relations, ${\beta}^{\prime}{x}_{t},$ as two variables are cointegrated only if they share a common stochastic trend. The latter is defined as the cumulation of all permanent shocks that have pushed the variables out of equilibrium. While, for example, cointegration between two unrelated random walks will be rejected with high probability in the CVAR model, they may have a correlation coefficient close to one in small samples in the usual regression model (see Hendry & Juselius, 2000, 2001; Johansen, 2012).

- The cointegration coefficients are “canonical” in the sense of being invariant to increases in the information set or to changes in the direction of minimization. This is in contrast to the ordinary regression model, where coefficients change as new (correlated) variables are added.

- The removal of trends either by differencing or by cointegration is likely to make the multicollinearity between $\text{\Delta}{x}_{t}$ and ${\beta}^{\prime}{x}_{t}$ small enough not to be a problem. When ${x}_{t}\sim I(1)$, $\text{\Delta}{x}_{t}$ and ${\beta}^{\prime}{x}_{t}$ are stationary, and standard inference on $\left(\alpha ,{\text{\Gamma}}_{1},\text{\Omega}\right)$ applies for given $\beta $.
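The spurious-regression point can be demonstrated in a few lines: two independent random walks share no common stochastic trend, yet their levels can look strongly related, while their differences reveal the truth:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two *independent* random walks: no common stochastic trend, hence no
# cointegration, yet a level regression can look deceptively strong.
T = 200
y = np.cumsum(rng.normal(size=T))
z = np.cumsum(rng.normal(size=T))

corr_levels = np.corrcoef(y, z)[0, 1]
corr_diffs = np.corrcoef(np.diff(y), np.diff(z))[0, 1]
print(f"levels correlation:      {corr_levels:.2f}")
print(f"differences correlation: {corr_diffs:.2f}")
```

The level correlation varies wildly across simulations and can be large in magnitude for unrelated series; the correlation of the differences, by contrast, stays near zero, which is what cointegration analysis exploits.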

But while cointegration analysis is a powerful method for uncovering genuine relationships among variables, it is basically a statistical regularity that may break down if conditions change. Therefore, cointegration is no guarantee for structural invariance. Its coefficients might change when other parts of the structure change. This is closely related to the concept of super exogeneity in Engle et al. (1983).

#### The CVAR and the Dual Role of the Deterministic Terms

One complication of the CVAR model is that the deterministic terms play a different role for the differenced process (the short-run effects) and for the cointegration relations (the long-run effects). To give the intuition, (4) is reformulated in terms of deviations from mean values as shown in (7). When deriving $E\Delta{x}_{t}$ and $E({\beta}^{\prime}{x}_{t})$, the short-run dynamics ${\Gamma}_{1}\Delta{x}_{t-1}$ complicate the calculations without adding to the logic and are therefore set to zero. From (4) with ${\Gamma}_{1}\Delta{x}_{t-1}=0$ one can find

$$E\,\Delta x_t = \alpha E(\beta' x_{t-1}) + \mu_0 + \mu_1 t. \qquad (5)$$

Pre-multiplying (5) by ${\beta}^{\prime}$ gives

$$E(\beta' \Delta x_t) = \beta'\alpha\, E(\beta' x_{t-1}) + \beta'\mu_0 + \beta'\mu_1 t. \qquad (6)$$

Thus, both $E\text{\Delta}{x}_{t}$ and $E({\beta}^{\prime}{x}_{t})$ depend on the deterministic terms in a rather complex way. Juselius (2006) demonstrates how to distribute ${\mu}_{0}$ and ${\mu}_{1}t$ between the two when ${\text{\Gamma}}_{1}\text{\Delta}{x}_{t-1}=0.$ The idea can be illustrated by first decomposing ${\mu}_{0}=\alpha {\beta}_{0}+{\gamma}_{0}$ and ${\mu}_{1}t=\alpha {\beta}_{1}t+{\gamma}_{1}t.$ That the mean of an equilibrium error should be zero can then be used to determine ${\beta}_{0}$ for a given value of ${\beta}_{1}$.

Reformulating the CVAR with $E\Delta{x}_{t}={\gamma}_{0}+{\gamma}_{1}t$ and $E({\beta}^{\prime}{x}_{t})={\beta}_{0}+{\beta}_{1}t$ gives:

$$\Delta x_t = \alpha(\beta' x_{t-1} - \beta_0 - \beta_1 t) + \gamma_0 + \gamma_1 t + \epsilon_t. \qquad (7)$$

Equation (7) shows that an unrestricted constant and a trend in the CVAR implies $E\text{\Delta}{x}_{t}={\gamma}_{0}+{\gamma}_{1}t$ and $E{\beta}^{\prime}{x}_{t-1}={\beta}_{0}+{\beta}_{1}t.$

For a correct empirical specification of the CVAR, one must from the outset distinguish between data with and without deterministic trends:

- ${\mu}_{1}\ne 0:$ data contain deterministic trends (e.g., GDP, CPI). If ${\gamma}_{1}\ne 0,$ then the CVAR model is consistent with both quadratic and linear deterministic trends in the data. Therefore, unless quadratic trends are desirable, ${\mu}_{1}t=\alpha {\beta}_{1}t+{\gamma}_{1}t$ has to be restricted so that ${\gamma}_{1}=0.$ Even if ${\beta}_{1}\ne 0,$ this linear trend in the cointegration relations does not cumulate to a quadratic trend in the data. Finally, if ${\beta}_{1}=0$ but ${\gamma}_{0}\ne 0,$ then the data contain linear trends that cancel in the cointegration relations.

- ${\mu}_{1}=0:$ data do not contain deterministic trends (e.g., interest rates). When ${\beta}_{1}=0$ and ${\gamma}_{0}=0,$ the CVAR is consistent with stochastic but no deterministic trends. If ${\beta}_{0}\ne 0,$ then the cointegration relations need an intercept; otherwise they have a zero mean.

Thus, $E{\beta}^{\prime}{x}_{t-1}={\beta}_{0}+{\beta}_{1}t$ shows that the part of ${\mu}_{0}$ that is proportional to *α* measures an intercept (or an equilibrium mean) and the corresponding part of ${\mu}_{1}$ measures a trend coefficient in the cointegrating relations.

The deterministic components play an important role in the CVAR approach, partly because they are crucial for a correct model specification, partly because the asymptotic distribution of the trace test depends on these components. Therefore, one should at the outset decide whether the CVAR model needs to be specified with deterministic linear trends or without such trends. As a rule, variables for which the mean growth rate is different from zero, i.e., $E(\text{\Delta}{x}_{t})\ne 0,$ need a deterministic trend in the model. Examples of such variables are real GDP, consumption, investment, nominal price levels, etc. Variables for which the mean growth rate is zero, i.e., $E(\text{\Delta}{x}_{t})=0,$ should be specified without a linear trend in the model. Examples are interest rates, real exchange rates, stock returns, etc. But, because the latter may exhibit trending behavior in one direction or the other over a specific sample period, it is sometimes difficult to distinguish stochastic from deterministic trends. Nonetheless, economic or financial logic tells us that trending behavior in, for example, an interest rate must be stochastic rather than deterministic, or the model would be consistent with a money machine (and they do not exist). Similarly, even though inflation rates can exhibit linearly trending behavior over the chosen sample period, such trends must be considered stochastic. Otherwise it would imply quadratic deterministic trends in prices, and such high predictability is not plausible. If the data vector, ${x}_{t},$ contains both deterministically trending and non-trending variables, the VAR has to be specified with a linear trend. The fact that some variables need a trend while others do not must then affect the choice of identifying restrictions.

##### Data Contain Linear, But No Quadratic, Trends

In this case (4) is specified with the linear trend restricted to the cointegration relations and the constant unrestricted:

$$\Delta x_t = \Gamma_1 \Delta x_{t-1} + \alpha(\beta' x_{t-1} - \beta_1 t) + \mu_0 + \Phi D_t + \epsilon_t, \qquad (8)$$

consistent with $E\text{\Delta}{x}_{t}={\gamma}_{0}$ and $E({\beta}^{\prime}{x}_{t-1})={\beta}_{0}+{\beta}_{1}t.$ This implies that both the cointegration relations and the data contain a linear trend, a property that gives “similarity” in the probability analysis (a desirable property). The stationary equilibrium error, $({\beta}^{\prime}{x}_{t-1}-{\beta}_{0}-{\beta}_{1}t),$ contains a trend, but it does not cumulate in the process to produce a quadratic trend. Independently of whether ${\beta}_{1}=0$ or not, the cointegration rank should be determined in this model. Table 4 (restricted trend and unrestricted constant) in Johansen (1996) or Juselius (2006) is used for rank determination.

For the chosen rank, ${r}^{\ast},$ it is straightforward to test the hypothesis ${\beta}_{1}=0.$ If not rejected, then the linear trends in the data are canceled by cointegration; if rejected, then some cointegration relations are trend-stationary. A variable can also be trend-stationary by itself, for example, the output gap $y-{\beta}_{y}t$ is often found to be stationary. In the case when ${\beta}_{1}=0$ cannot be rejected, the rank should remain the same even though the model is respecified with ${\beta}_{1}=0.$

Because a constant term that appears twice in the model (such as ${\beta}_{0}$ and ${\gamma}_{0}$) will cause a singularity problem, the model first has to be estimated with an unrestricted constant ${\mu}_{0}$, which is then decomposed into $\alpha {\beta}_{0}$ and ${\gamma}_{0}.$ The value of ${\beta}_{0}$ can then be found by using $E({\beta}^{\prime}{x}_{t-1}-{\beta}_{0}-{\beta}_{1}t)=0$ so that ${\widehat{\beta}}_{0}=Avg.({\widehat{\beta}}^{\prime}{x}_{t-1}-{\widehat{\beta}}_{1}t)$, where $Avg.$ stands for the average over the sample period.

##### Data Contain No Linear Deterministic Trends

In this case, (4) is specified as

$$\text{\Delta}{x}_{t}=\alpha ({\beta}^{\prime}{x}_{t-1}-{\beta}_{0})+{\text{\Gamma}}_{1}\text{\Delta}{x}_{t-1}+{\epsilon}_{t},\qquad(9)$$

consistent with $E\text{\Delta}{x}_{t}=0$ and $E({\beta}^{\prime}{x}_{t-1})={\beta}_{0}.$ The constant is restricted to the cointegration relations and provides an estimate of the intercept of the cointegration relations (the equilibrium mean of the identified $\beta $ relations). Table 2 (restricted constant) in Juselius (2006) is used for the determination of the rank ${r}^{\ast}$. When the rank is found, it is always possible to test ${\beta}_{0}=0.$ This hypothesis is, however, seldom accepted, as ${\widehat{\beta}}_{0}$ also controls for starting values of the variables, which usually differ from zero.

Most software programs allow the user to choose between model specifications (8) and (9).

#### Dummy Variables in the CVAR Model

Changes in economic institutions, such as changes in regulations, taxes, interventions, etc., also cause changes in economic variables. Some interventions may have only a minor effect and can be considered random noise; others have a very significant effect and must be appropriately accounted for in the model, for example, by adding new variables or by using dummy variables as a proxy. For instance, the speculative attack on some of the European currencies at the beginning of the 1990s can be seen as (a couple of) extraordinarily large changes in the nominal exchange rate. For some of them the attack resulted in a drop or increase in the equilibrium mean of the real exchange rate as the nominal rate moved to a new and more sustainable level. If the CVAR model is estimated without properly accounting for such an event (showing up as non-normal, outlying observations), the model will attribute this big change to changes in the economic variables, hence biasing the model estimates.

As a rule, a dummy variable in the model should represent a known event, for example, a flooding, a drought, a political intervention, etc., i.e., an extraordinary event that cannot be explained by the chosen data set ${x}_{t}.$ Three of the most important cases are discussed below.

Case (i): The slope coefficient of the linear trend has changed in the sample period (for example as a result of a major financial deregulation).

In this case, $E({\beta}^{\prime}{x}_{t-1})={\beta}_{0}+{\beta}_{01}{D}_{s,xx,t}+{\beta}_{1}t+{\beta}_{11}{t}_{xx,t},$ where ${t}_{xx,t}$ is $0$ before the date $xx$ and $1,2,3,\ldots$ after that date, ${D}_{s,xx,t}=\text{\Delta}{t}_{xx,t},$ and $E(\text{\Delta}{x}_{t})={\gamma}_{0}+{\gamma}_{01}{D}_{s,xx,t}.$ This specification allows for a broken trend both in the cointegration relations and in the data, similarly to the pure trend case above. The step dummy appears twice in the model (${\beta}_{01}{D}_{s,xx,t}$ and ${\gamma}_{01}{D}_{s,xx,t}$) and, hence, has to enter the model unrestrictedly to avoid singularity problems. In this case the asymptotic tables are no longer correct and need to be re-simulated by controlling for where in the sample the break takes place. Some software programs contain this option.

Case (ii): Data contain no trends, but the equilibrium mean has changed in the sample period.

This corresponds to $E({\beta}^{\prime}{x}_{t-1})={\beta}_{0}+{\beta}_{01}{D}_{s,xx,t}$, where ${D}_{s,xx,t}$ is 1 from the date $xx$ to the end of the sample and 0 otherwise, and $E(\text{\Delta}{x}_{t})={D}_{p,xx,t}$, where ${D}_{p,xx,t}=\text{\Delta}{D}_{s,xx,t}$ is an impulse dummy, which is included unrestrictedly in the model. It describes a situation where the equilibrium mean has shifted in the sample period, for example, as a result of a political reform. A shift in the equilibrium mean affects the asymptotic tables, which have to be simulated for each specific model specification by controlling for where in the sample period the shift has taken place.

Cases (iii) and (iv): Data contain permanent and/or transitory outliers.

Such outliers can be controlled for by permanent and transitory impulse dummies, ${D}_{p,xx,t}$ and ${D}_{tr,xx,t}$: *(iii)* the permanent impulse dummy ${D}_{p,xx,t}$ is $1$ for the date $xx$ and 0 otherwise, and *(iv)* the transitory impulse dummy ${D}_{tr,xx,t}$ is 1 for the date $xx,$ $-1$ for $xx+1,$ and 0 otherwise. The asymptotic tables are not affected by permanent and transitory dummy variables.
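The deterministic terms of cases (i)–(iv) are easy to build and check numerically. The sketch below uses an invented break date; the asserts verify that the definitions are internally consistent (the step dummy is the first difference of the broken trend, and the permanent impulse is the first difference of the step dummy, as stated in the text).

```python
# Illustrative sketch: constructing the deterministic dummies of cases
# (i)-(iv) for a hypothetical break date xx (all values invented).
import numpy as np

T, xx = 100, 40                                   # sample size, break index
t_vals = np.arange(T)
t_xx = np.where(t_vals >= xx, t_vals - xx + 1.0, 0.0)  # broken trend 0,..,0,1,2,3,..
D_s = (t_vals >= xx).astype(float)                # step dummy (case ii)
D_p = np.zeros(T); D_p[xx] = 1.0                  # permanent impulse (case iii)
D_tr = np.zeros(T); D_tr[xx] = 1.0; D_tr[xx + 1] = -1.0  # transitory (case iv)

# Consistency with the definitions in the text:
assert np.allclose(np.diff(t_xx, prepend=0.0), D_s)   # D_s = delta(t_xx)
assert np.allclose(np.diff(D_s, prepend=0.0), D_p)    # D_p = delta(D_s)
assert np.allclose(np.diff(D_p, prepend=0.0), D_tr)   # D_tr = delta(D_p)
```

Note that differencing moves each dummy one "level" down (broken trend → step → permanent impulse → transitory impulse), which is why a mean shift in the levels shows up only as an impulse in the differences.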

### The Moving Average Representation of the CVAR

Equation (4) can be inverted to describe ${x}_{t}$ as a function of ${\epsilon}_{t},$ *constant*, *trend*, and ${D}_{t}:$

$${x}_{t}=C{\sum}_{i=1}^{t}({\epsilon}_{i}+{\gamma}_{0}+{\gamma}_{01}{D}_{s,xx,i})+{C}^{\ast}(L)({\epsilon}_{t}+\text{\Phi}{D}_{t})+{X}_{0},\qquad C={\beta}_{\perp}({{\alpha}^{\prime}}_{\perp}\text{\Gamma}{\beta}_{\perp}{)}^{-1}{{\alpha}^{\prime}}_{\perp},\qquad(10)$$

where ${\beta}_{\perp}$ is the orthogonal complement to $\beta $ and ${\alpha}_{\perp}$ is the orthogonal complement to $\alpha .$ The term ${{\alpha}^{\prime}}_{\perp}{\sum}_{i=1}^{t}{\epsilon}_{i}$ describes $p-r$ underlying stochastic trends, ${{\alpha}^{\prime}}_{\perp}{\sum}_{i=1}^{t}({\gamma}_{0}+{\gamma}_{01}{D}_{s,xx,i})$ describes a linear broken trend, and ${\beta}_{\perp}$ describes how these trends load into the variables. The second term describes the short-run dynamic effects of transitory changes in the system, and ${X}_{0}$ is a catch-all for initial values of the process.

Equation (10) is essentially a summary of the mechanisms generating the data process ${x}_{t}.$ It shows how permanent shocks to the system cumulate into stochastic trends that push the variables into nonstationary trajectories. The coefficients of ${\alpha}_{\perp}$ are informative about the sources of the exogenous shocks, and those of ${\beta}_{\perp}$ about how these shocks are loaded into the variables. Based on (10) one can calculate so-called impulse response functions, describing how a shock to a variable transmits through the system (described by ${C}^{\ast}(L))$ until it reaches its final impact given by ${\beta}_{\perp}.$ Thus, (10) describes the forces that have pushed the variables into nonstationary trajectories and (4) the forces that pull the variables back to equilibrium once they have been pushed away. In the CVAR jargon these are the pushing and pulling forces of the system.
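The long-run impact matrix of the moving average representation can be computed directly from the pulling-force parameters. The sketch below uses invented values of $\alpha$, $\beta$, and $\Gamma$ for a 3-variable, rank-1 system and builds the orthogonal complements numerically; the sanity checks confirm the defining properties (shocks in the $\alpha$ directions have no long-run impact, and the long-run movements lie in the space spanned by $\beta_{\perp}$).

```python
# Illustrative sketch: computing the long-run impact matrix
# C = beta_perp (alpha_perp' Gamma beta_perp)^{-1} alpha_perp'
# from hypothetical (invented) parameter values of a 3-variable, r = 1 CVAR.
import numpy as np
from scipy.linalg import null_space

alpha = np.array([[-0.3], [0.1], [0.0]])   # adjustment (pulling) coefficients
beta = np.array([[1.0], [-1.0], [0.5]])    # cointegration vector
Gamma = np.eye(3) - 0.2 * np.ones((3, 3))  # short-run dynamics (illustrative)

alpha_perp = null_space(alpha.T)           # 3 x 2 basis, alpha_perp' alpha = 0
beta_perp = null_space(beta.T)             # 3 x 2 basis, beta_perp' beta = 0

C = beta_perp @ np.linalg.inv(alpha_perp.T @ Gamma @ beta_perp) @ alpha_perp.T

# Sanity checks on the defining properties of C:
assert np.allclose(C @ alpha, 0.0)   # shocks in the alpha directions die out
assert np.allclose(beta.T @ C, 0.0)  # long-run movements cancel in beta'x
```

The zero third row of $\alpha$ here makes the third variable weakly exogenous, so its cumulated shocks act as one of the $p-r=2$ common trends.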

### Second-Order Persistence: The *I*(2) Model

The $I(2)$ model has a very rich structure but is algebraically more complex than the $I(1)$ model, although the basic ideas are similar. The complexity might explain why there are relatively few applications in the literature. Another reason is that many economists find it implausible that economic variables move away from their equilibrium values for infinitely long periods. Hence, most economic relations should be either stationary or at most *near* $I(1).$ While it is clearly correct that economic variables or relations do not wander away forever, this does not exclude the possibility that they can exhibit a persistence that is indistinguishable from a unit root or a double unit root process over finite samples. In this vein, Juselius (2012) argues that the classification of variables into single or double unit roots should be seen as a useful way of ordering the data into more homogeneous groups.

Nominal growth rates, in particular, are often found to be very persistent in one direction or the other and thus exhibit little evidence of mean reversion. For example, over the last half decade many inflation rates in industrialized countries have been sufficiently persistent not to be rejected as $I(1)$ by unit root testing. But if inflation rates are empirically $I(1),$ then prices are $I(2)$ and need to be analyzed in the $I(2)$ model.

To investigate the possibility of $I(2),$ (4) is rewritten in its equivalent form:

$${\text{\Delta}}^{2}{x}_{t}=\alpha {\beta}^{\prime}{x}_{t-1}+\text{\Gamma}\text{\Delta}{x}_{t-1}+{\mu}_{0}+{\epsilon}_{t},\qquad(11)$$

where $\text{\Gamma}=-(I-{\text{\Gamma}}_{1}).$ If the differenced process also exhibits strong persistence so that $\text{\Delta}{x}_{t}\sim I(1)$ and, hence, ${x}_{t}\sim I(2),$ then (a linear transformation of) $\text{\Gamma}$ also has reduced rank. This is formulated as an additional reduced rank hypothesis:

$${{\alpha}^{\prime}}_{\perp}\text{\Gamma}{\beta}_{\perp}=\xi {\eta}^{\prime},$$

where $\xi ,\eta $ are $(p-r)\times {s}_{1}$ and ${\alpha}_{\perp},{\beta}_{\perp}$ are orthogonal complements of $\alpha ,\beta ,$ respectively (Johansen, 1992, 1995). While the $I(1)$ reduced rank condition is associated with the levels of the variables, the $I(2)$ condition is associated with the differenced variables. The intuition is that the differenced process also contains unit roots when data are $I(2)$.

Nielsen and Rahbek (2007) derive the maximum likelihood trace test which can be used to determine the values of $r,{s}_{1},$ and ${s}_{2}$, where ${s}_{1}$ stands for the number of $I(1)$ trends, ${s}_{2}$ the number of $I(2)$ trends, and $p-r={s}_{1}+{s}_{2}.$

Because the $I(2)$ condition is formulated as a reduced rank on the transformed $\text{\Gamma}$ matrix, the latter is no longer unrestricted as in the $I(1)$ model. To circumvent this problem the following parameterization (see Doornik & Juselius, 2017; Johansen, 1997, 2006) can be used:

$${\text{\Delta}}^{2}{x}_{t}=\alpha [{\tilde{\beta}}^{\prime}{\tilde{x}}_{t-1}+{d}^{\prime}\text{\Delta}{\tilde{x}}_{t-1}]+\zeta ({\tau}^{\prime}\text{\Delta}{\tilde{x}}_{t-1})+{\epsilon}_{t}.$$

The relation in the hard brackets corresponds to the polynomially cointegrated relation, ${\tilde{\beta}}^{\prime}{\tilde{x}}_{t-1}+{d}^{\prime}\text{\Delta}{\tilde{x}}_{t-1},$ with ${{\tilde{x}}^{\prime}}_{t}=[{x}_{t},t].$ It describes a situation where the deviations from a long-run static equilibrium, ${\tilde{\beta}}^{\prime}{\tilde{x}}_{t},$ are a (near) $I(1)$ process and, therefore, have to be combined with the differenced process, ${d}^{\prime}\text{\Delta}{\tilde{x}}_{t},$ to become stationary. Such a relation can often be interpreted as a dynamic rather than a static equilibrium relation, the latter being typical of the $I(1)$ model.

The relation in soft brackets, $\zeta {\tau}^{\prime}\text{\Delta}{\tilde{x}}_{t-1},$ where $\tau =[\tilde{\beta},{\tilde{\beta}}_{\perp 1}]$, is associated with medium-run relations among the differenced variables. The cointegration relations ${\tau}^{\prime}{x}_{t},$ consisting of ${\tilde{\beta}}^{\prime}{x}_{t}$ and ${{\tilde{\beta}}^{\prime}}_{\perp 1}{x}_{t},$ take the process from $I(2)$ to $I(1)$. The difference between the two is that the former can become stationary either by polynomial cointegration (${\tilde{\beta}}^{\prime}{\tilde{x}}_{t}+{d}^{\prime}\text{\Delta}{\tilde{x}}_{t}\sim I(0))$ or by differencing (${\tilde{\beta}}^{\prime}\text{\Delta}{\tilde{x}}_{t}\sim I(0)),$ whereas the latter can only by differencing (${{\tilde{\beta}}^{\prime}}_{\perp 1}\text{\Delta}{x}_{t}\sim I(0))$. While the economic interpretation of ${{\tilde{\beta}}^{\prime}}_{\perp 1}\text{\Delta}{x}_{t}$ is not always straightforward, Juselius and Assenmacher (2017), Juselius (2017b), and Juselius and Stillwagon (2017) interpret such relations as medium-run relationships among growth rates, describing, for example, momentum trading along the trend in the foreign exchange market. The latter is often due to technical trading (Frydman & Goldberg, 2011).

### Identification When Data Are Nonstationary

In contrast to standard economic models, the CVAR does not distinguish between endogenous and exogenous variables: all stochastic variables are modeled and exogeneity of a variable is tested as a zero row restriction on the $\alpha $ matrix rather than assumed from the outset. The separation between the $r$ pulling and the $p-r$ pushing forces implies that the CVAR is inherently consistent with $r$ equilibrium relations estimated in the form of stationary equilibrium errors, ${\beta}^{\prime}{x}_{t},$ and $p-r$ exogenous trends, ${{\alpha}^{\prime}}_{\perp}{\displaystyle {\sum}_{i=1}^{t}}{\epsilon}_{i},$ where ${{\alpha}^{\prime}}_{\perp}$ is a $(p-r)\times p$ matrix orthogonal to $\alpha $.

As discussed earlier, the unrestricted equilibrium errors ${\beta}^{\prime}{x}_{t}$ are obtained by solving an eigenvalue problem. While the estimated $\beta $ are uniquely determined given the eigenvalue vector normalization, they cannot in general be given an economic interpretation without imposing further restrictions. In some cases an economic interpretation may not be needed, for example, if the purpose is forecasting rather than finding economic structures.

The exogenous trends are cumulations of latent “structural shocks” to the system, such as demand and/or supply shocks, estimated by a linear combination of the CVAR residuals, ${{\alpha}^{\prime}}_{\perp}{\widehat{\epsilon}}_{t}$. Unless a variable is strongly exogenous (the corresponding row in $\alpha $ and ${\text{\Gamma}}_{i}$ are zero), the exogenous trends ${{\alpha}^{\prime}}_{\perp}{\sum}_{i=1}^{t}{\epsilon}_{i}$ do not correspond to any variable, ${x}_{j,t}$. When a variable ${x}_{j,t}$ is weakly but not strongly exogenous, it may, and often does, differ fundamentally from the corresponding exogenous trend ${\sum}_{i=1}^{t}{\epsilon}_{j,i}.$

#### Identification of Pulling and Pushing Forces

The dichotomy of pulling and pushing forces in the CVAR makes it possible to address identification in four dimensions: the identification of (1) the long-run cointegration structure, (2) the short-run adjustment structure, (3) the exogenous driving shocks, and (4) the dynamics of the impulse responses (see Juselius, 2006, for a detailed treatment). The focus here is on (1) with some discussion of (2).

To illustrate the relationship between long-run and short-run identification, the CVAR model (4) is pre-multiplied by the current effects matrix ${A}_{0}$:

$${A}_{0}\text{\Delta}{x}_{t}={a}_{1}{\beta}^{\prime}{x}_{t-1}+{A}_{1}\text{\Delta}{x}_{t-1}+{\mu}_{0,a}+{v}_{t},\qquad(12)$$

where ${A}_{1}={A}_{0}{\text{\Gamma}}_{1},$ ${a}_{1}={A}_{0}\alpha ,$ ${\mu}_{0,a}={A}_{0}{\mu}_{0},$ ${v}_{t}={A}_{0}{\epsilon}_{t},$ and $\text{\Sigma}={A}_{0}\text{\Omega}{{A}^{\prime}}_{0}.$ It appears that $\beta $ is the same in the “reduced form” (4) and the “contemporaneous form” (12). Hence, $\beta $ can be estimated based on either form (Juselius, 2006). The fact that the estimate of $\beta $ is $T$-consistent while the estimates of the short-run adjustment parameters are $\sqrt{T}$-consistent means that identification can be performed in two steps: (a) the identification of the long-run parameters, $\beta ,$ and (b) the identification of the short-run structure conditional on the identified $\widehat{\beta}$ (Johansen, 1995). Johansen and Juselius (1994) show how to impose identifying restrictions on the long-run structure, ${\beta}^{\prime}{x}_{t},$ and argue that one should distinguish between generic, empirical, and economic identification.

Generic identification of the $r$ (simultaneous) long-run relations requires at least $r(r-1)$ restrictions, and the short-run adjustment equations at least $p(p-1)$ restrictions. In both cases the restrictions have to satisfy the identification rank conditions derived for the CVAR model by Johansen (1995) and Johansen and Juselius (1994).^{3} This separation of long-run and short-run effects is extremely useful in empirical work as it simplifies an otherwise very complex identification task.

Empirical identification is generally satisfied when all estimated coefficients in a generically identified structure are statistically significant but fails if a coefficient necessary for identification is insignificant. This is because setting such a coefficient to zero will imply loss of generic identification.

Economic identification is satisfied when the estimated structure is meaningful and interpretable from an economic point of view.

The identification of ${\beta}^{\prime}{x}_{t}$ is about finding meaningful relationships among endogenous and exogenous variables and is in many ways similar to a traditional identification exercise in simultaneous equations. Identification of the short-run structure is basically about how to identify short-run causal links in the data. This is achieved by imposing restrictions on the contemporaneous matrix, ${A}_{0},$ the transitory effects, ${\text{\Gamma}}_{i},$ and the adjustment coefficients, $\alpha .$ Economic identification of the short-run structure generally requires the residuals to be uncorrelated. Large off-diagonal elements of the covariance matrix, $\text{\Omega}$, arise when the current changes of the system variables are strongly correlated. This can be because they are genuinely associated in a simultaneous way or because the variables are simultaneously affected by omitted variables. It is generally a non-trivial task to impose restrictions on the short-run structure that yield uncorrelated residuals and at the same time are economically meaningful.

#### Identification of the Long-Run Structure

An identified cointegration structure consists of $r$ irreducible cointegration relations, where irreducibility implies that stationarity is lost if one of the variables is omitted from the relation (Davidson, 1998). Hence, they contain exactly the right number of variables needed to make the relation stationary—no less, no more. There is, however, no reason to expect the number of irreducible relations to be the same as the number of postulated economic relations. Consequently, a cointegration relation does not necessarily correspond to a hypothetical economic relation. The latter is often a linear combination of two or more irreducible cointegration relations weighted by $\alpha $ coefficients, for example, ${\alpha}_{1j}{{\beta}^{\prime}}_{1}{x}_{t}+{\alpha}_{2j}{{\beta}^{\prime}}_{2}{x}_{t},$ where ${{\beta}^{\prime}}_{1}{x}_{t}$ and ${{\beta}^{\prime}}_{2}{x}_{t}$ are irreducible.

Because any linear combination of $r$ cointegration relations is also a stationary relation, there are usually many ways to identify a structure of irreducible relations. For example, if ${x}_{1,t}-{x}_{2,t}$ and ${x}_{2,t}-{x}_{3,t}$ are stationary, then ${x}_{1,t}-{x}_{3,t}$ is also stationary. The long-run structure (${{\beta}^{\prime}}_{1}{x}_{t},{{\beta}^{\prime}}_{2}{x}_{t})$ can then be identified by either $({x}_{1,t}-{x}_{2,t},{x}_{2,t}-{x}_{3,t})$ or $({x}_{1,t}-{x}_{2,t},{x}_{1,t}-{x}_{3,t})$, noting that one of the sets may not be economically identified.

Thus, one may think of a generically identified structure of $r$ irreducible cointegration relations, ${\beta}^{\prime}{x}_{t},$ as building blocks that can be used to construct meaningful economic relations with the help of the $\alpha $ coefficients. To make economic sense (to satisfy economic identification), a cointegration relation (either by itself or as a linear combination weighted by the $\alpha $ coefficients) has to be interpretable as a deviation from an underlying equilibrium relation. Hence, economic identification is generally incomplete without combining irreducible cointegration relations with the short-run adjustment coefficients. This is different from a traditional simultaneous equation model associating a number of endogenous variables with a number of exogenous variables and lagged endogenous and exogenous variables. Identification is then mostly achieved by exclusion restrictions, and causality is implicitly assumed by normalizing on a postulated endogenous variable in each equation. For further reading, see Juselius (2015).

#### The Curse of Dimensionality

Identification of the CVAR model is often challenging but still feasible as long as the dimension of the system is not too big. In contrast, economic systems are often large and complex. To handle this dilemma one can exploit certain invariance properties of cointegration when searching for structure. For example, the cointegration property is invariant to extensions of the information set. If cointegration is found between a set of variables in a small CVAR model, the same cointegration relation will be found in a CVAR model with a larger set of variables. Adding new variables to the CVAR model is, however, likely to increase the cointegration rank, and, hence, new cointegration relations would have to be identified. The invariance property of a cointegration relation does not, however, extend to the short-run adjustment coefficients. For example, a variable found to be exogenous in a smaller model may no longer be so in a larger model (Johansen & Juselius, 2014). Also allowing for simultaneous effects among the endogenous variables is likely to change $\alpha $ and ${\text{\Gamma}}_{1}$ in (4). While this suggests that economic identification should be based on a fairly complete CVAR model, experience shows that identification of the long-run structure tends to become increasingly difficult as the number of variables increases. Fortunately, the invariance property of a cointegration relation can be used to gradually expand the CVAR, building on previously found cointegration relations. Such a procedure allows us to systematically exploit the effect of the *ceteris paribus* assumption on the empirical conclusions (for an illustration, see Juselius, 2006).

### Linking Theory With Empirical Evidence: A CVAR Scenario

How to link a theoretical model with empirical evidence in a scientifically valid way is a tremendously difficult task that has been much debated for as long as economics has existed. Among the early pioneers, Ragnar Frisch and Trygve Haavelmo can be mentioned as forerunners to the modern likelihood-based approach to empirical economics. Juselius (2015) argues that the CVAR model builds on the principles of Haavelmo’s 1944 Nobel Prize–winning monograph and additionally provides solutions to most of the then outstanding econometric problems.

In his famous monograph from 1944, Haavelmo introduced the concept of a “designed experiment for data obtained by passive observation” as a means to discuss the difficult link between a theory model and macroeconomic data. Hoover and Juselius (2015) argue that a so-called “theory-consistent CVAR scenario” may represent such an experiment. Juselius (2015) translates one of Haavelmo’s own models into a CVAR scenario and shows that all underlying assumptions of the former can be tested by the CVAR in a likelihood-based framework.

The dilemma facing an empirical economist/econometrician is that there are many economic models but only one economic reality: Which of them should be chosen? Instead of choosing one model and forcing it onto the data, the CVAR approach structures the economic data to obtain broad confidence intervals within which potentially relevant economic models should fall. The link between the theory model and the data is achieved by formulating a theory-consistent CVAR scenario by carefully matching basic assumptions on the theoretical model’s shock structure and steady-state behavior with testable hypotheses on the CVAR’s common stochastic trends and cointegration relations (Juselius, 2006, 2017a, 2017b; Juselius & Franchi, 2007; Møller, 2008). Such a scenario describes a set of testable empirical regularities one should expect to see in the data if basic assumptions of the theoretical model were empirically valid. A theoretical model that passes the first check of such basic properties is potentially an empirically relevant model.

The advantage of such an approach is that the number of autonomous shocks is tested rather than assumed; the stationarity of a steady-state relation is tested rather than assumed; the exogeneity of a variable is tested rather than assumed; long-run price homogeneity is tested rather than assumed, and so on. Another advantage is that a CVAR scenario also can be used to discriminate between competing models. Therefore, its systematic use is likely to enhance the ability to develop empirically relevant economic models. This is illustrated in Juselius (2017a, 2017b) by applying the procedure to two types of monetary models for exchange rate determination, one relying on the rational expectations hypothesis and the other on imperfect knowledge–based expectations. When tested, the data failed to support the rational expectations’ model, whereas the imperfect knowledge–based model obtained a remarkable fit. In a similar vein, Juselius (2006) formulates a theory-consistent CVAR scenario for a monetary model of inflation dynamics and finds that most of the basic assumptions fail to obtain empirical support. Juselius and Franchi (2007) formulate a CVAR scenario for a real business cycle model in Ireland (2004) and find that the data support a Keynesian explanation.

In all of the above cases, the scenario analysis was able to uncover features in the data that were inconsistent with or absent in the proposed theoretical model, thereby suggesting how to modify the model in an empirically relevant way. In particular, the pronounced persistence in the data (measured as near $I(2)$ and seemingly associated with financial deregulation) seemed to indicate that unregulated markets tend to drive prices away from equilibrium values for extended periods of time.

The structure of a CVAR scenario resembles in many ways the so-called dynamic stochastic general equilibrium (DSGE) model. The main difference is that the pulling and pushing structures of the latter are based on fairly detailed theoretical assumptions of an economic model and these assumptions are not subjected to empirical scrutiny to the same extent as the CVAR scenarios. If the DSGE model is a good description of the empirical reality, then the two approaches would more or less coincide. See, for example, Juselius and Franchi (2007), where a DSGE model was exposed to a battery of CVAR scenario tests and failed on basically all accounts.

Because many economic models, including the DSGE models, tend to impose many untested restrictions on the data, the empirical model analysis is prone to be less open to signals suggesting that the theory is incorrect or in need of modification (see Juselius, 2011a, 2011b; Colander, Howitt, Kirman, Leijonhufvud, & Mehrling, 2008). Several papers in the special issue of the electronic journal *Economics* illustrate this point (Juselius, 2009a). Therefore, to assume that we know what the empirical model should tell us and then insist that the results follow can potentially be a disaster for our empirical understanding of the economy, as the Great Recession tragically illustrates. The CVAR methodology has been developed as a tool for avoiding confirmation bias in economics by emphasizing that falsification is more important than confirmation.

#### Further Reading

- Hoover, K. D., Johansen, S., & Juselius, K. (2009). Allowing the data to speak freely: The macroeconometrics of the cointegrated vector autoregression. *American Economic Review*, *98*, 251–255.
- Johansen, S. (1996). *Likelihood-based inference in cointegrated vector autoregressive models*. Oxford: Oxford University Press.
- Juselius, K. (2006). *The cointegrated VAR model: Methodology and applications*. Oxford: Oxford University Press.
- Juselius, K. (2015). Haavelmo’s probability approach and the cointegrated VAR model. *Econometric Theory*, *31*(2), 213–232.

#### References

- Boswijk, H. P., Cavaliere, G., Rahbek, A., & Taylor, A. M. R. (2016). Inference on co-integration parameters in heteroskedastic vector autoregressions. *Journal of Econometrics*, *192*(1), 64–85.
- Cavaliere, G., Rahbek, A., & Taylor, A. M. R. (2014). Bootstrap determination of the co-integration rank in heteroskedastic VAR models. *Econometric Reviews*, *33*(5–6), 606–650.
- Clements, M. P., & Hendry, D. F. (1999). *Forecasting non-stationary time series*. Cambridge, MA: MIT Press.
- Clements, M. P., & Hendry, D. F. (2008). Economic forecasting in a changing world. *Capitalism and Society*, *3*, 1–18.
- Colander, D., Goldberg, M., Haas, A., Juselius, K., Kirman, A., Lux, T., & Sloth, B. (2009). The financial crisis and the systemic failure of the economics profession. *Critical Review*, *21*, 3.
- Colander, D., Howitt, P., Kirman, A., Leijonhufvud, A., & Mehrling, P. (2008). Beyond DSGE models: Toward an empirically based macroeconomics. *American Economic Review*, *98*, 236–240.
- Davidson, J. (1998). Structural relations, cointegration and identification: Some simple results and their application. *Journal of Econometrics*, *87*(1), 87–113.
- Dennis, J. G., Hansen, H., Johansen, S., & Juselius, K. (2006). *CATS in RATS: Cointegration analysis of time series* (Version 2). Evanston, IL: Estima.
- Doornik, J., & Juselius, K. (2017). *Cointegration analysis of time series using CATS 3 for OxMetrics*. London: Timberlake.
- Elliott, G. (1998). On the robustness of cointegration methods when regressors almost have unit roots. *Econometrica*, *66*, 149–158.
- Engle, R. F., Hendry, D. F., & Richard, J.-F. (1983). Exogeneity. *Econometrica*, *51*(2), 277–304.
- Franchi, M., & Johansen, S. (2017). Improved inference on cointegrating vectors in the presence of a near unit root using adjusted quantiles. *Econometrics*, *5*(2), 25.
- Frydman, R., & Goldberg, M. (2011). *Beyond mechanical markets: Asset price swings, risk, and the role of the state*. Princeton, NJ: Princeton University Press.
- Frydman, R., & Goldberg, M. (2013). Change and expectations in macroeconomic models: Recognizing the limits to knowability. *Journal of Economic Methodology*, *20*, 118–138.
- Haavelmo, T. (1944). The probability approach to econometrics. *Econometrica*, *12*(Suppl.), 1–118.
- Hands, D. W. (2013). Introduction to symposium on reflexivity and economics: George Soros’s theory of reflexivity and the methodology of economic science. *Journal of Economic Methodology*, *20*, 303–308.
- Hendry, D. F., & Juselius, K. (2000). Explaining cointegration analysis: Part I. *The Energy Journal*, *21*(1), 1–42.
- Hendry, D. F., & Juselius, K. (2001). Explaining cointegration analysis: Part II. *The Energy Journal*, *22*(1), 75–120.
- Hendry, D. F., & Mizon, G. E. (1993). Evaluating econometric models by encompassing the VAR. In P. C. Phillips (Ed.), *Models, methods and applications of econometrics* (pp. 272–300). Oxford: Blackwell.
- Hommes, C. H. (2006). Heterogeneous agent models in economics and finance. In L. Tesfatsion & K. L. Judd (Eds.), *Handbook of computational economics 2* (pp. 1109–1186). Amsterdam: Elsevier.
- Hommes, C. H. (2013). Reflexivity, expectations feedback and almost self-fulfilling equilibria: Economic theory, empirical evidence and laboratory experiments. *Journal of Economic Methodology*, *20*, 406–419.
- Hoover, K. (2006). The past as future: The Marshallian approach to post Walrasian econometrics. In D. Colander (Ed.), *Post Walrasian macroeconomics: Beyond the dynamic stochastic general equilibrium model* (pp. 239–257). Cambridge, UK: Cambridge University Press.
- Hoover, K., Johansen, S., & Juselius, K. (2008). Allowing the data to speak freely: The macroeconometrics of the cointegrated VAR. *American Economic Review*, *98*, 251–255.
- Hoover, K., & Juselius, K. (2015). Trygve Haavelmo’s experimental methodology and scenario analysis in a cointegrated vector autoregression. *Econometric Theory*, *31*(2), 249–274.
- Hoover, K. D., Johansen, S., & Juselius, K. (2009). Allowing the data to speak freely: The macroeconometrics of the cointegrated vector autoregression. *American Economic Review*, *98*, 251–255.
- Johansen, S. (1992). A representation of vector autoregressive processes integrated of order 2. *Econometric Theory*, *8*, 188–202.
- Johansen, S. (1995). Identifying restrictions of linear equations. With applications to simultaneous equations and cointegration. *Journal of Econometrics*, *69*(1), 111–132.
- Johansen, S. (1996). *Likelihood-based inference in cointegrated vector autoregressive models*. Oxford: Oxford University Press.
- Johansen, S. (1997). Likelihood analysis of the I(2) model. *Scandinavian Journal of Statistics*, *24*(4), 433–462.
- Johansen, S. (2002a). A small sample correction for the test of cointegrating rank in the vector autoregressive model. *Econometrica*, *70*(5), 1929–1961.
- Johansen, S. (2002b). A small sample correction for tests of hypotheses on the cointegrating vectors. *Journal of Econometrics*, *111*(2), 195–221.
- Johansen, S. (2006). Statistical analysis of hypotheses on the cointegrating relations in the I(2) model. *Journal of Econometrics*, *132*, 81–115.
- Johansen, S. (2012). The analysis of nonstationary time series using regression, correlation and cointegration. *Contemporary Economics*, *6*(2), 40–57.
- Johansen, S., & Juselius, K. (1994). Identification of the long-run and short-run structure: An application to the ISLM model. *Journal of Econometrics*, *63*, 7–36.
- Johansen, S., & Juselius, K. (2014). An asymptotic invariance property of the common trends under linear transformations of the data. *Journal of Econometrics*, *178*(Pt. 2), 310–315.
- Juselius, K. (2006). *The cointegrated VAR model: Methodology and applications*. Oxford: Oxford University Press.
- Juselius, K. (2009a). Special issue on using econometrics for assessing economic models—An introduction. *Economics: The Open-Access, Open-Assessment E-Journal*, *3*, 2009–2028.
- Juselius, K. (2009b). The long swings puzzle: What the data tell when allowed to speak freely. In T. C. Mills & K. Patterson (Eds.), *The new Palgrave handbook of empirical econometrics*. London: Macmillan.
- Juselius, K. (2011a). On the role of theory and evidence in macroeconomics. In W. Hands & J. Davis (Eds.), *The Elgar companion to recent economic methodology* (p. 27). Edward Elgar.
- Juselius, K. (2011b). Time to reject the privileging of economic theory over empirical evidence? A reply to Lawson.
*The Cambridge Journal of Economics*,*35*(2), 423–436. - Juselius, K. (2012). Imperfect knowledge, asset price swings and structural slumps: A cointegrated VAR analysis of their interdependence. In E. S. Phelps & R. Frydman (Eds.),
*Rethinking expectations: The way forward for macroeconomics*(pp. 328–350). Princeton, NJ: Princeton University Press. - Juselius, K. (2015). Haavelmo’s probability approach and the cointegrated VAR model.
*Econometric Theory*,*31*(2), 213–232. - Juselius, K. (2017a).
*A theory-consistent CVAR scenario: Testing a rational expectations based monetary model*. Department of Economics, University of Copenhagen. - Juselius, K. (2017b). Using a theory-consistent CVAR scenario to test an exchange rate model based on imperfect knowledge.
*Econometrics*. - Juselius, K., & Assenmacher, K. (2017). Real exchange rate persistence and the excess return puzzle: The case of Switzerland versus the US.
*Journal of Applied Econometrics*. - Juselius, K., & Franchi, M. (2007). Taking a DSGE model to the data meaningfully.
*Economics–The Open-Access, Open-Assessment E-Journal*,*4*. - Juselius, K., & Stillwagon, J. (2017). Are outcomes driving expectations or the other way around? An I(2) CVAR analysis of interest rate expectations in the dollar/pound market. University of Copenhagen, Economics Department.
- Koopmans, T.C., Rubin, H., & Leipnik, R. B. (1950). Measuring the Equation Systems of Dynamic Economics. In T.C. Koopmans (Ed.)
*Statistical inference in dynamic economic models, Cowles Commission Research*. New York: John Wiley & Sons, Inc. - Møller, N. F. (2008). Bridging economic theory models and the cointegrated vector autoregressive model.
*Economics: The Open-Access, Open-Assessment E-Journal*,*2*, 36. - Nielsen, H. B. (2008). Influential observations in cointegrated VAR models: Danish money demand 1973–2003.
*The Econometrics Journal*,*11*(1), 1–19. - Nielsen, H. B. & Rahbek, A. (2007). The Likelihood Ratio Test for Cointegration Ranks in the I(2) Model.
*Econometric Theory*,*23*, 615-637. - Soros, G. (1987).
*The alchemy of finance*. Hoboken, NJ: John Wiley, - Spanos, A. (2009). The pre-eminence of theory versus the European CVAR perspective in macroeconometric modeling.
*Economics: The Open-Access, Open-Assessment E-Journal*,*3*. - Wald, A. (1950). A Note on the Identification of Economic Relations. In T. C. Koopmans (Ed.),
*Statistical inference in dynamic economic models, Cowles Commission Research*. New York: John Wiley & Sons, Inc.

### Notes

1. Note, however, that a high value of ${\lambda}_{i}$ can also be an indication of a large ratio between the number of estimated parameters and the number of observations.

2. The characteristic roots can be calculated either as the roots of the characteristic polynomial of the VAR model or as the eigenvalues of the VAR model in companion form. In the first case, the roots of an *I*(1) model lie either on or outside the unit circle; in the second case, either on or inside the unit circle (see Juselius, 2006).

3. Similar rank conditions were already established for the traditional simultaneous equation system by Koopmans, Rubin, and Leipnik (1950) and Wald (1950).
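The companion-form calculation in note 2 can be sketched in a few lines of code. The sketch below uses hypothetical coefficient matrices for a bivariate VAR(2) chosen (by assumption, not taken from the text) so that the system contains a single unit root; the eigenvalues of the companion matrix then lie on or inside the unit circle, with one root of modulus one signalling a stochastic trend.

```python
import numpy as np

def companion_matrix(coefs):
    """Stack the VAR(p) coefficient matrices Pi_1, ..., Pi_p into the
    (p*k) x (p*k) companion matrix, where k is the system dimension."""
    p = len(coefs)
    k = coefs[0].shape[0]
    top = np.hstack(coefs)                   # first block row: [Pi_1 ... Pi_p]
    if p == 1:
        return top
    bottom = np.eye(k * (p - 1), k * p)      # shifted identity below the top row
    return np.vstack([top, bottom])

# Hypothetical VAR(2) coefficients (illustrative numbers only).
Pi1 = np.array([[0.9, 0.1],
                [0.0, 0.7]])
Pi2 = np.array([[0.1, -0.1],
                [0.0,  0.1]])

C = companion_matrix([Pi1, Pi2])
moduli = np.abs(np.linalg.eigvals(C))
# In companion form the roots of an I(1) model are on or inside the
# unit circle; here exactly one root has modulus 1 (a stochastic trend).
print(np.round(np.sort(moduli)[::-1], 3))
```

The same moduli can equivalently be obtained as the inverse roots of the characteristic polynomial $\det(I - \Pi_1 z - \Pi_2 z^2) = 0$, which is the first convention mentioned in note 2.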