The Cointegrated VAR Methodology

  • Katarina Juselius, Department of Economics, University of Copenhagen

Summary

The cointegrated VAR approach combines differences of variables with cointegration among them and by doing so allows the user to study both long-run and short-run effects in the same model. The CVAR describes an economic system where variables have been pushed away from long-run equilibria by exogenous shocks (the pushing forces) and where short-run adjustment forces pull them back toward long-run equilibria (the pulling forces). In this model framework, basic assumptions underlying a theory model can be translated into testable hypotheses on the order of integration and cointegration of key variables and their relationships. The set of hypotheses describes the empirical regularities we would expect to see in the data if the long-run properties of a theory model are empirically relevant.

Subjects

  • Econometrics, Experimental and Quantitative Methods
  • Macroeconomics and Monetary Economics

On the Cointegrated VAR Methodology

In 1982 Clive Granger published a working paper, "Error-Correction and Cointegration," which for the first time introduced the concept of cointegration as the mathematical counterpart to error-correction. The latter had been widely used since the seminal paper by Dennis Sargan in 1964 by his followers at the London School of Economics, in particular David Hendry. The mathematical concept of cointegration turned out to be of immense importance for time series econometrics, as it contained the key to handling nonstationarity in economic time series. In recognition of this work and their many other contributions, Clive Granger was awarded the Nobel Prize in Economics in 2003 together with Robert Engle. A few years after the working paper had appeared, Søren Johansen took up the cointegration thread and developed the probability theory for nonstationary processes and subsequently the statistical theory needed to conduct likelihood-based inference in nonstationary vector autoregressive (VAR) processes. In 1995 Søren Johansen published his seminal book Likelihood-Based Inference in Cointegrated Vector Autoregressive Models, which contains the basic theory for maximum likelihood inference in cointegrated processes. In 2006 Katarina Juselius published her book The Cointegrated VAR Model: Methodology and Applications, which discussed how to use the cointegrated VAR (CVAR) model as a statistically well-founded empirical methodology for inference in economic models.

The CVAR approach was almost immediately received with great enthusiasm: it provides a flexible way of representing economic time series data that allows the user to study both short-run and long-run effects in the same model framework. It has been extensively applied in central banks, research institutes, universities, and the financial sector.

However, the early popularity and interest was not always a force for good: many CVAR models were applied without a firm understanding of the model's rich and complex structure. Many (most) applications were not subject to careful misspecification checking and conveyed the impression of having been found by "pressing the VAR button." The results were far from convincing, and many economists, in particular North American ones, turned against the approach.

It needs, therefore, to be emphasized that a CVAR analysis based on full information maximum likelihood presumes that all systematic aspects of the data are satisfactorily described by the model. Spanos (2009) argues that a convincing test of the empirical relevance of a theoretical model has to be carried out in the context of a fully specified statistical model that works as an adequate, though approximate, description of the data generating process (DGP) in its entirety. When the VAR model has passed the specification tests, it is essentially a summary of the most important empirical facts over the sample period. When it has not passed these checks, the estimates can be (and often are) totally misleading, and it is difficult to know what is a true empirical fact and what is a result of untested prior assumptions.

A correctly specified VAR model represents the basic covariance information of the empirical problem, but only up to a first-order linear approximation. Second-order non-linear effects are common in economics but are often small compared to the linear effects and can therefore often be efficiently addressed in a second step.

An unrestricted VAR is highly overparametrized and often difficult to interpret. But the CVAR parameterization (combining differences and cointegration) allows us to study an economic system where variables have been pushed away from long-run equilibria by exogenous shocks (the pushing forces) and where short-run adjustment forces pull them back toward long-run equilibria (the pulling forces). It describes the economic reality as a multivariate, dynamic, and stochastic process, the probabilistic assumptions of which are testable. Thus, a correct CVAR analysis should in principle obey equally strict scientific rules as an analysis of a mathematical model in economics. In principle there is no arbitrariness in such empirical analyses, but in practice one often has to make compromises: economic data are extremely difficult to model, partly because of the inherent reflexivity of social behavior (Frydman & Goldberg, 2013; Hands, 2013; Hommes, 2013; Soros, 1987), partly because such behavior may not remain constant over time, and finally because data are often strongly affected by political reforms, interventions, natural disasters, etc.

Hence, a successful CVAR analysis has to address a large number of issues typical of most economic data: (a) a pronounced persistence to be accounted for by cointegration; (b) changes in the structure of the model due to extraordinary events often controlled for by the inclusion of institutional dummies; and (c) regime shifts that cause parameters to be non-constant over certain sample periods. Whether an estimated model is sufficiently stable to satisfactorily describe the underlying mechanisms is by necessity based on judgment to some extent.

Failure to properly account for these issues is frequently the reason why empirical studies provide unconvincing results. The first two will be discussed here in detail, whereas the third is so comprehensive that it deserves a treatment of its own.

The Unrestricted VAR Model

The VAR model with k lags, a constant, $\mu_0$, a trend, $t$, and dummy variables, $D_t$, is specified as:

$$x_t = \Pi_1 x_{t-1} + \Pi_2 x_{t-2} + \cdots + \Pi_k x_{t-k} + \mu_0 + \mu_1 t + \Phi D_t + \varepsilon_t, \quad t = 1, 2, \ldots, T \quad (1)$$

where $x_t$ is a p-dimensional vector of economic variables, the starting values $x_0, x_{-1}, \ldots, x_{-k+1}$ are assumed fixed, $D_t$ may contain different kinds of dummy variables, for example, step dummies and permanent or transitory impulse dummies, and $\varepsilon_t \sim N_{iid}(0, \Omega)$.

The VAR model (1) with k = 2 can be formulated in error-correction form without changing the likelihood function:

$$\Delta x_t = \Gamma_1 \Delta x_{t-1} + \Pi x_{t-1} + \mu_0 + \mu_1 t + \Phi D_t + \varepsilon_t \quad (2)$$

where $\Pi = -(I - \Pi_1 - \Pi_2)$ and $\Gamma_1 = -\Pi_2$. For notational simplicity, k = 2 in the subsequent discussions.
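As a numerical illustration of this equivalence, the following sketch (with arbitrary, purely illustrative coefficient matrices) simulates the same series from the levels form (1) and the error-correction form (2) and confirms that they coincide when $\Pi = -(I - \Pi_1 - \Pi_2)$ and $\Gamma_1 = -\Pi_2$:

```python
# Minimal numerical check that the VAR(2) in (1) and the error-correction
# form (2) are the same model; Pi1 and Pi2 are hypothetical coefficients.
import numpy as np

rng = np.random.default_rng(0)
p, T = 2, 200
Pi1 = np.array([[0.6, 0.1], [0.0, 0.8]])    # illustrative lag-1 matrix
Pi2 = np.array([[0.2, -0.1], [0.1, 0.1]])   # illustrative lag-2 matrix
eps = rng.normal(size=(T, p))

Pi = -(np.eye(p) - Pi1 - Pi2)   # long-run levels matrix in (2)
Gamma1 = -Pi2                   # short-run dynamics matrix in (2)

x_var = np.zeros((T, p))        # path simulated from the levels form (1)
x_ecm = np.zeros((T, p))        # path simulated from the EC form (2)
for t in range(2, T):
    x_var[t] = Pi1 @ x_var[t-1] + Pi2 @ x_var[t-2] + eps[t]
    dx = Gamma1 @ (x_ecm[t-1] - x_ecm[t-2]) + Pi @ x_ecm[t-1] + eps[t]
    x_ecm[t] = x_ecm[t-1] + dx

print(np.allclose(x_var, x_ecm))  # True: identical paths, same likelihood
```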

Three caveats are needed when discussing the usefulness of (1) as a valid characterization of economic data: (a) it is derived for a particular sample window $[1, T]$, and there is no guarantee that other sample periods would produce even approximately the same parameter estimates; (b) the assumption of multivariate normality is seldom satisfied without controlling for extraordinary events during the sample period, such as political reforms and interventions, droughts, floods, and storms. Such events can often be controlled for by conditioning on a set of appropriately constructed dummy variables, $\Phi D_t$; (c) while the assumption of stationarity of $x_t$ is seldom tenable for economic time series, the nonstationarity of $x_t$ can be handled by subjecting the matrix $\Pi$ in (2) to a non-linear reduced rank restriction $\Pi = \alpha\beta'$, where the matrices $\alpha$ and $\beta$ have rank $r < p$.

The Multivariate Normality Assumption and the Use of Dummy Variables

A priori there is no obvious reason why one would expect multivariate normality to hold in observed data, but one could argue that the residuals are a catch-all for everything else not included in the model and that “everything else” comprises an “enormous number of factors.” Provided these factors are independent, the central limit theorem suggests that normality could be approximately valid. But, as there is no reason to expect independence, normality is an assumption that needs to be checked, and when checked it is almost always rejected. The latter is mostly due to outlying observations as a result of extraordinary events which tend to cause residual skewness and excess kurtosis.

By conditioning on the extraordinary events using adequately designed dummy variables, it is often possible to control for such non-normality. Hence, one does not have to give up on normality as is done in many empirical applications. But there are also cases for which the multivariate normality assumption in (2) is a bad approximation. For example, asset prices tend to have fat tails as well as heteroscedastic errors and are therefore inherently non-normal. Whether one can use the VAR approach nonetheless is then a question of empirical robustness. To study this, Cavaliere, Rahbek, and Taylor (2014) investigate the properties of the cointegration rank test when the error variance exhibits time-varying behavior. The paper shows that bootstrap pseudo likelihood ratio tests are asymptotically correctly sized. Boswijk, Cavaliere, Rahbek, and Taylor (2016) investigate the properties of tests on cointegration relations in a similar setting and show that asymptotic inference in this case can be misleading but that the use of bootstrap methods and Wald, rather than likelihood ratio, tests leads to significant improvements in the size of the test (the probability of rejecting a true null hypothesis). In the following, the errors are assumed to be normally distributed after correcting for extraordinary events.

The use of dummy variables in empirical models is sometimes considered problematic by economists arguing that outlying observations are highly informative and must not be dummied out. This argument is, however, more valid in the static regression model, in which a dummy variable effectively removes the outlying observation. In a dynamic model, like (2), a dummy variable controls for the first unanticipated effect, whereas the lagged variables ensure that the outlying observation enters the information set.

Because failure to properly control for the unanticipated effect of extraordinary events is likely to bias the parameter estimates, the proper use of dummies is often crucial for a correct specification. For example, a non-modeled shift in the equilibrium mean and/or average growth rates (as a result of deregulation, say) is likely to cause residual autocorrelation and may (incorrectly) suggest longer lags in the VAR.

Without the normality benchmark one would be inclined to ignore these highly informative events, which can be used to estimate the effect of, for example, changes in policy. Extraordinary events are also likely to affect the model’s forecasting performance. See, for example, Clements and Hendry (1999, 2008). To get the intuition, it is useful to consider the expected value of Δxt given its past values based on (2):

$$E_{t-1}(\Delta x_t \mid x_{t-1}, x_{t-2}, \text{trend, const.}) = \Gamma_1 \Delta x_{t-1} + \Pi x_{t-1} + \mu_0 + \mu_1 t.$$

The deviation between the expected and realized value in “normal” times is

$$\Delta x_t - E_{t-1}(\Delta x_t \mid x_{t-1}, x_{t-2}, \text{trend, const.}) = \varepsilon_t,$$

implying that economic agents using the VAR to forecast the next period’s outcome are rational in the sense of not making systematic forecast errors. But extraordinary events with large unanticipated effects tend to have a large effect on the forecast error:

$$\Delta x_t - E_{t-1}(\Delta x_t \mid x_{t-1}, x_{t-2}, \text{trend, const.}) = \Phi D_t + \varepsilon_t.$$

After they happen, the effects are no longer unanticipated, and, provided the events have not changed the model’s parameters (Γ1,Π), the next period’s forecast error would again be white noise. In general, dummy variables need not enter the VAR model with lags unless the extraordinary event is not just unanticipated but represents a very important institutional event (such as joining the EMU) comparable to an exogenous variable.

To sum up: the reason for assuming multivariate normality is not because economic data are inherently likely to follow the multivariate normality rule, but because this assumption helps us to check that all important effects have been properly accounted for in the model. It is a safeguard against relying on conclusions from a model that is basically misspecified (Hoover, 2006; Hoover, Johansen, & Juselius, 2009) and ensures that the model estimates are based on full information maximum likelihood. The specification of the VAR model is successful when the chosen information set contains the most relevant economic variables and the most important institutional events and the residuals describe a multivariate normal distribution. Therefore, one has to carefully check for a large number of things: Have there been shifts in mean growth rates or in equilibrium means? Are interventions, reforms, and changing policy properly controlled for? Is the sample period defining a constant parameter regime? Is the information set correctly chosen? The accuracy of the results depends on all this being correct in the model. Without such checking, the results can be (and often are) close to useless. A well-specified VAR model has nothing to do with pressing the VAR button.
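As a concrete, if minimal, illustration of such checking, the sketch below fits an unrestricted VAR to simulated placeholder data and inspects two standard residual-based misspecification tests available in statsmodels (with real data, these checks come before any rank determination):

```python
# A sketch of residual-based misspecification checks on an unrestricted
# VAR(2); the simulated data are placeholders for a real data set.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(4)
X0 = np.cumsum(rng.normal(size=(300, 3)), axis=0)    # three I(1) series

var_res = VAR(X0).fit(2)                             # unrestricted VAR(2)
print(var_res.test_normality().summary())            # residual normality
print(var_res.test_whiteness(nlags=10).summary())    # residual autocorrelation
```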

But even when the empirical VAR model is a good description of the data, it is still not a satisfactory economic model, as it is highly overparametrized. To arrive at a more parsimonious model, one has to test and impose various restrictions on the parameters, the most important of which are the so-called reduced rank restrictions. The next section discusses reduced rank in the I(1) model (see later for the I(2) model). In the first case, the data contain stochastic trends of first-order persistence; in the second, of both first- and second-order persistence.

First-Order Persistence: The I(1) Model

The hypothesis that $x_t \sim I(1)$ is formulated as a reduced rank condition

$$\Pi = \alpha\beta' \quad (3)$$

where $\alpha$ and $\beta$ are $p \times r$ matrices ($r < p$) and the r relations, $\beta'x_t$, define stationary linear relationships among the p nonstationary variables. Thus, the cointegrated VAR can be considered a submodel of the more general baseline VAR (2).

The choice of cointegration rank is likely to influence all subsequent inferences and is, therefore, a crucial step in the empirical analysis. Unfortunately, it can also be a difficult choice as the distinction between stationary and nonstationary directions of the vector process often is far from straightforward.

The LR Test for the Cointegration Rank

This test, which is often called the trace test or the Johansen test, is based on the VAR model in the so-called R-form, $R_{0,t} = \alpha\beta'R_{1,t} + \text{error}$, where $R_{0,t}$ and $R_{1,t}$ are the residuals in a regression of $\Delta x_t$ and $x_{t-1}$, respectively, on all short-run dynamics, dummies, and other deterministic components (Juselius, 2006). This model can be thought of as an "idealized" empirical model, $\Delta x_t = \alpha\beta'x_{t-1} + \varepsilon_t$, describing an economy where only long-run equilibrium forces are at play and no transitory short-run effects are disturbing the important long-run mechanisms. For notational simplicity it is assumed here that $\Gamma_1\Delta x_{t-1} = 0$ in (2), making the R-form equivalent to the $\Delta x$-form.

The reduced rank regression estimates are obtained by solving an eigenvalue problem that delivers p eigenvalues, $\lambda_i$, the corresponding eigenvectors, $\beta_i$, and their loadings, $\alpha_i$. The eigenvalues, $\lambda_i$, can be interpreted as the squared correlations between linear combinations of the levels, $\beta_i'x_{t-1}$, and linear combinations of the differences, $\omega_i'\Delta x_t$. Thus, the magnitude of $\lambda_i$ is an indication of how strongly the linear relation $\beta_i'x_{t-1}$ is correlated with the stationary part of the process $\Delta x_t$. When $\lambda_i = 0$, the correlation coefficient is zero, the linear combination $\beta_i'x_{t-1}$ is nonstationary, and there is no equilibrium correction, i.e., $\alpha_i = 0$.

The statistical problem is to derive a test that can discriminate between those $\lambda_i$, $i = 1, \ldots, r$, that correspond to stationary relations and those $\lambda_i$, $i = r+1, \ldots, p$, that correspond to nonstationary relations. Because $\lambda_i = 0$ does not change the likelihood function, the maximum is exclusively a function of the non-zero eigenvalues, $L_{\max}^{-2/T} = |S_{00}|\prod_{i=1}^{r}(1 - \lambda_i)$, where $S_{00}$ is a covariance matrix defined in Johansen (1996). Using this expression, a likelihood ratio test for the determination of the cointegration rank, r, the trace test $-T\sum_{i=r+1}^{p}\ln(1 - \hat{\lambda}_i)$, can be derived for the following hypotheses:

$$H(p): \text{rank} = p, \text{ i.e., no unit roots; } x_t \text{ is stationary,}$$
$$H(r): \text{rank} = r, \text{ i.e., } p - r \text{ unit roots and } r \text{ cointegration relations; } x_t \text{ is nonstationary.}$$
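A minimal sketch of this procedure, using the Johansen reduced rank regression as implemented in statsmodels; the three-variable system below, simulated with one common stochastic trend, is purely illustrative:

```python
# Eigenvalues and trace statistics from the Johansen procedure on a
# simulated p = 3 system with true cointegration rank r = 1.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(1)
T = 400
common = np.cumsum(rng.normal(size=T))        # one shared I(1) trend
x1 = common + rng.normal(size=T)
x2 = 0.8 * common + rng.normal(size=T)
x3 = np.cumsum(rng.normal(size=T))            # a second, unrelated trend
X = np.column_stack([x1, x2, x3])

res = coint_johansen(X, det_order=0, k_ar_diff=1)  # constant, k = 2 in levels
print("eigenvalues:", res.eig)      # the squared correlations lambda_i
print("trace stats:", res.lr1)      # -T * sum_{i>r} ln(1 - lambda_i)
print("5% critical values:", res.cvt[:, 1])
```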
The Asymptotic Tables

The test statistic has a non-standard distribution that has been found by simulation. The so-called asymptotic tables provide simulated test statistics for the whole probability distribution given the number of unit roots. These tables can be closely approximated by a gamma distribution, and most software packages provide the p-value for the test based on this approximation. This section discusses general aspects of the asymptotic distributions and the rank tests. The next section shows how the former depend on the deterministic components in the model, in particular when there is a linear trend and/or a level shift in the data (and the model).

As the name suggests, the asymptotic tables are valid for large samples. What is "large" is often difficult to tell: it depends on the number of observations but also on how informative these observations are. Generally, one is on the safe side if the sample is greater than 150; otherwise the asymptotic test statistics might give a poor approximation to the correct ones. For this reason, Johansen (2002a, 2002b) has derived so-called Bartlett corrections for the trace test that give a correct size. These are readily implemented in many software packages (CATS2 in RATS, Version 2, Dennis et al., 2006; CATS3 in OxMetrics, Version 3, Doornik & Juselius, 2017).

The Test Procedure

Usually, the value of r is determined based on a sequence of tests running from top to bottom, i.e., $\{r = 0,\ p \text{ unit roots}\}$, $\{r = 1,\ p - 1 \text{ unit roots}\}$, $\ldots$, $\{r = p,\ 0 \text{ unit roots}\}$. The first hypothesis that is not rejected for the chosen p-value delivers the value of r.
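This top-to-bottom sequence is automated in statsmodels; a short sketch, assuming the simulated data matrix X from the previous snippet:

```python
# Sequential trace tests: the first non-rejected H(r) delivers the rank.
from statsmodels.tsa.vector_ar.vecm import select_coint_rank

rank_res = select_coint_rank(X, det_order=0, k_ar_diff=1,
                             method="trace", signif=0.05)
print(rank_res.rank)       # chosen r
print(rank_res.summary())  # trace statistics against critical values
```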

In the ideal case, the probability of rejecting a correct null hypothesis is small and the probability of accepting a correct alternative hypothesis is high for relevant hypotheses. This is likely to be the case when the estimated eigenvalues are either very large or very small. The trace test is then likely to pick the correct value of r with high probability and also to have good power properties for relevant alternative hypotheses.1

In other cases, when the estimated eigenvalues are in the region where it is hard to discriminate between significant and insignificant eigenvalues, the trace test often has low power for stationary, near unit root alternatives. For example, the trace test will often reject the stationarity of a true economic relationship if the adjustment back to equilibrium is very slow and the sample is small or moderately sized. In such a case the economic relation in question is likely to have a characteristic root close to the unit circle, a so-called near unit root. If this relation is (incorrectly) considered a stochastic trend, inference on the cointegration relations will be affected (Elliott, 1998; Franchi & Johansen, 2017).

Caveats Using Asymptotic Tables

Unfortunately, it is not only the size of the trace test that is important for a correct choice of rank r but also the power. The latter is often low for relevant alternative hypotheses in the neighborhood of the unit circle. As a safeguard against an incorrect decision it is therefore advisable to use as much additional information as possible, such as:

1.

The characteristic roots of the model: If the (r+1)th cointegration vector is nonstationary and is wrongly included in the model, then the largest characteristic root will be close to the unit circle.2

2.

The t-ratios of the α-coefficients to the (r + 1)th cointegration vector. If all of them are small, say less than 2.6, then one would not gain a lot by including the (r + 1)th vector as a cointegrating relation in the model.

3.

The recursive graphs of the trace statistic for $r = 1, 2, \ldots, p$ (see the sketch after this list). Since the components $-T_j\ln(1 - \lambda_i)$, $j = T_1, \ldots, T$, grow linearly over time when $\lambda_i \neq 0$, the recursively calculated components of the trace statistic should grow linearly for all $i = 1, \ldots, r$, but stay constant for $i = r + 1, \ldots, p$.

4.

The graphs of the cointegrating relations. If the graph of a supposedly stationary cointegration relation reveals distinctly nonstationary behavior, one should reconsider the choice of r, or find out if the model specification is in fact incorrect. For example, data might be I(2) instead of I(1).

5.

The economic interpretability of the results. If, for example, the (r + 1)th cointegration relation resembles the economic relation of interest, a consumption-income relation say, and $\alpha_{r+1}$ shows that consumption and/or income adjusts significantly, then it does not seem reasonable to discard such a relation based on a statistical null of a unit root.
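A rough sketch of the recursive graphs mentioned in point 3, again assuming the simulated data matrix X from above: the eigenvalues are recomputed on expanding samples and the components $-T_j\ln(1 - \lambda_i)$ are plotted against the sample end point.

```python
# Recursively calculated trace components: they should grow roughly
# linearly for i <= r and stay constant for i > r.
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.tsa.vector_ar.vecm import coint_johansen

T1, T = 100, X.shape[0]
comps = []
for Tj in range(T1, T + 1):
    lam = coint_johansen(X[:Tj], det_order=0, k_ar_diff=1).eig
    comps.append(-Tj * np.log(1 - lam))
comps = np.array(comps)                       # (T - T1 + 1) x p

for i in range(comps.shape[1]):
    plt.plot(range(T1, T + 1), comps[:, i], label=f"component {i + 1}")
plt.legend(); plt.xlabel("sample end point T_j"); plt.show()
```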

The Cointegrated VAR Model

A major advantage of the CVAR type of model relative to standard regression models is that it allows a separation between short-run and long-run effects by combining differenced variables with cointegration among them. Another closely related advantage is that the distinction between ordinary and extraordinary effects (made possible by properly designed dummy variables) allows us to study the effect of institutional events (reforms, interventions) in the short and the long run. By imposing the cointegration rank r on $\Pi$ and allowing for dummy variables in (2), both effects can be studied.

The CVAR for k = 2 with reduced rank and dummy variables is formulated as:

$$\Delta x_t = \Gamma_1\Delta x_{t-1} + \alpha\beta'x_{t-1} + \mu_0 + \mu_1 t + \Phi D_t + \varepsilon_t, \quad (4)$$

where $\beta'x_{t-1}$ defines r stationary linear combinations of the variables $x_t$, interpreted as deviations from equilibrium values, $\alpha$ describes how the system adjusts to the cointegrating relations, and $\Gamma_1\Delta x_{t-1}$ controls for short-run transitory effects.

It can be useful to think of the VAR model in the following way: assume one is interested in understanding why economic determinants change from one period to the next. For example, why did inflation go up compared to the previous month? Why did the nominal exchange rate drop? The model describes the change in inflation as a result of an unanticipated shock, $\varepsilon_t$, possibly some policy reforms, $\Phi D_t$, an adjustment, $\alpha$, to a previous disequilibrium, $\beta'x_{t-1}$, and some transitory feedback effects from previous changes in other variables, $\Gamma_1\Delta x_{t-1}$. Thus, the model describes an economic system that is first pushed away from equilibrium by an exogenous shock and then starts adjusting back; how fast depends on the size of the $\alpha$ coefficients and the size of the transitory feedback effects, $\Gamma_1\Delta x_{t-1}$.
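A minimal estimation sketch for (4), assuming the simulated data X and the rank r = 1 of the system simulated above; statsmodels' VECM delivers the $\hat{\alpha}$, $\hat{\beta}$, and $\hat{\Gamma}_1$ discussed here (dummies could be passed through the exog argument):

```python
# Estimating the CVAR (VECM) with k = 2, r = 1 and an unrestricted constant.
from statsmodels.tsa.vector_ar.vecm import VECM

vecm_res = VECM(X, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
print("beta (cointegration vectors):\n", vecm_res.beta)
print("alpha (adjustment coefficients):\n", vecm_res.alpha)
print("Gamma_1 (short-run dynamics):\n", vecm_res.gamma)
```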

The CVAR and the Regression Model

The advantage of the CVAR formulation is that by transforming the trending variables, $x_t$, into stationary differences, $\Delta x_t$, and stationary cointegration relations, $\beta'x_t$, a number of problems in the static regression model are more or less solved:

1.

Multicollinearity between the x variables does not lead to imprecise estimates of the cointegration relations, $\beta'x_t$, as two variables are cointegrated only if they share a common stochastic trend. The latter is defined as the cumulation of all permanent shocks that have pushed the variables out of equilibrium. While, for example, cointegration between two unrelated random walks will be rejected with high probability in the CVAR model, they may have a correlation coefficient close to one in small samples in the usual regression model (see Hendry & Juselius, 2000, 2001; Johansen, 2012).

2.

The cointegration coefficients are "canonical" in the sense of being invariant to extensions of the information set or to changes in the direction of minimization. This is in contrast to the ordinary regression model, where coefficients change as new (correlated) variables are added.

3.

The removal of trends, either by differencing or by cointegration, is likely to make the multicollinearity between $\Delta x_t$ and $\beta'x_t$ small enough not to be a problem. When $x_t \sim I(1)$, both $\Delta x_t$ and $\beta'x_t$ are stationary, and standard inference on $(\alpha, \Gamma_1, \Omega)$ applies for given $\beta$.

But while cointegration analysis is a powerful method for uncovering genuine relationships among variables, it is basically a statistical regularity that may break down if conditions change. Therefore, cointegration is no guarantee of structural invariance: its coefficients might change when other parts of the structure change. This is closely related to the concept of super exogeneity in Engle et al. (1983).

The CVAR and the Dual Role of the Deterministic Terms

One complication of the CVAR model is that the deterministic terms play a different role for the differenced process (the short-run effects) and for the cointegration relations (the long-run effects). To give the intuition, (4) is reformulated in terms of deviations from mean values as shown in (7). When deriving $E(\Delta x_t)$ and $E(\beta'x_t)$, the short-run dynamics $\Gamma_1\Delta x_{t-1}$ complicate the calculations without adding to the logic and are therefore set to zero. From (4) with $\Gamma_1\Delta x_{t-1} = 0$ one finds

$$E(\Delta x_t) = \alpha E(\beta'x_{t-1}) + \mu_0 + \mu_1 t. \quad (5)$$

Pre-multiplying (5) by $\beta'$ gives

$$E(\beta'\Delta x_t) = \beta'\alpha E(\beta'x_{t-1}) + \beta'\mu_0 + \beta'\mu_1 t,$$

i.e.,

$$E(\beta'x_t) = (I + \beta'\alpha)E(\beta'x_{t-1}) + \beta'\mu_0 + \beta'\mu_1 t. \quad (6)$$

Thus, both $E(\Delta x_t)$ and $E(\beta'x_t)$ depend on the deterministic terms in a rather complex way. Juselius (2006) demonstrates how to distribute $\mu_0$ and $\mu_1 t$ between the two when $\Gamma_1\Delta x_{t-1} = 0$. The idea can be illustrated by first decomposing $\mu_0 = \alpha\beta_0 + \gamma_0$ and $\mu_1 t = \alpha\beta_1 t + \gamma_1 t$. The requirement that the mean of an equilibrium error should be zero can then be used to determine $\beta_0$ for a given value of $\beta_1$.

Reformulating the CVAR with $E(\Delta x_t) = \gamma_0 + \gamma_1 t$ and $E(\beta'x_t) = \beta_0 + \beta_1 t$ gives:

$$\Delta x_t - \gamma_0 - \gamma_1 t = \alpha(\beta'x_{t-1} - \beta_0 - \beta_1 t) + \varepsilon_t. \quad (7)$$

Equation (7) shows that an unrestricted constant and trend in the CVAR imply $E(\Delta x_t) = \gamma_0 + \gamma_1 t$ and $E(\beta'x_{t-1}) = \beta_0 + \beta_1 t$.
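The decomposition of the constant can be made concrete with a small numerical sketch (the $\alpha$ and $\mu_0$ below are made up for illustration): one natural way to split $\mu_0$ is to project it onto the column space of $\alpha$, which gives $\beta_0$, and to keep the remainder as $\gamma_0$.

```python
# Splitting mu_0 = alpha * beta_0 + gamma_0; alpha and mu_0 are hypothetical.
import numpy as np

alpha = np.array([[-0.3], [0.1], [0.0]])   # hypothetical adjustment vector
mu0 = np.array([0.5, -0.2, 0.1])           # hypothetical constant term

beta0 = np.linalg.pinv(alpha) @ mu0        # part entering the equilibrium mean
gamma0 = mu0 - alpha @ beta0               # part left as mean growth rate
print(beta0, gamma0)
print(np.allclose(alpha.T @ gamma0, 0))    # True: gamma_0 orthogonal to alpha
```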

For a correct empirical specification of the CVAR, one must from the outset distinguish between data with and without deterministic trends:

1.

$\mu_1 \neq 0$. Data contain deterministic trends (e.g., GDP, CPI). If $\gamma_1 \neq 0$, then the CVAR model is consistent with both quadratic and linear deterministic trends in the data. Therefore, unless quadratic trends are desirable, $\mu_1 t = \alpha\beta_1 t + \gamma_1 t$ has to be restricted so that $\gamma_1 = 0$. But even though $\beta_1 \neq 0$, this linear trend in the cointegration relations does not cumulate to a quadratic trend in the data. Finally, if $\beta_1 = 0$ but $\gamma_0 \neq 0$, then the data contain linear trends that cancel in the cointegration relations.

2.

$\mu_1 = 0$. Data do not contain deterministic trends (e.g., interest rates). When $\beta_1 = 0$ and $\gamma_0 = 0$, the CVAR is consistent with stochastic but no deterministic trends. If $\beta_0 \neq 0$, then the cointegration relations need an intercept; otherwise they have a zero mean.

Thus, $E(\beta'x_{t-1}) = \beta_0 + \beta_1 t$ shows that the part of $\mu_0$ that is proportional to $\alpha$ measures an intercept (an equilibrium mean) and the corresponding part of $\mu_1$ measures a trend coefficient in the cointegrating relations.

The deterministic components play an important role in the CVAR approach, partly because they are crucial for a correct model specification, partly because the asymptotic distribution of the trace test depends on them. Therefore one should decide at the outset whether the CVAR model needs to be specified with or without deterministic linear trends. As a rule, variables for which the mean growth rate is different from zero, i.e., $E(\Delta x_t) \neq 0$, need a deterministic trend in the model. Examples of such variables are real GDP, consumption, investment, nominal price levels, etc. Variables for which the mean growth rate is zero, i.e., $E(\Delta x_t) = 0$, should be specified without a linear trend in the model. Examples are interest rates, real exchange rates, stock returns, etc. But, because the latter may exhibit trending behavior in one direction or the other over a specific sample period, it is sometimes difficult to distinguish stochastic from deterministic trends. Nonetheless, economic or financial logic tells us that trending behavior in, for example, an interest rate must be stochastic rather than deterministic, or the model would be consistent with a money machine (and money machines do not exist). Similarly, even though inflation rates can exhibit linearly trending behavior over the chosen sample period, such trends must be considered stochastic. Otherwise they would imply quadratic deterministic trends in prices, and such high predictability is not plausible. If the data vector, $x_t$, contains both deterministically trending and non-trending variables, the VAR has to be specified with a linear trend. The fact that some variables need a trend while others do not must then affect the choice of identifying restrictions.

Data Contain Linear, But No Quadratic, Trends

In this case (4) is specified as

$$\Delta x_t = \Gamma_1\Delta x_{t-1} + \alpha[\beta', \beta_0, \beta_1](x_{t-1}', 1, t)' + \gamma_0 + \Phi D_t + \varepsilon_t, \quad (8)$$

consistent with $E(\Delta x_t) = \gamma_0$ and $E(\beta'x_{t-1}) = \beta_0 + \beta_1 t$. This implies that both the cointegration relations and the data contain a linear trend, a property that gives "similarity" in the probability analysis (a desirable property). The stationary equilibrium error, $(\beta'x_{t-1} - \beta_0 - \beta_1 t)$, contains a trend, but it does not cumulate in the process to produce a quadratic trend. Independently of whether $\beta_1 = 0$ or not, the cointegration rank should be determined in this model. Table 4 (restricted trend and unrestricted constant) in Johansen (1996) or Juselius (2006) is used for rank determination.

For the chosen rank, r, it is straightforward to test the hypothesis $\beta_1 = 0$. If not rejected, then the linear trends in the data cancel by cointegration; if rejected, then some cointegration relations are trend-stationary. A variable can also be trend-stationary by itself; for example, the output gap, $y_t - \beta_y t$, is often found to be stationary. In the case when $\beta_1 = 0$ cannot be rejected, the rank should remain the same even though the model is respecified with $\beta_1 = 0$.

Because a constant term that appears twice in the model (such as $\beta_0$ and $\gamma_0$) will cause a singularity problem, the model first has to be estimated with an unrestricted constant $\mu_0$, which is then decomposed into $\alpha\beta_0$ and $\gamma_0$. The value of $\beta_0$ can then be found by using $E(\beta'x_{t-1} - \beta_0 - \beta_1 t) = 0$, so that $\hat{\beta}_0 = \mathrm{Avg}(\hat{\beta}'x_{t-1} - \hat{\beta}_1 t)$, where Avg stands for the average over the sample period.
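A sketch of this two-step calculation of $\hat{\beta}_0$, assuming the data matrix X from the earlier snippets and placeholder estimates $\hat{\beta}$ and $\hat{\beta}_1$:

```python
# beta_0 as the sample average of the de-trended equilibrium error.
import numpy as np

beta_hat = np.array([[1.0], [-1.25], [0.0]])   # placeholder beta estimate
beta1_hat = 0.002                              # placeholder trend coefficient
t = np.arange(1, X.shape[0])                   # trend matching x_{t-1}

equilib = (X[:-1] @ beta_hat)[:, 0]            # beta' x_{t-1}
beta0_hat = np.mean(equilib - beta1_hat * t)   # Avg(beta'x_{t-1} - beta_1 t)
print(beta0_hat)
```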

Data Contain No Linear Deterministic Trends

In this case, (4) is specified as

$$\Delta x_t = \Gamma_1\Delta x_{t-1} + \alpha[\beta', \beta_0](x_{t-1}', 1)' + \Phi D_t + \varepsilon_t, \quad (9)$$

consistent with $E(\Delta x_t) = 0$ and $E(\beta'x_{t-1}) = \beta_0$. The constant is restricted to the cointegration relations and provides an estimate of their intercepts (the equilibrium means of the identified $\beta$ relations). Table 2 (restricted constant) in Juselius (2006) is used for the determination of the rank r. When the rank is found, it is always possible to test $\beta_0 = 0$. This hypothesis is, however, seldom accepted, as $\hat{\beta}_0$ also controls for the starting values of the variables, which usually differ from zero.

Most software programs allow the user to choose between model specifications (8) and (9).

Dummy Variables in the CVAR Model

Changes in economic institutions such as changes in regulations, taxes, interventions, etc. also cause changes in economic variables. Some interventions may have only a minor effect and can be considered random noise; others have a very significant effect and must be appropriately accounted for in the model, for example, by adding new variables or by using dummy variables as a proxy. For instance, the speculative attacks on some of the European currencies at the beginning of the 1990s can be seen as (a couple of) extraordinarily large changes in the nominal exchange rates. For some of them the attack resulted in a drop or rise in the equilibrium mean of the real exchange rate as the nominal rate moved to a new and more sustainable level. If the CVAR model is estimated without properly accounting for such an event (showing up as non-normal, outlying observations), the model will treat this big change as an ordinary shock to the economic variables, hence biasing the model estimates.

As a rule, a dummy variable in the model should represent a known event, for example, a flood, a drought, a political intervention, etc., i.e., an extraordinary event that cannot be explained by the chosen data set $x_t$. Three of the most important cases are discussed below.

Case (i): The slope coefficient of the linear trend has changed in the sample period (for example as a result of a major financial deregulation).

In this case, $E(\beta'x_{t-1}) = \beta_0 + \beta_{01}D_{s,xx,t} + \beta_1 t + \beta_{11}t_{xx,t}$, where $t_{xx,t}$ is 0 before the date xx and 1, 2, 3, ... after that date, $D_{s,xx,t} = \Delta t_{xx,t}$, and $E(\Delta x_t) = \gamma_0 + \gamma_{01}D_{s,xx,t}$. This specification allows for a broken trend both in the cointegration relations and in the data, similarly to the pure trend case above. The step dummy appears twice in the model ($\beta_{01}D_{s,xx,t}$ and $\gamma_{01}D_{s,xx,t}$) and, hence, has to enter the model unrestrictedly to avoid singularity problems. In this case the asymptotic tables are no longer correct and need to be re-simulated, controlling for where in the sample the break takes place. Some software programs contain this option.

Case (ii): Data contain no trends, but the equilibrium mean has changed in the sample period.

This corresponds to $E(\beta'x_{t-1}) = \beta_0 + \beta_{01}D_{s,xx,t}$, where $D_{s,xx,t}$ is 1.0 from the date xx to the end of the sample, 0 otherwise, and $E(\Delta x_t) = \gamma_{01}D_{p,xx,t}$, where $D_{p,xx,t} = \Delta D_{s,xx,t}$ is an impulse dummy, which is included unrestrictedly in the model. It describes a situation where the equilibrium mean has shifted in the sample period, for example, as a result of a political reform. A shift in the equilibrium mean affects the asymptotic tables, which have to be simulated for each specific model specification by controlling for where in the sample period the shift has taken place.

Cases (iii) and (iv): Data contain permanent and/or transitory outliers.

Such outliers can be controlled for by permanent and transitory impulse dummies, $D_{p,xx,t}$ and $D_{tr,xx,t}$, where (iii) the permanent impulse dummy $D_{p,xx,t}$ is 1 for the date xx, 0 otherwise, and (iv) the transitory impulse dummy $D_{tr,xx,t}$ is 1 for the date xx, $-1$ for xx + 1, and 0 otherwise. The asymptotic tables are not affected by permanent and transitory impulse dummies.
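For concreteness, the three dummy types can be constructed as follows (the break date xx is hypothetical); note that the step dummy is the cumulation of the permanent impulse dummy, mirroring $D_{p,xx,t} = \Delta D_{s,xx,t}$:

```python
# Step, permanent impulse, transitory impulse, and broken-trend dummies.
import numpy as np

T, xx = 200, 120                               # hypothetical break date xx
D_s = np.zeros(T); D_s[xx:] = 1.0              # step dummy: 1 from xx onward
D_p = np.diff(D_s, prepend=0.0)                # permanent impulse: 1 at xx
D_tr = np.zeros(T)
D_tr[xx], D_tr[xx + 1] = 1.0, -1.0             # transitory impulse: 1, then -1
t_xx = np.concatenate([np.zeros(xx),
                       np.arange(1.0, T - xx + 1)])  # broken trend t_xx
```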

The Moving Average Representation of the CVAR

Equation (4) can be inverted to describe $x_t$ as a function of $\varepsilon_t$, the constant, the trend, and $D_t$:

$$x_t = \beta_\perp\alpha_\perp'\sum_{i=1}^{t}(\varepsilon_i + \gamma_0 + \gamma_{01}D_{s,xx,i}) + C(L)(\beta_0 + \beta_{01}D_{s,xx,t} + \beta_1 t + \beta_{11}t_{xx,t} + \varepsilon_t) + X_0, \quad (10)$$

where $\beta_\perp$ is the orthogonal complement of $\beta$ and $\alpha_\perp$ the orthogonal complement of $\alpha$, $\alpha_\perp'\sum_{i=1}^{t}\varepsilon_i$ describes the $p - r$ underlying stochastic trends, $\alpha_\perp'\sum_{i=1}^{t}(\gamma_0 + \gamma_{01}D_{s,xx,i})$ describes a (broken) linear trend, and $\beta_\perp$ describes how these trends load into the variables. The second term describes the short-run dynamic effects of transitory changes in the system, and $X_0$ is a catch-all for the initial values of the process.

Equation (10) is essentially a summary of the mechanisms generating the data process $x_t$. It shows how permanent shocks to the system cumulate into stochastic trends that push the variables onto nonstationary trajectories. The coefficients of $\alpha_\perp$ are informative about the sources of the exogenous shocks, and those of $\beta_\perp$ about how these shocks are loaded into the variables. Based on (10), one can calculate so-called impulse response functions, describing how a shock to a variable transmits through the system (described by $C(L)$) until it reaches its final impact given by $\beta_\perp$. Thus, (10) describes the forces that have pushed the variables onto nonstationary trajectories and (4) the forces that pull the variables back to equilibrium once they have been pushed away. In the CVAR jargon, these are the pushing and pulling forces of the system.
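A sketch of how the estimated pushing forces can be extracted, assuming the fitted vecm_res from the earlier snippet: $\alpha_\perp$ is computed as the null space of $\alpha'$, and the common trends are the cumulated residual combinations $\alpha_\perp'\sum\hat{\varepsilon}_i$.

```python
# Estimated common stochastic trends alpha_perp' * sum(eps_hat).
import numpy as np
from scipy.linalg import null_space

alpha_perp = null_space(vecm_res.alpha.T)    # p x (p - r), orthogonal to alpha
trends = np.cumsum(vecm_res.resid @ alpha_perp, axis=0)
print(trends.shape)                          # one column per common trend
```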

Second-Order Persistence: The I(2) Model

The I(2) model has a very rich structure but is algebraically more complex than the I(1) model, although the basic ideas are similar. The complexity might explain why there are relatively few applications in the literature. Another reason is that many economists find it implausible that economic variables move away from their equilibrium values for infinitely long periods. Hence, most economic relations should be either stationary or at most near I(1). While it is clearly correct that economic variables or relations do not wander away forever, this does not exclude the possibility that they can exhibit a persistence that is indistinguishable from a unit root or a double unit root process over finite samples. In this vein, Juselius (2012) argues that the classification of variables into single or double unit roots should be seen as a useful way of ordering the data into more homogeneous groups.

Nominal growth rates, in particular, are often found to be very persistent in one direction or the other and, thus, exhibiting little evidence of mean reversion. For example, over the last half decade many inflation rates in industrialized countries have been sufficiently persistent not to be rejected as I(1) by unit root testing. But if inflation rates are empirically I(1), then prices are I(2) and need to be analyzed in the I(2) model.
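The logic is easy to see in a simulation sketch: a random-walk (I(1)) inflation rate makes the price level, its cumulation, an I(2) process.

```python
# If inflation is I(1), prices (the cumulated inflation) are I(2).
import numpy as np

rng = np.random.default_rng(2)
inflation = np.cumsum(rng.normal(scale=0.1, size=500))  # I(1) inflation
prices = np.cumsum(inflation)                           # I(2) price level
# One difference of prices recovers the I(1) inflation series; only the
# second difference of prices is stationary.
print(np.allclose(np.diff(prices), inflation[1:]))      # True
```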

To investigate the possibility of I(2), (4) is rewritten in its equivalent form:

$$\Delta^2 x_t = \Gamma\Delta x_{t-1} + \alpha\beta'x_{t-1} + \mu_0 + \mu_1 t + \Phi D_t + \varepsilon_t,$$

where $\Gamma = -(I - \Gamma_1)$. If the differenced process also exhibits strong persistence so that $\Delta x_t \sim I(1)$ and, hence, $x_t \sim I(2)$, then (a linear transformation of) $\Gamma$ also has reduced rank. This is formulated as an additional reduced rank hypothesis:

$$\alpha_\perp'\Gamma\beta_\perp = \xi\eta',$$

where $\xi, \eta$ are $(p - r) \times s_1$ matrices and $\alpha_\perp, \beta_\perp$ are the orthogonal complements of $\alpha, \beta$, respectively (Johansen, 1992, 1995). While the I(1) reduced rank condition is associated with the levels of the variables, the I(2) condition is associated with the differenced variables. The intuition is that the differenced process also contains unit roots when data are I(2).

Nielsen and Rahbek (2007) derive the maximum likelihood trace test, which can be used to determine the values of $r$, $s_1$, and $s_2$, where $s_1$ stands for the number of I(1) trends, $s_2$ for the number of I(2) trends, and $p - r = s_1 + s_2$.

Because the I(2) condition is formulated as a reduced rank on the transformed Γ matrix, the latter is no longer unrestricted as in the I(1) model. To circumvent this problem the following parameterization (see Doornik & Juselius, 2017; Johansen, 1997, 2006) can be used:

$$\Delta^2 x_t = \alpha[(\beta', \beta_1)(x_{t-1}', t-1)' + (d', d_0)(\Delta x_{t-1}', 1)'] + \zeta(\tau', \tau_0)(\Delta x_{t-1}', 1)' + \Phi D_t + \varepsilon_t, \quad t = 1, \ldots, T \quad (11)$$

The relation in the square brackets corresponds to the polynomially cointegrated relation, $\tilde{\beta}'\tilde{x}_{t-1} + d'\Delta\tilde{x}_{t-1}$, with $\tilde{x}_t = [x_t', t]'$. It describes a situation where the deviation from a long-run static equilibrium, $\tilde{\beta}'\tilde{x}_t$, is a (near) I(1) process and, therefore, has to be combined with the differenced process, $d'\Delta\tilde{x}_t$, to become stationary. Such a relation can often be interpreted as a dynamic, rather than the static, equilibrium relation typical of the I(1) model.

The relation in the round brackets, $\zeta\tau'\Delta\tilde{x}_{t-1}$, where $\tau = [\tilde{\beta}, \tilde{\beta}_1]$, is associated with medium-run relations among the differenced variables. The cointegration relations $\tau'x_t$, consisting of $\tilde{\beta}'x_t$ and $\tilde{\beta}_1'x_t$, take the process from I(2) to I(1). The difference between the two is that the former can be made stationary either by polynomial cointegration ($\tilde{\beta}'\tilde{x}_t + d'\Delta\tilde{x}_t \sim I(0)$) or by differencing ($\tilde{\beta}'\Delta\tilde{x}_t \sim I(0)$), whereas the latter only by differencing ($\tilde{\beta}_1'\Delta x_t \sim I(0)$). While the economic interpretation of $\tau'\Delta x_t$ is not always straightforward, Juselius and Assenmacher (2017), Juselius (2017b), and Juselius and Stillwagon (2017) interpret it as a medium-run relationship among growth rates, describing, for example, momentum trading along the trend in the foreign exchange market. The latter is often due to technical trading (Frydman & Goldberg, 2011).

Identification When Data Are Nonstationary

In contrast to standard economic models, the CVAR does not distinguish between endogenous and exogenous variables: all stochastic variables are modeled, and exogeneity of a variable is tested as a zero row restriction on the $\alpha$ matrix rather than assumed from the outset. The separation between the r pulling and the $p - r$ pushing forces implies that the CVAR is inherently consistent with r equilibrium relations estimated in the form of stationary equilibrium errors, $\beta'x_t$, and $p - r$ exogenous trends, $\alpha_\perp'\sum_{i=1}^{t}\varepsilon_i$, where $\alpha_\perp'$ is a $(p - r) \times p$ matrix orthogonal to $\alpha$.

As discussed earlier, the unrestricted equilibrium errors $\beta'x_t$ are obtained by solving an eigenvalue problem. While the estimated $\beta$ are uniquely determined given the normalization of the eigenvectors, they cannot in general be given an economic interpretation without imposing further restrictions. In some cases an economic interpretation may not be needed, for example, if the purpose is forecasting rather than finding economic structures.

The exogenous trends are cumulations of latent "structural shocks" to the system, such as demand and/or supply shocks, estimated by a linear combination of the CVAR residuals, $\alpha_\perp'\hat{\varepsilon}_t$. Unless a variable is strongly exogenous (the corresponding rows in $\alpha$ and $\Gamma_i$ are zero), the exogenous trends $\alpha_\perp'\sum_{i=1}^{t}\varepsilon_i$ do not correspond to any single variable, $x_{j,t}$. When a variable $x_{j,t}$ is weakly but not strongly exogenous, it may, and often does, differ fundamentally from the corresponding exogenous trend $\sum_{i=1}^{t}\varepsilon_{j,i}$.

Identification of Pulling and Pushing Forces

The dichotomy of pulling and pushing forces in the CVAR makes it possible to address identification in four dimensions: the identification of (1) the long-run cointegration structure, (2) the short-run adjustment structure, (3) the exogenous driving shocks, and (4) the dynamics of the impulse responses (see Juselius, 2006, for a detailed treatment). The focus here is on (1) with some discussion of (2).

To illustrate the relationship between long-run and short-run identification, the CVAR model (4) is pre-multiplied by the current effects matrix A0:

$$A_0\Delta x_t = A_1\Delta x_{t-1} + a_1\beta'x_{t-1} + \mu_{0,a} + v_t, \quad v_t \sim NID(0, \Sigma), \quad (12)$$

where $A_1 = A_0\Gamma_1$, $a_1 = A_0\alpha$, $\mu_{0,a} = A_0\mu_0$, $v_t = A_0\varepsilon_t$, and $\Sigma = A_0\Omega A_0'$. It appears that $\beta$ is the same in the "reduced form" (4) and the "contemporaneous form" (12). Hence, $\beta$ can be estimated based on either form (Juselius, 2006). The fact that the estimate of $\beta$ is T-consistent (super-consistent) while the estimates of the short-run adjustment parameters are $\sqrt{T}$-consistent means that identification can be performed in two steps: (a) the identification of the long-run parameters, $\beta$, and (b) the identification of the short-run structure conditional on the identified $\hat{\beta}$ (Johansen, 1995). Johansen and Juselius (1994) show how to impose identifying restrictions on the long-run structure, $\beta'x_t$, and argue that one should separate between generic, empirical, and economic identification.

Generic identification of the r (simultaneous) long-run relations requires at least $r(r - 1)$ restrictions, and of the short-run adjustment equations at least $p(p - 1)$ restrictions. In both cases the restrictions have to satisfy the identification rank conditions derived for the CVAR model by Johansen (1995) and Johansen and Juselius (1994).3 This separation of long-run and short-run effects is extremely useful in empirical work as it simplifies an otherwise very complex identification task.

Empirical identification is generally satisfied when all estimated coefficients in a generically identified structure are statistically significant but fails if a coefficient necessary for identification is insignificant. This is because setting such a coefficient to zero will imply loss of generic identification.

Economic identification is satisfied when the estimated structure is meaningful and interpretable from an economic point of view.

The identification of $\beta'x_t$ is about finding meaningful relationships among endogenous and exogenous variables and is in many ways similar to a traditional identification exercise in simultaneous equations. Identification of the short-run structure is basically about how to identify short-run causal links in the data. This is achieved by imposing restrictions on the contemporaneous matrix, $A_0$, the transitory effects, $\Gamma_i$, and the adjustment coefficients, $\alpha$. Economic identification of the short-run structure generally requires the residuals to be uncorrelated. Large off-diagonal elements of the covariance matrix, $\Omega$, arise when the current changes of the system variables are strongly correlated. This can be because they are genuinely associated in a simultaneous way or because the variables are simultaneously affected by omitted variables. It is generally a non-trivial task to impose restrictions on the short-run structure that yield uncorrelated residuals and are at the same time economically meaningful.

Identification of the Long-Run Structure

An identified cointegration structure consists of r irreducible cointegration relations, where irreducibility implies that stationarity is lost if one of the variables is omitted from the relation (Davidson, 1998). Hence, they contain exactly the number of variables needed to make the relation stationary: no less, no more. There is, however, no reason to expect the number of irreducible relations to be the same as the number of postulated economic relations. Consequently, a cointegration relation does not necessarily correspond to a hypothetical economic relation. The latter is often a linear combination of two or more irreducible cointegration relations weighted by $\alpha$ coefficients, for example, $\alpha_{1j}\beta_1'x_t + \alpha_{2j}\beta_2'x_t$, where $\beta_1'x_t$ and $\beta_2'x_t$ are irreducible.

Because any linear combination of r cointegration relations is also a stationary relation, there are usually many ways to identify a structure of irreducible relations. For example, if $x_{1,t} - x_{2,t}$ and $x_{2,t} - x_{3,t}$ are stationary, then $x_{1,t} - x_{3,t}$ is also stationary. The long-run structure $(\beta_1'x_t, \beta_2'x_t)$ can then be identified by either $(x_{1,t} - x_{2,t},\ x_{2,t} - x_{3,t})$ or $(x_{1,t} - x_{2,t},\ x_{1,t} - x_{3,t})$, noting that one of the sets may not be economically identified.
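A tiny simulation sketch of this example, with three hypothetical series driven by one common stochastic trend; both sets of relations are stationary, so the choice between them cannot be made on statistical grounds alone:

```python
# Three series sharing one stochastic trend: (x1-x2, x2-x3) and
# (x1-x2, x1-x3) are both valid stationary identification schemes.
import numpy as np

rng = np.random.default_rng(3)
trend = np.cumsum(rng.normal(size=1000))
x1 = trend + rng.normal(size=1000)
x2 = trend + rng.normal(size=1000)
x3 = trend + rng.normal(size=1000)

for name, rel in [("x1-x2", x1 - x2), ("x2-x3", x2 - x3), ("x1-x3", x1 - x3)]:
    print(name, rel.std())   # bounded spreads, unlike the trending levels
```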

Thus, one may think of a generically identified structure of r irreducible cointegration relations, $\beta'x_t$, as building blocks that can be used to construct meaningful economic relations with the help of the $\alpha$ coefficients. To make economic sense (to satisfy economic identification), a cointegration relation (either by itself or as a linear combination weighted by the $\alpha$ coefficients) has to be interpretable as a deviation from an underlying equilibrium relation. Hence, economic identification is generally incomplete without combining irreducible cointegration relations with the short-run adjustment coefficients. This is different from a traditional simultaneous equation model associating a number of endogenous variables with a number of exogenous variables and lagged endogenous and exogenous variables. Identification is then mostly achieved by exclusion restrictions, and causality is implicitly assumed by normalizing on a postulated endogenous variable in each equation. For further reading, see Juselius (2015).

The Curse of Dimensionality

Identification of the CVAR model is often challenging but still feasible as long as the dimension of the system is not too big. In contrast, economic systems are often large and complex. To handle this dilemma one can exploit certain invariance properties of cointegration when searching for structure. For example, the cointegration property is invariant to extensions of the information set. If cointegration is found between a set of variables in a small CVAR model, the same cointegration relation will be found in a CVAR model with a larger set of variables. Adding new variables to the CVAR model is, however, likely to increase the cointegration rank, and, hence, new cointegration relations would have to be identified. The invariance property of a cointegration relation does not, however, extend to the short-run adjustment coefficients. For example, a variable found to be exogenous in a smaller model may no longer be so in a larger model (Johansen & Juselius, 2014). Also, allowing for simultaneous effects among the endogenous variables is likely to change $\alpha$ and $\Gamma_1$ in (4). While this suggests that economic identification should be based on a fairly complete CVAR model, experience shows that identification of the long-run structure tends to become increasingly difficult as the number of variables increases. Fortunately, the invariance property of a cointegration relation can be used to gradually expand the CVAR, building on previously found cointegration relations. Such a procedure allows us to systematically exploit the effect of the ceteris paribus assumption on the empirical conclusions (for an illustration, see Juselius, 2006).

Linking Theory With Empirical Evidence: A CVAR Scenario

How to link a theoretical model with empirical evidence in a scientifically valid way is a tremendously difficult task that has been debated for as long as economics has existed. Among the early pioneers, Ragnar Frisch and Trygve Haavelmo can be mentioned as forerunners of the modern likelihood-based approach to empirical economics. Juselius (2015) argues that the CVAR model builds on the principles of Haavelmo's 1944 Nobel Prize–winning monograph and additionally provides solutions to most of the then outstanding econometric problems.

In his famous monograph from 1944, Haavelmo introduced the concept of a “designed experiment for data obtained by passive observation” as a means to discuss the difficult link between a theory model and macroeconomic data. Hoover and Juselius (2015) argue that a so called “theory-consistent CVAR scenario” may represent such an experiment. Juselius (2015) translates one of Haavelmo’s own models into a CVAR scenario and shows that all underlying assumptions of the former can be tested by the CVAR in a likelihood-based framework.

The dilemma facing an empirical economist/econometrician is that there are many economic models but only one economic reality: Which of them should be chosen? Instead of choosing one model and forcing it onto the data, the CVAR model chooses to structure the economic data to obtain broad confidence intervals within which potentially relevant economic models should fall. The link between the theory model and the data is achieved by formulating a theory-consistent CVAR scenario by carefully matching basic assumptions on the theoretical model’s shock structure and steady-state behavior with testable hypotheses on the CVAR’s common stochastic trends and cointegration relations (Juselius, 2006, 2017a, 2017b; Juselius & Franchi, 2007; Møller, 2008). Such a scenario describes a set of testable empirical regularities one should expect to see in the data if basic assumptions of the theoretical model were empirically valid. A theoretical model that passes the first check of such basic properties is potentially an empirically relevant model.

The advantage of such an approach is that the number of autonomous shocks is tested rather than assumed; the stationarity of a steady-state relation is tested rather than assumed; the exogeneity of a variable is tested rather than assumed; long-run price homogeneity is tested rather than assumed, and so on. Another advantage is that a CVAR scenario can also be used to discriminate between competing models. Therefore, its systematic use is likely to enhance the ability to develop empirically relevant economic models. This is illustrated in Juselius (2017a, 2017b) by applying the procedure to two types of monetary models for exchange rate determination, one relying on the rational expectations hypothesis and the other on imperfect knowledge–based expectations. When tested, the data failed to support the rational expectations model, whereas the imperfect knowledge–based model obtained a remarkable fit. In a similar vein, Juselius (2006) formulates a theory-consistent CVAR scenario for a monetary model of inflation dynamics and finds that most of the basic assumptions fail to obtain empirical support. Juselius and Franchi (2007) formulate a CVAR scenario for a real business cycle model in Ireland (2004) and find that the data support a Keynesian explanation.

In all of the above cases, the scenario analysis was able to uncover features in the data that were inconsistent with or absent in the proposed theoretical model, thereby suggesting how to modify the model in an empirically relevant way. In particular, the pronounced persistence in the data (measured as near I(2) and seemingly associated with financial deregulation) seemed to indicate that unregulated markets tend to drive prices away from equilibrium values for extended periods of time.

The structure of a CVAR scenario resembles in many ways the so-called dynamic stochastic general equilibrium (DSGE) model. The main difference is that the pulling and pushing structures of the latter are based on fairly detailed theoretical assumptions of an economic model and these assumptions are not subjected to empirical scrutiny to the same extent as the CVAR scenarios. If the DSGE model is a good description of the empirical reality, then the two approaches would more or less coincide. See, for example, Juselius and Franchi (2007), where a DSGE model was exposed to a battery of CVAR scenario tests and failed on basically all accounts.

Because many economic models, including the DSGE models, tend to impose many untested restrictions on the data, the empirical model analysis is prone to be less open to signals suggesting that the theory is incorrect or in need of modification (see Juselius, 2011a, 2011b; Colander, Howitt, Kirman, Leijonhufvud, & Mehrling, 2008). Several papers in the special issue of the electronic journal Economics illustrate this point (Juselius, 2009a). Therefore, to assume that we know what the empirical model should tell us and then insist that the results follow can potentially be a disaster for our empirical understanding of the economy, as the Great Recession tragically illustrates. The CVAR methodology has been developed as a tool for avoiding confirmation bias in economics by emphasizing that falsification is more important than confirmation.

Further Reading

  • Hoover, K. D., Johansen, S., & Juselius, K. (2009). Allowing the data to speak freely: The macroeconometrics of the cointegrated vector autoregression. American Economic Review, 98, 251–255.
  • Johansen, S. (1996). Likelihood-based inference in cointegrated vector autoregressive models. Oxford: Oxford University Press.
  • Juselius, K. (2006). The cointegrated VAR model: Methodology and applications. Oxford: Oxford University Press.
  • Juselius, K. (2015). Haavelmo’s probability approach and the cointegrated VAR model. Econometric Theory, 31(2), 213–232.

References

  • Boswijk, H. P., Cavaliere, G., Rahbek, A., & Taylor, A. M. R. (2016). Inference on co-integration parameters in heteroskedastic vector autoregressions. Journal of Econometrics, 192(1), 64–85.
  • Cavaliere, G., Rahbek, A., & Taylor, A. M. R. (2014). Bootstrap determination of the co-integration rank in heteroskedastic VAR models. Econometric Reviews, 33(5–6), 606–650.
  • Clements, M. P., & Hendry, D. F. (1999). Forecasting non-stationary time series. Cambridge, MA: MIT Press.
  • Clements, M. P., & Hendry, D. F. (2008). Economic forecasting in a changing world. Capitalism and Society, 3, 1–18.
  • Colander, D., Goldberg, M., Haas, A., Juselius, K., Kirman, A., Lux, T., & Sloth, B. (2009). The financial crisis and the systemic failure of the economics profession. Critical Review, 21(2–3), 249–267.
  • Colander, D., Howitt, P., Kirman, A., Leijonhufvud A., & Mehrling, P. (2008). Beyond DSGE models: Toward an empirically based macroeconomics. American Economic Review, 98, 236–240.
  • Davidson, J. (1998). Structural relations, cointegration and identification: Some simple results and their application. Journal of Econometrics, 87(1), 87–113.
  • Dennis, J. G., Hansen, H., Johansen, S., & Juselius, K. (2006). CATS in RATS. Cointegration Analysis of Time Series. Version 2. Evanston, IL: Estima.
  • Doornik, J., & Juselius, K. (2017). Cointegration Analysis of Time Series using CATS 3 for OxMetrics. London: Timberlake.
  • Elliott, G. (1998). On the robustness of cointegration methods when regressors almost have unit roots. Econometrica, 66, 149–158.
  • Engle, R. F., Hendry, D. F., & Richard, J.-F. (1983). Exogeneity. Econometrica, 51(2), 277–304.
  • Frydman, R., & Goldberg, M. (2011). Beyond mechanical markets: Risk and the role of asset price swings. Princeton, NJ: Princeton University Press.
  • Frydman, R., & Goldberg, M. (2013). Change and expectations in macroeconomic models: Recognizing the limits to knowability. Journal of Economic Methodology, 20, 118–138.
  • Haavelmo, T. (1944). The probability approach to econometrics. Econometrica, 12(Suppl.), 1–118.
  • Hands, D. W. (2013). Introduction to symposium on reflexivity and economics: George Soros’s theory of reflexivity and the methodology of economic science. Journal of Economic Methodology, 20, 303–308.
  • Hendry, D. F., & Juselius, K. (2000). Explaining cointegration analysis: Part I. The Energy Journal, 21(1), 1–42.
  • Hendry, D. F., & Juselius, K. (2001). Explaining cointegration analysis: Part II. The Energy Journal, 22(1), 75–120.
  • Hendry, D. F., & Mizon, G. E. (1993). Evaluating econometric models by encompassing the VAR. In P. C. B. Phillips (Ed.), Models, methods and applications of econometrics (pp. 272–300). Oxford: Blackwell.
  • Hommes, C. H. (2006). Heterogeneous agent models in economics and finance. In L. Tesfatsion & K. L. Judd (Eds.), Handbook of computational economics (Vol. 2, pp. 1109–1186). Amsterdam: Elsevier.
  • Hommes, C. H. (2013). Reflexivity, expectations feedback and almost self-fulfilling equilibria: Economic theory, empirical evidence and laboratory experiments. Journal of Economic Methodology, 20, 406–419.
  • Hoover, K. (2006). The past as future: The Marshallian approach to post Walrasian econometrics. In D. Colander (Ed.), Post Walrasian macroeconomics: Beyond the dynamic stochastic general equilibrium model (pp. 239–257). Cambridge, UK: Cambridge University Press.
  • Hoover, K. D., Johansen, S., & Juselius, K. (2008). Allowing the data to speak freely: The macroeconometrics of the cointegrated vector autoregression. American Economic Review, 98(2), 251–255.
  • Hoover, K., & Juselius, K. (2015). Trygve Haavelmo’s experimental methodology and scenario analysis in a cointegrated vector autoregression. Econometric Theory, 31(2), 249–274.
  • Johansen, S. (1992). A representation of vector autoregressive processes integrated of order 2. Econometric Theory, 8, 188–202.
  • Johansen, S. (1995). Identifying restrictions of linear equations: With applications to simultaneous equations and cointegration. Journal of Econometrics, 69(1), 111–132.
  • Johansen, S. (1996). Likelihood-based inference in cointegrated vector autoregressive models. Oxford: Oxford University Press.
  • Johansen, S. (1997). Likelihood analysis of the I(2) model. Scandinavian Journal of Statistics, 24(4), 433–462.
  • Johansen, S. (2002a). A small sample correction for the test of cointegrating rank in the vector autoregressive model. Econometrica, 70(5), 1929–1961.
  • Johansen, S. (2002b). A small sample correction for tests of hypotheses on the cointegrating vectors. Journal of Econometrics, 111(2), 195–221.
  • Johansen, S. (2006). Statistical analysis of hypotheses on the cointegrating relations in the I(2) model. Journal of Econometrics, 132, 81–115.
  • Johansen, S. (2012). The analysis of nonstationary time series using regression, correlation and cointegration. Contemporary Economics, 6(2), 40–57.
  • Johansen, S., & Juselius, K. (1994). Identification of the long-run and short-run structure: An application to the ISLM model. Journal of Econometrics, 63, 7–36.
  • Johansen, S., & Juselius, K. (2014). An asymptotic invariance property of the common trends under linear transformations of the data. Journal of Econometrics, 178(Pt. 2), 310–315.
  • Juselius, K. (2006). The cointegrated VAR model: Methodology and applications. Oxford: Oxford University Press.
  • Juselius, K. (2009a). Special issue on using econometrics for assessing economic models—An introduction. Economics: The Open-Access, Open-Assessment E-Journal, 3, 2009-28.
  • Juselius, K. (2009b). The long swings puzzle: What the data tell when allowed to speak freely. In T. C. Mills & K. Patterson (Eds.), The new Palgrave handbook of empirical econometrics. London: Macmillan.
  • Juselius, K. (2011a). On the role of theory and evidence in macroeconomics. In W. Hands & J. Davis (Eds.), The Elgar companion to recent economic methodology (p. 27). Edward Elgar.
  • Juselius, K. (2011b). Time to reject the privileging of economic theory over empirical evidence? A reply to Lawson. Cambridge Journal of Economics, 35(2), 423–436.
  • Juselius, K. (2012). Imperfect knowledge, asset price swings and structural slumps: A cointegrated VAR analysis of their interdependence. In E. S. Phelps & R. Frydman (Eds.), Rethinking expectations: The way forward for macroeconomics (pp. 328–350). Princeton, NJ: Princeton University Press.
  • Juselius, K. (2015). Haavelmo’s probability approach and the cointegrated VAR model. Econometric Theory, 31(2), 213–232.
  • Juselius, K. (2017a). A theory-consistent CVAR scenario: Testing a rational expectations based monetary model. Department of Economics, University of Copenhagen.
  • Juselius, K. (2017b). Using a theory-consistent CVAR scenario to test an exchange rate model based on imperfect knowledge. Econometrics.
  • Juselius, K., & Assenmacher, K. (2017). Real exchange rate persistence and the excess return puzzle: The case of Switzerland versus the US. Journal of Applied Econometrics.
  • Juselius, K., & Franchi, M. (2007). Taking a DSGE model to the data meaningfully. Economics: The Open-Access, Open-Assessment E-Journal, 4.
  • Juselius, K., & Stillwagon, J. (2017). Are outcomes driving expectations or the other way around? An I(2) CVAR analysis of interest rate expectations in the dollar/pound market. University of Copenhagen, Economics Department.
  • Koopmans, T. C., Rubin, H., & Leipnik, R. B. (1950). Measuring the equation systems of dynamic economics. In T. C. Koopmans (Ed.), Statistical inference in dynamic economic models, Cowles Commission Research. New York: John Wiley & Sons.
  • Møller, N. F. (2008). Bridging economic theory models and the cointegrated vector autoregressive model. Economics: The Open-Access, Open-Assessment E-Journal, 2, 36.
  • Nielsen, H. B. (2008). Influential observations in cointegrated VAR models: Danish money demand 1973–2003. The Econometrics Journal, 11(1), 1–19.
  • Nielsen, H. B., & Rahbek, A. (2007). The likelihood ratio test for cointegration ranks in the I(2) model. Econometric Theory, 23, 615–637.
  • Soros, G. (1987). The alchemy of finance. Hoboken, NJ: John Wiley.
  • Spanos, A. (2009). The pre-eminence of theory versus the European CVAR perspective in macroeconometric modeling. Economics: The Open-Access, Open-Assessment E-Journal, 3.
  • Wald, A. (1950). A note on the identification of economic relations. In T. C. Koopmans (Ed.), Statistical inference in dynamic economic models, Cowles Commission Research. New York: John Wiley & Sons.

Notes

  • 1. Note, however, that a high value of λi can also indicate a large ratio between the number of estimated parameters and the number of observations; the simulation sketch below illustrates this small-sample effect.
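The following is a minimal simulation sketch of this point, not taken from the article: it uses Python with statsmodels, all parameter choices (sample size, dimension, lag lengths, replications) are illustrative, and the data are independent random walks so that the true cointegration rank is zero. With a fixed sample, a longer lag length means more estimated parameters, and the largest eigenvalue from the Johansen procedure tends to be inflated accordingly.

```python
# Illustrative simulation (assumed setup, not from the article): in a fixed
# sample, a longer lag length (more estimated parameters) tends to inflate
# the estimated eigenvalues even though the series are independent random
# walks, i.e., the true cointegration rank is zero.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(0)
T, p, reps = 80, 4, 200  # sample size, dimension, replications (illustrative)

for k_ar_diff in (1, 6):  # short versus long lag length
    largest = []
    for _ in range(reps):
        x = rng.standard_normal((T, p)).cumsum(axis=0)  # p independent I(1) series
        res = coint_johansen(x, det_order=0, k_ar_diff=k_ar_diff)
        largest.append(res.eig.max())
    print(f"lags = {k_ar_diff}: mean largest eigenvalue = {np.mean(largest):.3f}")
```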

  • 2. The characteristic roots can be calculated either as the solutions to the characteristic polynomial of the VAR model or as the eigenvalues of the VAR model in companion form. In the first case, the roots of an I(1) model lie either on or outside the unit circle; in the second case, either on or inside the unit circle (see Juselius, 2006). The companion-form calculation is sketched below.
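As a concrete illustration, the following minimal sketch (in Python, with hypothetical coefficient matrices that are not estimates from any model discussed here) computes the characteristic roots of a bivariate VAR(2) as the eigenvalues of its companion matrix. With this convention, the roots of an I(1) model lie on or inside the unit circle.

```python
# Minimal sketch (hypothetical coefficients): the characteristic roots of a
# bivariate VAR(2), x_t = Pi1 x_{t-1} + Pi2 x_{t-2} + eps_t, computed as the
# eigenvalues of the companion matrix [[Pi1, Pi2], [I, 0]].
import numpy as np

Pi1 = np.array([[0.6, 0.1],
                [0.0, 0.9]])
Pi2 = np.array([[0.3, -0.1],
                [0.0,  0.1]])

p = Pi1.shape[0]
companion = np.block([[Pi1, Pi2],
                      [np.eye(p), np.zeros((p, p))]])
roots = np.linalg.eigvals(companion)
print(np.round(np.abs(roots), 3))
# One root has modulus 1 (the unit root of an I(1) process); the remaining
# roots lie strictly inside the unit circle.
```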

  • 3. Similar rank conditions were already established for the traditional simultaneous equations system by Koopmans, Rubin, and Leipnik (1950) and Wald (1950).