Statistics LibreTexts

3.2: Causality and Invertibility

While a moving average process of order \(q\) will always be stationary without conditions on the coefficients \(\theta_1,\ldots,\theta_q\), some deeper thoughts are required in the case of AR(\(p\)) and ARMA(\(p,q\)) processes. For simplicity, we start by investigating the autoregressive process of order one, which is given by the equations \(X_t=\phi X_{t-1}+Z_t\) (writing \(\phi=\phi_1\)). Repeated iterations yield that

\[ X_t=\phi X_{t-1}+Z_t=\phi^2X_{t-2}+Z_t+\phi Z_{t-1}=\ldots=\phi^NX_{t-N}+\sum_{j=0}^{N-1}\phi^jZ_{t-j}. \]

Letting \(N\to\infty\), it can now be shown that, with probability one,

\[ X_t=\sum_{j=0}^\infty\phi^jZ_{t-j} \]

is the weakly stationary solution to the AR(1) equations, provided that \(|\phi|<1\). These calculations indicate, moreover, that an autoregressive process of order one can be represented as a linear process with coefficients \(\psi_j=\phi^j\).
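The linear-process representation can be checked numerically. The following sketch (in Python, although the chapter's own computations use R; all names and parameter values are our own choices) builds a truncated version of \(X_t=\sum_{j=0}^\infty\phi^jZ_{t-j}\) and verifies that it satisfies the AR(1) equations up to a geometrically small truncation error:

```python
import numpy as np

# Illustrative sketch: build the truncated linear process
# X_t = sum_{j=0}^{burn-1} phi^j Z_{t-j} and check that it satisfies
# the AR(1) equations X_t = phi*X_{t-1} + Z_t up to an error of order phi^burn.
rng = np.random.default_rng(0)
phi, n, burn = 0.7, 500, 200
Z = rng.normal(size=n + burn)
psi = phi ** np.arange(burn)               # linear-process weights psi_j = phi^j
X = np.array([psi @ Z[t - burn + 1 : t + 1][::-1] for t in range(burn, n + burn)])

resid = X[1:] - (phi * X[:-1] + Z[burn + 1 :])
print(np.max(np.abs(resid)))               # numerically zero (order phi^200)
```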

Example 3.2.1: Mean and ACVF of an AR(1) process

Since an autoregressive process of order one has been identified as an example of a linear process, one can easily determine its expected value as

\[ E[X_t]=\sum_{j=0}^\infty\phi^jE[Z_{t-j}]=0,\qquad t\in\mathbb{Z}. \]

For the ACVF, it is obtained that

\[ \begin{aligned} \gamma(h)&=\mathrm{Cov}(X_{t+h},X_t)=E\left[\sum_{j=0}^\infty\phi^jZ_{t+h-j}\sum_{k=0}^\infty\phi^kZ_{t-k}\right]\\ &=\sigma^2\sum_{k=0}^\infty\phi^{k+h}\phi^k=\sigma^2\phi^h\sum_{k=0}^\infty\phi^{2k}=\frac{\sigma^2\phi^h}{1-\phi^2}, \end{aligned} \]

where \(h\geq 0\). This determines the ACVF for all \(h\) using that \(\gamma(-h)=\gamma(h)\). It is also immediate that the ACF satisfies \(\rho(h)=\phi^h\). See also Example 3.1.1 for comparison.
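The closed form for \(\gamma(h)\) can be sanity-checked by simulation. The following Python sketch (illustrative only; the parameter values are arbitrary choices) compares empirical autocovariances of a long simulated AR(1) path with the formula \(\sigma^2\phi^h/(1-\phi^2)\):

```python
import numpy as np

# Monte Carlo sanity check of the AR(1) ACVF formula
# gamma(h) = sigma^2 * phi^h / (1 - phi^2).
rng = np.random.default_rng(42)
phi, sigma, n = 0.5, 1.0, 200_000
Z = rng.normal(scale=sigma, size=n)
X = np.empty(n)
X[0] = Z[0]
for t in range(1, n):
    X[t] = phi * X[t - 1] + Z[t]          # AR(1) recursion

def acvf_hat(x, h):
    """Sample autocovariance at lag h (the mean is known to be zero)."""
    return float(np.mean(x[h:] * x[: len(x) - h]))

def gamma(h):
    return sigma**2 * phi**h / (1 - phi**2)

for h in range(3):
    print(h, round(acvf_hat(X, h), 3), round(gamma(h), 3))
```

The sample autocovariances agree with the theoretical values up to the usual Monte Carlo error of order \(n^{-1/2}\).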

Example 3.2.2: Nonstationary AR(1) processes

In Example 1.2.3 we have introduced the random walk as a nonstationary time series. It can also be viewed as a nonstationary AR(1) process with parameter \(\phi=1\). In general, autoregressive processes of order one with coefficient \(|\phi|>1\) are called explosive, for they do not admit a weakly stationary solution that could be expressed as a linear process. However, one may proceed as follows. Rewrite the defining equations of an AR(1) process as

\[ X_t=-\phi^{-1}Z_{t+1}+\phi^{-1}X_{t+1},\qquad t\in\mathbb{Z}. \]

Apply now the same iterations as before to arrive at

\[ X_t=\phi^{-N}X_{t+N}-\sum_{j=1}^{N}\phi^{-j}Z_{t+j},\qquad t\in\mathbb{Z}. \]

Note that in the weakly stationary case, the present observation has been described in terms of past innovations. The representation in the last equation, however, contains only future terms with time indices larger than the present time \(t\). From a statistical point of view this does not make much sense, even though by arguments identical to those above we may obtain

\[ X_t=-\sum_{j=1}^\infty\phi^{-j}Z_{t+j},\qquad t\in\mathbb{Z}, \]

as the weakly stationary solution in the explosive case.

The result of the previous example leads to the notion of causality, which means that the process \((X_t:t\in\mathbb{Z})\) has a representation in terms of the white noise \((Z_s:s\leq t)\) and is hence uncorrelated with the future as given by \((Z_s:s>t)\). We give the definition for the general ARMA case.

Definition: Causality

An ARMA(\(p,q\)) process given by (3.1.1) is causal if there is a sequence \((\psi_j:j\in\mathbb{N}_0)\) such that \(\sum_{j=0}^\infty|\psi_j|<\infty\) and

\[ X_t=\sum_{j=0}^\infty\psi_jZ_{t-j},\qquad t\in\mathbb{Z}. \]

Causality means that an ARMA time series can be represented as a linear process. It was seen earlier in this section how an AR(1) process whose coefficient satisfies the condition \(|\phi|<1\) can be converted into a linear process. It was also shown that this is impossible if \(|\phi|>1\). The conditions on the autoregressive parameter \(\phi\) can be restated in terms of the corresponding autoregressive polynomial \(\phi(z)=1-\phi z\) as follows. It holds that

\(|\phi|<1\) if and only if \(\phi(z)\neq 0\) for all \(|z|\leq 1\),

\(|\phi|>1\) if and only if \(\phi(z)\neq 0\) for all \(|z|\geq 1\).

It turns out that the characterization in terms of the zeroes of the autoregressive polynomial carries over from the AR(1) case to the general ARMA(\(p,q\)) case. Moreover, the \(\psi\)-weights of the resulting linear process have an easy representation in terms of the polynomials \(\phi(z)\) and \(\theta(z)\). The result is summarized in the next theorem.

Theorem 3.2.1

Let \((X_t:t\in\mathbb{Z})\) be an ARMA(\(p,q\)) process such that the polynomials \(\phi(z)\) and \(\theta(z)\) have no common zeroes. Then \((X_t:t\in\mathbb{Z})\) is causal if and only if \(\phi(z)\neq 0\) for all \(z\in\mathbb{C}\) with \(|z|\leq 1\). The coefficients \((\psi_j:j\in\mathbb{N}_0)\) are determined by the power series expansion

\[ \psi(z)=\sum_{j=0}^\infty\psi_jz^j=\frac{\theta(z)}{\phi(z)},\qquad |z|\leq 1. \]

A concept closely related to causality is invertibility. This notion is motivated by the following example, which studies properties of a moving average time series of order 1.

Example 3.2.3

Let \((X_t:t\in\mathbb{Z})\) be an MA(1) process with parameter \(\theta=\theta_1\). It is an easy exercise to compute the ACVF and the ACF as

\[ \gamma(h)=\begin{cases}(1+\theta^2)\sigma^2, & h=0,\\ \theta\sigma^2, & h=1,\\ 0, & h>1,\end{cases}\qquad \rho(h)=\begin{cases}1, & h=0,\\ \theta(1+\theta^2)^{-1}, & h=1,\\ 0, & h>1.\end{cases} \]

These results lead to the conclusion that \(\rho(h)\) does not change if the parameter \(\theta\) is replaced with \(\theta^{-1}\). Moreover, there exist pairs \((\theta,\sigma^2)\) that lead to the same ACVF, for example \((5,1)\) and \((1/5,25)\). Consequently, the two MA(1) models

\[ X_t=Z_t+\tfrac{1}{5}Z_{t-1},\qquad t\in\mathbb{Z},\qquad (Z_t:t\in\mathbb{Z})\sim\text{iid }N(0,25), \]

and

\[ X_t=\tilde{Z}_t+5\tilde{Z}_{t-1},\qquad t\in\mathbb{Z},\qquad (\tilde{Z}_t:t\in\mathbb{Z})\sim\text{iid }N(0,1), \]

are indistinguishable because we only observe \(X_t\) but not the noise variables \(Z_t\) and \(\tilde{Z}_t\).
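The indistinguishability of the two models can be verified directly from the ACVF formulas above; the following short Python check (the function name is our own) evaluates \(\gamma(0)\) and \(\gamma(1)\) for both parameter pairs:

```python
# Direct check that the two MA(1) parameterizations of Example 3.2.3
# produce the same ACVF.
def ma1_acvf(theta, sigma2):
    """gamma(0) and gamma(1) of X_t = Z_t + theta*Z_{t-1}, Var(Z_t) = sigma2."""
    return [(1 + theta**2) * sigma2, theta * sigma2]

g1 = ma1_acvf(1 / 5, 25)   # theta = 1/5, sigma^2 = 25
g2 = ma1_acvf(5, 1)        # theta = 5,   sigma^2 = 1
print(g1, g2)              # both approximately [26.0, 5.0]
```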

For convenience, the statistician will pick the model that satisfies the invertibility criterion defined next, which specifies that the noise sequence can be represented as a linear process in the observations.

Definition: Invertibility

An ARMA(\(p,q\)) process given by (3.1.1) is invertible if there is a sequence \((\pi_j:j\in\mathbb{N}_0)\) such that \(\sum_{j=0}^\infty|\pi_j|<\infty\) and

\[ Z_t=\sum_{j=0}^\infty\pi_jX_{t-j},\qquad t\in\mathbb{Z}. \]

Theorem 3.2.2

Let \((X_t:t\in\mathbb{Z})\) be an ARMA(\(p,q\)) process such that the polynomials \(\phi(z)\) and \(\theta(z)\) have no common zeroes. Then \((X_t:t\in\mathbb{Z})\) is invertible if and only if \(\theta(z)\neq 0\) for all \(z\in\mathbb{C}\) with \(|z|\leq 1\). The coefficients \((\pi_j:j\in\mathbb{N}_0)\) are determined by the power series expansion

\[ \pi(z)=\sum_{j=0}^\infty\pi_jz^j=\frac{\phi(z)}{\theta(z)},\qquad |z|\leq 1. \]

It is assumed from now on that all ARMA sequences are causal and invertible unless explicitly stated otherwise. The final example of this section highlights the usefulness of the established theory. It deals with parameter redundancy and the calculation of the causality and invertibility sequences \((\psi_j:j\in\mathbb{N}_0)\) and \((\pi_j:j\in\mathbb{N}_0)\).

Example 3.2.4: Parameter redundancy

Consider the ARMA equations

\[ X_t=.4X_{t-1}+.21X_{t-2}+Z_t+.6Z_{t-1}+.09Z_{t-2}, \]

which seem to generate an ARMA(2,2) sequence. However, the autoregressive and moving average polynomials have a common zero:

\[ \tilde{\phi}(z)=1-.4z-.21z^2=(1-.7z)(1+.3z),\qquad \tilde{\theta}(z)=1+.6z+.09z^2=(1+.3z)^2. \]

Therefore, one can reduce the ARMA equations to a sequence of order (1,1) and obtain

\[ X_t=.7X_{t-1}+Z_t+.3Z_{t-1}. \]

Now, the corresponding polynomials have no common roots. Note that the roots of \(\phi(z)=1-.7z\) and \(\theta(z)=1+.3z\) are \(10/7>1\) and \(-10/3<-1\), respectively, so both lie outside the unit circle. Thus Theorems 3.2.1 and 3.2.2 imply that causal and invertible solutions exist. In the following, the corresponding coefficients in the expansions

\[ X_t=\sum_{j=0}^\infty\psi_jZ_{t-j}\qquad\text{and}\qquad Z_t=\sum_{j=0}^\infty\pi_jX_{t-j},\qquad t\in\mathbb{Z}, \]

are calculated. We start with the causality sequence \((\psi_j:j\in\mathbb{N}_0)\). Writing, for \(|z|\leq 1\),

\[ \sum_{j=0}^\infty\psi_jz^j=\psi(z)=\frac{\theta(z)}{\phi(z)}=\frac{1+.3z}{1-.7z}=(1+.3z)\sum_{j=0}^\infty(.7z)^j, \]

it can be obtained from a comparison of coefficients that

\[ \psi_0=1\qquad\text{and}\qquad \psi_j=(.7+.3)(.7)^{j-1}=(.7)^{j-1},\qquad j\in\mathbb{N}. \]

Similarly one computes the invertibility coefficients \((\pi_j:j\in\mathbb{N}_0)\) from the equation

\[ \sum_{j=0}^\infty\pi_jz^j=\pi(z)=\frac{\phi(z)}{\theta(z)}=\frac{1-.7z}{1+.3z}=(1-.7z)\sum_{j=0}^\infty(-.3z)^j \]

(\(|z|\leq 1\)) as

\[ \pi_0=1\qquad\text{and}\qquad \pi_j=(-1)^j(.3+.7)(.3)^{j-1}=(-1)^j(.3)^{j-1},\qquad j\in\mathbb{N}. \]

Together, the previous calculations yield the explicit representations

\[ X_t=Z_t+\sum_{j=1}^\infty(.7)^{j-1}Z_{t-j}\qquad\text{and}\qquad Z_t=X_t+\sum_{j=1}^\infty(-1)^j(.3)^{j-1}X_{t-j}. \]
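Since \(\psi(z)=\theta(z)/\phi(z)\) and \(\pi(z)=\phi(z)/\theta(z)\), the two expansions must satisfy \(\psi(z)\pi(z)=1\). This gives a quick consistency check of the closed forms just derived; the following Python sketch (illustrative only) convolves the two coefficient sequences:

```python
import numpy as np

# Consistency check: psi(z)*pi(z) = 1, so the convolution of the two
# coefficient sequences from Example 3.2.4 must give 1, 0, 0, ...
J = 20
psi = np.array([1.0] + [0.7 ** (j - 1) for j in range(1, J)])
pi_ = np.array([1.0] + [(-1) ** j * 0.3 ** (j - 1) for j in range(1, J)])

# The first J coefficients of the product psi(z)*pi(z) are exact,
# since each uses only psi_0..psi_j and pi_0..pi_j.
prod = np.convolve(psi, pi_)[:J]
print(np.round(prod, 12))          # approximately [1, 0, 0, ...]
```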

In the remainder of this section, a general way is provided to determine the weights \((\psi_j:j\geq 1)\) for a causal ARMA(\(p,q\)) process given by \(\phi(B)X_t=\theta(B)Z_t\), where \(\phi(z)\neq 0\) for all \(z\in\mathbb{C}\) such that \(|z|\leq 1\). Since \(\psi(z)=\theta(z)/\phi(z)\) for these \(z\), the weight \(\psi_j\) can be computed by matching the corresponding coefficients in the equation \(\psi(z)\phi(z)=\theta(z)\), that is,

\[ (\psi_0+\psi_1z+\psi_2z^2+\cdots)(1-\phi_1z-\cdots-\phi_pz^p)=1+\theta_1z+\cdots+\theta_qz^q. \]

Recursively solving for ψ0,ψ1,ψ2, gives

\[ \psi_0=1,\qquad \psi_1-\phi_1\psi_0=\theta_1,\qquad \psi_2-\phi_1\psi_1-\phi_2\psi_0=\theta_2,\qquad\ldots \]

and so on as long as \(j<\max\{p,q+1\}\). The general solution can be stated as

\[ \psi_j-\sum_{k=1}^j\phi_k\psi_{j-k}=\theta_j, \qquad 0\leq j<\max\{p,q+1\}, \tag{3.2.1} \]

\[ \psi_j-\sum_{k=1}^p\phi_k\psi_{j-k}=0, \qquad j\geq\max\{p,q+1\}, \tag{3.2.2} \]

if we define \(\phi_j=0\) if \(j>p\) and \(\theta_j=0\) if \(j>q\). To obtain the coefficients \(\psi_j\) one therefore has to solve the homogeneous linear difference equation (3.2.2) subject to the initial conditions specified by (3.2.1). For more on this subject, see Section 3.6 of Brockwell and Davis (1991) and Section 3.3 of Shumway and Stoffer (2006).
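The recursions (3.2.1) and (3.2.2) are easy to implement directly. The following Python sketch (the function name is our own; R users can compare the result with ARMAtoMA, discussed below) computes the first \(\psi\)-weights and recovers the ARMA(1,1) closed form \(\psi_j=(.7)^{j-1}\) from Example 3.2.4:

```python
# A direct implementation of the recursions (3.2.1) and (3.2.2).
def arma_psi_weights(phi, theta, n):
    """Return psi_0, ..., psi_{n-1} for a causal ARMA(p,q) process."""
    p = len(phi)
    th = [1.0] + list(theta)   # theta_0 = 1 by convention, theta_j = 0 for j > q
    psi = []
    for j in range(n):
        s = sum(phi[k - 1] * psi[j - k] for k in range(1, min(j, p) + 1))
        psi.append(s + (th[j] if j < len(th) else 0.0))
    return psi

# ARMA(1,1) of Example 3.2.4: psi_0 = 1 and psi_j = 0.7^(j-1) for j >= 1.
w = arma_psi_weights([0.7], [0.3], 6)
print(w)
```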

R calculations

In R, these computations can be performed using the command ARMAtoMA. For example, one can use the commands

> ARMAtoMA(ar=.7, ma=.3, 25)

> plot(ARMAtoMA(ar=.7, ma=.3, 25))

which will produce the output displayed in Figure 3.4. The plot nicely shows the exponential decay of the \(\psi\)-weights, which is typical for ARMA processes. The table shows row-wise the weights \(\psi_1,\ldots,\psi_{25}\) (ARMAtoMA does not return the leading coefficient \(\psi_0=1\); for this process \(\psi_1=1\) as well, since \(\psi_j=(.7)^{j-1}\)). The choice of 25 as the last argument of ARMAtoMA determines how many weights are computed.

1.0000000000 0.7000000000 0.4900000000 0.3430000000 0.2401000000
0.1680700000 0.1176490000 0.0823543000 0.0576480100 0.0403536070
0.0282475249 0.0197732674 0.0138412872 0.0096889010 0.0067822307
0.0047475615 0.0033232931 0.0023263051 0.0016284136 0.0011398895
0.0007979227 0.0005585459 0.0003909821 0.0002736875 0.0001915812

Figure 3.4: The R output for the ARMA(1,1) process of Example 3.2.4

This page titled 3.2: Causality and Invertibility is shared under a not declared license and was authored, remixed, and/or curated by Alexander Aue.
