Time-Series-Based Econometrics: Unit Roots and Co-integrations

Michio Hatanaka

Print publication date: 1996

Print ISBN-13: 9780198773535

Published to Oxford Scholarship Online: November 2003

DOI: 10.1093/0198773536.001.0001


Appendix 5 Mathematics for the VAR, VMA, and VARMA


A bivariate AR may be written, for example, as either

$$\left\{ \begin{bmatrix} a_{11}^{0} & a_{12}^{0} \\ a_{21}^{0} & a_{22}^{0} \end{bmatrix} + \begin{bmatrix} a_{11}^{1} & 0 \\ a_{21}^{1} & a_{22}^{1} \end{bmatrix} L + \begin{bmatrix} 0 & 0 \\ 0 & a_{22}^{2} \end{bmatrix} L^{2} \right\} \begin{bmatrix} x_{1t} \\ x_{2t} \end{bmatrix} = \begin{bmatrix} \varepsilon_{1t} \\ \varepsilon_{2t} \end{bmatrix}$$
(A5.1)
or
$$\begin{bmatrix} a_{11}^{0} + a_{11}^{1} L & a_{12}^{0} \\ a_{21}^{0} + a_{21}^{1} L & a_{22}^{0} + a_{22}^{1} L + a_{22}^{2} L^{2} \end{bmatrix} \begin{bmatrix} x_{1t} \\ x_{2t} \end{bmatrix} = \begin{bmatrix} \varepsilon_{1t} \\ \varepsilon_{2t} \end{bmatrix}.$$
(A5.2)
Econometricians are more familiar with (A5.1), but system theory adopts (A5.2). I shall give an elementary, non-rigorous explanation of the system theory used in Section 12.2.
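To make the correspondence concrete, here is a minimal symbolic sketch (my own, not from the book; the symbol names are illustrative) checking that the lag-by-lag coefficient matrices of (A5.1) and the polynomial matrix of (A5.2) define the same operator in L:

```python
# A sketch, not from the book: the coefficient-matrix form (A5.1) and the
# polynomial-matrix form (A5.2) give the same operator in the lag variable L.
import sympy as sp

L = sp.symbols('L')
a = {(i, j, s): sp.Symbol(f'a{i}{j}_{s}') for i in (1, 2) for j in (1, 2) for s in (0, 1, 2)}

# Lag-by-lag coefficient matrices, with the zero entries of the book's example.
A0 = sp.Matrix([[a[1, 1, 0], a[1, 2, 0]], [a[2, 1, 0], a[2, 2, 0]]])
A1 = sp.Matrix([[a[1, 1, 1], 0], [a[2, 1, 1], a[2, 2, 1]]])
A2 = sp.Matrix([[0, 0], [0, a[2, 2, 2]]])

# Polynomial matrix: one polynomial in L per element, as in (A5.2).
A_poly = sp.Matrix([[a[1, 1, 0] + a[1, 1, 1]*L, a[1, 2, 0]],
                    [a[2, 1, 0] + a[2, 1, 1]*L,
                     a[2, 2, 0] + a[2, 2, 1]*L + a[2, 2, 2]*L**2]])

assert (A0 + A1*L + A2*L**2 - A_poly).expand() == sp.zeros(2, 2)
```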

I distinguish between polynomials and infinite power series by restricting the former to finite orders. The matrix on the left-hand side of (A5.2) has a polynomial in the lag operator L in each element. More generally we consider a matrix each element of which is a polynomial in a scalar argument z. It will be denoted by A(z). The coefficients of the polynomials are assumed to be real numbers. Such a matrix is called a polynomial matrix. When a polynomial matrix is factored below, the factor polynomial matrices must also have real coefficients.

The determinant of a square polynomial matrix A(z) is defined in the same way as for an ordinary matrix. It will be denoted by $\det[A(z)]$. A(z) is said to be non-singular if $\det[A(z)] = 0$ holds only at a finite number of real or complex values of z, i.e. if $\det[A(z)]$ is not identically zero. A(z) is singular if and only if there exists a polynomial vector $b(z)$ such that $b(z)'A(z) \equiv 0$.

The k‐variate VAR (vector autoregressive process) is represented by

$$A(L) x_t = \varepsilon_t.$$
(A5.3)
A(z) is assumed to be non-singular. We are concerned with the stationarity of $\{x_t\}$ generated by (A5.3) with an i.i.d. $\{\varepsilon_t\}$. (A5.3) is a linear difference equation for $\{x_t\}$ with $\{\varepsilon_t\}$ as a forcing function. The stationarity of $\{x_t\}$ is equivalent to the stability of the difference equation, just as in the univariate AR. The stability of (A5.3) means that the roots of the equation in z,
$$\det[A(z)] = 0,$$
(A5.4)
are all larger than unity in moduli, i.e. all lie outside the unit circle of the complex plane. If the characteristic polynomial of the difference equation were written as in mathematical economics, the stability would be represented by the characteristic roots being less than unity in moduli, as we learn in economic dynamics (see Section 2.3).
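As a hypothetical numerical illustration of (A5.4) (my own sketch; the helper name var_is_stationary and the coefficients are not from the book), one can compute the roots of $\det[A(z)] = 0$ and test whether they all lie outside the unit circle:

```python
# A sketch, assuming only (A5.3)-(A5.4): stability check for a VAR with
# A(z) = A0 + A1 z + ... + Ap z^p.
import numpy as np
import sympy as sp

def var_is_stationary(coef_matrices):
    z = sp.symbols('z')
    k = np.asarray(coef_matrices[0]).shape[0]
    A = sp.zeros(k, k)
    for j, Aj in enumerate(coef_matrices):
        A += sp.Matrix(np.asarray(Aj).tolist()) * z**j
    det = sp.Poly(sp.expand(A.det()), z)
    roots = np.roots([float(c) for c in det.all_coeffs()])   # highest degree first
    return bool(np.all(np.abs(roots) > 1.0))

# Example: A(z) = I - 0.5 z I, i.e. x_t = 0.5 x_{t-1} + e_t equation by equation;
# det[A(z)] = (1 - 0.5 z)^2 has a double root at z = 2, outside the unit circle.
print(var_is_stationary([np.eye(2), -0.5 * np.eye(2)]))      # True
```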

In the present appendix and in Section 12.2 we frequently refer to the roots of a determinantal equation such as (A5.4) all lying outside the unit circle of the complex plane. In every such statement I simply write that $\det[A(z)] = 0$ has all roots outside the unit circle, omitting words such as 'the equation' and 'the complex plane'.

Let B(z) be a k × k polynomial matrix. Then the k‐variate VMA (vector moving average process) is represented by

$$x_t = B(L) \varepsilon_t.$$
(A5.5)
We are concerned with the invertibility of the VMA. To explain invertibility I revert to an expression such as (A5.1). B(z) may be written $B_0 + B_1 z + \cdots + B_q z^q$, where the $B$s are each $k \times k$ matrices. B(z) is said to be invertible if one can find a unique sequence of matrices, $A_0, A_1, \ldots$, such that
$$(A_0 + A_1 z + A_2 z^2 + \cdots)(B_0 + B_1 z + \cdots + B_q z^q) = I_k$$
holds identically and $\sum_{j=0}^{\infty} \|A_j\|$ converges in terms of some measure of the norm $\|\cdot\|$ of the $A$s. A necessary and sufficient condition for the invertibility of B(z) is that the roots of
$$\det[B(z)] = 0$$
(A5.6)
all lie outside the unit circle. This generalizes the well-known invertibility condition for a univariate MA.
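The definition of invertibility can be made operational with a short recursion. The following sketch (mine, not the book's; the illustrative VMA is chosen only for the check at the end) builds $A_0, A_1, \ldots$ from $B_0, \ldots, B_q$ by matching powers of z:

```python
# Matching powers of z in (A0 + A1 z + ...)(B0 + ... + Bq z^q) = I_k gives
# A0 = B0^{-1} and, for j >= 1, A_j = -(A_{j-1} B_1 + ... + A_{j-q} B_q) B0^{-1}.
import numpy as np

def inverse_ma_coefficients(B, n_terms=50):
    q = len(B) - 1
    B0_inv = np.linalg.inv(B[0])
    A = [B0_inv]
    for j in range(1, n_terms):
        s = sum(A[j - i] @ B[i] for i in range(1, min(j, q) + 1))
        A.append(-s @ B0_inv)
    return A

# Illustration: B(z) = I + 0.5 z I, an invertible VMA; then A_j = (-0.5)^j I,
# so the norms ||A_j|| decay geometrically and their sum converges.
A = inverse_ma_coefficients([np.eye(2), 0.5 * np.eye(2)])
print(np.allclose(A[10], (-0.5) ** 10 * np.eye(2)))   # True
```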

VARMA is written as

$$A(L) x_t = B(L) \varepsilon_t.$$
(A5.7)
Just as a (scalar) polynomial may be factored into two or more (scalar) polynomials, A(z) may be factored into two or more polynomial matrices, and so may B(z). If A(z) and B(z) have a common factor polynomial matrix, it should be cancelled in (A5.7). When all the common factors are cancelled between A(z) and B(z), $A(z)^{-1}B(z)$ is called an irreducible MFD (matrix fraction description).1 Irreducibility is assumed throughout the following description.

A matrix with a rational function of z in each element is called a rational matrix. When A(z) is non-singular, $C(z) \equiv A(z)^{-1}B(z)$ is a rational matrix. The determinant of a rational matrix is defined in the same way as for an ordinary matrix, and the concept of non-singularity follows from it in the same way as for a polynomial matrix. In fact,

$$C(z) = (\det[A(z)])^{-1} \tilde{A}(z) B(z),$$
where $\tilde{A}(z)$ is the adjoint (adjugate) of A(z) and hence is a polynomial matrix. $\tilde{C}(z) \equiv \tilde{A}(z) B(z)$ is also a polynomial matrix. Since $\det[C(z)] = (\det[A(z)])^{-k} \det[\tilde{C}(z)]$ (multiplying a $k \times k$ matrix by the scalar $(\det[A(z)])^{-1}$ multiplies its determinant by $(\det[A(z)])^{-k}$), the non-singularity of C(z) is equivalent to the non-singularity of $\tilde{C}(z)$.2

I assume that the $k \times k$ matrix $C(z) \equiv A(z)^{-1}B(z)$ is non-singular in (A5.7), which implies that B(z) is non-singular. The assumption precludes linear dependency among the elements of $\{x_t\}$ in (A5.7). When C(z) is non-singular, it can be represented in the Smith–McMillan form with the following properties.

$$C(z) = U(z) \Lambda(z) V(z),$$
(A5.8)
where

(i) the three matrices on the right-hand side are each $k \times k$;

(ii) $U(z)$ and $V(z)$ are polynomial matrices, and $\det[U(z)]$ and $\det[V(z)]$ are both non-zero constants not involving z;

(iii) $\Lambda(z) = \mathrm{diag}[\lambda_1(z), \ldots, \lambda_k(z)]$ and $\lambda_i(z) = f_i(z)/g_i(z)$, $i = 1, \ldots, k$, such that

(iii a) $f_i(z)$ and $g_i(z)$ are polynomials not sharing a common factor;

(iii b) $f_i(z) \mid f_{i+1}(z)$, $i = 1, \ldots, k-1$;

(iii c) $g_{i+1}(z) \mid g_i(z)$, $i = 1, \ldots, k-1$;

(iv) $\Lambda(z)$ is uniquely determined by $C(z)$, but $U(z)$ and $V(z)$ are not.

Concerning (iii b) and (iii c) above, $a(z) \mid b(z)$ means that $a(z)$ divides $b(z)$, i.e. $a(z)$ is a factor of $b(z)$. See Kailath (1980: 443–4) for the Smith–McMillan form.

A polynomial matrix whose determinant is a non-zero constant (i.e. a polynomial of degree zero) is called unimodular. Any product of unimodular matrices is unimodular, and the inverse of a unimodular matrix is unimodular. The reason why unimodular matrices appear in (A5.8) is that the matrices representing elementary row and column operations are unimodular. The elementary row operations consist of (i) an interchange of two rows, (ii) addition to a row of a polynomial multiple of another row, and (iii) scaling all elements of a row by a non-zero constant. The elementary column operations are defined likewise.
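As a small illustration (mine, not the book's), the matrix that represents the elementary row operation 'add $z^2$ times row 1 to row 2' has determinant 1 for every z, and its inverse is again a polynomial matrix, so it is unimodular:

```python
# Illustration: an elementary row-operation matrix is unimodular.
import sympy as sp

z = sp.symbols('z')
E = sp.Matrix([[1, 0], [z**2, 1]])   # premultiplication adds z^2 * (row 1) to row 2
print(E.det())                       # 1: a non-zero constant not involving z
print(E.inv())                       # Matrix([[1, 0], [-z**2, 1]]): again polynomial
```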

For example, consider

$$A(z) = \begin{bmatrix} 1 & z \\ 0 & 1+az \end{bmatrix}, \qquad B(z) = \begin{bmatrix} 1+bz & 0 \\ cz & 1 \end{bmatrix}, \qquad c \neq 0.$$
Then
$$C(z) = A(z)^{-1} B(z) = (1+az)^{-1} \begin{bmatrix} (1+az)(1+bz) - cz^2 & -z \\ cz & 1 \end{bmatrix}.$$
(A5.9)
To express C(z) in a Smith–McMillan form it is convenient to rewrite
$$\begin{bmatrix} (1+az)(1+bz) - cz^2 & -z \\ cz & 1 \end{bmatrix}$$
(A5.10)
in what is called the Smith form. A polynomial matrix with rank 2 can be represented in a Smith form,
$$U_1(z) \Lambda_1(z) V_1(z).$$
(A5.11)
Here both $U_1(z)$ and $V_1(z)$ are $2 \times 2$ and unimodular, and $\Lambda_1(z) = \mathrm{diag}[\lambda_1(z), \lambda_2(z)]$, where $\lambda_1(z) \mid \lambda_2(z)$. How to obtain a Smith form is explained in Kailath (1980: 375–6, 391); it consists of elementary row and column operations. (A5.10) is equal to (A5.11) with
$$U_1(z) = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ c^{-1}(a+b) + c^{-1}(ab - c)z & 1 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ cz & 1 \end{bmatrix},$$
$$\Lambda_1(z) = \begin{bmatrix} 1 & 0 \\ 0 & (1+az)(1+bz) \end{bmatrix}, \qquad V_1(z) = \begin{bmatrix} 1 & -c^{-1}(a+b) - c^{-1}ab\,z \\ 0 & 1 \end{bmatrix}.$$
Then the Smith–McMillan form of (A5.9) is (A5.8) with $U(z) = U_1(z)$, $V(z) = V_1(z)$, and
$$\Lambda(z) = \begin{bmatrix} 1/(1+az) & 0 \\ 0 & 1+bz \end{bmatrix}, \qquad f_1(z) = 1, \quad f_2(z) = 1+bz, \quad g_1(z) = 1+az, \quad g_2(z) = 1.$$
$V(z)^{-1}$ represents an elementary column operation, and the four matrices that form U(z) are each obtained by inverting matrices that represent elementary row operations.
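The factorization can also be checked mechanically. The following sketch (mine, not the book's) multiplies $U_1(z)$, $\Lambda_1(z)$, and $V_1(z)$ as given above and confirms that the product reproduces (A5.10):

```python
# Symbolic check that U1(z) * Lambda1(z) * V1(z) equals the matrix in (A5.10).
import sympy as sp

z, a, b, c = sp.symbols('z a b c')
P = sp.Matrix([[0, 1], [1, 0]])
U1 = (P
      * sp.Matrix([[1, 0], [(a + b)/c + (a*b - c)/c*z, 1]])
      * P
      * sp.Matrix([[1, 0], [c*z, 1]]))
Lam1 = sp.diag(1, (1 + a*z)*(1 + b*z))
V1 = sp.Matrix([[1, -(a + b)/c - a*b/c*z], [0, 1]])
M = sp.Matrix([[(1 + a*z)*(1 + b*z) - c*z**2, -z], [c*z, 1]])   # (A5.10)

diff = (U1 * Lam1 * V1 - M).applyfunc(sp.simplify)
print(diff == sp.zeros(2, 2))   # True
```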

The above result has a number of implications. First, for any finite value of z in general, and for a root $z_0$ of $f_i(z) = 0$ in particular, $U(z_0)$ and $V(z_0)$ are non-singular, so that the rank of $C(z_0)$ is equal to the rank of $\Lambda(z_0)$. (There is no contradiction between C(z) being non-singular and $C(z_0)$ being singular.) Second, if $z_0$ is a root neither of $f_{k-r}(z) = 0$ nor of $g_1(z) = 0$ but is a root of $f_{k-r+1}(z) = 0$, then $z_0$ is a root of $f_{k-r+2}(z) = 0, \ldots, f_k(z) = 0$, and $\Lambda(z_0)$ has rank $k - r$, which is also the rank of $C(z_0)$. As a third implication I quote a lemma from Hannan and Deistler (1988: 54): roots of $f_1(z) = 0, \ldots, f_k(z) = 0$ are roots of $\det[B(z)] = 0$, and roots of $g_1(z) = 0, \ldots, g_k(z) = 0$ are roots of $\det[A(z)] = 0$.

For any matrix A, its rank will be denoted by $\rho(A)$ below.

In Section 12.2 we consider (A5.7), especially the case where $\det[A(z)] = 0$ has all roots lying outside the unit circle and $\det[B(z)] = 0$ has real unit roots as well as other roots lying outside the unit circle. It follows from the third implication mentioned above that, for some $(k - r)$ such that $0 \le k - r \le k - 1$, $f_{k-r+1}(1) = f_{k-r+2}(1) = \cdots = f_k(1) = 0$ while $f_1(1), \ldots, f_{k-r}(1)$ are not zero. (The latter statement is void if $k - r = 0$.) From the assumption about A(z) we have $g_1(1) \neq 0, \ldots, g_k(1) \neq 0$, and it follows that $\rho(\Lambda(1)) = k - r$, which is also equal to $\rho(C(1))$ as stated in the first implication. Since $C(1) = A(1)^{-1}B(1)$ and A(1) is non-singular by the assumption about A(z), $\rho(B(1))$ is also $k - r$.

The conclusion is that C(1) and B(1) have the same rank, which is equal to the number of non-zero terms among $f_1(1), \ldots, f_k(1)$.
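For a concrete illustration (my own continuation of the example above, not taken from the book), set $b = -1$ there, so that $\det[B(z)] = 1 - z$ has the real unit root $z = 1$, and take $0 < |a| < 1$, so that the root $z = -1/a$ of $\det[A(z)] = 1 + az$ lies outside the unit circle. Then
$$B(1) = \begin{bmatrix} 0 & 0 \\ c & 1 \end{bmatrix}, \qquad A(1) = \begin{bmatrix} 1 & 1 \\ 0 & 1+a \end{bmatrix}, \qquad \Lambda(1) = \begin{bmatrix} 1/(1+a) & 0 \\ 0 & 0 \end{bmatrix},$$
so $f_2(1) = 0$ while $f_1(1) = 1 \neq 0$, and $\rho(B(1)) = \rho(C(1)) = \rho(\Lambda(1)) = 1 = k - r$ with $k = 2$ and $r = 1$.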

Incidentally, if $\det[B(z)] = 0$ has all roots outside the unit circle and no real unit root, then $\rho(B(1)) = k$.

Reverting to the case where $\det[B(z)] = 0$ has real unit roots, Engle and Yoo (1991) suggest the following rearrangement of (A5.8). Write

$$f_i(z) \equiv \tilde{f}_i(z), \quad i \le k - r; \qquad f_i(z) \equiv (1 - z)^{m_i} \tilde{f}_i(z), \quad i \ge k - r + 1,$$
(A5.12)
where $\tilde{f}_i(z) = 0$ has all roots lying outside the unit circle. This is possible by virtue of the third implication and the assumption about B(z). Moreover, let us suppose, for the exposition in Section 12.2, that all the $m_i$s in (A5.12) are unity. Let
$$\Lambda(z) \equiv D_g(z)^{-1} D(z) D_{\tilde{f}}(z), \qquad D_g(z) \equiv \mathrm{diag}[g_1(z), \ldots, g_k(z)], \qquad D_{\tilde{f}}(z) \equiv \mathrm{diag}[\tilde{f}_1(z), \ldots, \tilde{f}_k(z)],$$
$$D(z) \equiv \mathrm{diag}[\underbrace{1, \ldots, 1}_{k-r}, \underbrace{(1-z), \ldots, (1-z)}_{r}].$$
(A5.13)
Then
$$C(z) = (U(z) D_g(z)^{-1})\, D(z)\, (D_{\tilde{f}}(z) V(z)).$$
(A5.14)
Setting $\tilde{U}(z) \equiv U(z) D_g(z)^{-1}$ and $\tilde{V}(z) \equiv D_{\tilde{f}}(z) V(z)$, (A5.7) can be rewritten as
$$\tilde{U}(L)^{-1} x_t = D(L) \tilde{V}(L) \varepsilon_t.$$
(A5.15)
In regard to $\tilde{U}(z)^{-1} = D_g(z) U(z)^{-1}$: since $\det[U(z)^{-1}] = 0$ has no (finite) roots,3 and since $\det[A(z)] = 0$ and hence $\det[D_g(z)] = 0$ have all roots outside the unit circle, $\det[\tilde{U}(z)^{-1}] = 0$ has all roots outside the unit circle. Moreover, since $\det[U(z)]$ is a constant not involving z, $\tilde{U}(z)^{-1}$ is a polynomial (rather than rational) matrix. $\tilde{V}(z)$ is a polynomial matrix, and $\det[\tilde{V}(z)] = 0$ has all roots outside the unit circle. $\tilde{V}(L)$ is invertible, but D(L) is not. Therefore (A5.15) is a VARMA with a non-invertible MA, as is (A5.7). The difference between (A5.7) and (A5.15) is that the presence of the unit roots is made more visible in (A5.15).
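As a further illustration (again my own continuation of the bivariate example, not the book's), take $b = -1$ once more. Then $f_1(z) = 1$ and $f_2(z) = 1 - z$, so that $\tilde{f}_1(z) = \tilde{f}_2(z) = 1$ and $m_2 = 1$ in (A5.12), and (A5.13) gives
$$D_g(z) = \begin{bmatrix} 1+az & 0 \\ 0 & 1 \end{bmatrix}, \qquad D_{\tilde{f}}(z) = I_2, \qquad D(z) = \begin{bmatrix} 1 & 0 \\ 0 & 1-z \end{bmatrix}.$$
Hence $\tilde{U}(z) = U_1(z) D_g(z)^{-1}$ and $\tilde{V}(z) = V_1(z)$, and the unit root of the MA part is isolated in the single diagonal element $1 - z$ of $D(L)$ in (A5.15).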

Notes:

(1) See Kailath (1980: 367) for a more accurate description of the irreducibility.

(2) The roots are not necessarily identical. If the highest power of z in det[A(z)] exceeds that of every element of $\tilde{C}(z)$, then $\det[C(\pm\infty)] = 0$ while $\det[\tilde{C}(\pm\infty)] \neq 0$.

(3) In general, if A(z) is a non-singular $k \times k$ polynomial matrix and B(z) is its adjoint matrix, then $\det[B(z)] = (\det[A(z)])^{k-1}$. The roots of $\det[B(z)] = 0$ must therefore also be roots of $\det[A(z)] = 0$. Concerning the text, $\det[U(z)] = 0$ has no roots because U(z) is unimodular. Let $U^*(z)$ be the adjoint of U(z). Then $\det[U^*(z)] = 0$ has no finite roots. Though $\det[U(z)^{-1}] = 0$ may have $z = \pm\infty$ as a root, such a root is outside the unit circle.