Appendix 1 Stochastic Second-order Difference Equations
This appendix analyses stochastic linear second-order difference equations. To begin with, we explain the definitions and the behaviour of the sine and cosine functions. Then it is shown that under appropriate parameter combinations the solution to a non-stochastic second-order difference equation displays damped sinusoidal oscillations. Finally, it is demonstrated that the solution to a stochastic second-order difference equation which displays damped oscillations in the absence of shocks exhibits variability, persistence, and reversion in the presence of recurrent shocks.
Trigonometric functions
The sine and cosine functions are defined as the pair of differentiable functions satisfying

$\sin' x = \cos x, \qquad \cos' x = -\sin x, \qquad \sin 0 = 0, \qquad \cos 0 = 1.$
A first important result is

$\sin^{2} x + \cos^{2} x = 1$ (A1.1)
for all x. Differentiating the left-hand side of (A1.1) with respect to x gives 2(sin x cos x − cos x sin x) = 0. Hence, sin^{2} x + cos^{2} x = a for some constant a. Setting x = 0 yields a = 1, which proves (A1.1).
Next, consider two differentiable functions f (x) and g (x) satisfying

$f'(x) = g(x), \qquad g'(x) = -f(x).$ (A1.2)
By definition, these requirements are fulfilled by f (x) = sin x and g (x) = cos x. But there are further examples, such as f (x) = cos (−x) and g (x) = sin (−x), and f (x) = sin (x + y) and g (x) = cos (x + y), where y is an arbitrary real number. Consider the functions

$g(x) \sin x - f(x) \cos x \qquad \text{and} \qquad f(x) \sin x + g(x) \cos x.$
Differentiating with respect to x and using (A1.2) yields g(x) cos x − f (x) sin x + f (x) sin x − g(x) cos x = 0 and f (x) cos x + g(x) sin x − g(x) sin x − f (x) cos x = 0, respectively. Hence, there exist two real numbers a and b such that

$g(x) \sin x - f(x) \cos x = a, \qquad f(x) \sin x + g(x) \cos x = b.$
Multiply the former equation by sin x and the latter by cos x and add. Then multiply the former by cos x and the latter by sin x and subtract the former from the latter. Using (A1.1), one obtains

$g(x) = a \sin x + b \cos x,$ (A1.3)

$f(x) = b \sin x - a \cos x.$ (A1.4)
(A1.3) and (A1.4) can be used to prove

$\sin(-x) = -\sin x, \qquad \cos(-x) = \cos x.$ (A1.5)
To do so, let f (x) = cos (−x) and g (x) = sin (−x), which, we know, satisfy (A1.2). Then (A1.3) and (A1.4) become

$\sin(-x) = a \sin x + b \cos x, \qquad \cos(-x) = b \sin x - a \cos x.$
Setting x = 0 yields b = 0 and a = −1. Inserting this into the equations above proves (A1.5).
Next, we derive the addition formulas

$\sin(x + y) = \sin x \cos y + \cos x \sin y, \qquad \cos(x + y) = \cos x \cos y - \sin x \sin y$ (A1.6)
from (A1.3) and (A1.4). Let f (x) = sin (x + y) and g (x) = cos (x + y), where y is an arbitrary real number. Then,

$\cos(x + y) = a \sin x + b \cos x, \qquad \sin(x + y) = b \sin x - a \cos x.$
Setting x = 0 yields b = cos y and a = − sin y, which yields (A1.6) upon substitution into the above pair of equations.
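As an aside, the addition formulas are easy to spot-check numerically. The following Python sketch is our own illustration, not part of the original argument; the test values are arbitrary:

```python
import math

# Numerical spot-check of the addition formulas (A1.6)
def addition_holds(x, y, tol=1e-12):
    ok_sin = abs(math.sin(x + y)
                 - (math.sin(x) * math.cos(y) + math.cos(x) * math.sin(y))) < tol
    ok_cos = abs(math.cos(x + y)
                 - (math.cos(x) * math.cos(y) - math.sin(x) * math.sin(y))) < tol
    return ok_sin and ok_cos

# Check (A1.6) at a small grid of arbitrary points
assert all(addition_holds(x, y)
           for x in (0.0, 0.7, -2.3)
           for y in (0.1, 1.9, -0.5))
```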
Since sin^{2} x + cos^{2} x = 1, the sine and cosine curves range between −1 and +1. They display harmonic oscillations: there exists a real number π such that

$\sin 0 = 0, \quad \cos 0 = 1,$ (A1.7)

$\sin(\pi/2) = 1, \quad \cos(\pi/2) = 0,$ (A1.8)

$\sin \pi = 0, \quad \cos \pi = -1,$ (A1.9)

$\sin(3\pi/2) = -1, \quad \cos(3\pi/2) = 0,$ (A1.10)

$\sin(2\pi) = 0, \quad \cos(2\pi) = 1,$ (A1.11)

$\sin(x + 2\pi) = \sin x, \quad \cos(x + 2\pi) = \cos x.$ (A1.12)

(A1.7): This holds true by the definitions of sine and cosine.

(A1.8): At the origin, the sine curve is upward-sloping (sin′ 0 = cos 0 = 1). So the cosine curve is downward-sloping for small x > 0: cos′ x = − sin x < 0. There exists an x (> 0) such that sin x = 1 (and cos x = 0). Suppose this is not the case. Then cos x = sin′ x > 0 for all x. sin x < 1 and sin′ x > 0 imply that sin x converges to a constant a > 0. But this implies that cos′ x = − sin x converges to −a < 0. This contradicts cos x > 0 for all x. The smallest value x such that cos x = 0 is denoted by π/2.

(A1.9): This follows from the addition formula for the cosine function: cos π = cos (π/2 + π/2) = cos (π/2) cos (π/2) − sin (π/2) sin (π/2) = −1. From sin^{2} x + cos^{2} x = 1, it follows that sin π = 0.

(A1.10): Similarly, sin (3π/2) = sin (π + π/2) = sin π cos (π/2) + sin (π/2) cos π =−1 and cos (3π/2) = 0.

(A1.11): cos (2π) = cos (π + π) = cos π cos π − sin π sin π = 1 and sin (2π) = 0.

(A1.12): sin (x + 2π) = sin x cos (2π) + sin (2π)cos x = sin x and cos (x + 2π) = cos x cos (2π) − sin x sin (2π) = cos x. The sine and cosine functions take on the same value every 2π periods.
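These special values and the 2π-periodicity can likewise be checked numerically. A small Python sketch (our illustration; the sample points are arbitrary):

```python
import math

# Special values (A1.7)-(A1.11) and periodicity (A1.12), checked numerically
pi = math.pi
assert abs(math.sin(pi / 2) - 1) < 1e-12 and abs(math.cos(pi / 2)) < 1e-12
assert abs(math.sin(pi)) < 1e-12 and abs(math.cos(pi) + 1) < 1e-12
assert abs(math.sin(3 * pi / 2) + 1) < 1e-12 and abs(math.cos(3 * pi / 2)) < 1e-12
assert abs(math.sin(2 * pi)) < 1e-12 and abs(math.cos(2 * pi) - 1) < 1e-12

# (A1.12): the same value recurs every 2*pi periods
for x in (0.0, 1.3, -2.7):
    assert abs(math.sin(x + 2 * pi) - math.sin(x)) < 1e-12
    assert abs(math.cos(x + 2 * pi) - math.cos(x)) < 1e-12
```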
Finally, we prove De Moivre’s theorem:

$(\cos \omega \pm i \sin \omega)^{t} = \cos \omega t \pm i \sin \omega t$
($i \equiv \sqrt{-1}$ denotes the imaginary unit). The proof is by induction. The validity for t = 1 is obvious. So it remains to show that validity for t − 1, that is,

$(\cos \omega \pm i \sin \omega)^{t-1} = \cos \omega(t - 1) \pm i \sin \omega(t - 1),$
entails validity for t. Multiplying both sides of the equation by cos ω ± i sin ω gives

$(\cos \omega \pm i \sin \omega)^{t} = \cos \omega \cos \omega(t - 1) - \sin \omega \sin \omega(t - 1) \pm i \left[\sin \omega \cos \omega(t - 1) + \cos \omega \sin \omega(t - 1)\right].$
From the addition formulas (A1.6), cos ω cos ω(t − 1) − sin ω sin ω(t − 1) = cos ωt and sin ω cos ω(t − 1) + cos ω sin ω(t − 1) = sin ωt. It follows that

$(\cos \omega \pm i \sin \omega)^{t} = \cos \omega t \pm i \sin \omega t.$
This completes the proof of De Moivre’s theorem.
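The theorem can be illustrated numerically by comparing the two sides for successive integer powers. A Python sketch (our own; the angle 0.9 is an arbitrary choice):

```python
import math

# De Moivre's theorem: (cos w + i sin w)**t = cos(w*t) + i sin(w*t)
w = 0.9  # an arbitrary illustrative angle
z = complex(math.cos(w), math.sin(w))
for t in range(1, 25):
    assert abs(z**t - complex(math.cos(w * t), math.sin(w * t))) < 1e-9
```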
Non-stochastic equations
Next, we examine second-order difference equations in the absence of stochastic disturbances:

$y_{t} + a_{1} y_{t-1} + a_{2} y_{t-2} = 0.$ (A1.13)
The first important thing to note is that if $y_{1,t}$ and $y_{2,t}$ are two particular solutions of (A1.13), then any linear combination $y_{t} = A_{1} y_{1,t} + A_{2} y_{2,t}$ of the two also satisfies (A1.13) ($A_{1}$ and $A_{2}$ are arbitrary, non-zero constants):

$A_{1}\left(y_{1,t} + a_{1} y_{1,t-1} + a_{2} y_{1,t-2}\right) + A_{2}\left(y_{2,t} + a_{1} y_{2,t-1} + a_{2} y_{2,t-2}\right) = 0.$
Suppose there exist numbers λ ≠ 0 such that $y_{t} = \lambda^{t}$ are solutions to (A1.13). Then $\lambda^{t} + a_{1} \lambda^{t-1} + a_{2} \lambda^{t-2} = 0$ or, dividing by $\lambda^{t-2}$ (≠ 0),

$\lambda^{2} + a_{1} \lambda + a_{2} = 0.$
This is the characteristic equation of (A1.13). Its solutions,

$\lambda_{1,2} = \frac{-a_{1} \pm \sqrt{a_{1}^{2} - 4 a_{2}}}{2},$
are called the characteristic roots of (A1.13). Assume $\Delta \equiv a_{1}^{2} - 4 a_{2} < 0$. Then the characteristic roots are complex conjugates:

$\lambda_{1} = \alpha + i \theta, \qquad \lambda_{2} = \alpha - i \theta,$
where $\alpha \equiv -a_{1}/2$ and $\theta \equiv \sqrt{\left|\Delta\right|}/2$. Since $\lambda_{1}^{t}$ and $\lambda_{2}^{t}$ are distinct solutions to (A1.13), the linear combination

$y_{t} = A_{1} \lambda_{1}^{t} + A_{2} \lambda_{2}^{t}$
also solves (A1.13). This equation is the general solution of (A1.13). In order for $y_{t}$ to be real for all t, $A_{1}$ and $A_{2}$ must be conjugate complex numbers. Let

$A_{1} \equiv \frac{A}{2}\left(\cos e - i \sin e\right), \qquad A_{2} \equiv \frac{A}{2}\left(\cos e + i \sin e\right),$ (A1.14)
where A and e are real numbers. We proceed to show that the solution of (A1.13) is $y_{t} = A\left(\sqrt{a_{2}}\right)^{t} \cos(\omega t - e)$, where ω is a real number. This equation is derived in several steps, the non-self-explanatory ones of which are explained below:

$y_{t} = A_{1}(\alpha + i \theta)^{t} + A_{2}(\alpha - i \theta)^{t}$ (A1.15)

$= r^{t}\left[A_{1}(\cos \omega + i \sin \omega)^{t} + A_{2}(\cos \omega - i \sin \omega)^{t}\right]$ (A1.16)

$= r^{t}\left[A_{1}(\cos \omega t + i \sin \omega t) + A_{2}(\cos \omega t - i \sin \omega t)\right]$ (A1.17)

$= r^{t}\left[A \cos e \cos \omega t + A \sin e \sin \omega t\right]$ (A1.18)

$= A\left(\sqrt{a_{2}}\right)^{t} \cos(\omega t - e).$ (A1.19)

(A1.16): Let ω and r be determined by cos ω ≡ α/r and sin ω ≡ θ/r. Then cos ω/sin ω = α/θ. Since, from (A1.7) and (A1.9), cos ω/sin ω equals ∞ for ω = 0 and −∞ for ω = π, there exists an ω and, hence, an r = α/cos ω which satisfy these equations. Substituting α = r cos ω and θ = r sin ω into (A1.15) gives (A1.16). Notice that $\alpha^{2} + \theta^{2} = r^{2}(\sin^{2} \omega + \cos^{2} \omega) = r^{2}$, hence $r = \sqrt{\alpha^{2} + \theta^{2}} = \sqrt{a_{1}^{2}/4 + (4 a_{2} - a_{1}^{2})/4} = \sqrt{a_{2}}$.

(A1.17): This is the crucial step in the proof: the time argument ‘wanders’ into the sine and cosine terms. We obtain the equation by applying De Moivre’s theorem.

(A1.18): In this step, the imaginary unit i disappears. Use is made of the fact that $A_{1} + A_{2} = A \cos e$ and $(A_{1} - A_{2}) i = (-i A \sin e) i = -i^{2} A \sin e = A \sin e$, as implied by (A1.14) together with $i^{2} = -1$.

(A1.19): This follows from (A1.5) and (A1.6):

$\cos \omega t \cos e + \sin \omega t \sin e = \cos \omega t \cos(-e) - \sin \omega t \sin(-e) = \cos(\omega t - e).$
The period of oscillation of y _{t} is given by t′ − t where ωt′ − e = ωt − e + 2π. It is equal to t′ − t = 2π/ω.
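The damped-oscillation solution can be verified numerically. The Python sketch below is our own illustration: the parameter values a1 = −1.2 and a2 = 0.8 (which give Δ < 0), and the amplitude and phase, are arbitrary choices. It computes α, θ, r, and ω and confirms that the closed form satisfies (A1.13):

```python
import math

# Illustrative parameters with Delta = a1^2 - 4*a2 < 0 (complex roots)
a1, a2 = -1.2, 0.8
delta = a1**2 - 4 * a2
assert delta < 0

alpha = -a1 / 2                      # real part of the roots
theta = math.sqrt(-delta) / 2        # imaginary part of the roots
r = math.sqrt(alpha**2 + theta**2)   # modulus of the roots; equals sqrt(a2)
assert abs(r - math.sqrt(a2)) < 1e-12

w = math.atan2(theta, alpha)         # angle with cos w = alpha/r, sin w = theta/r
A, e = 1.0, 0.3                      # arbitrary amplitude and phase

# Closed-form solution y_t = A * r^t * cos(w*t - e)
y = [A * r**t * math.cos(w * t - e) for t in range(60)]

# It satisfies the nonstochastic equation y_t + a1*y_{t-1} + a2*y_{t-2} = 0
assert max(abs(y[t] + a1 * y[t - 1] + a2 * y[t - 2]) for t in range(2, 60)) < 1e-12

period = 2 * math.pi / w             # period of oscillation
```

Since r = √a2 < 1 here, the generated sequence displays the damped sinusoidal oscillations described in the text.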
Random variables
Random variables are variables which take on different possible values with given probabilities. For our purposes, it is sufficient to consider continuous random variables, which can take on arbitrary real numbers y. Suppose the distribution of the variable can be described by means of the continuously differentiable distribution function H (y), where H (y) is the probability that the random variable takes on a value no greater than y. H (y) is non-decreasing with lim_{y→−∞} H (y) = 0 and lim_{y→∞} H (y) = 1. The probability that the random variable falls into the interval [y, y + dy] is H (y + dy) − H (y). As dy goes to zero, this probability approaches dH (y) and the average value of the random variable in this interval approaches y. The expectation of the random variable is obtained by ‘summing’ over the probability-weighted y-values:

$Ey = \int_{-\infty}^{\infty} y \, dH(y).$
Two random variables x and y are independent when the distribution functions G(x) and H (y) are independent of each other. Independent random variables satisfy E(xy) = Ex Ey:

$E(xy) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x y \, dG(x) \, dH(y) = \int_{-\infty}^{\infty} x \, dG(x) \int_{-\infty}^{\infty} y \, dH(y) = Ex \, Ey.$
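A small Monte Carlo experiment illustrates this property. The Python sketch below is our own; the distributions, sample size, and seed are arbitrary choices:

```python
import random

# For independent x and y, the sample mean of x*y is close to
# (sample mean of x) * (sample mean of y)
random.seed(0)
n = 100_000
xs = [random.uniform(-1.0, 3.0) for _ in range(n)]   # Ex = 1
ys = [random.gauss(2.0, 1.0) for _ in range(n)]      # Ey = 2

mean = lambda v: sum(v) / len(v)
e_xy = mean([x * y for x, y in zip(xs, ys)])
assert abs(e_xy - mean(xs) * mean(ys)) < 0.05
```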
Stochastic equations
We proceed to derive the equations concerned with the variance and autocorrelations of $y_{t}$ in the presence of shocks. Since $(1 + a_{1} + a_{2}) Ey = E\epsilon = 0$, the expectation of $y_{t}$ is $Ey_{t} = 0$. The variance of $y_{t}$ is $\sigma_{y}^{2} \equiv E y_{t}^{2}$, the covariance between $y_{t}$ and $y_{t-j}$ is $E(y_{t} y_{t-j})$, and the correlation between $y_{t}$ and $y_{t-j}$ is $\rho_{j} \equiv E(y_{t} y_{t-j})/\sigma_{y}^{2}$. The covariance function satisfies

$E(y_{t} y_{t-j}) = E(y_{t} y_{t+j}).$
Hence, $\rho_{j} = \rho_{-j}$. From $y_{t} + a_{1} y_{t-1} + a_{2} y_{t-2} = \epsilon_{t}$, we have

$E(y_{t} y_{t-j}) + a_{1} E(y_{t-1} y_{t-j}) + a_{2} E(y_{t-2} y_{t-j}) = E(\epsilon_{t} y_{t-j}).$ (A1.20)
Dividing by $\sigma_{y}^{2}$ and making use of the fact that $\epsilon_{t}$ is independent of $y_{t-1}$ and $y_{t-2}$ and that $E(y_{t-1} y_{t-j}) = E[y_{t} y_{t-(j-1)}]$ and $E(y_{t-2} y_{t-j}) = E[y_{t} y_{t-(j-2)}]$, one obtains

$\rho_{j} + a_{1} \rho_{j-1} + a_{2} \rho_{j-2} = 0$
for all j > 0. Setting j = 1 and j = 2, it follows that

$\rho_{1} = \frac{-a_{1}}{1 + a_{2}}, \qquad \rho_{2} = \frac{a_{1}^{2}}{1 + a_{2}} - a_{2},$ (A1.21)
where use is made of the symmetry property $\rho_{j} = \rho_{-j}$. To calculate the variance $\sigma_{y}^{2}$ of $y_{t}$, set j = 0 in equation (A1.20) and notice that $E(\epsilon_{t} y_{t}) = \sigma_{\epsilon}^{2}$ because $\epsilon_{t}$ is independent of $y_{t-1}$ and $y_{t-2}$:

$\sigma_{y}^{2}\left(1 + a_{1} \rho_{1} + a_{2} \rho_{2}\right) = \sigma_{\epsilon}^{2}.$
Substituting the expressions in (A1.21) for $\rho_{1}$ and $\rho_{2}$ yields the formula reported in the main text:

$\sigma_{y}^{2} = \frac{(1 + a_{2}) \, \sigma_{\epsilon}^{2}}{(1 - a_{2})\left[(1 + a_{2})^{2} - a_{1}^{2}\right]}.$
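The formulas for ρ1, ρ2, and the variance can be checked by simulating the stochastic equation and comparing sample moments with the theoretical values of (A1.21). A Python sketch with illustrative parameter choices of our own:

```python
import random

# Simulate y_t + a1*y_{t-1} + a2*y_{t-2} = eps_t and compare sample moments
# with the theoretical formulas (parameters and seed are illustrative)
random.seed(1)
a1, a2, sigma_eps = -1.2, 0.8, 1.0
n, burn = 200_000, 1_000

y = [0.0, 0.0]
for _ in range(n + burn):
    y.append(-a1 * y[-1] - a2 * y[-2] + random.gauss(0.0, sigma_eps))
y = y[burn:]                          # drop the transient

ybar = sum(y) / len(y)
def cov(lag):
    return sum((y[t] - ybar) * (y[t - lag] - ybar)
               for t in range(lag, len(y))) / (len(y) - lag)

var = cov(0)
rho1, rho2 = cov(1) / var, cov(2) / var

# Theoretical moments from (A1.21) and the variance formula
rho1_th = -a1 / (1 + a2)
rho2_th = a1**2 / (1 + a2) - a2
var_th = (1 + a2) * sigma_eps**2 / ((1 - a2) * ((1 + a2)**2 - a1**2))

assert abs(rho1 - rho1_th) < 0.05
assert abs(rho2 - rho2_th) < 0.05
assert abs(var - var_th) < 0.5
```

With these parameters the roots are complex with modulus √0.8, so the simulated series displays the variability, persistence, and reversion described in the text.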
Further reading
In this appendix, we have followed Lang (1983: section 4.3) on trigonometric functions, Gandolfo (1996: ch. 5) on non-stochastic second-order difference equations, and Pindyck and Rubinfeld (1991: section 16.2) on stochastic second-order difference equations. These sources can be consulted for related material.