## Melvin Lax, Wei Cai, and Min Xu

Print publication date: 2006

Print ISBN-13: 9780198567769

Published to Oxford Scholarship Online: January 2010

DOI: 10.1093/acprof:oso/9780198567769.001.0001


# What is a random process

Chapter 2 of *Random Processes in Physics and Finance* (Oxford University Press).
DOI: 10.1093/acprof:oso/9780198567769.003.0002

# Abstract and Keywords

A random or stochastic process is a random variable that evolves in time by some random mechanism (of course, the time variable can be replaced by a space variable, or some other variable, in application). The variable can have a discrete set of values at a given time, or a continuum of values may be available. Likewise, the time variable can be discrete or continuous. A stochastic process is regarded as completely described if the probability distribution is known for all possible sets of times. A stationary process is one which has no absolute time origin. All probabilities are independent of a shift in the origin of time. This chapter discusses multitime probability description, conditional probabilities, stationary, Gaussian, and Markovian processes, and the Chapman–Kolmogorov condition.

# 2.1 Multitime probability description

A random or stochastic process is a random variable X(t), at each time t, that evolves in time by some random mechanism (of course, the time variable can be replaced by a space variable, or some other variable in application). The variable X can have a discrete set of values xj at a given time t, or a continuum of values x may be available. Likewise, the time variable can be discrete or continuous.

A stochastic process is regarded as completely described if the probability distribution

(2.1)
$$P_n(x_n, t_n;\, x_{n-1}, t_{n-1};\, \ldots;\, x_1, t_1)$$

is known for all possible sets $[t_1, t_2, \ldots, t_n]$ of times. Thus we assume that a set of functions

(2.2)
$$P_n(x_n, t_n;\, x_{n-1}, t_{n-1};\, \ldots;\, x_1, t_1)\, dx_n\, dx_{n-1} \cdots dx_1$$

describes the probability of finding

(2.3)
$$x_j < X(t_j) < x_j + dx_j, \qquad j = 1, 2, \ldots, n.$$

We have previously discussed multivariate distributions. To be a random process, the set of variables $X_j$ must be related to each other as the evolution in time of a single "stochastic" process.
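To make the multitime description concrete, here is a minimal sketch (using NumPy; the two-state coin-flip walk and observation times are illustrative choices, not from the text) that estimates a two-time joint distribution $P_2(x_2, t_2;\, x_1, t_1)$ empirically from many realizations of a simple process:

```python
import numpy as np

rng = np.random.default_rng(0)

# A simple discrete random process: a walk with independent +/-1 steps,
# observed at two times. We estimate the two-time joint distribution
# P2(x2, t2; x1, t1) of Eq. (2.2) from many sample paths.
n_paths = 100_000
steps = rng.choice([-1, 1], size=(n_paths, 4))
X = np.cumsum(steps, axis=1)

t1, t2 = 1, 3   # observation times (illustrative)
pairs, counts = np.unique(np.stack([X[:, t2], X[:, t1]], axis=1),
                          axis=0, return_counts=True)
P2 = counts / n_paths
for (x2, x1), p in zip(pairs, P2):
    print(f"P2(x2={x2:+d}, t2; x1={x1:+d}, t1) ~ {p:.3f}")
```

Summing the estimated $P_2$ over both arguments recovers 1, as a joint distribution must.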

# 2.2 Conditional probabilities

The concept of conditional probability introduced in Section 1.8 immediately generalizes to the multivariable case. In particular, Eq. (1.82)

(2.4)
$$P_n(x_n, x_{n-1}, \ldots, x_1) = P(x_n \mid x_{n-1}, \ldots, x_1)\, P_{n-1}(x_{n-1}, \ldots, x_1)$$

can be iterated to yield

(2.5)
$$P_n(x_n, \ldots, x_1) = P(x_n \mid x_{n-1}, \ldots, x_1)\, P(x_{n-1} \mid x_{n-2}, \ldots, x_1) \cdots P(x_2 \mid x_1)\, P_1(x_1).$$

When the variables are part of a stochastic process, we understand $x_j$ to be an abbreviation for $X(t_j)$. The variables are written in time sequence since we regard the probability of $x_n$ as conditional on the earlier-time values $x_{n-1}, \ldots, x_1$.
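The iterated factorization of Eq. (2.5) can be checked numerically. The sketch below (a hypothetical three-time, two-state joint distribution chosen at random for illustration) builds the conditionals and verifies that their product reconstructs the joint distribution:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-state process observed at three times; the joint
# distribution P3(x3, x2, x1) is arbitrary but normalized.
P3 = rng.random((2, 2, 2))
P3 /= P3.sum()

# Marginals needed for the factorization, Eq. (2.5):
P2 = P3.sum(axis=0)   # P2(x2, x1)
P1 = P2.sum(axis=0)   # P1(x1)

# Conditionals P(x3 | x2, x1) and P(x2 | x1); broadcasting divides
# each slice by the appropriate marginal.
cond32 = P3 / P2
cond21 = P2 / P1

# Eq. (2.5): P3(x3, x2, x1) = P(x3 | x2, x1) P(x2 | x1) P1(x1)
reconstructed = cond32 * cond21 * P1
assert np.allclose(reconstructed, P3)
```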

# 2.3 Stationary, Gaussian and Markovian processes

A stationary process is one which has no absolute time origin: all probabilities are independent of a shift in the origin of time. Thus

(2.6)
$$P_n(x_n, t_n + \tau;\, x_{n-1}, t_{n-1} + \tau;\, \ldots;\, x_1, t_1 + \tau) = P_n(x_n, t_n;\, \ldots;\, x_1, t_1).$$

In particular, this probability is a function only of the relative times, as can be seen by setting $\tau = -t_1$. Specifically, for a stationary process, we expect that

(2.7)
$$P_1(x_1, t_1) = P_1(x_1, 0) \equiv P_1(x_1),$$

(2.8)
$$P_2(x_2, t_2;\, x_1, t_1) = P_2(x_2, t_2 - t_1;\, x_1, 0),$$

(2.9)
$$P(x_2, t_2 \mid x_1, t_1) = P(x_2, t_2 - t_1 \mid x_1, 0),$$

and the two-time conditional probability

(2.10)
$$\lim_{t_2 - t_1 \to \infty} P(x_2, t_2 \mid x_1, t_1) = P_1(x_2)$$

reduces to the stationary state, independent of the starting point, when this limit exists. For the otherwise stationary Brownian motion and Poisson processes in Chapter 3, the limit does not exist. For example, a Brownian particle will have a distribution that continues to expand with time, even though the individual steps are independent of the origin of time.

A Gaussian process is one for which the multivariate distributions $P_n(x_n, x_{n-1}, \ldots, x_1)$ are Gaussian for all $n$. A Gaussian process may or may not be stationary, and conversely a stationary process need not be Gaussian.

A Markovian process is like a student who can remember only the last thing he has been told. Thus it is defined by

(2.11)
$$P(x_n \mid x_{n-1}, x_{n-2}, \ldots, x_1) = P(x_n \mid x_{n-1});$$

that is, the probability distribution of $x_n$ is sensitive only to the last known event $x_{n-1}$ and forgets all prior events. For a Markovian process, the conditional probability formula, Eq. (2.5), specializes to

(2.12)
$$P_n(x_n, x_{n-1}, \ldots, x_1) = p(x_n \mid x_{n-1})\, p(x_{n-1} \mid x_{n-2}) \cdots p(x_2 \mid x_1)\, p(x_1),$$

so that the process is completely characterized by an initial distribution $p(x_1)$ and the "transition probabilities" $p(x_j \mid x_{j-1})$. If the Markovian process is also stationary, all $p(x_j \mid x_{j-1})$ are described by a single transition probability

(2.13)
$$p(x_j, t_j \mid x_{j-1}, t_{j-1}) = p(x_j, t_j - t_{j-1} \mid x_{j-1}, 0),$$

independent of the initial time $t_{j-1}$.
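A stationary Markov process on a finite state space is thus pinned down by two objects: $p(x_1)$ and one transition matrix. The sketch below (a hypothetical three-state chain; the numbers are illustrative) samples a path from these two objects alone and evaluates its joint probability via the factorization of Eq. (2.12):

```python
import numpy as np

rng = np.random.default_rng(3)

# A stationary Markov process on 3 states is fully specified by an
# initial distribution p(x1) and one transition matrix
# T[b, a] = p(next state b | current state a), as in Eqs. (2.12)-(2.13).
p1 = np.array([0.5, 0.3, 0.2])
T = np.array([[0.8, 0.1, 0.3],
              [0.1, 0.7, 0.3],
              [0.1, 0.2, 0.4]])   # columns sum to 1

def sample_path(n):
    """Draw one realization x_1, ..., x_n using only p1 and T."""
    x = [rng.choice(3, p=p1)]
    for _ in range(n - 1):
        x.append(rng.choice(3, p=T[:, x[-1]]))
    return x

# The joint probability of a given path factorizes as in Eq. (2.12):
path = sample_path(5)
prob = p1[path[0]] * np.prod([T[b, a] for a, b in zip(path, path[1:])])
print(path, prob)
```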

# 2.4 The Chapman–Kolmogorov condition

We have just shown that a Markovian random process is completely characterized by its "transition probabilities" $p(x_2 \mid x_1)$. To what extent is $p(x_2 \mid x_1)$ arbitrary? This question may be answered by taking Eq. (2.4) for a general random process, specializing to the three-time case, and dividing by $p(x_1)$ to obtain

(2.14)
$$P_3(x_3, x_2, x_1) = p(x_3 \mid x_2, x_1)\, p(x_2 \mid x_1)\, p(x_1),$$

or

(2.15)
$$p(x_3, x_2 \mid x_1) = p(x_3 \mid x_2, x_1)\, p(x_2 \mid x_1).$$

If we integrate over $x_2$ we obtain

(2.16)
$$p(x_3 \mid x_1) = \int p(x_3 \mid x_2, x_1)\, p(x_2 \mid x_1)\, dx_2.$$

For the Markovian case this specializes to the Chapman–Kolmogorov condition

(2.17)
$$p(x_3 \mid x_1) = \int p(x_3 \mid x_2)\, p(x_2 \mid x_1)\, dx_2,$$
which must be obeyed by the conditional probabilities of all Markovian processes. The Chapman–Kolmogorov condition is not as restrictive as it appears. Many Markovian processes have transition probabilities that for small $\Delta t$ obey

(2.18)
$$p(a', t + \Delta t \mid a, t) = w_{a'a}\,\Delta t + \delta_{a'a}\Bigl(1 - \Delta t \sum_{a''} w_{a''a}\Bigr),$$

where $w_{a'a}$ is the transition probability per unit time and the second term has been added to conserve probability. It describes the particles that have not left the state $a$, provided that

(2.19)
$$\sum_{a'} p(a', t + \Delta t \mid a, t) = 1.$$

If we set $t = t_0 + \Delta t_0$, we can evaluate the right-hand side of the Chapman–Kolmogorov condition to first order in $\Delta t$ and $\Delta t_0$:

(2.20)
$$\sum_a p(a', t_0 + \Delta t_0 + \Delta t \mid a, t_0 + \Delta t_0)\, p(a, t_0 + \Delta t_0 \mid a_0, t_0) = w_{a'a_0}(\Delta t + \Delta t_0) + \delta_{a'a_0}\Bigl[1 - (\Delta t + \Delta t_0)\sum_{a''} w_{a''a_0}\Bigr],$$

which is just the value $p(a', t_0 + \Delta t + \Delta t_0 \mid a_0, t_0)$ expected from Eq. (2.18).
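For a discrete-state, stationary Markov chain, the integral over the intermediate state in Eq. (2.17) becomes a sum, and the Chapman–Kolmogorov condition is simply matrix multiplication of transition matrices. A minimal sketch (the two-state matrix is illustrative):

```python
import numpy as np

# For a stationary Markov chain with one-step matrix T, the n-step
# transition probabilities are T**n, and Chapman-Kolmogorov, Eq. (2.17),
# says that summing over the intermediate state x2 composes them:
# p(x3 | x1) over m+n steps equals the matrix product of the
# n-step and m-step transition matrices.
T = np.array([[0.9, 0.2],
              [0.1, 0.8]])   # T[a', a]: columns sum to 1

P3 = np.linalg.matrix_power(T, 3)
P5 = np.linalg.matrix_power(T, 5)
P8 = np.linalg.matrix_power(T, 8)

# p(x3 | x1) = sum_x2 p(x3 | x2) p(x2 | x1)
assert np.allclose(P8, P5 @ P3)
```

The identity holds for any split of the time interval, which is exactly the consistency that Eq. (2.17) demands of Markovian transition probabilities.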

Note, however, that this proof did not make use of the conservation condition, Eq. (2.19). This will permit us, in Chapter 8, to apply the Chapman–Kolmogorov condition to processes that are Markovian but whose probability is not normalized.