Appendix A Mathematical methods
A.1 A very brief tutorial to linear algebra
In this section, we cover the very basics of matrix and vector algebra. Matrices and vectors are arrays of numbers, and they are usually denoted by bold letters, capitalized for matrices and non-capitalized for vectors. So let us define the example matrices A, B, and C, and the vectors x and y by
The matrices A and B have 2 rows and 2 columns, so they are 2 × 2 matrices, whereas the matrix C has 2 rows and 3 columns, so it is a 2 × 3 matrix. Vectors are just special cases of matrices in which one of the dimensions is 1. Row vectors consist of a single row and thus have dimension 1 × n, whereas column vectors have a single column and thus have dimension n × 1. For example, the column vector x is a 2 × 1 matrix. Numbers can also be considered as special cases of matrices, with dimension 1 × 1, as is the case with the scalar in Eq. (A.1).
Two matrices that have the same dimensions (same numbers of rows and columns) are added by simply summing their elements, yielding e.g.
Matrices are multiplied by a number by multiplying every element by that number, so that e.g.
Two matrices can also be multiplied by each other, but matrix multiplication is different from element-wise multiplication. To illustrate, multiplying the matrix A by the matrix C yields D = AC, with
For example, the element (1, 3) of the matrix D can be derived by taking row 1 of matrix A (with values 2 and 1) and column 3 of matrix C, multiplying these values element by element, and then summing the results. More generally, assume that the dimensions of the matrix A are n × m, and that the dimensions of the matrix B are m × k. These matrices can be multiplied together because the ‘inner dimensions’ match: the second dimension of the first matrix and the first dimension of the second matrix are both m. The dimensions of the product matrix will be given by the ‘outer dimensions’, i.e. the first dimension of the first matrix and the second dimension of the second matrix. Thus, the dimensions of AB are n × k. The element (i, j) of AB is computed from the elements in the row i of matrix A and the column j of matrix B. Both of these have m numbers, which are multiplied element by element, and added together. As an equation, (AB)_{ij} = Σ_{l=1}^{m} A_{il} B_{lj}.
Unlike with numbers, with matrices it does not generally hold that AB = BA, and therefore it does matter in which order matrices are multiplied by each other. For example, with the example matrices given in Eq. (A.1), we computed the product D = AC, but it is not even possible to compute the product CA because the inner dimensions of the matrices do not match in this order.
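These rules are easy to check numerically. The sketch below uses NumPy with hypothetical stand-ins for the matrices A and C (Eq. (A.1) is not reproduced here, so only the dimensions and the first row of A follow the text):

```python
import numpy as np

# Hypothetical stand-ins: A is 2 x 2, C is 2 x 3 (values for illustration only).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
C = np.array([[1.0, 0.0, 2.0],
              [3.0, 1.0, 1.0]])

D = A @ C            # inner dimensions match: (2 x 2)(2 x 3) -> 2 x 3
print(D.shape)       # (2, 3)

# Element (1, 3) of D (indices start at 0 in Python): row 1 of A times column 3 of C
d13 = A[0, 0] * C[0, 2] + A[0, 1] * C[1, 2]
print(d13 == D[0, 2])  # True

# The product CA is not defined: the inner dimensions 3 and 2 do not match.
try:
    C @ A
except ValueError:
    print("CA is undefined")
```

Note that NumPy distinguishes the matrix product `@` from the element-wise product `*`.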
One often needed matrix operation is that of the transpose, denoted by the superscript T. Transposing a matrix simply means switching the roles of rows and columns, so that e.g. the transpose of the matrix C would be
One common application of matrix algebra is solving a system of linear equations. Thus, we wish to find the solution x to the equation Ax = y, where the matrix A and the vector y are given. For example, let A be as given in Eq. (A.1), and let y be the column vector of Eq. (A.5). Note that in Eq. (A.5) we have used the transpose to convert a row vector into the column vector y. The matrix equation Ax = y corresponds to the system of the two linear equations
where we wish to find the numbers x_1 and x_2 which solve the system. In matrix form, the solution can be written as the product x = A^{-1} y, where A^{-1} is called the inverse of the matrix A.
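In practice such systems are solved with numerical routines. A minimal NumPy sketch, with made-up values for A and y standing in for those of the text:

```python
import numpy as np

# Solving A x = y for a hypothetical 2 x 2 system (illustrative values only).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
y = np.array([5.0, 10.0])

x = np.linalg.solve(A, y)             # numerically preferred
x_via_inverse = np.linalg.inv(A) @ y  # the textbook formula x = A^{-1} y

print(np.allclose(x, x_via_inverse))  # True
print(np.allclose(A @ x, y))          # True: x indeed solves the system
```

`np.linalg.solve` is preferred over forming the inverse explicitly, as it is both faster and numerically more stable.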
Another commonly needed matrix operation is that of finding eigenvalues and eigenvectors. These can be computed only for square matrices, i.e. matrices with an equal number of rows and columns. For such a matrix A, a number λ and a column vector v are said to form an eigenvalue–eigenvector pair if Av = λv. To verify a proposed pair for the matrix A given in Eq. (A.1), one applies matrix multiplication to see that the product Av indeed equals λv. An n × n matrix generally has n eigenvalue–eigenvector pairs. The reader may wish to verify the second such pair for the example matrix.
More precisely, solutions to the equation Av = λv are called the right eigenvectors. Vectors that satisfy v^T A = λ v^T are correspondingly called left eigenvectors. The eigenvalues are the same in both cases, but the left and right eigenvectors are generally different. The left eigenvectors of the matrix A are the right eigenvectors of the transposed matrix A^T. We will not go further here on how eigenvalues and eigenvectors can be computed—in practice this can be done with matrix algebra routines implemented in many programming languages.
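For example, NumPy's `eig` routine returns the right eigenvectors, and the left eigenvectors can be obtained by applying it to the transpose. The matrix below is a hypothetical (non-symmetric) example:

```python
import numpy as np

# Right and left eigenvectors of a hypothetical 2 x 2 matrix.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

eigvals, V = np.linalg.eig(A)        # columns of V are the right eigenvectors
lam, v = eigvals[0], V[:, 0]
print(np.allclose(A @ v, lam * v))   # True: A v = lambda v

# Left eigenvectors of A are the right eigenvectors of A transposed;
# the eigenvalues themselves coincide.
eigvals_left, W = np.linalg.eig(A.T)
print(np.allclose(np.sort(eigvals), np.sort(eigvals_left)))  # True
w = W[:, 0]
print(np.allclose(w @ A, eigvals_left[0] * w))  # True: w^T A = lambda w^T
```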
Other commonly needed matrix operations include e.g. computing the determinant of a matrix, and various matrix decompositions such as the Cholesky decomposition. These are explained in many basic textbooks on linear algebra, such as Poole (2014).
A.2 A very brief tutorial to calculus
A.2.1 Derivatives, integrals, and convolutions
A derivative measures the rate of change. If f(t) is a function of time t, the derivative df/dt, also denoted by f'(t), tells how fast the function changes (increases or decreases) as time increases. Similarly, if f(x) is a function of one-dimensional space x, then df/dx tells how the function changes (increases or decreases) as one moves across the space from left to right, i.e. in the increasing direction of the x coordinate. In Figure A.1, when moving from left to right, the function f (shown in Figure A.1A) first increases, and thus the derivative df/dx (shown in Figure A.1B) attains positive values in this part of the space. Then the function decreases (and thus the derivative is negative), then increases (and thus the derivative is positive), and finally decreases again (and thus the derivative is negative). In those locations where the function attains a local maximum or minimum, it neither increases nor decreases, and thus the derivative equals 0.
The second derivative d^2 f/dx^2 is simply defined as the derivative of the first derivative. Thus, it measures how fast the first derivative increases or decreases, as can be seen by comparing Figures A.1B and C as we just did for Figures A.1A and B. A comparison between Figures A.1A and C illustrates how the second derivative measures the curvature of the function f. If thinking of the function f as e.g. the population density, the second derivative is positive in locations where population density is lower than in the neighbouring locations, whereas the second derivative is negative in locations where population density is higher than in the neighbouring locations. For example, in the middle of Figure A.1A, the function has a local minimum. At this location the function is convex, which corresponds to a positive second derivative, meaning that population density is lower there than in nearby locations. Similarly, the third derivative, also denoted by f'''(x), is defined as the derivative of the second derivative. More generally, the derivative of order n is denoted by f^{(n)}(x).
The integral is the inverse of the derivative, so e.g. the integral of f''' is f'', the integral of which is f', the integral of which is f. We may write f(x) = ∫_{-∞}^{x} f'(y) dy + C. The upper limit of the integration is x, and thus the value of a function at x equals its derivative integrated from some lower limit up to that point. We have set here the lower limit to -∞, but it could have been set to any value. This is because knowing the derivative does not determine the function completely, only up to an integration constant, which we have denoted above by C. Changing the lower limit of the integration can be compensated for by changing the integration constant.
The area under a curve can be computed with an integral. For example, for the function f of Figure A.1A, it holds that ∫_{-∞}^{∞} f(x) dx = 1, and thus the region between the x-axis and the curve has an area of 1.
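Derivatives and integrals can be approximated numerically on a grid. The sketch below uses a Gaussian bump as a hypothetical stand-in for the function of Figure A.1A (which integrates to 1):

```python
import numpy as np

# A Gaussian bump as an illustrative function; its true area is 1.
x = np.linspace(-5, 5, 2001)
dx = x[1] - x[0]
f = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

df = np.gradient(f, x)     # numerical first derivative
d2f = np.gradient(df, x)   # numerical second derivative

# The derivative vanishes at the maximum (here x = 0) ...
i_max = np.argmax(f)
print(abs(df[i_max]) < 1e-6)   # True

# ... and the second derivative is negative there (the function is concave).
print(d2f[i_max] < 0)          # True

# The area under the curve, computed as a Riemann sum, is close to 1.
area = f.sum() * dx
print(round(area, 3))          # 1.0
```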
In addition to differentiation and integration, some ecological models involve a convolution term. Convolution does not apply to a single function but to a pair of functions. The convolution between the functions f and g is denoted by f * g, and it is defined by (f * g)(x) = ∫_{-∞}^{∞} f(y) g(x - y) dy.
Figure A.2 gives examples of how the convolution of two functions behaves. If thinking of the function f shown in Figure A.2A as the density (number per unit area) of individuals (say, trees) and the function g of Figure A.2B as the dispersal kernel (say, of seeds), the convolution f * g shown in Figure A.2C describes the distribution of the seeds after dispersal. In this example the dispersal kernel is symmetric around 0, in which case the convolution produces a smoothing (i.e. a local average) of the function f. Note that in this example the density of seeds is not highest where the density of trees is highest. In Figure A.2D we have taken the convolution of the function f with itself. As the function f is not centred around 0, the convolution has also moved the function in space. As we will discuss in Appendix A.3, one application of convolutions relates to computing the distribution for the sum of two random variables.
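On a grid, the convolution integral becomes a sum, which NumPy's `convolve` computes directly. The density and kernel below are made-up shapes in the spirit of Figure A.2, not the ones of the book:

```python
import numpy as np

# Discretized convolution of a density with a symmetric dispersal kernel.
dx = 0.01
x = np.arange(-5, 5, dx)

density = np.where(np.abs(x) < 1, 1.0, 0.0)  # trees uniformly on (-1, 1)
kernel = np.exp(-x**2 / (2 * 0.25)) / np.sqrt(2 * np.pi * 0.25)  # centred at 0

# np.convolve implements the sum that discretizes the convolution integral;
# the factor dx turns the sum into an approximation of the integral.
smoothed = np.convolve(density, kernel, mode="same") * dx

# A symmetric kernel smooths the density but conserves the total mass (area).
print(round(density.sum() * dx, 1))   # 2.0
print(round(smoothed.sum() * dx, 1))  # 2.0
```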
A.2.2 Differential equations
Differential equations are routinely used in many kinds of models in ecology and evolution. As a very simple example, assume that the population consists initially (at time t = 0) of a single individual, and that the population increases at a constant rate 2, meaning that 2 new individuals arrive per time unit to the population. The mathematical description of this problem can be written as the differential equation dn/dt = 2 with the initial condition n(0) = 1. The solution to this problem is n(t) = 1 + 2t, and thus the population density increases linearly with time, with the slope 2. To see that this function indeed solves the problem, we first note that n(0) = 1 + 2 · 0 = 1, and thus the proposed solution satisfies the initial condition. Further, computing the derivative of the function n(t) with respect to time yields dn/dt = 2, and thus the proposed solution satisfies also the differential equation.
As a side note, this model predicts that e.g. at time t = 0.25 the population will consist of 1.5 individuals, which is clearly not feasible. This is because differential equations are deterministic models defined in a continuous state space (see Table 1.1), and consequently they are best suited for modelling large populations not influenced by stochastic fluctuations. If having initially 1,000 individuals instead of a single individual, there would be 1,500 individuals at time t = 0.25. While the model would still predict non-integer values, approximating 1,501 individuals by 1,500.5 individuals is more accurate than approximating 2 individuals by 1.5 individuals. Another option is to consider the differential equation as modelling the expected value of a stochastic process: if starting with a single individual at time t = 0, then at time t = 0.25 the population may sometimes consist of 1, sometimes of 2 (or 3 or more) individuals, with 1.5 being the expected value. In Appendix A.3 we will discuss Markov processes, which can be seen as stochastic counterparts of differential equations.
In the example just described, we assumed that the arrival rate of new individuals is independent of the present size of the population. Let us then assume that new individuals appear because the individuals that make up the present population produce offspring. If the per capita rate (number per unit time) by which the present individuals produce offspring is 2, then the growth rate of the population is 2n, where n is the current size of the population. This reasoning leads to the differential equation dn/dt = 2n with the initial condition n(0) = 1. The solution to this problem is n(t) = e^{2t}. To see that this is indeed the solution, one needs to check that n(t) satisfies the initial condition as well as the differential equation. Clearly, n(0) = e^{0} = 1, and thus the initial condition holds. Further, dn/dt = 2e^{2t} = 2n(t), and thus also the differential equation holds. Note that now the solution does not correspond to linear growth but to exponential growth.
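When no closed-form solution is available, differential equations are solved numerically. A minimal sketch of the simplest such method (explicit Euler stepping), applied to dn/dt = 2n with n(0) = 1 and checked against the exact exponential solution:

```python
import numpy as np

# Euler integration of dn/dt = 2 n with n(0) = 1, compared with n(t) = exp(2 t).
dt = 0.001
t = np.arange(0, 1 + dt, dt)
n = np.empty_like(t)
n[0] = 1.0
for i in range(1, len(t)):
    n[i] = n[i - 1] + dt * 2 * n[i - 1]   # n(t + dt) ~ n(t) + dt * dn/dt

print(round(n[-1], 2))                 # close to exp(2) ~ 7.39
print(abs(n[-1] - np.exp(2)) < 0.05)   # True; the error shrinks with smaller dt
```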
Differential equations can also involve higher derivatives. For example, consider a second-order differential equation, where the highest order derivative (here, the second derivative) is put on the left-hand side (this is what is being modelled), and the right-hand side involves lower order derivatives (here, the function itself and the first derivative). A second-order differential equation needs to be supplemented with two initial conditions, one for the function and one for its derivative. As an exercise, the reader may wish to verify the solution to such a problem. To do so, one needs to check that the proposed function satisfies both of the initial conditions as well as the differential equation. For the latter, simply compute the value of the left-hand side and the value of the right-hand side, and note that they are indeed the same.
A.2.3 Systems of differential equations
Many ecological models are written as coupled systems of differential equations. Examples involve e.g. multi-species models (say, a predator–prey model, with one equation for each species) and metapopulation models (with one equation for each patch). The mathematically simplest version is that of a linear system of autonomous differential equations. With the help of the matrix notation introduced in Appendix A.1, such a system can be written as dx/dt = Ax, where the variable t is usually time. The equation needs to be supplemented with an initial condition, which can be written as x(0) = x_0. This is a linear system because Ax is a linear function of x. It is an autonomous system because dx/dt depends on the variable t only through the state x(t), i.e. the matrix A is independent of time.
Autonomous linear systems of differential equations can be solved with a standard procedure. This consists of making an ‘ansatz’ (an educated guess which will later be verified to hold) that the solution is of the form x(t) = e^{λt} v. So let us substitute this ansatz to the differential equation and see what happens. The derivative of the ansatz is given by dx/dt = λ e^{λt} v, and the matrix product is given by Ax = e^{λt} Av. Equating these two gives λ e^{λt} v = e^{λt} Av. This will be true for all times t if and only if Av = λv. Thus, we have shown that the ansatz is a solution to the differential equation if (λ, v) is an eigenvalue–eigenvector pair of the matrix A (see Appendix A.1). In general, if A is an n × n matrix, it will have n eigenvalue–eigenvector pairs, which we may denote by (λ_i, v_i) for i = 1, …, n. As the system of differential equations is linear, not only do all the functions e^{λ_i t} v_i satisfy the differential equation, but so does any linear combination x(t) = Σ_i w_i e^{λ_i t} v_i. The initial condition is needed to determine the weights w_i of the linear combination. As for t = 0 it holds that e^{λ_i · 0} = 1, we obtain x_0 = Σ_i w_i v_i. Denoting by V the matrix which has the eigenvectors v_i as its columns, we can write the initial condition in matrix notation as x_0 = Vw, where w is the column vector with elements w_i. The solution can be written w = V^{-1} x_0, where V^{-1} is the inverse of the matrix V (see Appendix A.1).
As an example, let A be the matrix
and let the initial condition x_0 be given. Denoting the two components of the vector x by x_1 and x_2, the system of differential equations can also be written as (see Appendix A.1)
with the initial condition given for x_1(0) and x_2(0). The eigenvalues of the matrix A are complex numbers, involving the imaginary unit i with i^2 = -1. It may seem confusing that the eigenvalues are complex numbers, as the original system of differential equations had nothing to do with complex numbers! But there is nothing wrong here; the complexity of the eigenvalues just indicates that the solution will oscillate in time (this relates to what is called Euler’s formula: e^{iθ} = cos θ + i sin θ). After computing the eigenvalue–eigenvector pairs and solving w = V^{-1} x_0, the solution can be presented as
Reassuringly, the imaginary parts have cancelled out from the solution, and thus we needed complex numbers only as an intermediate step. The dynamics of this system are illustrated in Figure A.3. They resemble the predator–prey dynamics discussed in Section 4.2.
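The whole procedure, including the cancellation of the imaginary parts, can be verified numerically. The matrix below is a hypothetical example chosen to have purely imaginary eigenvalues +i and -i, so the solution oscillates; for this matrix the exact solution is (cos t, -sin t):

```python
import numpy as np

# Solving dx/dt = A x with x(0) = x0 through the eigen-decomposition.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])   # eigenvalues +i and -i: oscillations
x0 = np.array([1.0, 0.0])

eigvals, V = np.linalg.eig(A)   # columns of V are the eigenvectors v_i
w = np.linalg.solve(V, x0)      # weights w = V^{-1} x0

t = 1.234
xt = V @ (w * np.exp(eigvals * t))   # x(t) = sum_i w_i e^{lambda_i t} v_i

print(np.allclose(xt.imag, 0))       # True: the imaginary parts cancel out
print(np.allclose(xt.real, [np.cos(t), -np.sin(t)]))  # True
```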
Systems of differential equations used to model ecological or evolutionary dynamics will often be non-linear and non-autonomous. For the general case, a system of first-order differential equations can be written as dx/dt = f(x, t), where f is some vector-valued function which may depend in any non-linear manner on the current state of the system x(t), as well as on time t. Unlike for linear and autonomous systems, there is no general recipe for how to solve this equation, and one often needs to resort to numerical solutions. However, specific properties of the system (most importantly, local stability of equilibrium solutions) can often be found analytically by linearizing the system. We refer the interested reader to the many textbooks covering these topics (e.g. Swokowski 2000).
A.2.4 Partial differential equations
Sometimes differential equations involve derivatives both with respect to time and space. In such a case, they are called partial differential equations, to be separated from ordinary differential equations, which involve derivatives only with respect to one variable. As an example, consider the diffusion equation (Eq. (2.2)) of Chapter 2, simplified here to a single spatial dimension as
∂u/∂t = D ∂²u/∂x², (A.11)
where D is the diffusion constant.
The state variable in this equation, denoted by u = u(x, t), is a function of both time t and spatial location x. In the context of Chapter 2, u would model the probability density of an individual’s location. The symbol ∂ that appears on both sides of the equation stands for a partial derivative. On the left-hand side the partial derivative is written as ∂u/∂t, and it measures the rate of change with respect to time t. Thus, the left-hand side of the equation asks how fast the function u increases or decreases with time. The right-hand side of the equation gives the answer: the rate of change is proportional to the second spatial derivative. Note that on the right-hand side, the partial derivative is taken not with respect to time but with respect to the spatial variable x, and it is not the first but the second partial derivative, as indicated by the superscripts 2 in ∂²u/∂x². Based on the interpretation of the second derivative, Eq. (A.11) says that the function u will increase in locations where it currently has a smaller value than in neighbouring locations, and that it will decrease in locations where it currently has a higher value than in neighbouring locations. Therefore, the function becomes smoother than it was originally, as peaks become flattened out and dips become filled.
A partial differential equation also needs to be supplemented by an initial condition u(x, 0). Unlike with ordinary differential equations, the initial condition is not a single value, but a function which describes the initial state in all spatial locations x.
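The smoothing behaviour of the diffusion equation can be illustrated with a simple finite-difference scheme. This sketch uses explicit Euler time steps and, as a simplification, periodic boundaries via `np.roll`; the initial peak is a made-up example:

```python
import numpy as np

# Finite-difference sketch of du/dt = D d2u/dx2: peaks flatten, dips fill in.
D, dx, dt = 1.0, 0.1, 0.001      # dt chosen small enough for stability
x = np.arange(0, 10 + dx, dx)
u = np.exp(-(x - 5)**2)          # initial condition: a single peak, max 1

for _ in range(1000):            # integrate up to t = 1
    d2u = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    u = u + dt * D * d2u         # explicit Euler step in time

# The peak has decreased and the tails have risen: the profile is smoother.
print(u.max() < 1.0)             # True
print(u.min() > 0.0)             # True: mass spreads out everywhere
```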
A.2.5 Difference equations
The difference between differential and difference equations is that difference equations are defined in discrete time, while differential equations are defined in continuous time. For example, a discrete-time analogy of the differential equation dn/dt = 2n would be the difference equation n(t + 1) = 2n(t). This equation tells that at every time step the number of individuals in the population doubles. Starting again with a single individual, n(0) = 1, the solution to this problem is n(t) = 2^t. Like the solution of the corresponding differential equation, this also corresponds to exponential growth. As in this book we mainly focus on continuous-time models, we have focused on differential equations, but very similar mathematical tools exist also for difference equations. Often the same biological problem can be modelled almost identically by a differential or a difference equation, and thus the choice between these two is in many cases somewhat arbitrary. However, in other cases these two approaches can give qualitatively different answers.
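Difference equations are naturally iterated in a loop. A minimal sketch of the doubling population described above:

```python
# The difference equation n(t+1) = 2 n(t) with n(0) = 1: the population
# doubles every time step, so n(t) = 2**t.
n = 1
trajectory = [n]
for _ in range(10):
    n = 2 * n
    trajectory.append(n)

print(trajectory[:5])            # [1, 2, 4, 8, 16]
print(trajectory[10] == 2**10)   # True
```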
A.3 A very brief tutorial to random variables
Differential equations and difference equations (Section A.2) are examples of deterministic models. This means that ‘running’ the same model repeatedly always leads to the same solution. Many, if not all, ecological and evolutionary phenomena are, however, not of deterministic but of stochastic nature, meaning that running exactly the same model twice (e.g. repeating the same experiment twice) usually results in different outcomes. Such random variation can be captured by stochastic processes. A central concept behind stochastic processes is that of a random variable, a concept that is also needed for understanding the mathematical foundations of statistics that we discuss in Appendix B.
A.3.1 Discrete valued random variables
Random variables are usually denoted by capital letters such as X. An example of a random variable could be the outcome of a toss of a coin or a die. In the case of the die, there are six possible outcomes, 1, 2, …, 6, which simply correspond to the number of dots (called pips) on the six faces of the die. Thus, in this example we may write X = x, where x = 1, …, 6. Assuming a fair (i.e. unbiased) die, the probabilities of each of these outcomes are identical, so that P(X = x) = 1/6 for x = 1, …, 6. X is a discrete valued random variable, as the outcomes form a discrete set. The probability distribution of X can be described simply by listing the probabilities of all possible outcomes, as we have just done.
One of the most basic properties of a random variable is its expectation, denoted by E[X]. The expected value is defined as the mean value of the outcomes, weighted by their probabilities, E[X] = Σ_x x P(X = x). For the die example, the expected result, also called the mean or the average, is E[X] = (1 + 2 + 3 + 4 + 5 + 6)/6 = 3.5. If throwing the die 1,000 times, the sum of the values should be close to 3,500. In addition to the expectation, another routinely used statistic is the variance, which measures the amount of variability among the outcomes. Variance is defined as Var[X] = E[(X - E[X])²]. In this formula, X - E[X] asks how far the random variable is from its expectation. Variance is thus defined as the expectation of the squared difference between the random variable and its expectation. In the case of the die example, we obtain Var[X] = ((1 - 3.5)² + (2 - 3.5)² + … + (6 - 3.5)²)/6 = 35/12 ≈ 2.92.
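These two quantities are easy to compute exactly, e.g. with Python's rational arithmetic:

```python
from fractions import Fraction

# Expectation and variance of a fair die, computed exactly.
outcomes = [1, 2, 3, 4, 5, 6]
p = Fraction(1, 6)

mean = sum(x * p for x in outcomes)
var = sum((x - mean)**2 * p for x in outcomes)

print(mean)   # 7/2
print(var)    # 35/12, roughly 2.92
```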
Random variables can have infinitely many possible outcomes. As one example, let us consider a random variable X for which the possible outcomes are all the non-negative integers 0, 1, 2, …. Let us assume that the first outcome (X = 0) takes place every second time, i.e. with probability P(X = 0) = 1/2. Let us assume that the second outcome (X = 1) takes place in every second case, conditional on X = 0 not taking place, and thus with probability P(X = 1) = 1/4. Assuming that the third possible outcome (X = 2) would again hold every second time, conditional on X = 0 or X = 1 not taking place, we have P(X = 2) = 1/8. Continuing similarly, we obtain for the general case P(X = k) = (1/2)^{k+1}. The probabilities are illustrated in Figure A.4A. They describe the probability density function of a random variable which follows the geometric distribution with parameter value 1/2. For any discrete probability distribution such as the geometric distribution, summing up the probabilities of all possible outcomes gives one, i.e. Σ_k P(X = k) = 1.
Applying the formulae of expectation and variance described two paragraphs earlier, we find that for the geometrically distributed (with parameter 1/2) random variable X these are E[X] = 1 and Var[X] = 2. We note that for this distribution we were able to compute the values of the infinite sums exactly, but in a more general case one may need to compute them numerically.
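Numerical computation here means truncating the infinite sums at a large value of k, after which the remaining terms are negligible:

```python
# Expectation and variance of the geometric distribution with parameter 1/2,
# approximating the infinite sums by truncation.
p = [0.5**(k + 1) for k in range(200)]   # P(X = k) = (1/2)**(k+1)

total = sum(p)
mean = sum(k * pk for k, pk in enumerate(p))
var = sum((k - mean)**2 * pk for k, pk in enumerate(p))

print(round(total, 10))  # 1.0: the probabilities sum to one
print(round(mean, 10))   # 1.0
print(round(var, 10))    # 2.0
```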
A.3.2 Continuous valued random variables
Let us then move to random variables with a continuous state-space. An archetypal example of such a random variable is given by the normal distribution N(μ, σ²) with mean μ and variance σ². The notation X ~ N(μ, σ²) is to be read as ‘the random variable X is distributed according to the normal distribution with mean μ and variance σ²’. Let us consider for simplicity the so-called standard normal distribution N(0, 1), with mean 0 and variance 1. The random variable X can obtain any real value, and thus repeated samples from the distribution will generally all be different. While any value is possible, not all values are equally likely. The relative probabilities of different values are described by the probability density function of the distribution. In the case of N(0, 1), the probability density function (illustrated in Figure A.4C) is f(x) = e^{-x²/2}/√(2π). The value of the probability density at the most likely value x = 0 is f(0) = 1/√(2π) ≈ 0.4, whereas values far from 0 are extremely unlikely.
In the case of a discrete valued random variable, the values of the probability density function are probabilities which sum to 1. But in the case of a continuous random variable, there is a continuum of possible values. Thus, we cannot say that the most likely value x = 0 is sampled with probability 0.4. Even though the sampled value could be exactly 0, mathematically the probability of sampling this value is 0, as is the probability of sampling any specific single value. With continuous valued random variables, we may instead ask with what probability the random variable will fall within a specific range. The answer to this question is obtained by integrating the probability density function over that range. For example, as the area under the probability density curve within the range -1 ≤ x ≤ 1 is 0.683, the fraction of cases where a random variable sampled from N(0, 1) will have its value within this range is 68.3%.
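This interpretation can be checked by Monte Carlo simulation, counting the fraction of standard normal samples that fall in the interval:

```python
import numpy as np

# Fraction of N(0, 1) samples falling in (-1, 1): should be close to 0.683.
rng = np.random.default_rng(1)
samples = rng.standard_normal(1_000_000)

fraction = np.mean((samples > -1) & (samples < 1))
print(round(fraction, 2))                # 0.68

# The density at 0 is 1/sqrt(2 pi), roughly 0.4.
print(round(1 / np.sqrt(2 * np.pi), 2))  # 0.4
```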
The probability by which the random variable X will obtain a value that is at most a given threshold x is called the cumulative distribution function, often denoted by F(x) = P(X ≤ x). For continuous random variables, F(x) is defined by F(x) = ∫_{-∞}^{x} f(y) dy, whereas for discrete random variables it is defined by F(x) = Σ_{y ≤ x} P(X = y). Note that inside the integral (or sum) we have used another name for the variable (y) because x is reserved for the integration (summation) limit.
In the case of discrete valued random variables, the probabilities must sum to 1. Similarly, in the case of a continuous valued random variable, the probability density function needs to integrate to 1, i.e. ∫_{-∞}^{∞} f(x) dx = 1. These statements can be expressed with the help of the cumulative distribution function as lim_{x→∞} F(x) = 1. Figure A.4 illustrates how the cumulative distribution functions of the geometric and the normal distributions converge to 1 as x increases. Note that the probability density function of the standard normal distribution involves a factor 1/√(2π). This factor is called the normalizing constant, as it has been chosen so that the density integrates to 1.
The expected value of a continuous valued random variable is defined like that for a discrete valued random variable, except replacing the summation by an integral. Thus, E[X] = ∫_{-∞}^{∞} x f(x) dx, where each possible value x of the random variable is weighted by its probability density f(x). For X ~ N(0, 1), it holds that E[X] = 0. Similarly, the variance of a continuous valued random variable is defined by Var[X] = ∫_{-∞}^{∞} (x - E[X])² f(x) dx, yielding 1 for the standard normal N(0, 1). Note that we thus derived that the mean and variance of an N(0, 1) random variable are indeed 0 and 1.
A.3.3 Joint distribution of two or more random variables
Often it is interesting to consider two or more random variables at the same time. Denoting two random variables by X and Y, we may consider their joint distribution, i.e. the probabilities (or values of the probability density) of all possible combinations of their outcomes (x, y). Covariance and correlation measure the degree by which the two random variables depend on each other. The covariance is defined as Cov[X, Y] = E[(X - E[X])(Y - E[Y])], and correlation is obtained by scaling the covariance by the variances, Cor[X, Y] = Cov[X, Y]/√(Var[X] Var[Y]). While covariance can obtain any value from -∞ to ∞, correlation is restricted to the range from -1 to 1. If two variables are statistically independent, their covariance, and thus their correlation, is 0.
To illustrate the concept of covariance, let us consider the multivariate normal distribution. The notation X ~ N(μ, Σ) means that the random vector X follows a multivariate normal distribution with mean μ (a vector) and variance–covariance matrix Σ (a matrix). As an example, let us consider the bivariate case (n = 2) with
In this example, the vector X has two elements, X = (X_1, X_2). The first one of these is distributed as N(μ_1, Σ_{11}) and the second one as N(μ_2, Σ_{22}), where the means and the variances of the univariate normal distributions are obtained from the mean vector μ and the diagonal elements of the variance–covariance matrix Σ. The covariance of 7 between the two random variables can be translated into the corresponding correlation. The dots in Figure A.5A illustrate 200 random deviates sampled from this distribution, and the ellipses the 50 and 95% quantiles of the probability distribution. The mean of the distribution is centred at μ, and the positive correlation assumed between the two variables is evident in the figure. Figure A.5B illustrates otherwise the same case, but the covariance has been changed to a negative value, leading to a negative correlation.
A.3.4 Sums of random variables
Expectations, variances, and covariances have many useful properties. For example, one may wish to study the properties of a random variable Z = aX + bY obtained as a weighted sum of two random variables X and Y, where the weights a and b are real numbers. The expectation and variance of Z can be computed from those of the original variables as E[Z] = aE[X] + bE[Y], and Var[Z] = a²Var[X] + b²Var[Y] + 2ab Cov[X, Y]. Similarly, the covariance between two such weighted sums can be computed from the pairwise covariances of the original variables, yielding
If the random variables X and Y are independent of each other, the covariance is 0, and thus the formula for the variance simplifies to Var[Z] = a²Var[X] + b²Var[Y].
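These formulae can be checked by simulation. The means, standard deviations, and weights below are arbitrary illustrative choices:

```python
import numpy as np

# Checking E[aX + bY] = aE[X] + bE[Y] and, for independent X and Y,
# Var[aX + bY] = a^2 Var[X] + b^2 Var[Y].
rng = np.random.default_rng(2)
a, b = 2.0, -1.0
X = rng.normal(1.0, 2.0, size=1_000_000)   # mean 1, sd 2
Y = rng.normal(3.0, 0.5, size=1_000_000)   # mean 3, sd 0.5, independent of X
Z = a * X + b * Y

print(round(Z.mean(), 1))   # close to 2*1 - 1*3 = -1
print(round(Z.var(), 1))    # close to 4*4 + 1*0.25 = 16.25
```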
In addition to the expectation and variance, it is possible to compute the entire probability distribution for the (possibly weighted) sum of two random variables. To illustrate, let us consider the example of throwing the die (see Section A.3.1) but now doing it twice, with the first and second outcomes called X and Y. The sum Z = X + Y of these two outcomes may obtain any value between 2 and 12. For example, the value of 5 will be obtained by the combinations (1, 4), (2, 3), (3, 2), and (4, 1). As each of these takes place with probability 1/36, we obtain P(Z = 5) = 4/36 = 1/9. More generally, if we denote by f and g the probability distributions of the random variables X and Y (which in the die example are identically distributed, but which more generally could be different), we have
P(Z = z) = Σ_x f(x) g(z - x). (A.15)
Equation (A.15) is simply based on going through all the cases for which the sum of the two random variables obtains the specific value z. This happens if the first variable obtains any value x, and the second variable obtains the value z - x, which makes the sum equal z, as we just illustrated for the die example for z = 5.
We note that Eq. (A.15) holds only if the random variables X and Y are independent of each other. To see this, let us not throw the second die at all, but decide to set the value of the second die to the value of the first die, Y = X. In this case, the outcome of the second die is still a random variable, as it obtains each of the values from 1 to 6 with probability 1/6. However, it is not independent of the outcome X of the first die. Now the possible outcomes of Z = X + Y are 2, 4, 6, 8, 10, and 12, and each of these takes place with probability 1/6.
Equation (A.15) holds also for continuous valued random variables if replacing the sum by an integral. Thus, assuming that X and Y are independent continuous valued random variables, the probability density function of Z = X + Y is given by
f_Z(z) = ∫_{-∞}^{∞} f_X(x) f_Y(z - x) dx = (f_X * f_Y)(z), (A.16)
where the symbol * denotes convolution (see Section A.2). As an example, let X be a uniformly distributed random variable in the range from 0 to 1. Let also the random variable Y follow the same distribution. Evidently, the sum Z = X + Y may obtain any value from 0 to 2. At first sight one might expect that Z could obtain any value from 0 to 2 equally likely, i.e. that f_Z(z) = 1/2 for 0 ≤ z ≤ 2. This would actually be the case if we made the two random variables fully correlated by sampling only X and then setting Y = X, as we did in the case of the die example where we set the value of the second die to that of the first die.
To compute the distribution of Z = X + Y when X and Y are independent, let us apply Eq. (A.16). With X uniformly distributed between 0 and 1, the probability density function is f_X(x) = 1 if 0 ≤ x ≤ 1 and f_X(x) = 0 otherwise. As Y is identically distributed, it has the same probability density function. Thus we obtain
f_Z(z) = ∫_{-∞}^{∞} f_X(x) f_Y(z - x) dx = ∫_{0}^{1} f_Y(z - x) dx. (A.17)
In Eq. (A.17) we have first utilized the fact that f_X(x) = 1 for 0 ≤ x ≤ 1 and f_X(x) = 0 otherwise. We have then utilized the fact that f_Y(z - x) = 1 if and only if 0 ≤ z - x ≤ 1. As the integrand is just the constant 1, the value of the integral is the length of the integration interval. Thus, if 0 ≤ z ≤ 1, we have f_Z(z) = z. If 1 ≤ z ≤ 2, we have f_Z(z) = 2 - z. This probability density looks like a triangle: the most likely value for the sum is z = 1, and the value of the probability density decreases linearly to 0 if approaching the extremes of 0 or 2. Informally, there are more combinations of x and y which produce the sum of 1 than e.g. the sum of 1/2. Note that this is analogous to the earlier discussion on the sum of two throws of the die, for which case e.g. P(Z = 7) is greater than P(Z = 5).
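The triangular shape is easy to confirm by simulating the sum of two independent uniform variables and comparing against the probabilities implied by the density derived above:

```python
import numpy as np

# The density of the sum of two independent Uniform(0, 1) variables is a
# triangle peaking at z = 1; here we check two implied probabilities.
rng = np.random.default_rng(3)
Z = rng.uniform(0, 1, 1_000_000) + rng.uniform(0, 1, 1_000_000)

# P(0.9 < Z < 1.1): integrating the triangular density gives 0.19.
print(round(np.mean((Z > 0.9) & (Z < 1.1)), 2))   # about 0.19

# By contrast, values near an extreme are rare: P(Z < 0.1) = 0.1**2 / 2.
print(np.mean(Z < 0.1))   # about 0.005
```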
A special case often needed in the context of hierarchical generalized linear models (Section B.1) is the sum of two normally distributed variables. Assume that $x_1 \sim N(\mu_1, \sigma_1^2)$ and that $x_2 \sim N(\mu_2, \sigma_2^2)$, and let $y = x_1 + x_2$. Assuming that $x_1$ and $x_2$ are independent, it holds that $y \sim N(\mu_1 + \mu_2,\, \sigma_1^2 + \sigma_2^2)$. Thus, the sum of two independent normally distributed random variables is also normally distributed, the expected value being the sum of the expectations, and the variance being the sum of the variances.
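A quick Monte Carlo sketch illustrates the addition of expectations and variances; the particular parameter values below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sum of two independent normals N(mu1, s1^2) and N(mu2, s2^2).
mu1, s1 = 1.0, 2.0
mu2, s2 = -0.5, 1.5
n = 1_000_000
y = rng.normal(mu1, s1, n) + rng.normal(mu2, s2, n)

print(y.mean())  # close to mu1 + mu2 = 0.5
print(y.var())   # close to s1**2 + s2**2 = 6.25
```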
(p.228) A.3.5 An application of random variables to quantitative genetics
As an illustration of how to work with random variables, we follow here Appendix A of Ovaskainen et al. (2011) to derive Eq. (5.4) in the main text. Our starting point is Eq. (5.1), which models the breeding value $a_i$ of an individual $i$ as
$$a_i = \sum_{l} \sum_{u} \alpha_{lu} \left( z^{(1)}_{ilu} + z^{(2)}_{ilu} \right), \qquad \text{(A.18)}$$
where the sums run over the genes $l$ and the allelic variants $u$, the indicator variable $z^{(k)}_{ilu}$ tells whether allele copy $k$ ($k = 1, 2$) of gene $l$ in individual $i$ is of type $u$, and $\alpha_{lu}$ is the corresponding allelic effect.
We consider the allelic states $z^{(k)}_{ilu}$ as random variables (even if they are denoted by non-capitalized letters), and the allelic effects $\alpha_{lu}$ as fixed. By the covariance formula for a sum of random variables (Eq. (A.13)), and noting that covariances between allelic states of different genes vanish because distinct genes are assumed independent, we obtain
$$\mathrm{Cov}(a_i, a_j) = \sum_{l} \sum_{u,v} \alpha_{lu}\, \alpha_{lv} \sum_{k=1}^{2} \sum_{m=1}^{2} \mathrm{Cov}\!\left( z^{(k)}_{ilu},\, z^{(m)}_{jlv} \right). \qquad \text{(A.19)}$$
We then apply the definition of covariance to write
$$\mathrm{Cov}\!\left( z^{(k)}_{ilu},\, z^{(m)}_{jlv} \right) = \mathrm{E}\!\left[ z^{(k)}_{ilu}\, z^{(m)}_{jlv} \right] - \mathrm{E}\!\left[ z^{(k)}_{ilu} \right] \mathrm{E}\!\left[ z^{(m)}_{jlv} \right]. \qquad \text{(A.20)}$$
We denote the frequency of the allele $u$ of gene $l$ in the ancestral population by $p_{lu}$. As we assume that the genes under consideration are neutral, the expected allele frequency for any individual is equal to that in the ancestral population, and thus $\mathrm{E}[z^{(k)}_{ilu}] = p_{lu}$. Recall that $z^{(k)}_{ilu}$ is an indicator function taking the value 1 for one of the allelic variants and 0 for all other allelic variants. Thus, while one might at first think that $\mathrm{E}[(z^{(k)}_{ilu})^2] = p_{lu}^2$, it actually holds that $\mathrm{E}[(z^{(k)}_{ilu})^2] = \mathrm{E}[z^{(k)}_{ilu}] = p_{lu}$. For the same reason, it holds that $z^{(k)}_{ilu}\, z^{(k)}_{ilv} = 0$ for $u \ne v$. The value of the product $z^{(k)}_{ilu}\, z^{(m)}_{jlu}$ equals 1 if in gene $l$ the allele $k$ of the individual $i$ is of the same type $u$ as the allele $m$ of the individual $j$, whereas the product equals 0 if this is not the case. This means that the probability by which the allele $k$ of the individual $i$ is of the same type $u$ as the allele $m$ of the individual $j$ is given by $\mathrm{E}[z^{(k)}_{ilu}\, z^{(m)}_{jlu}]$. This probability can be decomposed into two components. First, the alleles may be identical by descent, the ancestral allele being of type $u$. Given the definition of coancestry coefficients (see Section 5.2), this happens with probability $\theta_{ij}\, p_{lu}$. The second option is that the alleles are not identical by descent, but that the alleles in the ancestral generation were both of type $u$. As the probability of the latter case is $(1 - \theta_{ij})\, p_{lu}^2$, we obtain
$$\mathrm{E}\!\left[ z^{(k)}_{ilu}\, z^{(m)}_{jlu} \right] = \theta_{ij}\, p_{lu} + (1 - \theta_{ij})\, p_{lu}^2. \qquad \text{(A.21)}$$
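The indicator-variable identity $\mathrm{E}[z^2] = \mathrm{E}[z]$ used above is easy to verify by simulation; the allele frequency $p = 0.3$ below is an arbitrary illustrative value, not one from the text.

```python
import numpy as np

rng = np.random.default_rng(42)

# z is an indicator of an allele being of type u, with hypothetical frequency p.
p = 0.3
z = (rng.random(1_000_000) < p).astype(float)  # 1 with probability p, else 0

# Because z takes only the values 0 and 1, z**2 == z elementwise,
# so E[z^2] = E[z] = p rather than p^2; Var(z) is therefore p * (1 - p).
print(z.mean())         # close to p = 0.3
print((z ** 2).mean())  # identical to z.mean()
print(z.var())          # close to p * (1 - p) = 0.21
```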
As the individuals $i$ and $j$ can have alleles of different types $u \ne v$ only if the alleles are not identical by descent (recall from Section 5.2 that we have ignored mutations), we obtain for $u \ne v$
$$\mathrm{E}\!\left[ z^{(k)}_{ilu}\, z^{(m)}_{jlv} \right] = (1 - \theta_{ij})\, p_{lu}\, p_{lv}. \qquad \text{(A.22)}$$
Substituting Eqs. (A.21) and (A.22) into Eq. (A.20), the covariance can be written as
$$\mathrm{Cov}\!\left( z^{(k)}_{ilu},\, z^{(m)}_{jlv} \right) = \begin{cases} \theta_{ij}\, p_{lu} (1 - p_{lu}), & \text{if } u = v, \\ -\,\theta_{ij}\, p_{lu}\, p_{lv}, & \text{if } u \ne v. \end{cases} \qquad \text{(A.23)}$$
To simplify Eq. (A.23), we write it in the unified form $\mathrm{Cov}( z^{(k)}_{ilu}, z^{(m)}_{jlv} ) = \theta_{ij}\, p_{lu} (\delta_{uv} - p_{lv})$, where $\delta_{uv}$ is Kronecker's delta, which obtains the value of 1 if $u = v$ and the value of 0 if this is not the case. In this notation, we have
$$\mathrm{Cov}(a_i, a_j) = 4\, \theta_{ij} \sum_{l} \sum_{u,v} \alpha_{lu}\, \alpha_{lv}\, p_{lu} (\delta_{uv} - p_{lv}), \qquad \text{(A.24)}$$
which equals Eq. (5.4), $\mathrm{Cov}(a_i, a_j) = 2\, \theta_{ij}\, \sigma_A^2$, if we measure the amount of additive variation present in the ancestral generation by
$$\sigma_A^2 = 2 \sum_{l} \sum_{u,v} \alpha_{lu}\, \alpha_{lv}\, p_{lu} (\delta_{uv} - p_{lv}). \qquad \text{(A.25)}$$
A.4 A very brief tutorial to stochastic processes
After the discussion of random variables, we are ready to move to stochastic processes. A stochastic process is a random variable that depends on time. Thus, instead of a single random variable $x$, we consider a collection of random variables $x_t$ indexed by time $t$. Time can be continuous or discrete, and the state space (i.e. the set of values which the random variable can attain) can be continuous or discrete as well.
A.4.1 Markov chains
Let us start from the discrete time setting $t = 0, 1, 2, \ldots$, and consider the simplest possible example in which the state space involves only two states, so that the random variable $x_t$ has either the value 1 or the value 2. A realization of such a stochastic process may look like (1, 1, 1, 2, 2, 1, 2, 2, 1, 2, 1, …), where we have shown the values of the random variables at times $t = 0, 1, \ldots, 10$.
The simplest and most important class of this kind of models is given by Markov chains. The key feature of a Markov chain is that the probability distribution of the random variable $x_{t+1}$ is allowed to depend only on the state of the system in the previous time step, i.e. on the value of $x_t$. Markov chains can be described with the help of a transition matrix $\mathbf{P}$. The elements of this matrix, denoted by $P_{ij}$, are the probabilities that the process will next move to state $j$ if it is currently in state $i$. As an example, let us consider the two-state transition matrix
$$\mathbf{P} = \begin{pmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{pmatrix},$$
where each row sums to one.
If the system is presently in state 1, it may either stay in that state (with probability $P_{11}$) or move to state 2 (with probability $P_{12} = 1 - P_{11}$). If the system is currently in state 2, it will stay in that state with probability $P_{22}$ and move to state 1 with probability $P_{21} = 1 - P_{22}$. Figure A.6A shows a simulation of this Markov chain model. As the transition matrix is set up so that state 1 is favoured, it is intuitive that the system spends more time in that state than in state 2.
Markov chains are mathematically convenient since much of their behaviour can be derived with the help of matrix algebra. Let us denote by $\mathbf{p}(t)$ a row vector that describes the probability distribution of the system's state at time $t$, coded so that $p_1(t)$ is the probability that the system is in state 1 and $p_2(t)$ is the probability that the system is in state 2. At each time step, the probability distribution changes according to the rules of the transition matrix: the probability that the system will be in state 1 at time $t + 1$ is the probability that it was at time $t$ in state 1 and stays there, plus the probability that it was in state 2 and will move to state 1. As an equation, $p_1(t+1) = p_1(t) P_{11} + p_2(t) P_{21}$. This equation and the analogous equation for $p_2(t+1)$ can be combined in the more compact matrix notation (see Appendix A.1) as $\mathbf{p}(t+1) = \mathbf{p}(t)\, \mathbf{P}$. Thus, the probability distribution is obtained by multiplying the probability distribution by the transition matrix from the right.
(p.230) As an example, let us assume that the system is initially in state 1 and thus $\mathbf{p}(0) = (1, 0)$. Then we may compute $\mathbf{p}(1) = \mathbf{p}(0)\, \mathbf{P}$, $\mathbf{p}(2) = \mathbf{p}(1)\, \mathbf{P}$, and $\mathbf{p}(3) = \mathbf{p}(2)\, \mathbf{P}$, which numbers are illustrated in Figure A.6C. We may use matrix algebra to jump directly over multiple time steps. For example, $\mathbf{p}(2) = \mathbf{p}(0)\, \mathbf{P}^2$, where $\mathbf{P}^2$ means that the matrix $\mathbf{P}$ is raised to the second power and thus multiplied by itself in the sense of matrix multiplication. More generally, $\mathbf{p}(t) = \mathbf{p}(0)\, \mathbf{P}^t$, and thus $\mathbf{P}^t$ gives the transition matrix describing how the probability distribution changes over $t$ time steps.
The long-term behaviour of a Markov chain is characterized by its stationary state. In the example in the previous paragraph, the probability distribution approaches $\mathbf{p}^* = (0.75, 0.25)$ as time increases. Mathematically, this can be written as $\lim_{t \to \infty} \mathbf{p}(t) = \mathbf{p}^*$. Thus, if sampling the system long enough after the process started, the probability to find it in state 1 will be 0.75 and the probability to find it in state 2 will be 0.25. This means that the system will (p.231) spend 75% of its time in state 1 and 25% of its time in state 2. The stationary state can be found analytically by noting that it must satisfy the equation $\mathbf{p}^* = \mathbf{p}^* \mathbf{P}$. As discussed in Appendix A.1, this means that $\mathbf{p}^*$ is a left eigenvector of $\mathbf{P}$ associated with the eigenvalue 1. Solving the eigenvector problem with standard matrix algebra tools yields $\mathbf{p}^* = (0.75, 0.25)$.
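The numerical transition matrix of the book's example is not reproduced here, so the sketch below assumes an illustrative matrix chosen only so that its stationary state is the $(0.75, 0.25)$ quoted in the text; any two-state transition matrix with $P_{21} = 3 P_{12}$ would do.

```python
import numpy as np

# Hypothetical transition matrix favouring state 1; P21 = 3 * P12 gives
# the stationary state (0.75, 0.25) discussed in the text.
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])

# Iterate p(t + 1) = p(t) P from the initial condition p(0) = (1, 0).
p = np.array([1.0, 0.0])
for _ in range(50):
    p = p @ P
print(p)  # approaches (0.75, 0.25)

# Alternatively, find the stationary state as the left eigenvector of P
# associated with the eigenvalue 1 (eigenvectors of P.T are left eigenvectors).
vals, vecs = np.linalg.eig(P.T)
stat = np.real(vecs[:, np.argmax(np.real(vals))])
stat = stat / stat.sum()  # normalize to a probability distribution
print(stat)               # (0.75, 0.25)
```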
Note that the probabilities $\mathbf{p}(1)$, $\mathbf{p}(2)$, and $\mathbf{p}(3)$ listed earlier approach, but do not equal, the stationary state, because it matters from which state the system is initiated. During the initial transient period, the system ‘forgets’ from which state it was initiated, and it converges to the stationary state that is independent of the initial condition. In other words,
$$\lim_{t \to \infty} \mathbf{p}(0)\, \mathbf{P}^t = \mathbf{p}^* = (0.75, 0.25) \quad \text{for any initial condition } \mathbf{p}(0),$$
showing that whichever was the initial condition, the system will after a sufficiently long time be in state 1 or in state 2 with probabilities 0.75 and 0.25, respectively.
A.4.2 Markov processes
The continuous-time analogue of a Markov chain is a Markov process. While in a Markov chain the system stays for a fixed time period in each state before moving to another state (or possibly staying in the same one), in a Markov process the durations of the time steps are exponentially distributed random variables. A Markov process is determined by specifying its transition rates.
Before continuing with the Markov process, let us discuss the relationship between rates and probabilities. Unlike probabilities, rates are not constrained to be between 0 and 1, but can take any positive value. The probability by which an event with rate $\rho$ takes place during a short time interval $\Delta t$ is given by $\rho\, \Delta t$. This equation holds only for very small time intervals, so that technically it holds only in the limit $\Delta t \to 0$. To see this, consider e.g. the rate $\rho = 2$, and let the duration of the time interval be $\Delta t = 1$. The formula would predict that the probability by which the transition takes place is 2, which clearly cannot be valid. For a time interval of any duration, the probability of the event happening is $1 - e^{-\rho \Delta t}$, which gives the probability $1 - e^{-2} \approx 0.86$ for $\rho = 2$ and $\Delta t = 1$. For short time intervals, the linear relationship becomes an increasingly good approximation. For example, if $\rho = 2$ and $\Delta t = 0.01$, the approximation $\rho\, \Delta t = 0.02$ is very close to the exact value of $1 - e^{-0.02} \approx 0.0198$.
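The contrast between the linear approximation and the exact probability can be computed directly; this sketch uses the rate $\rho = 2$ of the example above.

```python
import math

rho = 2.0  # event rate per unit time

def exact_prob(dt):
    """Probability that the event occurs at least once within time dt."""
    return 1.0 - math.exp(-rho * dt)

# The linear approximation rho * dt fails badly for long intervals...
print(rho * 1.0, exact_prob(1.0))    # 2.0 versus about 0.86
# ...but is accurate for short ones.
print(rho * 0.01, exact_prob(0.01))  # 0.02 versus about 0.0198
```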
Let us then illustrate a Markov process with an example system with three states, denoting by $q_{ij}$ the transition rate from state $i$ to state $j$, so that the process is specified by the six rates $q_{12}$, $q_{13}$, $q_{21}$, $q_{23}$, $q_{31}$, and $q_{32}$. Markov processes can be simulated with the help of the Gillespie (1977) algorithm, the description of which also illustrates mathematical properties of Markov processes. Let us assume that the process is currently in state 1. As the process leaves this state with rates $q_{12}$ (to state 2) and $q_{13}$ (to state 3), the total rate at which the process leaves state 1 is $q_{12} + q_{13}$, which in our example equals 3. The mathematical theory of Markov processes tells us that the time that the system stays in state 1 is exponentially distributed with parameter equalling the total rate 3. The expected value of the exponential distribution is the inverse of the parameter (in this case $1/3$), so that a large rate means a short waiting time. When simulating the process, we may thus randomize the duration that the system spends in the present state using an exponentially distributed random number. The next decision to be made in the simulation is to which state the system moves when it leaves the present state 1. This is determined by the relative rates: dividing the rates $q_{12}$ and $q_{13}$ by their sum, we see that the system moves to state 2 with probability $q_{12}/(q_{12} + q_{13})$ and to state 3 with probability $q_{13}/(q_{12} + q_{13})$. Figure A.6B illustrates a simulation of the Markov process just described.
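The Gillespie algorithm just described can be sketched in a few lines. The rates below are hypothetical placeholders, chosen only so that the total rate of leaving state 1 equals 3 as stated in the text; states are indexed 0–2 in the code, corresponding to the text's states 1–3.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical transition rates q[i, j] from state i to state j;
# q[0, 1] + q[0, 2] = 3 matches the total rate quoted in the text.
q = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 2.0],
              [2.0, 1.0, 0.0]])

def gillespie(state, t_end):
    """Simulate one trajectory of the Markov process up to time t_end."""
    t, path = 0.0, [(0.0, state)]
    while True:
        total = q[state].sum()                     # total rate of leaving the state
        t += rng.exponential(1.0 / total)          # exponential waiting time
        if t > t_end:
            return path
        state = rng.choice(3, p=q[state] / total)  # next state by relative rates
        path.append((t, state))

path = gillespie(0, 100.0)
print(len(path), path[:3])  # number of events and the first few (time, state) pairs
```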
(p.232) Markov processes, too, can be studied analytically with the help of matrix algebra. The analogue of the transition probability matrix $\mathbf{P}$ is the transition rate matrix $\mathbf{Q}$. In the example of the previous paragraph, we have
$$\mathbf{Q} = \begin{pmatrix} -(q_{12} + q_{13}) & q_{12} & q_{13} \\ q_{21} & -(q_{21} + q_{23}) & q_{23} \\ q_{31} & q_{32} & -(q_{31} + q_{32}) \end{pmatrix}.$$
Here the off-diagonal elements have been set to the transition rates, whereas the diagonal elements have been set so that the row sums equal 0. This is in contrast with the transition probability matrix $\mathbf{P}$, in which the row sums equal 1.
While in the case of the discrete-time Markov chain the state of the system evolves according to the difference equation $\mathbf{p}(t+1) = \mathbf{p}(t)\, \mathbf{P}$, in the case of the continuous-time Markov process it evolves according to the system of differential equations
$$\frac{d \mathbf{p}(t)}{dt} = \mathbf{p}(t)\, \mathbf{Q}.$$
Figure A.6D illustrates the solution to this system of differential equations, thus showing how the probability of being in each of the three states behaves if starting initially from state 1. After a long enough time, the system converges to the stationary state. At the stationary state the probability distribution does not change anymore, and thus its derivative is 0, meaning that the stationary probability distribution must satisfy the equation $\mathbf{p}^* \mathbf{Q} = \mathbf{0}$. Thus, $\mathbf{p}^*$ is the left eigenvector of the matrix $\mathbf{Q}$ corresponding to the eigenvalue 0. For our example, solving this equation yields the stationary distribution towards which the curves in Figure A.6D converge, reflecting the amount of time that the system spends in the three states at the stationary state.
The transition rate matrix can be translated into a transition probability matrix with the help of the equation $\mathbf{P}(\Delta t) = e^{\mathbf{Q} \Delta t}$, where $\Delta t$ is the time interval over which the transition probability matrix is to be computed. Here $e^{\mathbf{Q} \Delta t}$ is the matrix exponential of the matrix $\mathbf{Q} \Delta t$. It is defined, analogously to the Taylor expansion of the usual exponential function, as the power series
$$e^{\mathbf{A}} = \mathbf{I} + \mathbf{A} + \frac{\mathbf{A}^2}{2!} + \frac{\mathbf{A}^3}{3!} + \cdots,$$
where $\mathbf{I}$ stands for the identity matrix. We note that $\mathbf{P}(0) = e^{\mathbf{0}} = \mathbf{I}$, simply confirming that over a vanishingly short time the system will stay where it was originally.
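The power-series definition translates directly into code. The rate matrix below is a hypothetical example with rows summing to zero (the same placeholder rates as in the Gillespie sketch), and a truncated series is used here for illustration rather than a library routine.

```python
import numpy as np

# Hypothetical transition rate matrix with rows summing to zero.
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -3.0,  2.0],
              [ 2.0,  1.0, -3.0]])

def expm(A, terms=30):
    """Matrix exponential via the truncated series I + A + A^2/2! + A^3/3! + ..."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

P = expm(Q * 0.1)     # transition probabilities over the interval dt = 0.1
print(P.sum(axis=1))  # each row sums to 1, as a probability matrix must
print(expm(Q * 0.0))  # the identity: over dt = 0 the system stays put
```

The truncated series converges quickly here because the entries of $\mathbf{Q}\,\Delta t$ are small; for large $\Delta t$ one would compute $e^{\mathbf{Q}\Delta t}$ over a short step and raise the result to a power instead.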
Sometimes Markov processes or Markov chains have an absorbing state. For example, consider a transition rate matrix in which all rates out of state 2 are zero ($q_{21} = q_{23} = 0$), so that the corresponding row of $\mathbf{Q}$ vanishes:
$$\mathbf{Q} = \begin{pmatrix} -(q_{12} + q_{13}) & q_{12} & q_{13} \\ 0 & 0 & 0 \\ q_{31} & q_{32} & -(q_{31} + q_{32}) \end{pmatrix}.$$
With this transition rate matrix, the system will never leave state 2. Whether the system is initiated from state 1, 2, or 3, it will sooner or later enter state 2. For this reason, state 2 is called an absorbing state, and the stationary distribution is concentrated in this state, $\mathbf{p}^* = (0, 1, 0)$. In population models without the possibility of immigration from outside, population extinction represents an absorbing state. In such a case, it is often of interest to study the quasi-stationary distribution, which describes the probability distribution after a long enough time that the initial condition has become irrelevant, but before the system has reached the absorbing state. Further discussion about quasi-stationary distributions can be found e.g. in Darroch and Seneta (1967), and an application in the ecological context e.g. in Ovaskainen (2001).
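The concentration of probability on an absorbing state can be illustrated numerically. The rate matrix below is hypothetical, with the middle state absorbing (mirroring state 2 of the text, using 0-based indices); the long-horizon transition matrix is obtained by exponentiating over a short step and then taking a matrix power.

```python
import numpy as np

# Hypothetical rate matrix in which the middle state (index 1, i.e. the
# text's state 2) is absorbing: its row in Q is identically zero.
Q = np.array([[-3.0, 2.0,  1.0],
              [ 0.0, 0.0,  0.0],
              [ 2.0, 1.0, -3.0]])

def expm(A, terms=40):
    """Matrix exponential via the truncated series I + A + A^2/2! + ..."""
    result, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

# Transition probabilities over a short step, then the horizon t = 20
# via repeated multiplication of the short-step matrix.
P_step = expm(Q * 0.1)
P_long = np.linalg.matrix_power(P_step, 200)
print(np.round(P_long, 6))  # from every initial state, the mass ends up in the middle column
```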