# (p.611) Appendix D: Theorems on MCT Equations

# D.1 Convergence of the approximant sequences

A sequence of arrays of approximants φ^{(r)}(*t*), *r* = 0, 1, …, is constructed in connection with Eqs. (4.25)–(4.27). In the present section, the uniform convergence of this sequence will be demonstrated. The proofs are due to Haussmann (1990) for the first version of the equations of motion and due to Götze and Sjögren (1995) for the second one. For the discussions, which go beyond the ones presented in Sec. 4.2.2, the label *q* for the components of the arrays merely occurs as a fixed parameter. There is no additional reasoning necessary for the extension of the proofs from the case dealing with a single component, *M* = 1, to the general case with a larger value of *M*. Therefore, only the one-component case shall be considered.
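The convergence proven below can be observed numerically. The following sketch is a minimal illustration only: it assumes a one-component schematic relaxation equation of the second-version type, $\tau \dot{\varphi}(t) + \varphi(t) + \int_0^t m(t-t')\dot{\varphi}(t')\,dt' = 0$ with a kernel $m(t) = v_1\varphi(t) + v_2\varphi(t)^2$; the parameter values and the Euler discretization are illustrative stand-ins, not taken from the text. It iterates the kernel as in Eqs. (4.25)–(4.27), starting from $\varphi^{(0)}(t) = e^{-\Gamma t}$, and records the uniform distances between successive approximants:

```python
import numpy as np

# illustrative parameters (assumed; not from the text)
tau, v1, v2 = 1.0, 0.2, 0.2     # relaxation time and mode-coupling coefficients
Gamma = 1.0                     # decay rate of the starting approximant phi^(0)
dt, nmax = 0.01, 100            # grid on [0, t_max] with t_max = 1

def solve(m):
    """Euler scheme for tau*phi'(t) = -phi(t) - int_0^t m(t-t') phi'(t') dt'
    at fixed kernel m, with phi(0) = 1."""
    phi = np.empty(nmax + 1)
    dphi = np.zeros(nmax)
    phi[0] = 1.0
    for n in range(nmax):
        # discrete convolution sum_j m[n-1-j] * (phi[j+1]-phi[j])
        conv = np.dot(m[n - 1::-1], dphi[:n]) if n else 0.0
        dphi[n] = -(dt / tau) * (phi[n] + conv)
        phi[n + 1] = phi[n] + dphi[n]
    return phi

phi = np.exp(-Gamma * dt * np.arange(nmax + 1))     # phi^(0)(t) = exp(-Gamma*t)
sups = []                                           # sup_t |phi^(r+1) - phi^(r)|
for r in range(30):
    m = v1 * phi + v2 * phi**2                      # kernel m^(r) = F[phi^(r)]
    phi_new = solve(m)
    sups.append(np.max(np.abs(phi_new - phi)))
    phi = phi_new

assert sups[-1] < 1e-8 and sups[-1] < sups[0]       # uniform convergence
```

The recorded distances play the role of the functions $X_r(t)$ below; their rapid decay mirrors the factorial majorant $B^r/r!$ constructed in this section.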

It is the goal to prove the uniform convergence of the sum $S={\displaystyle {\sum}_{r=0}^{\infty}{X}_{r}\left(t\right)}$, which is formed with the positive functions ${X}_{r}\left(t\right)=\left|{\varphi}^{\left(r+1\right)}\left(t\right)-{\varphi}^{\left(r\right)}\left(t\right)\right|$.

This result ensures the absolute and uniform convergence of the sum ${\sum}_{r=0}^{\infty}\left[{\varphi}^{\left(r+1\right)}\left(t\right)-{\varphi}^{\left(r\right)}\left(t\right)\right]$ and, consequently, the existence of the desired uniform limit formulated in Eq. (4.25a). The goal shall be reached by constructing a sequence of positive numbers *b* _{r}, which define a convergent sum $B={\displaystyle {\sum}_{r=0}^{\infty}{b}_{r}}$. The numbers *b* _{r} depend neither on the time *t*, nor on the coefficients *V* _{α}, α = 1, … , *N*, nor on the frequencies Ω and ν, nor on the time τ, which specify the equations of motion. This holds provided the coefficients are restricted to the closed finite intervals specified at the beginning of Sec. 4.2.2 and provided 0 ≤ *t* ≤ *t* _{max}. Here, *t* _{max} is some arbitrary positive finite upper limit for the time interval to be considered. The numbers *b* _{r} shall serve as majorants: ${X}_{r}\left(t\right)\le {b}_{r}$, 0 ≤ *t* ≤ *t* _{max} (D.2).

The approximant φ^{(0)}(*t*) reads exp[−Γ*t*], Γ > 0. For the other approximants, the recursion relations (4.27a,b) shall be written in the form:

One infers from Eq. (4.25c) that the three variables are located in finite closed intervals:

*K* is a polynomial and, thus, it obeys a Lipschitz condition. Since the variables as well as the coefficients of the monomials of $\mathcal{F}$[*P*, ξ] are restricted to finite closed intervals, a Lipschitz constant *L* _{max} can be chosen so that there holds

From Eq. (D.3b), one gets for the second version:

Let us define recursively two versions of sequences of functions *a* _{r}(*t*), *r* = 0, 1, … One writes *a* _{0}(*t*) = 1. For the first version, there holds for *r* ≥ 1:

${a}_{r}\left(t\right)=\nu {\displaystyle {\int}_{0}^{t}{a}_{r-1}\left({t}^{\prime}\right)d{t}^{\prime}}+3\mu {\displaystyle {\int}_{0}^{t}\left\{{\displaystyle {\int}_{0}^{{t}^{\prime}}{a}_{r-1}\left({t}^{\u2033}\right)d{t}^{\u2033}}\right\}d{t}^{\prime}},\qquad \mu ={L}_{\mathrm{max}}{\Omega}^{2}. \qquad \text{(D.6a)}$

For the second one, there holds for *r* ≥ 1:

${a}_{r}\left(t\right)=3\kappa {\displaystyle {\int}_{0}^{t}{a}_{r-1}\left({t}^{\prime}\right)d{t}^{\prime}},\qquad \kappa ={L}_{\mathrm{max}}/\tau . \qquad \text{(D.6b)}$

One shows by induction that the functions *a* _{r}(*t*) are nonnegative and non-decreasing functions of *t*. The important inequality (4.25c) implies

Let us prove by induction that the preceding inequality can be extended to: *X* _{r}(*t*) ≤ *a* _{1}(*t*), *a* _{2}(*t*), … , *a* _{r}(*t*), *r* ≥ 1. As a first step, one substitutes the inequality *X* _{1}(*t*) ≤ *a* _{0} into Eq. (D.5a) in order to get: ${X}_{1}\left(t\right)\le \nu {\displaystyle {\int}_{0}^{t}{a}_{0}\left({t}^{\prime}\right)d{t}^{\prime}}+\mu {\displaystyle {\int}_{0}^{t}\left\{{\displaystyle {\int}_{0}^{{t}^{\prime}}\left[2{X}_{0}\left({t}^{\u2033}\right)+{a}_{0}\left({t}^{\u2033}\right)\right]d{t}^{\u2033}}\right\}d{t}^{\prime}}=\nu {\displaystyle {\int}_{0}^{t}{a}_{0}\left({t}^{\prime}\right)d{t}^{\prime}}+3\mu {\displaystyle {\int}_{0}^{t}\left\{{\displaystyle {\int}_{0}^{{t}^{\prime}}{a}_{0}\left({t}^{\u2033}\right)d{t}^{\u2033}}\right\}d{t}^{\prime}}={a}_{1}\left(t\right)$.

Using Eq. (D.5b), one obtains ${X}_{1}\left(t\right)\le 3\kappa {\displaystyle {\int}_{0}^{t}{a}_{0}\left({t}^{\prime}\right)}d{t}^{\prime}={a}_{1}\left(t\right)$. Repetition of this step yields *X* _{2}(*t*) ≤ *a* _{1}(*t*). This can be substituted into Eq. (D.5a) in order to get: ${X}_{2}\left(t\right)\text{}\le \text{}\nu \text{}{\displaystyle {\int}_{0}^{t}{a}_{1}}\left({t}^{\prime}\right)d{t}^{\prime}+\mu {\displaystyle {\int}_{0}^{t}\left\{{\displaystyle {\int}_{0}^{{t}^{\prime}}\left[2{X}_{1}\left({t}^{\u2033}\right)+{a}_{1}\left({t}^{\u2033}\right)\right]d{t}^{\u2033}}\right\}d{t}^{\prime}}$. Using the preceding result, *X* _{1}(*t*″) ≤ *a* _{1}(*t*″), and formula (D.6a), one gets *X* _{2}(*t*) ≤ *a* _{2}(*t*). The second version of inequalities (D.5b,D.6b) yields the same conclusion. This reasoning can be continued. In step number *r*, one uses Eq. (D.7) to derive *X* _{r}(*t*) ≤ *a* _{1}(*t*). From this, there follows *X* _{r}(*t*) ≤ *a* _{2}(*t*). This leads to *X* _{r}(*t*) ≤ *a* _{3}(*t*), etc. The resulting inequality

${X}_{r}\left(t\right)\le {a}_{r}\left(t\right)$ establishes the sequence of functions *a* _{r}(*t*), *r* = 0, 1, …, as majorant of the sequence of functions *X* _{r}(*t*).

Since 0 ≤ *a* _{r}(*t*″) ≤ *a* _{r}(*t*′) for 0 ≤ *t*″ ≤ *t*′ ≤ *t* _{max}, one gets the estimate: ${\int}_{0}^{{t}^{\prime}}{a}_{r-1}\left({t}^{\u2033}\right)d{t}^{\u2033}\le {t}_{\mathrm{max}}{a}_{r-1}\left({t}^{\prime}\right)$. Defining a further sequence of functions *b* _{r}(*t*), *r* = 0, 1, …, recursively by *b* _{0}(*t*) = 1 and ${b}_{r}\left(t\right)=\left(\nu +3\mu {t}_{\mathrm{max}}\right){\displaystyle {\int}_{0}^{t}{b}_{r-1}\left({t}^{\prime}\right)d{t}^{\prime}}$, one derives from Eq. (D.6a): *a* _{r}(*t*) ≤ *b* _{r}(*t*). One concludes for both versions of equations of motion

*A*= (ν + 3μ

*t*

_{max}) and

*A*= 3

_{k}, respectively. The desired result (D.2) is established. One can choose

*b*

_{r}=

*B*

^{r}/

*r*! with $B=\left({v}_{\mathrm{max}}+3{L}_{\mathrm{max}}{\Omega}^{2}{t}_{\mathrm{max}}\right){t}_{\mathrm{max}}$ and

*B*= 3

*L*

_{max}γ

_{max}

*t*

_{max}, respectively.
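The closed form of the majorants can be checked symbolically. With $b_0(t) = 1$ and the recursion $b_r(t) = A\int_0^t b_{r-1}(t')\,dt'$, induction gives $b_r(t) = (At)^r/r!$, so the majorant sum is dominated by the exponential series $e^{At_{\max}} = e^{B}$. A quick sympy verification, where the symbol $A$ stands for either $\nu + 3\mu t_{\max}$ or $3\kappa$:

```python
import sympy as sp

t = sp.Symbol("t", nonnegative=True)
tp = sp.Symbol("t'", nonnegative=True)
A = sp.Symbol("A", positive=True)   # A = nu + 3*mu*t_max (first version) or 3*kappa (second)

b = sp.Integer(1)                   # b_0(t) = 1
for r in range(1, 7):
    # b_r(t) = A * int_0^t b_{r-1}(t') dt'
    b = A * sp.integrate(b.subs(t, tp), (tp, 0, t))
    # induction claim: b_r(t) = (A*t)**r / r!
    assert sp.simplify(b - (A * t) ** r / sp.factorial(r)) == 0
```

Since $\sum_r (At_{\max})^r/r!$ is the exponential series, the factorial in the denominator beats the exponential growth of $B^r$, which is the mechanism behind the uniform convergence.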

# D.2 Completely monotonic approximants

In this section, the sequence of approximants φ^{(r)}(*t*) shall be discussed, which is constructed in Sec. 4.2.2 for the second version of equations of motion; *r* = 0, 1, … It will be demonstrated that all functions ${\varphi}_{q}^{\left(r\right)}\left(t\right),q=1,\dots ,M$, are finite sums of elementary relaxation correlators.

Let us first note some formulas which simplify the following discussions. All functions *F*(*t*) to be considered shall be continuous and exhibit the standard symmetries: *F*(*t*) = *F*(*t*)* = *F*(−*t*). Therefore, all times can be restricted to *t* ≥ 0. Function *F*(*t*) is called completely monotonic if there is a bounded monotonically increasing weight function σ(γ) so that

This is equivalent to the formula for the Laplace transform

The expression also describes the analytic continuation to the whole plane of complex frequencies *z*, except for the values *z* = −*i*γ, γ ≥ 0. From Eq. (2.46b), one gets for the spectrum

It is an analytic function of the frequency for all ω ≠ 0. The Laplace transform in the conventional notation is defined by Eq. (A.1) for the half plane of values *s* which obey Re *s* > 0: $\widehat{F}\left(s\right)={\displaystyle {\int}_{0}^{\infty}\mathrm{exp}}\left[-st\right]F\left(t\right)\,dt$. From Eq. (D.10a), one gets

This expression defines the analytic continuation to the whole complex plane except, possibly, for the values *s* = −γ, γ ≥ 0. In agreement with the general formula (A.3b), there holds the relation

It is demonstrated in Sec. 4.2.2 that the recursion relation (4.26b) is equivalent to the double-fraction representation: ${\varphi}_{q}^{\left(r+1\right)}\left(z\right)=-1/\left\{z-1/\left[i{\tau}_{q}+{m}_{q}^{\left(r\right)}\left(z\right)\right]\right\}$, Im *z* > 0. The index *q* shall be dropped in the following in order to simplify the formulas. According to Eq. (A.3b), the recursion relation can be written in the conventional notation for Re *s* > 0 as

Let us assume that the kernel *m* ^{(r)}(*t*) is a sum of *n* elementary relaxation functions, *n* = 1, 2, … This means that there are *n* positive amplitudes μ_{k} and positive rates γ_{k}, *k* = 1, …, *n*, so that

The rates shall be ordered:

It is the goal to derive the proposition: there are *n* + 1 positive amplitudes *f* _{k} and positive rates γ′_{k}, *k* = 0, 1,…, *n* so that

The sequence of rates for φ^{(r+1)}(*t*) is separated by the one for *m* ^{(r)}(*t*):

Suppose that the proposition is correct. Then, one can show by induction the statement made in the first paragraph of this section. For *r* = 0, the statement is correct since the iteration in Sec. 4.2.2 is started with ${\varphi}_{q}^{\left(0\right)}\left(t\right)=\mathrm{exp}\left[-{\Gamma}_{q}t\right],{\Gamma}_{q}>0,q=1,\dots ,M$. If the statement holds for ${\varphi}_{q}^{\left(r\right)}\left(t\right)$, one can write this function as a finite sum ${\sum}_{j}{f}_{q,j}^{\left(r\right)}\mathrm{exp}\left[-{\gamma}_{q,j}^{{\left(r\right)}^{\prime}}t\right]$, with all amplitudes and rates being positive. Equation (4.25b) implies ${m}_{q}^{\left(r\right)}\left(t\right)={\displaystyle {\sum}_{k=1}^{n}{\mu}_{q,k}^{\left(r\right)}}\mathrm{exp}\left[-{\gamma}_{q,k}^{\left(r\right)}t\right]$. Formula (4.15a) shows that the rates ${\gamma}_{q,k}^{\left(r\right)}$ are positive, since they are sums of positive terms of the kind ${\gamma}_{{k}_{1},{j}_{1}}^{{\left(r\right)}^{\prime}}+{\gamma}_{{k}_{2},{j}_{2}}^{{\left(r\right)}^{\prime}}+\cdots$ The amplitudes are sums of products of the kind ${V}_{q,{k}_{1}\cdots {k}_{n}}^{\left(n\right)}{f}_{{k}_{1},{j}_{1}}^{\left(r\right)}\cdots {f}_{{k}_{n},{j}_{n}}^{\left(r\right)}$ and, because of Eq. (4.15b), they are non-negative. The proposition yields the desired formula (4.28) for ${\varphi}_{q}^{\left(r+1\right)}\left(t\right)$.

The proof of the proposition starts by rewriting Eq. (D.13a) in the equivalent form

One can present this function as ratio of a denominator polynomial of degree *n*, *D*(*s*) = (*s* + γ_{1})(*s* + γ_{2}) … (*s* + γ_{n}) = *s* ^{n} + *O*(*s* ^{n−1}), and a numerator polynomial of degree *n* − 1, $N\left(s\right)=\left({\displaystyle {\sum}_{k=1}^{n}{\mu}_{k}}\right){s}^{n-1}+O\left({s}^{n-2}\right)$: ${\widehat{m}}^{\left(r\right)}\left(s\right)=N\left(s\right)/D\left(s\right)$. Because of Eq. (D.12), the approximant can be presented as ratio of polynomials of degree *n* and (*n* + 1): ${\widehat{\varphi}}^{\left(r+1\right)}\left(s\right)=\left[\tau D\left(s\right)+N\left(s\right)\right]/\left[\tau sD\left(s\right)+D\left(s\right)+sN\left(s\right)\right]$. Function ${\widehat{\varphi}}^{\left(r+1\right)}\left(s\right)$ is meromorphic, and it can have at most (*n* + 1) poles. Since *D*(*s* = 0) = γ_{1} … γ_{n} ≠ 0, the value *s* = 0 cannot be a pole. One concludes that the poles are the zeros of the function ϕ(*s*) = τ + (1/*s*) + ${\widehat{m}}^{\left(r\right)}\left(s\right)$. Restricted to the real variable *x*, ϕ(*x*) is a real function. It decreases strictly: ∂ϕ(*x*)/∂*x* < 0. It has (*n* + 1) simple poles at the positions −γ_{n} < −γ_{n−1} < … < −γ_{1} < −γ_{0} = 0. If *x* increases from −γ_{ℓ+1} to −γ_{ℓ}, ϕ(*x*) decreases from ∞ to −∞, ℓ = 0, …, *n* − 1. Hence, there are *n* zeros −γ′_{ℓ}, obeying the conditions (D.13d). If *x* decreases from −γ_{n} to −∞, ϕ(*x*) increases from −∞ to τ. Consequently, there is a zero −γ′_{n}, obeying −γ′_{n} < −γ_{n}. Thereby, (*n* + 1) simple poles of the approximant are identified, and one can write the partial-fraction representation:

Since $\partial {\widehat{\varphi}}^{\left(r+1\right)}\left(x\right)/\partial x<0$, there holds *f* _{k} > 0, *k* = 0, …, *n*. Since the representation is equivalent to Eq. (D.13c), the proof is completed.
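The interlacing argument can be checked numerically. In the sketch below, the kernel amplitudes, rates, and τ are illustrative stand-ins; the code builds the rational representation ${\widehat{\varphi}}^{(r+1)}(s)=[\tau D(s)+N(s)]/[\tau sD(s)+D(s)+sN(s)]$, locates its poles, and verifies the interlacing conditions (D.13d) together with the positivity of the residues, whose sum equals φ(*t* = 0) = 1:

```python
import numpy as np

# illustrative kernel m^(r)(t) = sum_k mu_k exp(-gamma_k t)  (assumed values)
mu = np.array([0.4, 0.3, 0.2])
gamma = np.array([0.5, 1.5, 3.0])       # ordered: 0 < gamma_1 < gamma_2 < gamma_3
tau = 0.7
n = len(mu)

s = np.poly1d([1.0, 0.0])               # the polynomial "s"
D = np.poly1d([1.0])                    # D(s) = prod_k (s + gamma_k)
for g in gamma:
    D *= np.poly1d([1.0, g])
# N(s) = sum_k mu_k * D(s)/(s + gamma_k): numerator of the kernel transform
N = sum(mu[k] * (D / np.poly1d([1.0, gamma[k]]))[0] for k in range(n))

num = tau * D + N                       # numerator of phi-hat^(r+1)(s)
den = tau * s * D + D + s * N           # denominator; the poles are its zeros
poles = np.sort(np.roots(den.coeffs).real)[::-1]   # descending: -g'_0 > ... > -g'_n

# interlacing (D.13d): -gamma_{l+1} < -gamma'_l < -gamma_l, and -gamma'_n < -gamma_n
g_ext = np.concatenate(([0.0], gamma))  # gamma_0 = 0 prepended
for l in range(n):
    assert -g_ext[l + 1] < poles[l] < -g_ext[l]
assert poles[n] < -gamma[-1]

# residues f_k = num(p)/den'(p): positive, and they sum to phi(t=0) = 1
f = np.array([num(p) / den.deriv()(p) for p in poles])
assert np.all(f > 0) and abs(f.sum() - 1.0) < 1e-8
```

The residue sum equals unity because numerator and denominator both have leading coefficient τ, so $s\,{\widehat{\varphi}}^{(r+1)}(s)\to 1$ for $s\to\infty$.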

The discussion of Eq. (D.12) can be modified to one for the recursion defined in Sec. 6.2.1 for the shape functions. Equation (6.114b) can be noted as

This expression is obtained from Eq. (D.12) by specializing to τ = 0. The kernel is quantified by *n* pairs of numbers (μ_{k}, γ_{k}), *k* = 1, … , *n*, as is explained in connection with Eqs. (D.13a,b). The following proposition shall be proven. There are *n* pairs of positive numbers (*f* _{k}, γ′_{k}), *k* = 0, …, *n* − 1, so that

The sequence of rates for the function φ^{(r+1)}(*t*) separates that for the function *m* ^{(r)}(*t*):

Contrary to what is discussed above for Eq. (D.12), the number of relaxators contributing to φ^{(r+1)}(*t*) is the same as that of the relaxators contributing to *m* ^{(r)}(*t*).

The induction proof presented in the paragraph preceding Eq. (D.13e) remains valid. Hence, one can proceed as above and write the kernel as ratio of a polynomial *N*(*s*) of degree (*n* − 1) and a polynomial *D*(*s*) of degree *n*: ${\widehat{m}}^{\left(r\right)}\left(s\right)=N\left(s\right)/D\left(s\right)$. Equation (D.14) yields ${\widehat{\varphi}}^{\left(r+1\right)}\left(s\right)=N\left(s\right)/\left[D\left(s\right)+sN\left(s\right)\right]$. This function is meromorphic. Different from what is deduced above from Eq. (D.12), ${\widehat{\varphi}}^{\left(r+1\right)}\left(s\right)$ cannot have more than *n* poles. The poles are the zeros of the function ϕ(*s*) = (1/*s*) + ${\widehat{m}}^{\left(r\right)}\left(s\right)$. One continues as above. For real values *x* = *s*, the function ϕ(*x*) is strictly decreasing. It has (*n* + 1) simple poles at −γ_{n} < −γ_{n−1} < … < −γ_{1} < −γ_{0} = 0. Consequently, there are *n* poles at −γ′_{0}, …, −γ′_{n−1}, which obey the condition (D.15b). As above, one shows that the residues *f* _{k} are positive. Hence, there holds the partial fraction representation

Since this formula is equivalent to Eq. (D.15a), the proof is completed.

# D.3 The maximum-eigenvalue inequality

In this section, the inequality *E*(*P*) ≤ 1 for the maximum eigenvalue *E*(*P*) = *E*[*P*, *f*(*P*)] for the maximum fixed point *f*(*P*) shall be derived. The proof (Götze and Sjögren 1995) is done indirectly. It will be assumed that there is some positive δ so that *E*(*P*) = 1 + δ. The assumption shall be used to construct a fixed point *g*(*P*) with a component of some label *q* _{0}, 1 ≤ *q* _{0} ≤ *M*, obeying *g* _{q0}(*P*) > *f* _{q0}(*P*). This result contradicts the maximum property (4.52e). Consequently, the assumption is illegitimate; and this conclusion implies the desired result (4.60c).

In order to formulate the assumption more explicitly, let *r* denote an eigenvector of the stability matrix (4.60a) for state *P* with eigenvalue *E*(*P*), i.e.,

There is a non-zero component of *r*, say, *r* _{q0} ≠ 0. One of the Frobenius–Perron theorems states that one can choose the eigenvector so that none of its components is negative (Gantmacher 1974). Hence, one can require

The transformation (4.22a) shall be applied for the maximum fixed point *f** = *f*(*P*):

The formula (4.22c) for the transformed coefficients ${\widehat{V}}_{q,p}^{\left(1\right)}$ agrees with the corresponding one for the elements of the stability matrix in Eq. (4.60a). Equation (4.22b) can be written as

The last term combines the contributions due to the coefficients ${\widehat{V}}_{q,{k}_{1}\dots {k}_{n}}^{\left(n\right)}$ with *n* ≥ 2. Since these coefficients are non-negative, there holds

Let us choose some positive ε, which is smaller than the inverse of the largest component of *r*. There holds 0 ≤ ξ*r* _{q} < 1 for all *q* and all ξ obeying 0 < ξ ≤ ε. The polynomials ${\widehat{\mathcal{F}}}_{q}\left[P,x\right]$ vanish for *x* = 0. Hence, there is some positive Lipschitz constant *C* so that $0\le {\widehat{\mathcal{F}}}_{q}\left[P,\xi r\right]\le C\xi$, *q* = 1, …, *M*. Introducing the positive number ξ_{0} = min(ε, δ/(2*C*)), an array shall be defined by

There holds

Array ${\widehat{f}}^{\left(0\right)}$ is used as starting element of a sequence of arrays ${\widehat{f}}^{\left(n\right)}$, *n* = 0, 1, …, which shall be defined recursively by

The covariance theorem implies that the mapping $\widehat{\mathcal{T}}$ has the same general properties as $\mathcal{T}$. Therefore, Eq. (4.20b) ensures the restrictions

The choice of ξ_{0} yields the estimate: $1+{\widehat{\mathcal{F}}}_{q}\left[P,{\widehat{f}}^{\left(0\right)}\right]\le 1+C{\xi}_{0}\le \left(1+\delta /2\right)$. From Eqs. (D.17b,c), one gets ${\widehat{\mathcal{F}}}_{q}\left[P,{\widehat{f}}^{\left(0\right)}\right]\ge {\displaystyle {\sum}_{p}{A}_{q,p}{\widehat{f}}_{p}^{\left(0\right)}}=\left(1+\delta \right){\widehat{f}}_{q}^{\left(0\right)}$. Combining both inequalities, one derives from Eq. (4.23a): ${\widehat{\mathcal{T}}}_{q}\left[P,{\widehat{f}}^{\left(0\right)}\right]\ge \left[\left(1+\delta \right)/\left(1+\delta /2\right)\right]{\widehat{f}}_{q}^{\left(0\right)}\ge {\widehat{f}}_{q}^{\left(0\right)}$. Consequently, there holds

Using Eq. (4.20c) for the mapping $\widehat{\mathcal{T}}$ , one shows by induction that the preceding result can be extended to:

The increasing sequence of numbers ${\widehat{f}}_{q}^{\left(n\right)}$, *n* = 0, 1, …, which is bounded from above, converges towards some non-negative number, say ${\widehat{g}}_{q}$. These numbers form the components of $\widehat{g}$:

Since $\widehat{\mathcal{T}}\left[P,x\right]$ is a continuous function of *x*, Eq. (D.18c) implies that $\widehat{g}$ is a fixed point:

Equation (4.20b) yields $0\le {\widehat{g}}_{q}\le 1$, *q* = 1, …, *M*. By construction, there holds ${\widehat{f}}_{q}^{\left(0\right)}\le {\widehat{g}}_{q}$ for all *q*; and one gets from Eq. (D.18b):

The covariance theorem (4.23b) implies that *g* is a fixed point: $\mathcal{T}\left[P,g\right]=g$. From Eq. (D.17a), one derives the desired inequality: ${g}_{{q}_{0}}={f}_{{q}_{0}}\left(P\right)+\left(1-{f}_{{q}_{0}}\left(P\right)\right){\widehat{g}}_{{q}_{0}}>{f}_{{q}_{0}}\left(P\right)$.
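The maximum fixed point referenced throughout this proof can be made concrete in a one-component illustration. With the assumed schematic polynomial $\mathcal{F}[P,f]=vf^{2}$ (an illustrative stand-in, not a model from the text), the relation $f/(1-f)=\mathcal{F}[P,f]$ gives the mapping $\mathcal{T}[P,f]=\mathcal{F}/(1+\mathcal{F})$; iterating from $f=1$ produces a monotonically decreasing sequence that converges to the maximum fixed point $f(P)$:

```python
import math

# one-component illustration (assumed schematic polynomial F[f] = v*f**2):
# T[f] = F[f]/(1 + F[f]); iterating from f = 1 yields a monotonically
# decreasing sequence converging to the maximum fixed point f(P)
v = 5.0
f = 1.0
seq = [f]
for _ in range(200):
    F = v * f * f
    f = F / (1.0 + F)
    seq.append(f)

# exact maximum solution of f/(1-f) = v*f**2: f* = (1 + sqrt(1 - 4/v))/2
f_star = 0.5 * (1.0 + math.sqrt(1.0 - 4.0 / v))
assert all(x >= y - 1e-15 for x, y in zip(seq, seq[1:]))   # monotone decreasing
assert abs(f - f_star) < 1e-10
```

The monotone, bounded iteration converging to a fixed point is the same mechanism used above for the increasing sequence ${\widehat{f}}^{(n)}$, only approached from the opposite side.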

# D.4 Further properties of stability matrices

The M-component eigenvectors *a* and *a** and the *M*-by-*M* matrix *R* are defined in Eqs. (4.74), (4.75). These quantities are determined by the stability matrix *A* ^{c} at the bifurcation point *P* ^{c}. The quantities enter all formulas for the asymptotic expansions of solutions of MCT equations for states *P* near *P* ^{c}. For the most relevant case of a primitive irreducible critical stability matrix, the quantities
${a}_{q}^{\ast},{a}_{p},{R}_{q,p}$, for *q*, *p* = 1, …, *M*, can be expressed as limits of powers of the *M* ^{2} elements ${A}_{q,p}^{c}$ of the critical stability matrix *A* ^{c}. The relevant formulas shall be derived in this section. Furthermore, the asymptotic behaviour of the maximum eigenvalue *E*(*P*) will be determined for *P* approaching *P* ^{c}.

According to Eq. (4.76b), the maximum η of the moduli of the first (*M* − 1) eigenvalues of *A* ^{c} is smaller than unity.

The Jordan form of *A* ^{c} consists of a 1-by-1 block for the maximum eigenvalue *e* _{M} = 1. The other blocks correspond to the other eigenvalues. The *n*th power of *A* ^{c} leaves the block for *e* _{M} fixed, and reduces the diagonal elements of the other blocks to ${e}_{k}^{n}$. Consequently, ${\left({A}^{c\,n}\right)}_{q,k}={a}_{q}{a}_{k}^{\ast}+O\left({n}^{\ell}{\eta}^{n}\right)$, 0 ≤ ℓ ≤ *M* − 1. With increasing exponent *n*, the matrix elements of *A* ^{c n} converge exponentially towards the product of the distinguished eigenvectors:

Imposing the conventions (4.74c,d), the *M* ^{2} products on the right-hand side determine the eigenvectors *a* and *a** uniquely.

The eigenvectors shall be used to define an *M*-by-*M* matrix *A*′ by its elements

The Jordan form of *A*′ agrees with that of *A* ^{c}, except that the distinguished 1-by-1 block is replaced by zero. Hence, the spectral radius of *A*′ does not exceed η: ${\left({{A}^{\prime}}^{n}\right)}_{q,k}=O\left({n}^{\ell}{\eta}^{n}\right)$. The Neumann series for *R*′ = (1 − *A*′)^{−1} converges exponentially and determines the matrix elements

There holds

Finally, an *M*-by-*M* matrix *R* shall be defined by

Using Eqs. (4.74b,d), one gets ${\sum}_{q}{a}_{q}^{*}{{A}^{\prime}}_{q,k}=0$. From Eq. (D.23b) one concludes: ${\sum}_{q}{a}_{q}^{*}{{R}^{\prime}}_{q,k}={a}_{k}^{*}$. Similarly, one derives ${\sum}_{k}{{R}^{\prime}}_{q,k}{a}_{k}={a}_{q}$. As a result, one obtains:

Consequently, *R* is the distinguished resolvent.
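The limit formulas of this section can be verified numerically. The sketch below uses a random entrywise-positive matrix, rescaled so that its maximum eigenvalue is unity, as a stand-in for a primitive irreducible $A^{c}$; it checks that $(A^{c\,n})_{q,k}\to a_q a^{*}_{k}$ and, assuming the natural definition $A'_{q,k}=A^{c}_{q,k}-a_q a^{*}_{k}$ (which reproduces the stated Jordan-form property; the exact definition of $A'$ is not quoted here), that the Neumann series for $R'=(1-A')^{-1}$ converges and obeys the sum rules derived above:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 5
B = rng.uniform(0.1, 1.0, (M, M))               # primitive (entrywise positive) matrix
Ac = B / max(abs(np.linalg.eigvals(B)))         # rescale: maximum eigenvalue e_M = 1

w, V = np.linalg.eig(Ac)
a = np.abs(V[:, np.argmax(w.real)].real)        # right Perron eigenvector (positive)
wl, U = np.linalg.eig(Ac.T)
astar = np.abs(U[:, np.argmax(wl.real)].real)   # left Perron eigenvector (positive)
astar /= astar @ a                              # convention: sum_q a*_q a_q = 1

# (A^c)^n converges exponentially to the dyad a_q a*_k
P = np.linalg.matrix_power(Ac, 80)
assert np.allclose(P, np.outer(a, astar), atol=1e-6)

# assumed definition A'_{q,k} = A^c_{q,k} - a_q a*_k; its spectral radius is < 1
Ap = Ac - np.outer(a, astar)
Rp = sum(np.linalg.matrix_power(Ap, k) for k in range(200))   # Neumann series
assert np.allclose(Rp, np.linalg.inv(np.eye(M) - Ap), atol=1e-8)
assert np.allclose(astar @ Rp, astar, atol=1e-8)  # sum_q a*_q R'_{q,k} = a*_k
assert np.allclose(Rp @ a, a, atol=1e-8)          # sum_k R'_{q,k} a_k = a_q
```

The sum rules follow because $a^{*}A'=0$ and $A'a=0$ under the chosen normalization, so $a^{*}$ and $a$ are left and right fixed vectors of $R'$.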

Let *I* denote some *M*-component array. It shall be used to define another array *Y* with the components

If an array *F* is introduced by ${F}_{q}={\displaystyle {\sum}_{k}\left[{\delta}_{q,k}-{A}_{q,k}^{c}\right]{Y}_{k}}$, one gets ${F}_{q}={I}_{q}-{a}_{q}{\displaystyle {\sum}_{p}{a}_{p}^{*}{I}_{p}}$. One concludes that *Y* solves the *M* linear equations ${\sum}_{k}\left[{\delta}_{q,k}-{A}_{q,k}^{c}\right]{Y}_{k}={I}_{q}$ if and only if *F* = *I*, i.e.,
If the solubility condition (D.25b) is obeyed, *Y* is a special solution of the preceding *M* linear equations. This result constitutes the non-trivial part of the discussions of Eqs. (4.75a–e).

The results of the preceding paragraph are used in Sec. 4.3.4 in order to derive the $\sqrt{\epsilon}$-law for the change of the form factor of the glass for states near a generic liquid–glass-transition point. The states evolve along a path *P* ^{ε} as specified in Eqs. (4.84)–(4.86). The result (4.91a) can be denoted as

For sufficiently small ε, the maximum eigenvalue *E*(*P* ^{ε}) remains non-degenerate. Hence, it is a continuous function of ε. It can be characterized by a positive function δ*E* _{ε}, which vanishes for ε = 0:

There hold the equations: ${\sum}_{p}\left[E\left({P}^{\epsilon}\right){\delta}_{q,p}-{A}_{q,p}\left({P}^{\epsilon}\right)\right]{b}_{p}=0$, *q* = 1, …, *M*. The eigenvector *b* for the maximum eigenvalue can be chosen as continuous function of ε, which agrees with *a* for ε = 0. It shall be written as *b* = *a* + δ*b*; the continuous function δ*b* vanishes for ε tending to zero. Writing for the stability matrix ${A}_{q,p}\left({P}^{\epsilon}\right)={A}_{q,p}^{c}+\delta {A}_{q,p}$, the equation for the eigenvalue gets the form of Eq. (D.25c): ${\sum}_{p}\left[{\delta}_{q,p}-{A}_{q,p}^{c}\right]\delta {b}_{p}={I}_{q}$. Here, the inhomogeneity reads ${I}_{q}=\delta E\,{b}_{q}+{\displaystyle {\sum}_{p}\delta {A}_{q,p}{b}_{p}}$. The condition (D.25b) is equivalent to $\delta E=-{\displaystyle {\sum}_{q,p}{a}_{q}^{*}\delta {A}_{q,p}{b}_{p}}/{\displaystyle {\sum}_{q}{a}_{q}^{*}{b}_{q}}$. The leading-order result is obtained from δ*A* _{q,p}; the eigenvector *b* can be replaced by *a*. According to Eq. (4.60a), the leading contribution to δ*A* _{q,p} is of order $\sqrt{\epsilon}$; and this is due to the $\sqrt{\epsilon}$-contribution to *f* _{q}(*P* ^{ε}). Using the abbreviations from Eq. (4.71c), one gets $\delta {A}_{q,p}=\sqrt{C\epsilon /{\mu}_{2}^{c}}\left[-{a}_{q}{A}_{q,p}^{c}-{A}_{q,p}^{c}{a}_{p}+2{\displaystyle {\sum}_{k}{A}_{q,kp}^{\left(2\right)c}{a}_{k}}\right]+O\left(\epsilon \right)$. Equation (D.25a) is used in order to justify the assumption $\delta b=O\left(\sqrt{\epsilon}\right)$. Hence, $\delta E=-{\displaystyle {\sum}_{q,p}{a}_{q}^{*}\delta {A}_{q,p}{a}_{p}}+O\left(\epsilon \right)$. With the aid of the abbreviations (4.88a,b), one arrives at a square-root law:
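The square-root law can be illustrated with standard first-order eigenvalue perturbation theory: for $A(\epsilon)=A^{c}+\sqrt{\epsilon}\,\delta A$, the maximum eigenvalue behaves as $E(\epsilon)=1+\sqrt{\epsilon}\,(a^{*}\!\cdot\delta A\,a)/(a^{*}\!\cdot a)+O(\epsilon)$, in line with the formula for δ*E* above up to its sign convention. The matrices below are random illustrative stand-ins, not an actual MCT stability matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 4
B = rng.uniform(0.1, 1.0, (M, M))
Ac = B / max(abs(np.linalg.eigvals(B)))        # "critical" matrix: max eigenvalue 1
dA = -rng.uniform(0.1, 1.0, (M, M))            # perturbation direction (pushes E below 1)

w, V = np.linalg.eig(Ac)
a = np.abs(V[:, np.argmax(w.real)].real)       # right eigenvector for eigenvalue 1
wl, U = np.linalg.eig(Ac.T)
astar = np.abs(U[:, np.argmax(wl.real)].real)  # left eigenvector

slope = (astar @ dA @ a) / (astar @ a)         # first-order coefficient of sqrt(eps)

devs = []
for eps in (1e-2, 1e-4, 1e-6):
    E = max(np.linalg.eigvals(Ac + np.sqrt(eps) * dA).real)
    devs.append(abs((E - 1.0) / np.sqrt(eps) - slope))

# (E(eps)-1)/sqrt(eps) approaches the first-order coefficient as eps -> 0
assert devs[-1] < devs[0] and devs[-1] < 0.1
```

The deviation from the first-order prediction shrinks proportionally to $\sqrt{\epsilon}$, which is the numerical signature of the $O(\epsilon)$ remainder in the expansion above.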