## Wolfgang Götze

Print publication date: 2008

Print ISBN-13: 9780199235346

Published to Oxford Scholarship Online: May 2009

DOI: 10.1093/acprof:oso/9780199235346.001.0001


# Appendix D Theorems on MCT Equations

Source:
Complex Dynamics of Glass-Forming Liquids
Publisher:
Oxford University Press

# D.1 Convergence of the approximant sequences

A sequence of arrays of approximants $\phi^{(r)}(t)$, r = 0, 1, …, is constructed in connection with Eqs. (4.25)–(4.27). In the present section, the uniform convergence of this sequence will be demonstrated. The proofs are due to Haussmann (1990) for the first version of the equations of motion and to Götze and Sjögren (1995) for the second one. In the discussions below, which go beyond those presented in Sec. 4.2.2, the label q for the components of the arrays occurs merely as a fixed parameter. No additional reasoning is necessary to extend the proofs from the case of a single component, M = 1, to the general case with a larger value of M. Therefore, only the one-component case shall be considered.

It is the goal to prove the uniform convergence of the sum $S = \sum_{r=0}^{\infty} X_r(t)$, which is formed with the positive functions

(D.1)
$$X_r(t) = |\phi^{(r+1)}(t) - \phi^{(r)}(t)|, \qquad r = 0, 1, \ldots$$

This result ensures the absolute and uniform convergence of the sum $\sum_{r=0}^{\infty} [\phi^{(r+1)}(t) - \phi^{(r)}(t)]$ and, consequently, the existence of the desired uniform limit formulated in Eq. (4.25a). The goal shall be reached by constructing a sequence of positive numbers $b_r$, which define a convergent sum $B = \sum_{r=0}^{\infty} b_r$. The numbers $b_r$ depend neither on the time t nor on the coefficients $V_\alpha$, α = 1, …, N, nor on the frequencies Ω and ν or the time τ, which specify the equations of motion. This holds provided the coefficients are restricted to the closed finite intervals specified at the beginning of Sec. 4.2.2 and provided 0 ≤ t ≤ $t_{\max}$. Here, $t_{\max}$ is some arbitrary positive finite upper limit for the time interval to be considered. The numbers $b_r$ shall serve as majorants:

(D.2)
$$X_r(t) \le b_r, \qquad 0 \le t \le t_{\max}, \quad r = 0, 1, \ldots$$

The approximant φ(0)(t) reads exp[−Γt], Γ > 0. For the other approximants, the recursion relations (4.27a,b) shall be written in the form:

(D.3a)
$Display mathematics$
(D.3b)
$Display mathematics$
respectively. The kernel K abbreviates a function of the three variables ξ, η and ζ:
(D.4a)
$Display mathematics$

One infers from Eq. (4.25c) that the three variables are located in finite closed intervals:

(D.4b)
$Display mathematics$

K is a polynomial and, thus, it obeys a Lipschitz condition. Since the variables as well as the coefficients of the monomials of $\mathcal{F}[P, \xi]$ are restricted to finite closed intervals, a Lipschitz constant $L_{\max}$ can be chosen so that there holds

(D.4c)
$Display mathematics$
for all variables occurring and all coefficients admitted. Using this estimate, one obtains from Eq. (D.3a) for the first version of the equations of motion:
(D.5a)
$$X_r(t) \le \nu \int_0^t X_r(t')\,dt' + \mu \int_0^t \Big\{ \int_0^{t'} [\,2 X_{r-1}(t'') + X_r(t'')\,]\,dt'' \Big\}\,dt', \qquad r = 1, 2, \ldots$$

From Eq. (D.3b), one gets for the second version:

(D.5b)
$$X_r(t) \le \kappa \int_0^t [\,2 X_{r-1}(t') + X_r(t')\,]\,dt', \qquad r = 1, 2, \ldots$$

Let us define recursively two versions of sequences of functions $a_r(t)$, r = 0, 1, …. One writes $a_0(t) = 1$. For the first version, there holds for r ≥ 1:

(D.6a)
$$a_r(t) = \nu \int_0^t a_{r-1}(t')\,dt' + 3\mu \int_0^t \Big\{ \int_0^{t'} a_{r-1}(t'')\,dt'' \Big\}\,dt'$$
with $\mu = L_{\max}\Omega^2$. For the second one, there holds for r ≥ 1:
(D.6b)
$$a_r(t) = 3\kappa \int_0^t a_{r-1}(t')\,dt'$$
with $\kappa = L_{\max}/\tau$. One shows by induction that the functions $a_r(t)$ are non-negative and non-decreasing functions of t. The important inequality (4.25c) implies
(D.7)
$$X_r(t) \le a_0(t) = 1, \qquad r = 0, 1, \ldots$$

Let us prove by induction that the preceding inequality can be extended to: $X_r(t) \le a_1(t), a_2(t), \ldots, a_r(t)$, r ≥ 1. As a first step, one substitutes the inequality $X_1(t) \le a_0$ into Eq. (D.5a) in order to get: $X_1(t) \le \nu \int_0^t a_0(t')\,dt' + \mu \int_0^t \{ \int_0^{t'} [\,2X_0(t'') + a_0(t'')\,]\,dt'' \}\,dt' \le \nu \int_0^t a_0(t')\,dt' + 3\mu \int_0^t \{ \int_0^{t'} a_0(t'')\,dt'' \}\,dt' = a_1(t)$.

Using Eq. (D.5b), one obtains $X_1(t) \le 3\kappa \int_0^t a_0(t')\,dt' = a_1(t)$. Repetition of this step yields $X_2(t) \le a_1(t)$. This can be substituted into Eq. (D.5a) in order to get: $X_2(t) \le \nu \int_0^t a_1(t')\,dt' + \mu \int_0^t \{ \int_0^{t'} [\,2X_1(t'') + a_1(t'')\,]\,dt'' \}\,dt'$. Using the preceding result, $X_1(t'') \le a_1(t'')$, and formula (D.6a), one gets $X_2(t) \le a_2(t)$. The second version of inequalities (D.5b), (D.6b) yields the same conclusion. This reasoning can be continued. In step number r, one uses Eq. (D.7) to derive $X_r(t) \le a_1(t)$. From this, there follows $X_r(t) \le a_2(t)$. This leads to $X_r(t) \le a_3(t)$, etc. The resulting inequality

(D.8)
$$X_r(t) \le a_r(t), \qquad r = 0, 1, \ldots$$
establishes the sequence $a_r(t)$, r = 0, 1, …, as a majorant of the sequence of functions $X_r(t)$.

Since $0 \le a_r(t'') \le a_r(t')$ for $0 \le t'' \le t' \le t_{\max}$, one gets the estimate: $\int_0^{t'} a_{r-1}(t'')\,dt'' \le t_{\max}\, a_{r-1}(t')$. Defining a further sequence of functions $b_r(t)$, r = 0, 1, …, recursively by $b_0(t) = 1$ and $b_r(t) = (\nu + 3\mu t_{\max}) \int_0^t b_{r-1}(t')\,dt'$, one derives from Eq. (D.6a): $a_r(t) \le b_r(t)$. One concludes for both versions of the equations of motion

(D.9)
$$X_r(t) \le a_r(t) \le b_r(t) = (A t)^r / r!$$
Here $A = (\nu + 3\mu t_{\max})$ and $A = 3\kappa$, respectively. The desired result (D.2) is established. One can choose $b_r = B^r/r!$ with $B = (\nu_{\max} + 3 L_{\max}\Omega^2 t_{\max})\, t_{\max}$ and $B = 3 L_{\max}\gamma_{\max} t_{\max}$, respectively.
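The factorial-majorant mechanism behind Eqs. (D.5)–(D.9) can be illustrated numerically. The sketch below iterates a hypothetical one-component Picard-type recursion (a stand-in, not Eqs. (4.27a,b) themselves; the kernel $vx^2$ and all parameter values are assumptions for the illustration) and checks that the suprema $X_r$ of successive differences stay below the majorants $X_0 B^r/r!$.

```python
import math
import numpy as np

# Hypothetical one-component Picard-type iteration (a stand-in, not the MCT
# equations themselves): phi^(r+1)(t) = 1 - v * int_0^t phi^(r)(t')^2 dt'.
# On [0, 1] the monomial v*x^2 has Lipschitz constant L = 2v, so the
# successive differences X_r = sup_t |phi^(r+1)(t) - phi^(r)(t)| obey
# X_r <= X_0 * (L * t_max)^r / r!, the analogue of the majorants B^r / r!.
v, t_max, n = 0.4, 2.0, 4001
t = np.linspace(0.0, t_max, n)
dt = t[1] - t[0]

def picard_step(phi):
    g = v * phi**2
    # cumulative trapezoidal rule for int_0^t g(t') dt'
    return 1.0 - np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * dt)))

phi = np.ones_like(t)                     # phi^(0)(t) = 1
X = []
for r in range(8):
    nxt = picard_step(phi)
    X.append(float(np.max(np.abs(nxt - phi))))
    phi = nxt

B = 2.0 * v * t_max                       # Lipschitz constant times t_max
bounds = [X[0] * B**r / math.factorial(r) for r in range(8)]
```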

# D.2 Completely monotonic approximants

In this section, the sequence of approximants $\phi^{(r)}(t)$, r = 0, 1, …, shall be discussed, which is constructed in Sec. 4.2.2 for the second version of the equations of motion. It will be demonstrated that all functions $\phi^{(r)}_q(t)$, q = 1, …, M, are finite sums of elementary relaxation correlators.

Let us use this paragraph to note some formulas with the aim of simplifying the following discussions. All functions F(t) to be considered shall be continuous and exhibit the standard symmetries: F(t) = F(t)* = F(−t). Therefore, all times can be restricted to t ≥ 0. A function F(t) is called completely monotonic if there is a bounded monotonically increasing weight function σ(γ) so that

(D.10a)
$$F(t) = \int_0^\infty \exp[-\gamma t]\, d\sigma(\gamma), \qquad t \ge 0$$

This is equivalent to the formula for the Laplace transform

(D.10b)
$$F(z) = -\int_0^\infty d\sigma(\gamma)\, /\, (z + i\gamma)$$

The expression also describes the analytic continuation onto the whole plane of complex frequencies z, except for the values z = −iγ, γ ≥ 0. From Eq. (2.46b), one gets for the spectrum

(D.10c)
$$F''(\omega) = \int_0^\infty d\sigma(\gamma)\, \gamma\, /\, (\gamma^2 + \omega^2)$$

It is an analytic function of the frequency for all ω ≠ 0. The Laplace transform in the conventional notation is defined by Eq. (A.1) for the half plane of values s which obey Re s > 0: $\hat F(s) = \int_0^\infty \exp[-st]\, F(t)\,dt$. From Eq. (D.10a), one gets

(D.11a)
$$\hat F(s) = \int_0^\infty d\sigma(\gamma)\, /\, (s + \gamma)$$

This expression defines the analytical continuation on the whole complex plane except, possibly, for the values s = − γ, γ ≥ 0. In agreement with the general formula (A.3b), there holds the relation

(D.11b)
$$F(z) = i\, \hat F(s = -iz), \qquad \operatorname{Im} z > 0$$

It is demonstrated in Sec. 4.2.2 that the recursion relation (4.26b) is equivalent to the double-fraction representation: $\phi^{(r+1)}_q(z) = -1/\{z - 1/[\,i\tau_q + m^{(r)}_q(z)\,]\}$, Im z > 0. The index q shall be dropped in the following in order to simplify the formulas. According to Eq. (A.3b), the recursion relation can be written in the conventional notation for Re s > 0 as

(D.12)
$$\hat\phi^{(r+1)}(s) = [\,\tau + \hat m^{(r)}(s)\,]\, /\, [\,1 + s\tau + s\,\hat m^{(r)}(s)\,]$$

Let us assume that the kernel $m^{(r)}(t)$ is a sum of n elementary relaxation functions, n = 1, 2, …. This means that there are n positive amplitudes $\mu_k$ and positive rates $\gamma_k$, k = 1, …, n, so that

(D.13a)
$$m^{(r)}(t) = \sum_{k=1}^{n} \mu_k \exp[-\gamma_k t]$$

The rates shall be ordered:

(D.13b)
$$0 < \gamma_1 < \gamma_2 < \cdots < \gamma_n$$

It is the goal to derive the proposition: there are n + 1 positive amplitudes $f_k$ and positive rates $\gamma'_k$, k = 0, 1, …, n, so that

(D.13c)
$$\phi^{(r+1)}(t) = \sum_{k=0}^{n} f_k \exp[-\gamma'_k t]$$

The sequence of rates for $\phi^{(r+1)}(t)$ is separated by the one for $m^{(r)}(t)$:

(D.13d)
$$0 < \gamma'_0 < \gamma_1 < \gamma'_1 < \gamma_2 < \cdots < \gamma'_{n-1} < \gamma_n < \gamma'_n$$

Suppose that the proposition is correct. Then, one can show by induction the statement made in the first paragraph of this section. For r = 0, the statement is correct, since the iteration in Sec. 4.2.2 is started with $\phi^{(0)}_q(t) = \exp[-\Gamma_q t]$, $\Gamma_q > 0$, q = 1, …, M. If the statement holds for $\phi^{(r)}_q(t)$, one can write this function as a finite sum $\sum_j f^{(r)}_{q,j} \exp[-\gamma'^{(r)}_{q,j} t]$, with all amplitudes and rates being positive. Equation (4.25b) implies $m^{(r)}_q(t) = \sum_{k=1}^{n} \mu^{(r)}_{q,k} \exp[-\gamma^{(r)}_{q,k} t]$. Formula (4.15a) shows that the rates $\gamma^{(r)}_{q,k}$ are positive, since they are sums of positive terms of the kind $\gamma'^{(r)}_{k_1,j_1} + \gamma'^{(r)}_{k_2,j_2} + \cdots$. The amplitudes are sums of products of the kind $V^{(n)}_{q,k_1\cdots k_n} f^{(r)}_{k_1,j_1} \cdots f^{(r)}_{k_n,j_n}$ and, because of Eq. (4.15b), they are non-negative. The proposition yields the desired formula (4.28) for $\phi^{(r+1)}_q(t)$.
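The closure property used in this induction step, namely that a polynomial with non-negative coefficients maps finite sums of decaying exponentials to finite sums of decaying exponentials whose rates are sums of the input rates, can be sketched as follows; the kernel $v_1 x + v_2 x^2$ and all numbers are hypothetical stand-ins, not Eq. (4.25b) itself.

```python
import math
from collections import defaultdict
from itertools import product

# Finite exponential sums sum_j f_j exp(-g_j t), stored as {rate: amplitude}.
# A polynomial with non-negative coefficients maps such sums (with positive
# amplitudes and rates) to such sums: products add the rates, and all new
# amplitudes are sums of products of old ones.

def exp_eval(a, t):
    return sum(f * math.exp(-g * t) for g, f in a.items())

def exp_mul(a, b):
    out = defaultdict(float)
    for (ga, fa), (gb, fb) in product(a.items(), b.items()):
        out[ga + gb] += fa * fb      # exp(-ga*t) * exp(-gb*t) = exp(-(ga+gb)*t)
    return dict(out)

def exp_axpy(out, coeff, a):         # out += coeff * a
    for g, f in a.items():
        out[g] = out.get(g, 0.0) + coeff * f

phi = {0.5: 0.3, 2.0: 0.7}           # positive rates and amplitudes
m = {}
exp_axpy(m, 1.2, phi)                # v1 * phi
exp_axpy(m, 0.8, exp_mul(phi, phi))  # v2 * phi^2
```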

The proof of the proposition starts by rewriting Eq. (D.13a) in the equivalent form

(D.13e)
$$\hat m^{(r)}(s) = \sum_{k=1}^{n} \mu_k\, /\, (s + \gamma_k)$$

One can present this function as the ratio of a denominator polynomial of degree n, $D(s) = (s+\gamma_1)(s+\gamma_2)\cdots(s+\gamma_n) = s^n + O(s^{n-1})$, and a numerator polynomial of degree n − 1, $N(s) = (\sum_{k=1}^{n}\mu_k)\, s^{n-1} + O(s^{n-2})$: $\hat m^{(r)}(s) = N(s)/D(s)$. Because of Eq. (D.12), the approximant can be presented as a ratio of polynomials of degree n and (n + 1): $\hat\phi^{(r+1)}(s) = [\tau D(s) + N(s)]/[\tau s D(s) + D(s) + s N(s)]$. The function $\hat\phi^{(r+1)}(s)$ is meromorphic, and it can have at most (n + 1) poles. Since $D(s=0) = \gamma_1 \cdots \gamma_n \ne 0$, the value s = 0 cannot be a pole. One concludes that the poles are the zeros of the function $\varphi(s) = \tau + (1/s) + \hat m^{(r)}(s)$. Restricted to the real variable x, $\varphi(x)$ is a real function. It decreases strictly: $\partial\varphi(x)/\partial x < 0$. It has (n + 1) simple poles at the positions $-\gamma_n < -\gamma_{n-1} < \cdots < -\gamma_1 < -\gamma_0 = 0$. If x increases from $-\gamma_{\ell+1}$ to $-\gamma_\ell$, $\varphi(x)$ decreases from $+\infty$ to $-\infty$, ℓ = 0, …, n − 1. Hence, there are n zeros $-\gamma'_\ell$, obeying the conditions (D.13d). If x decreases from $-\gamma_n$ to $-\infty$, $\varphi(x)$ increases from $-\infty$ to τ. Consequently, there is a zero $-\gamma'_n$, obeying $-\gamma'_n < -\gamma_n$. Thereby, (n + 1) simple poles of the approximant are identified, and one can write the partial-fraction representation:

(D.13f)
$$\hat\phi^{(r+1)}(s) = \sum_{k=0}^{n} f_k\, /\, (s + \gamma'_k)$$

Since $\partial\hat\phi^{(r+1)}(x)/\partial x < 0$, there holds $f_k > 0$, k = 0, …, n. Since the representation is equivalent to Eq. (D.13c), the proof is completed.
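The proposition admits a direct numerical check; the kernel below, with its amplitudes $\mu_k$, rates $\gamma_k$ and the time τ, is an arbitrary hypothetical choice. The sketch verifies that the n + 1 poles are real and interlace the rates as in (D.13d), and that the residues $f_k$ are positive (summing to $\phi(t=0) = 1$, as follows from the rational form).

```python
import numpy as np

# Hypothetical kernel m(t) = sum_k mu_k exp(-gamma_k t). With m_hat = N/D the
# approximant of Eq. (D.12) reads phi_hat(s) = [tau*D + N]/[tau*s*D + D + s*N].
mu = np.array([0.4, 1.0, 0.2])
gamma = np.array([0.3, 1.5, 4.0])
tau = 0.7

D = np.poly1d([1.0])
for g in gamma:
    D = D * np.poly1d([1.0, g])                  # D(s) = prod_k (s + gamma_k)
N = np.poly1d([0.0])
for k in range(len(mu)):
    term = np.poly1d([mu[k]])
    for j in range(len(gamma)):
        if j != k:
            term = term * np.poly1d([1.0, gamma[j]])
    N = N + term                                 # N/D = sum_k mu_k/(s + gamma_k)

s = np.poly1d([1.0, 0.0])                        # the variable s
num = tau * D + N
den = tau * s * D + D + s * N

poles = np.sort(den.roots.real)                  # n+1 real zeros of den
resid = num(poles) / den.deriv()(poles)          # residues f_k at simple poles
rates = -poles[::-1]                             # gamma'_0 < ... < gamma'_n
```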

The discussion of Eq. (D.12) can be modified to one for the recursion defined in Sec. 6.2.1 for the shape functions. Equation (6.114b) can be written as

(D.14)
$$\hat\phi^{(r+1)}(s) = \hat m^{(r)}(s)\, /\, [\,1 + s\,\hat m^{(r)}(s)\,]$$

This expression is obtained from Eq. (D.12) by specializing to τ = 0. The kernel is quantified by n pairs of numbers $(\mu_k, \gamma_k)$, k = 1, …, n, as explained in connection with Eqs. (D.13a,b). The following proposition shall be proven. There are n pairs of positive numbers $(f_k, \gamma'_k)$, k = 0, …, n − 1, so that

(D.15a)
$$\hat\phi^{(r+1)}(t) = \sum_{k=0}^{n-1} f_k \exp[-\gamma'_k t]$$

The sequence of rates for function φ̂(r+1)(t) separates that for the function m (r)(t):

(D.15b)
$$0 < \gamma'_0 < \gamma_1 < \gamma'_1 < \cdots < \gamma'_{n-1} < \gamma_n$$

Contrary to what is discussed above for Eq. (D.12), the number of relaxators contributing to $\hat\phi^{(r+1)}(t)$ is the same as that of the relaxators contributing to $m^{(r)}(t)$.

The induction proof presented in the paragraph preceding Eq. (D.13e) remains valid. Hence, one can proceed as above and write the kernel as the ratio of a polynomial N(s) of degree (n − 1) and a polynomial D(s) of degree n: $\hat m^{(r)}(s) = N(s)/D(s)$. Equation (D.14) yields $\hat\phi^{(r+1)}(s) = N(s)/[D(s) + sN(s)]$. This function is meromorphic. In contrast to what is deduced above from Eq. (D.12), $\hat\phi^{(r+1)}(s)$ cannot have more than n poles. The poles are the zeros of the function $\varphi(s) = (1/s) + \hat m^{(r)}(s)$. One continues as above. For real values x = s, the function $\varphi(x)$ is strictly decreasing. It has (n + 1) simple poles at $-\gamma_n < -\gamma_{n-1} < \cdots < -\gamma_1 < 0 = -\gamma_0$. Consequently, there are n zeros $-\gamma'_0, \ldots, -\gamma'_{n-1}$, which obey the condition (D.15b). As above, one shows that the residues $f_k$ are positive. Hence, there holds the partial-fraction representation

(D.15c)
$$\hat\phi^{(r+1)}(s) = \sum_{k=0}^{n-1} f_k\, /\, (s + \gamma'_k)$$

Since this formula is equivalent to Eq. (D.15a), the proof is completed.

# D.3 The maximum-eigenvalue inequality

In this section, the inequality E(P) ≤ 1 for the maximum eigenvalue E(P) = E[P, f(P)] at the maximum fixed point f(P) shall be derived. The proof (Götze and Sjögren 1995) is indirect. It will be assumed that there is some positive δ so that E(P) = 1 + δ. This assumption shall be used to construct a fixed point g(P) with a component of some label $q_0$, 1 ≤ $q_0$ ≤ M, obeying $g_{q_0}(P) > f_{q_0}(P)$. This result contradicts the maximum property (4.52e). Consequently, the assumption is untenable; and this conclusion implies the desired result (4.60c).

In order to formulate the assumption more explicitly, let r denote an eigenvector of the stability matrix (4.60a) for state P with eigenvalue E(P), i.e.,

(D.16a)
$$\sum_p A_{q,p}(P)\, r_p = E(P)\, r_q, \qquad q = 1, \ldots, M$$

There is a non-zero component of r, say, $r_{q_0} \ne 0$. One of the Frobenius–Perron theorems states that one can choose the eigenvector so that none of its components is negative (Gantmacher 1974). Hence, one can require

(D.16b)
$$r_q \ge 0, \quad q = 1, \ldots, M; \qquad r_{q_0} > 0$$

The transformation (4.22a) shall be applied for the maximum fixed point f*(P) = f(P):

(D.17a)
$$x_q = f_q(P) + (1 - f_q(P))\, \hat x_q, \qquad q = 1, \ldots, M$$

The formula (4.22c) for the transformed coefficients $\hat V^{(1)}_{q,p}$ agrees with the corresponding one for the elements of the stability matrix in Eq. (4.60a). Equation (4.22b) can be written as

(D.17b)
$Display mathematics$

The last term combines the contributions due to the coefficients $\hat V^{(n)}_{q,k_1 \ldots k_n}$ with n ≥ 2. Since these coefficients are non-negative, there holds

(D.17c)
$$\hat F_q[P, x] \ge \sum_p A_{q,p}\, x_p, \qquad x_q \ge 0$$

Let us choose some positive ε, which is smaller than the inverse of the largest component of r. There holds $0 \le \xi r_q < 1$ for all q and all ξ obeying 0 < ξ ≤ ε. The polynomials $\hat F_q[P, x]$ vanish for x = 0. Hence, there is some positive Lipschitz constant C so that: $0 \le \hat F_q[P, \xi r] \le C \xi$, q = 1, …, M. Introducing the positive number $\xi_0 = \min(\varepsilon, \delta/(2C))$, an array shall be defined by

(D.18a)
$$\hat f^{(0)}_q = \xi_0\, r_q, \qquad q = 1, \ldots, M$$

There holds

(D.18b)
$$0 \le \hat f^{(0)}_q < 1, \quad q = 1, \ldots, M; \qquad \hat f^{(0)}_{q_0} > 0$$

The array $\hat f^{(0)}$ is used as the starting element of a sequence of arrays $\hat f^{(n)}$, n = 0, 1, …, which shall be defined recursively by

(D.18c)
$$\hat f^{(n+1)} = \hat T[P, \hat f^{(n)}], \qquad n = 0, 1, \ldots$$

The covariance theorem implies that the mapping $\hat T$ has the same general properties as $T$. Therefore, Eq. (4.20b) ensures the restrictions

(D.18d)
$$0 \le \hat f^{(n)}_q \le 1, \qquad q = 1, \ldots, M, \quad n = 0, 1, \ldots$$

The choice of $\xi_0$ yields the estimate: $1 + \hat F_q[P, \hat f^{(0)}] \le 1 + C\xi_0 \le 1 + \delta/2$. From Eqs. (D.17b,c), one gets $\hat F_q[P, \hat f^{(0)}] \ge \sum_p A_{q,p}\, \hat f^{(0)}_p = (1+\delta)\, \hat f^{(0)}_q$. Combining both inequalities, one derives from Eq. (4.23a): $\hat T_q[P, \hat f^{(0)}] \ge [(1+\delta)/(1+\delta/2)]\, \hat f^{(0)}_q \ge \hat f^{(0)}_q$. Consequently, there holds

(D.19a)
$$\hat f^{(1)}_q \ge \hat f^{(0)}_q, \qquad q = 1, \ldots, M$$

Using Eq. (4.20c) for the mapping $T ^$ , one shows by induction that the preceding result can be extended to:

(D.19b)
$$\hat f^{(n+1)}_q \ge \hat f^{(n)}_q, \qquad q = 1, \ldots, M, \quad n = 0, 1, \ldots$$

The increasing sequence of numbers $\hat f^{(n)}_q$, n = 0, 1, …, which is bounded from above, converges towards some non-negative number, say $\hat g_q$. These numbers form the components of ĝ:

(D.20a)
$$\hat g_q = \lim_{n\to\infty} \hat f^{(n)}_q, \qquad q = 1, \ldots, M$$

Since $T ^ [ P , x ]$ is a continuous function of x, Eq. (D.18c) implies that ĝ is a fixed point:

(D.20b)
$$\hat T[P, \hat g] = \hat g$$

Equation (4.20b) yields $0 \le \hat g_q \le 1$, q = 1, …, M. By construction, there holds $\hat f^{(0)}_q \le \hat g_q$ for all q; and one gets from Eq. (D.18b):

(D.20c)
$$\hat g_{q_0} > 0$$

The covariance theorem (4.23b) implies that g is a fixed point: $T[P, g] = g$. From Eq. (D.17a), one derives the desired inequality: $g_{q_0} = f_{q_0}(P) + (1 - f_{q_0}(P))\, \hat g_{q_0} > f_{q_0}(P)$.
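The monotone-iteration mechanism of Eqs. (D.18c)–(D.20b) can be illustrated with a one-component example; the mapping $T[x] = F[x]/(1 + F[x])$ with the polynomial $F[x] = v_2 x^2$ is a hypothetical F₂-type model, not the general M-component mapping of the text.

```python
# Hypothetical one-component illustration of the monotone iteration:
# T[x] = F[x]/(1 + F[x]) with F[x] = v2*x^2. T is monotone and maps [0, 1]
# into [0, 1]; wherever T[x0] >= x0, the iterates increase and, being
# bounded, converge to a fixed point.
v2 = 4.5          # > 4, so the nontrivial fixed points 1/3 and 2/3 exist

def T(x):
    F = v2 * x * x
    return F / (1.0 + F)

def iterate(x0, n=400):
    xs = [x0]
    for _ in range(n):
        xs.append(T(xs[-1]))
    return xs

up = iterate(0.4)     # T[0.4] > 0.4: increasing, converges to 2/3
down = iterate(1.0)   # decreasing from above towards the same fixed point
```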

# D.4 Further properties of stability matrices

The M-component eigenvectors a and a* and the M-by-M matrix R are defined in Eqs. (4.74), (4.75). These quantities are determined by the stability matrix $A^c$ at the bifurcation point $P^c$. The quantities enter all formulas for the asymptotic expansions of solutions of MCT equations for states P near $P^c$. For the most relevant case of a primitive irreducible critical stability matrix, the quantities $a^*_q$, $a_p$, $R_{q,p}$, q, p = 1, …, M, can be expressed as limits of powers of the $M^2$ elements $A^c_{q,p}$ of the critical stability matrix $A^c$. The relevant formulas shall be derived in this section. Furthermore, the asymptotic behaviour of the maximum eigenvalue E(P) will be determined for P approaching $P^c$.

According to Eq. (4.76b), the maximum η of the moduli of the first (M − 1) eigenvalues of $A^c$ is smaller than unity:

(D.21)
$$\eta = \max\{\,|e_k|,\ k = 1, \ldots, M-1\,\} < 1$$

The Jordan form of $A^c$ consists of a 1-by-1 block for the maximum eigenvalue $e_M = 1$. The other blocks correspond to the other eigenvalues. The nth power of $A^c$ leaves the block for $e_M$ fixed and reduces the diagonal elements of the other blocks to $e^n_k$. Consequently, $(A^{c\,n})_{q,k} = a_q a^*_k + O(n^\ell \eta^n)$, 0 ≤ ℓ ≤ M − 1. With increasing exponent n, the matrix elements of $A^{c\,n}$ converge exponentially towards the product of the distinguished eigenvectors:

(D.22)
$$\lim_{n\to\infty} (A^{c\,n})_{q,k} = a_q\, a^*_k$$

Imposing the conventions (4.74c,d), the $M^2$ products on the right-hand side determine the eigenvectors a and a* uniquely.
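Equation (D.22) can be checked numerically; the random positive matrix below (which is primitive irreducible), rescaled so that its maximum eigenvalue equals unity, is an assumed stand-in for $A^c$.

```python
import numpy as np

# A random positive matrix, rescaled by its Perron eigenvalue so that
# e_M = 1; its powers converge to the rank-one projector a_q a*_k.
rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, size=(4, 4))
A /= np.max(np.abs(np.linalg.eigvals(A)))     # now e_M = 1, |e_k| < 1 otherwise

P = np.linalg.matrix_power(A, 200)            # approximates lim A^n

w, V = np.linalg.eig(A)
a = V[:, np.argmax(w.real)].real              # right Perron eigenvector
wl, U = np.linalg.eig(A.T)
astar = U[:, np.argmax(wl.real)].real         # left Perron eigenvector
proj = np.outer(a, astar) / (astar @ a)       # a_q a*_k with fixed scale
```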

The eigenvectors shall be used to define an M-by-M matrix A′ by its elements

(D.23a)
$$A'_{q,k} = A^c_{q,k} - a_q\, a^*_k$$

The Jordan form of A′ agrees with that of $A^c$, except that the distinguished 1-by-1 block is replaced by zero. Hence, the spectral radius of A′ does not exceed η: $(A'^n)_{q,k} = O(n^\ell \eta^n)$. The Neumann series for $R' = (1 - A')^{-1}$ converges exponentially and determines the matrix elements

(D.23b)
$$R'_{q,k} = \sum_{n=0}^{\infty} (A'^n)_{q,k}$$

There holds

(D.23c)
$Display mathematics$

Finally, an M-by-M matrix R shall be defined by

(D.24a)
$$R_{q,k} = R'_{q,k} - a_q\, a^*_k$$

Using Eqs. (4.74b,d), one gets $\sum_q a^*_q A'_{q,k} = 0$. From Eq. (D.23b) one concludes: $\sum_q a^*_q R'_{q,k} = a^*_k$. Similarly, one derives $\sum_k R'_{q,k}\, a_k = a_q$. As a result, one obtains:

(D.24b)
$Display mathematics$

Consequently, R is the distinguished resolvent.
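The statements around Eqs. (D.23) can be verified numerically; the matrix is again a random positive stand-in for $A^c$, and the normalization $\sum_q a^*_q a_q = 1$ is assumed here for the conventions (4.74c,d).

```python
import numpy as np

# Checks: a* A' = 0 and A' a = 0, the Neumann series sum_n A'^n converges to
# R' = (1 - A')^{-1}, and a* R' = a*, R' a = a, as used for Eq. (D.24b).
rng = np.random.default_rng(7)
A = rng.uniform(0.1, 1.0, size=(4, 4))
A /= np.max(np.abs(np.linalg.eigvals(A)))

w, V = np.linalg.eig(A)
a = V[:, np.argmax(w.real)].real
wl, U = np.linalg.eig(A.T)
astar = U[:, np.argmax(wl.real)].real
astar /= astar @ a                          # enforce sum_q a*_q a_q = 1

Ap = A - np.outer(a, astar)                 # A' of Eq. (D.23a)
Rp = np.linalg.inv(np.eye(4) - Ap)          # R' = (1 - A')^{-1}

S, term = np.zeros((4, 4)), np.eye(4)       # Neumann series sum_n A'^n
for _ in range(400):
    S += term
    term = term @ Ap
```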

Let I denote some M-component array. It shall be used to define another array Y with the components

(D.25a)
$$Y_q = \sum_k R_{q,k}\, I_k$$

If an array F is introduced by $F_q = \sum_k [\,\delta_{q,k} - A^c_{q,k}\,]\, Y_k$, one gets $F_q = I_q - a_q \sum_p a^*_p I_p$. One concludes: if and only if

(D.25b)
$$\sum_p a^*_p\, I_p = 0$$
there holds F = I, i.e.,
(D.25c)
$$\sum_k [\,\delta_{q,k} - A^c_{q,k}\,]\, Y_k = I_q, \qquad q = 1, \ldots, M$$

If the solubility condition (D.25b) is obeyed, Y is a special solution of the preceding M linear equations. This result constitutes the non-trivial part of the discussions of Eqs. (4.75a–e).
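The solvability criterion can be sketched numerically for a random positive stand-in matrix: $Y = R'I$ solves the M linear equations exactly when the condition (D.25b) holds, while for a generic inhomogeneity the mismatch is the projection $a_q \sum_p a^*_p I_p$, as in the text.

```python
import numpy as np

# Solvability sketch for Eqs. (D.25a-c): the system
# sum_k [delta_qk - A_qk] Y_k = I_q is solvable iff sum_p a*_p I_p = 0.
rng = np.random.default_rng(3)
A = rng.uniform(0.1, 1.0, size=(4, 4))
A /= np.max(np.abs(np.linalg.eigvals(A)))
w, V = np.linalg.eig(A)
a = V[:, np.argmax(w.real)].real
wl, U = np.linalg.eig(A.T)
astar = U[:, np.argmax(wl.real)].real
astar /= astar @ a                         # sum_q a*_q a_q = 1

Rp = np.linalg.inv(np.eye(4) - (A - np.outer(a, astar)))   # R'

I0 = rng.uniform(-1.0, 1.0, size=4)        # generic inhomogeneity
I1 = I0 - a * (astar @ I0)                 # projected: sum_p a*_p I1_p = 0
Y = Rp @ I1                                # special solution
F0 = (np.eye(4) - A) @ (Rp @ I0)           # mismatch for the generic case
```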

The results of the preceding paragraph are used in Sec. 4.3.4 in order to derive the $\sqrt{\varepsilon}$ law for the change of the form factor of the glass for states near a generic liquid–glass-transition point. The states are evolving along a path $P_\varepsilon$ as specified in Eqs. (4.84)–(4.86). The result (4.91a) can be written as

(D.26a)
$Display mathematics$

For sufficiently small ε, the maximum eigenvalue $E(P_\varepsilon)$ remains non-degenerate. Hence, it is a continuous function of ε. It can be characterized by a positive function δE, which vanishes for ε = 0:

(D.26b)
$$E(P_\varepsilon) = 1 - \delta E$$

There hold the equations: $\sum_p [\,E(P_\varepsilon)\,\delta_{q,p} - A_{q,p}(P_\varepsilon)\,]\, b_p = 0$, q = 1, …, M. The eigenvector b for the maximum eigenvalue can be chosen as a continuous function of ε, which agrees with a for ε = 0. It shall be written as b = a + δb; the continuous function δb vanishes for ε tending to zero. Writing the stability matrix as $A_{q,p}(P_\varepsilon) = A^c_{q,p} + \delta A_{q,p}$, the equation for the eigenvalue takes the form of Eq. (D.25c): $\sum_p [\,\delta_{q,p} - A^c_{q,p}\,]\, \delta b_p = I_q$. Here, the inhomogeneity reads $I_q = \delta E\, b_q + \sum_p \delta A_{q,p}\, b_p$. The condition (D.25b) is equivalent to $\delta E = -\sum_{q,p} a^*_q\, \delta A_{q,p}\, b_p / \sum_q a^*_q b_q$. The leading-order result is obtained from the leading contribution to $\delta A_{q,p}$; the eigenvector b can be replaced by a. According to Eq. (4.60a), the leading contribution to $\delta A_{q,p}$ is of order $\sqrt{\varepsilon}$; and this is due to the $\sqrt{\varepsilon}$-contribution to $f_q(P_\varepsilon)$. Using the abbreviations from Eq. (4.71c), one gets $\delta A_{q,p} = \sqrt{C\varepsilon/\mu^c_2}\, [\,-a_q A^c_{q,p} - A^c_{q,p} a_p + 2\sum_k A^{(2)c}_{q,kp} a_k\,] + O(\varepsilon)$. Equation (D.25a) is used in order to justify the assumption $\delta b = O(\sqrt{\varepsilon})$. Hence, $\delta E = -\sum_{q,p} a^*_q\, \delta A_{q,p}\, a_p + O(\varepsilon)$. With the aid of the abbreviations (4.88a,b), one arrives at a square-root law:

(D.26c)
$Display mathematics$
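The square-root law can be made concrete in the simplest schematic case; the one-component F₂ model $F[f] = v_2 f^2$ used below is a hypothetical illustration, not the general matrix formula (D.26c). Its critical point is $v_2 = 4$ with $f^c = 1/2$; for $v_2 = 4(1+\varepsilon)$ the maximum eigenvalue $E = (1-f)^2 F'[f]$ at the maximum fixed point obeys $1 - E = \sqrt{\varepsilon}\,(1 + O(\varepsilon))$.

```python
import math

# Hypothetical F2-model illustration of the square-root law: for
# v2 = 4*(1 + eps) the maximum fixed point is f = (1 + sqrt(1 - 4/v2))/2,
# the stability eigenvalue is E = (1 - f)^2 * F'[f] with F'[f] = 2*v2*f,
# and delta_E = 1 - E behaves like sqrt(eps) for small eps.
def delta_E(eps):
    v2 = 4.0 * (1.0 + eps)
    f = (1.0 + math.sqrt(1.0 - 4.0 / v2)) / 2.0   # maximum fixed point
    E = (1.0 - f) ** 2 * 2.0 * v2 * f             # stability eigenvalue
    return 1.0 - E
```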