The Equilibrium Theory of Inhomogeneous Polymers

Glenn Fredrickson

Print publication date: 2005

Print ISBN-13: 9780198567295

Published to Oxford Scholarship Online: September 2007

DOI: 10.1093/acprof:oso/9780198567295.001.0001




Appendix C: Calculus of Functionals

In order to understand the field-based approach to modelling inhomogeneous fluids, it is necessary to have a basic familiarity with the calculus of functionals (Volterra, 1959). This broad subject includes topics such as functional differentiation, functional integration, and min–max problems. These are typically discussed in texts on functional analysis, calculus of variations, optimization theory, and field theory. Here, we provide a brief tutorial. Physically oriented references where more details can be found include (Fetter and Walecka, 1980; Hansen and McDonald, 1986; Parr and Yang, 1989; Zee, 2003).

C.1 Functionals

In the simplest case, a functional is a mapping between a function f(x) defined over some interval x ∈ [a, b] and a number F that generally depends on the values of the function over all points of the interval. For example, a simple linear functional is just the integral

\[ F_1[f] = \int_a^b dx\, f(x) \tag{C.1} \]

This formula associates a number F_1 with the integral of f(x) over a ≤ x ≤ b. We adopt the functional "square bracket" notation F_1[f] to indicate that F_1 depends on f(x) at all points over the interval. An example of a nonlinear functional is

\[ F_2[f] = \int_a^b dx\, [f(x)]^2 \tag{C.2} \]

Both F_1[f] and F_2[f] are referred to as local functionals because values of f(x) for different x contribute independently (additively) to the value of the functional.
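As a concrete numerical illustration (an added sketch, not part of the original text), the local functionals F_1 and F_2 can be evaluated by discretizing f(x) on a grid. The test function f(x) = x on [0, 1] and the grid size are arbitrary choices:

```python
import numpy as np

# Evaluate the local functionals F_1[f] = ∫ f dx and F_2[f] = ∫ f² dx
# on a uniform grid via the composite trapezoidal rule.
# Test function f(x) = x on [a, b] = [0, 1] (an arbitrary choice).
a, b = 0.0, 1.0
x = np.linspace(a, b, 1001)
dx = x[1] - x[0]
f = x

def trapezoid(y, dx):
    """Composite trapezoidal rule on a uniform grid."""
    return dx * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

F1 = trapezoid(f, dx)       # exact value: 1/2
F2 = trapezoid(f**2, dx)    # exact value: 1/3
```

Each functional maps the entire sampled function to a single number, which is the defining property of a functional.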

More generally, a functional can depend on the function and its derivatives over the interval. Such functionals are referred to as non-local. For example,

\[ F_3[f] = \int_a^b dx\, \left( [f(x)]^2 + [f(x)]^4 + [f'(x)]^2 \right) \tag{C.3} \]

is a familiar functional that appears in the Landau–Ginzburg theory of phase transitions (Chaikin and Lubensky, 1995; Goldenfeld, 1992). A second example of a non-local functional is the quadratic expression

\[ F_4[f] = \int_a^b dx \int_a^b dx'\, f(x)\, K(x,x')\, f(x') \tag{C.4} \]
where the kernel K(x, x′) is an arbitrary function of x and x′.

Functionals can also be defined for multi-variable functions f(r) that, e.g., could represent the chemical potential fields w(r) that are central to the subject of this monograph. For example, the extension of eqn (C.3) to functions defined in three dimensions is the Landau–Ginzburg "square gradient" functional

\[ F_5[f] = \int d\mathbf{r}\, \left( [f(\mathbf{r})]^2 + [f(\mathbf{r})]^4 + |\nabla f|^2 \right) \tag{C.5} \]

C.2 Functional differentiation

The concept of differentiation of functionals is a straightforward extension of the notion of partial differentiation for multi-variable functions. Consider subjecting a function f(x) defined over x ∈ [a, b] to an arbitrary small perturbation δf(x). The perturbation δf(x) is itself a function defined over the same interval of x. For some functional F[f], we then consider its value F[f + δf] when f(x) → f(x)+δf(x). This quantity can be Taylor-expanded in powers of the perturbation δf(x) to yield the general form

\[ F[f+\delta f] = F[f] + \int_a^b dx\, \Gamma_1(x)\, \delta f(x) + \frac{1}{2!} \int_a^b dx \int_a^b dx'\, \Gamma_2(x,x')\, \delta f(x)\, \delta f(x') + \cdots \tag{C.6} \]

where the functions Γ_i represent Taylor expansion coefficients. For example, in the case of the functional F_2[f], it is straightforward to show that Γ_1(x) = 2f(x) and Γ_2(x, x′) = 2δ(x − x′), where δ(x − x′) is the Dirac delta function defined by ∫ dx′ f(x′)δ(x − x′) = f(x).

The coefficient functions Γ_i are typically written differently so that eqn (C.6) is more suggestive of a functional Taylor series. Namely, the first and second functional derivatives of F with respect to f(x) are defined by

\[ \frac{\delta F[f]}{\delta f(x)} \equiv \Gamma_1(x) \tag{C.7} \]

\[ \frac{\delta^2 F[f]}{\delta f(x)\, \delta f(x')} \equiv \Gamma_2(x,x') \tag{C.8} \]

The first functional derivative δF[f]/δf(x) is a function of x that dictates the rate of change of the functional F[f] when f(x) is perturbed at the point x. Similarly, the second derivative δ²F[f]/δf(x)δf(x′) is a function of x and x′ that expresses the rate of change of F[f] when f(x) is simultaneously perturbed at the points x and x′.

By explicitly working out the Taylor expansion of eqn (C.6) for a prescribed functional F[f], it is possible to derive functional differentiation formulas that closely resemble formulas from ordinary differential calculus. For example, the functional

\[ F_6[f] = \int_a^b dx\, [f(x)]^n \tag{C.9} \]

has derivatives

\[ \frac{\delta F_6[f]}{\delta f(x)} = n[f(x)]^{n-1}, \qquad \frac{\delta^2 F_6[f]}{\delta f(x)\,\delta f(x')} = n(n-1)[f(x)]^{n-2}\,\delta(x-x') \tag{C.10} \]

Similarly, the functional F_4[f] of eqn (C.4) for a symmetric kernel K(x, x′) = K(x′, x) has derivatives

\[ \frac{\delta F_4[f]}{\delta f(x)} = 2\int_a^b dx'\, K(x,x')\, f(x'), \qquad \frac{\delta^2 F_4[f]}{\delta f(x)\,\delta f(x')} = 2K(x,x') \tag{C.11} \]
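These derivative formulas can be verified numerically (an added sketch, not from the text). Discretizing F_6 with n = 2 as a sum over grid values, a one-point perturbation of f yields a finite-difference estimate of δF_6/δf(x) that should approach the analytic result 2f(x). The grid size and test function below are arbitrary choices:

```python
import numpy as np

# Finite-difference check that δF_6/δf(x) = 2 f(x) for n = 2.
# On a grid, F_6 ≈ Σ_i f_i² Δx, so the functional derivative at x_i
# is approximated by (1/Δx) ∂F/∂f_i.
a, b, N = 0.0, 1.0, 200
x = np.linspace(a, b, N)
dx = x[1] - x[0]
f = np.sin(np.pi * x)           # arbitrary smooth test function

def F6(g, n=2):
    return np.sum(g**n) * dx    # rectangle-rule approximation of ∫ f^n dx

eps = 1e-6
i = N // 3                      # perturb f at a single grid point
fp = f.copy()
fp[i] += eps
deriv_num = (F6(fp) - F6(f)) / (eps * dx)
deriv_exact = 2 * f[i]          # analytic functional derivative
```

The factor 1/Δx converts the partial derivative with respect to a grid value into the continuum functional derivative, mirroring how the Kronecker delta over Δx approximates the Dirac delta.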

The computation of functional derivatives according to eqns (C.6)–(C.8) for functionals such as F_3[f] that involve derivatives of f(x) proceeds by one or more integrations by parts. These in turn require boundary conditions to be imposed on the arbitrary perturbation δf(x). For example, we might want to restrict attention to functions f(x) that satisfy the fixed end (Dirichlet) conditions f(a) = f_a, f(b) = f_b. The variations of a functional F[f] subject to these fixed end conditions can thus be examined by expanding F[f + δf] according to eqn (C.6) with the arbitrary perturbation satisfying the homogeneous end conditions δf(a) = δf(b) = 0. As an explicit example, variation of the functional

\[ F_7[f] = \int_a^b dx\, [f'(x)]^2 \tag{C.12} \]

subject to fixed end conditions produces the functional derivatives

\[ \frac{\delta F_7[f]}{\delta f(x)} = -2f''(x), \qquad \frac{\delta^2 F_7[f]}{\delta f(x)\,\delta f(x')} = 2\frac{d}{dx}\frac{d}{dx'}\delta(x-x') \tag{C.13} \]

A similar approach can be used to define and compute functional derivatives for functionals of multivariate functions. For example, variation of F_5[f] subject to fixed conditions at the boundary of the r domain leads to the functional derivative

\[ \frac{\delta F_5[f]}{\delta f(\mathbf{r})} = 2f(\mathbf{r}) + 4[f(\mathbf{r})]^3 - 2\nabla^2 f(\mathbf{r}) \tag{C.14} \]

A variety of other useful functional differentiation formulas can be derived. A particularly important relation is the chain rule

\[ \frac{\delta F[g]}{\delta f(x)} = \int dx'\, \frac{\delta F[g]}{\delta g(x')}\, \frac{\delta g(x')}{\delta f(x)} \tag{C.15} \]

Another important expression is

\[ \frac{\delta f(x)}{\delta f(x')} = \delta(x-x') \tag{C.16} \]

Finally, this expression, combined with the choice of F = f(x″) in eqn (C.15), leads to

\[ \int dx'\, \frac{\delta f(x)}{\delta g(x')}\, \frac{\delta g(x')}{\delta f(x'')} = \delta(x-x'') \tag{C.17} \]

which shows that δf/δg and δg/δf are functional inverses.
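A finite-dimensional analogue (an added illustration, not from the text) makes the inverse relation concrete: for a linear relation g = Kf between grid vectors, δg_i/δf_j is the matrix K and δf_i/δg_j is K⁻¹, and composing them yields the identity matrix, the Kronecker analogue of the Dirac delta. The kernel below is an arbitrary well-conditioned choice:

```python
import numpy as np

# Discrete analogue of the functional-inverse relation: for g = K f,
# the "derivative" matrices K and K^{-1} compose to the identity,
# just as ∫dx' (δf/δg)(δg/δf) = δ(x - x'').
rng = np.random.default_rng(0)
N = 50
# Arbitrary invertible kernel; eigenvalues stay near 1 by construction
K = np.eye(N) + 0.1 * rng.standard_normal((N, N))
Kinv = np.linalg.inv(K)
identity = K @ Kinv             # ≈ Kronecker delta δ_ij
```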

C.3 Min–max problems

An important application of the calculus of functionals is to optimization problems. A typical problem involves finding the function f(x) belonging to some function space that minimizes or maximizes a prescribed functional F[f]. For example, in the classical density functional theory of inhomogeneous fluids (Rowlinson and Widom, 1989), f corresponds to a density field and F to a free energy functional. The free energy is minimized for the equilibrium configuration of the density. In Chapter 5, a similar variational principle is used to derive the self-consistent field theory (SCFT) of inhomogeneous polymeric fluids.

The theoretical basis for solving functional min–max problems is the Taylor expansion of eqn (C.6). The first variation δF of a functional F[f] that is subjected to an arbitrary infinitesimal perturbation δf(x) over x ∈ [a, b] is defined by

\[ \delta F \equiv F[f+\delta f] - F[f] = \int_a^b dx\, \frac{\delta F[f]}{\delta f(x)}\, \delta f(x) \tag{C.18} \]

The functional F[f] attains an extremum value, i.e. a maximum, minimum, or saddle point, when f(x) is adjusted to a function f*(x) such that the first variation vanishes. Because the perturbation δf(x) is arbitrary, this condition implies that the extremum function f*(x) is determined by the vanishing of the first functional derivative

\[ \left. \frac{\delta F[f]}{\delta f(x)} \right|_{f=f^*} = 0 \tag{C.19} \]

Thus, just as we locate minima or maxima of an ordinary function f(x) by setting the first derivative f′(x) to zero, the extremum of a functional corresponds to the function f*(x) that causes the first functional derivative to vanish. Equation (C.19) is commonly referred to as an Euler–Lagrange equation and, depending on the form of the functional, may be an ordinary differential, a partial differential, or an integral equation for f*(x). For example, the Euler–Lagrange equation that arises from variation of the functional F_3[f] subject to the fixed end conditions f(a) = f_a, f(b) = f_b is the ordinary differential equation

\[ -\frac{d^2}{dx^2} f^*(x) + f^*(x) + 2[f^*(x)]^3 = 0 \tag{C.20} \]

This equation has a unique solution f*(x) that depends on the prescribed boundary conditions. Correspondingly, the extremum of the functional F_5[f] subject to Dirichlet or periodic boundary conditions on f(r) satisfies the partial differential equation

\[ -\nabla^2 f^*(\mathbf{r}) + f^*(\mathbf{r}) + 2[f^*(\mathbf{r})]^3 = 0 \tag{C.21} \]
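The one-dimensional Euler–Lagrange equation for F_3, namely −f*″(x) + f*(x) + 2[f*(x)]³ = 0, is straightforward to solve numerically. Below is a sketch (not from the text) using SciPy's boundary-value solver, with the end conditions f(0) = 0, f(1) = 1 chosen arbitrarily for illustration:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Solve the Euler–Lagrange ODE  -f'' + f + 2 f³ = 0, i.e. f'' = f + 2 f³,
# on [0, 1] with fixed ends f(0) = 0, f(1) = 1 (arbitrary illustrative values).
def rhs(x, y):
    # y[0] = f, y[1] = f'
    return np.vstack([y[1], y[0] + 2.0 * y[0]**3])

def bc(ya, yb):
    # residuals of the Dirichlet end conditions
    return np.array([ya[0] - 0.0, yb[0] - 1.0])

xmesh = np.linspace(0.0, 1.0, 51)
yguess = np.vstack([xmesh, np.ones_like(xmesh)])   # linear initial guess
sol = solve_bvp(rhs, bc, xmesh, yguess)
fstar = sol.sol(xmesh)[0]                          # the extremal function f*(x)
```

The nonlinearity is monotone here, so the collocation iteration converges readily from the linear initial guess.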

The Euler–Lagrange equation (C.19) provides a condition for determining a function f*(x) that corresponds to an extremum of a prescribed functional F[f]. In order to establish whether that extremum is a maximum, minimum, or saddle point, the second functional derivative must be analyzed. This analysis involves the solution of the eigenvalue problem

\[ \int_a^b dx'\, \left. \frac{\delta^2 F[f]}{\delta f(x)\,\delta f(x')} \right|_{f=f^*} \varphi_i(x') = \Lambda_i\, \varphi_i(x) \tag{C.22} \]

If the eigenvalues Λ_i are all positive, then f* represents a local minimum of F[f]. Correspondingly, if Λ_i < 0 for all i, f* is a local maximum of F[f]. In the intermediate case of eigenvalues of mixed sign, we conclude that f* corresponds to a saddle point of the functional. Establishing whether a particular extremum is a global, rather than local, minimum or maximum is a more difficult problem in optimization theory (Nocedal and Wright, 1999) that remains unsolved for arbitrary F[f], although physical intuition is often helpful in specific contexts.
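As a discrete illustration of this criterion (an added sketch, not from the text), consider F_7[f] = ∫(f′)² dx with clamped ends: the Hessian with respect to the interior grid values is (2/Δx) times the standard tridiagonal second-difference matrix, all of whose eigenvalues are positive, confirming that f* = 0 is a minimum:

```python
import numpy as np

# Hessian of the discretized F_7[f] = Σ ((f_{i+1} - f_i)/Δx)² Δx with
# clamped ends f_0 = f_{N+1} = 0:  H = (2/Δx) · tridiag(-1, 2, -1).
N, dx = 100, 0.01                # arbitrary grid choices
H = (2.0 / dx) * (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1))
eigvals = np.linalg.eigvalsh(H)  # discrete analogues of the Λ_i
all_positive = bool(np.all(eigvals > 0))  # True ⇒ f* = 0 is a local minimum
```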

As a final note, the above optimization scheme can be extended to include linear and nonlinear constraints through the introduction of Lagrange multipliers. Interested readers should consult the literature (Riley et al., 1998; Weinstock, 1974).

C.4 Functional integration

In addition to taking the derivative of a functional with respect to a function, it is also possible to define the integral of a functional over all functions belonging to some function space (Feynman and Hibbs, 1965; Simon, 1979; Zee, 2003). Such an integral is referred to as a functional integral, or more specifically a path integral, if the function f(x) corresponds to the trajectory q(t) of a particle at various times t or the configuration r(s) of a polymer molecule at various contour locations s.

A generic functional integral will be written in the form

\[ I = \int \mathcal{D}f\, F[f] \tag{C.23} \]
where the notation ∫ D f is understood to represent an integral over all functions f(x) defined over x ∈ [a, b] belonging to some function space. The relevant function space is determined by smoothness and boundary conditions on f. For example, if we were interested in summing over all possible shapes of a polymer that is clamped at both ends, eqn (C.23) could be interpreted as an integral over all continuous and infinitely differentiable functions f(x) that satisfy f(a) = f(b) = 0.

How does one define such a functional integral? One approach is to discretize the function over the interval. In the clamped polymer example, a sensible strategy would be to sample f(x) at a set of N equally spaced interior points, x_i = a + i(b − a)/(N + 1), i = 1, 2, …, N. The function can thus be approximated by an N-vector f = (f_1, f_2, …, f_N) with components f_i ≡ f(x_i). For a prescribed N, the N-dimensional integral

\[ I_N = \int df_1 \cdots \int df_N\, F(\mathbf{f}) \tag{C.24} \]

can thus be viewed as an approximation to the functional integral I. In this equation we use the conventional notation F(f) of a multivariate function to indicate the discrete approximation to a functional F[f]. The formal transition from an ordinary multi-dimensional integral to an infinite-dimensional functional integral is through the limit lim_{N→∞} I_N = I. Depending on the form of the functional F[f], this limit may or may not exist. However, in the statistical mechanics of classical fields, we are normally interested in average quantities that can be expressed as the ratio of two functional integrals. In such cases the limiting procedure usually converges to a finite result for the ratio, even if the limits of the individual integrals do not exist.
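This point can be demonstrated explicitly for a Gaussian integrand (an added illustration): the bare integral of exp(−|f|²/2) over the N-vector f is (2π)^{N/2}, which grows without bound as N increases, yet the average ⟨f_1²⟩, a ratio of two such integrals, equals 1 for every N. A Monte Carlo sketch (sample size arbitrary):

```python
import numpy as np

# For F(f) = exp(-|f|²/2), the bare integral I_N = (2π)^{N/2} diverges
# with N, but the ratio <f_1²> = ∫df f_1² e^{-|f|²/2} / ∫df e^{-|f|²/2}
# equals 1 for every N.
rng = np.random.default_rng(1)
Ns = (1, 10, 100)
bare_I = [(2.0 * np.pi) ** (N / 2) for N in Ns]   # grows without bound
avgs = []
for N in Ns:
    samples = rng.standard_normal((200_000, N))   # draws weighted by e^{-|f|²/2}
    avgs.append(float(np.mean(samples[:, 0] ** 2)))
```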

A second way to interpret a functional integral such as eqn (C.23) is through a spectral or normal-mode representation of the function. For example, in the tethered polymer situation with f(a) = f(b) = 0, a Fourier sine series representation would be appropriate:

\[ f(x) = \sum_{n=1}^{\infty} a_n \sin\!\left( \frac{n\pi(x-a)}{b-a} \right) \tag{C.25} \]

The functional integral would then be interpreted as an integral over all the Fourier coefficients a = (a_1, a_2, a_3, …) according to

\[ I = \int \left[ \prod_{n=1}^{\infty} da_n \right] F(\mathbf{a}) \tag{C.26} \]
Again, the expression on the right may not exist, but the ratio of two such formulas, corresponding to a thermodynamic average, will usually be well defined.

With the exception of Gaussian integrals, very few functional integrals can be evaluated analytically. Two important Gaussian integral formulas that can be viewed as infinite-dimensional versions of eqns (B.12) and (B.13) are

\[ \frac{\displaystyle \int \mathcal{D}f\, \exp\!\left[ -\tfrac{1}{2}\int dx \int dx'\, f(x) A(x,x') f(x') + \int dx\, J(x) f(x) \right]}{\displaystyle \int \mathcal{D}f\, \exp\!\left[ -\tfrac{1}{2}\int dx \int dx'\, f(x) A(x,x') f(x') \right]} = \exp\!\left( \frac{1}{2}\int dx \int dx'\, J(x) A^{-1}(x,x') J(x') \right) \tag{C.27} \]

\[ \frac{\displaystyle \int \mathcal{D}f\, \exp\!\left[ -\tfrac{1}{2}\int dx \int dx'\, f(x) A(x,x') f(x') + i\int dx\, J(x) f(x) \right]}{\displaystyle \int \mathcal{D}f\, \exp\!\left[ -\tfrac{1}{2}\int dx \int dx'\, f(x) A(x,x') f(x') \right]} = \exp\!\left( -\frac{1}{2}\int dx \int dx'\, J(x) A^{-1}(x,x') J(x') \right) \tag{C.28} \]

where A(x, x′) is assumed to be real, symmetric, and positive definite. The functional inverse of A, A⁻¹, is defined in accordance with eqn (C.17) by

\[ \int dx'\, A(x,x') A^{-1}(x',x'') = \delta(x-x'') \tag{C.29} \]

When these formulas are applied to interacting particle models in classical statistical physics, such as those described in Chapter 4, J represents a microscopic density operator, and A⁻¹ is a pair potential function. The function f is an auxiliary potential that serves to decouple particle–particle interactions. In this context, eqns (C.27) and (C.28) are generically referred to as Hubbard–Stratonovich transformations (Chaikin and Lubensky, 1995).
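The first Gaussian formula, eqn (C.27), admits a finite-dimensional check (an added sketch): with f an N-vector and A a positive definite matrix, the left-hand side is the expectation of exp(J·f) over f ~ N(0, A⁻¹), which by completing the square equals exp(J·A⁻¹J/2). The matrix A and source J below are arbitrary choices:

```python
import numpy as np

# Monte Carlo check of the discrete Gaussian identity
#   E[exp(J·f)] = exp(J·A⁻¹·J / 2)   for f ~ N(0, A⁻¹).
rng = np.random.default_rng(2)
N = 3
M = rng.standard_normal((N, N))
A = M @ M.T + N * np.eye(N)       # real, symmetric, positive definite
J = np.array([0.3, -0.2, 0.1])    # arbitrary source vector
Ainv = np.linalg.inv(A)

samples = rng.multivariate_normal(np.zeros(N), Ainv, size=400_000)
lhs = float(np.mean(np.exp(samples @ J)))   # ratio of Gaussian integrals
rhs = float(np.exp(0.5 * J @ Ainv @ J))     # closed-form right-hand side
```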