# APPENDIX C CALCULUS OF FUNCTIONALS

In order to understand the field-based approach to modelling inhomogeneous fluids, it is necessary to have a basic familiarity with the calculus of functionals (Volterra, 1959). This broad subject includes topics such as functional differentiation, functional integration, and min–max problems. These are typically discussed in texts on functional analysis, calculus of variations, optimization theory, and field theory. Here, we provide a brief tutorial. Physically oriented references where more details can be found include (Fetter and Walecka, 1980; Hansen and McDonald, 1986; Parr and Yang, 1989; Zee, 2003).

# C.1 Functionals

In the simplest case, a *functional* is a mapping between a function *f*(*x*) defined over some interval *x* ∈ [*a, b*] and a number *F* that generally depends on the values of the function over *all* points of the interval. For example, a simple linear functional is just the integral

$$F_1[f] = \int_a^b dx\, f(x), \tag{C.1}$$

which identifies *F* _{1} with the integral of *f*(*x*) over *a* ≤ *x* ≤ *b*. We adopt the functional “square bracket” notation *F* _{1}[*f*] to indicate that *F* _{1} depends on *f*(*x*) at *all* points over the interval. An example of a *nonlinear* functional is

$$F_2[f] = \int_a^b dx\, [f(x)]^2. \tag{C.2}$$

Both *F* _{1}[*f*] and *F* _{2}[*f*] are referred to as *local* functionals because values of *f*(*x*) for different *x* contribute independently (additively) to the value of the functional.
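As a concrete numerical illustration (the discretization, the helper `trap`, and the test function *f*(*x*) = *x* are our choices, not the text's), local functionals such as *F* _{1} and *F* _{2} can be evaluated by trapezoidal quadrature:

```python
import numpy as np

# Minimal sketch: evaluate the local functionals F1[f] = ∫ f dx and
# F2[f] = ∫ f² dx by trapezoidal quadrature for f(x) = x on [0, 1].
# (Illustrative discretization only; not part of the original text.)
a, b = 0.0, 1.0
x = np.linspace(a, b, 1001)
h = x[1] - x[0]
trap = lambda y: h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

f = x
F1 = trap(f)       # exact value: 1/2 (trapezoid is exact for linear f)
F2 = trap(f**2)    # exact value: 1/3, up to O(h²) quadrature error
```

Any quadrature rule works here; the point is only that a functional maps the whole sampled function to a single number.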

More generally, a functional can depend on the function and its derivatives over the interval. Such functionals are referred to as *non-local*. For example,

$$F_3[f] = \int_a^b dx\, \left\{ [f(x)]^2 + [f'(x)]^2 \right\} \tag{C.3}$$

and

$$F_4[f] = \int_a^b dx \int_a^b dx'\, K(x,x')\, f(x)\, f(x'), \tag{C.4}$$

where *K*(*x, x′*) is an arbitrary function of *x* and *x*′.

Functionals can also be defined for multi-variable functions *f*(**r**) that, e.g., could represent the chemical potential fields *w*(**r**) that are central to the subject of this monograph. For example, the extension of eqn (C.3) to functions defined in three dimensions is the Landau–Ginzburg “square gradient” functional

$$F_5[f] = \int d\mathbf{r}\, \left\{ [f(\mathbf{r})]^2 + |\nabla f(\mathbf{r})|^2 \right\}. \tag{C.5}$$

# C.2 Functional differentiation

The concept of differentiation of functionals is a straightforward extension of the notion of partial differentiation for multi-variable functions. Consider subjecting a function *f*(*x*) defined over *x* ∈ [*a, b*] to an arbitrary small perturbation δ*f*(*x*). The perturbation δ*f*(*x*) is itself a function defined over the same interval of *x*. For some functional *F*[*f*], we then consider its value *F*[*f* + δ*f*] when *f*(*x*) → *f*(*x*)+δ*f*(*x*). This quantity can be Taylor-expanded in powers of the perturbation δ*f*(*x*) to yield the general form

$$F[f+\delta f] = F[f] + \int_a^b dx\, \Gamma_1(x)\,\delta f(x) + \frac{1}{2!} \int_a^b dx \int_a^b dx'\, \Gamma_2(x,x')\,\delta f(x)\,\delta f(x') + \cdots, \tag{C.6}$$

where the coefficient functions Γ_{i} represent Taylor expansion coefficients. For example, in the case of the functional *F* _{2}[*f*], it is straightforward to show that Γ_{1}(*x*) = 2*f*(*x*) and Γ_{2}(*x, x′*) = 2δ(*x* − *x′*), where δ(*x* − *x′*) is the Dirac delta function defined by ∫ *dx′ f*(*x′*)δ(*x* − *x′*) = *f*(*x*).

The coefficient functions Γ_{i} are typically written differently so that eqn (C.6) is more suggestive of a *functional Taylor series*. Namely, the first and second *functional derivatives* of *F* with respect to *f*(*x*) are defined by

$$\frac{\delta F[f]}{\delta f(x)} \equiv \Gamma_1(x), \tag{C.7}$$

$$\frac{\delta^2 F[f]}{\delta f(x)\,\delta f(x')} \equiv \Gamma_2(x,x'). \tag{C.8}$$

The first functional derivative δ*F*[*f*]/δ*f*(*x*) is a function of *x* that dictates the rate of change of the functional *F*[*f*] when *f*(*x*) is perturbed at the point *x*. Similarly, the second derivative δ^{2}*F*[*f*]/δ*f*(*x*)δ*f*(*x′*) is a function of *x* and *x′* that expresses the rate of change of *F*[*f*] when *f*(*x*) is simultaneously perturbed at points *x* and *x′*.

By explicitly working out the Taylor expansion of eqn (C.6) for a prescribed functional *F*[*f*], it is possible to derive functional differentiation formulas that closely resemble formulas from ordinary differential calculus. For example, the functional *F* _{4}[*f*] of eqn (C.4) for a symmetric kernel *K*(*x, x′*) = *K*(*x′, x*) has derivatives

$$\frac{\delta F_4[f]}{\delta f(x)} = 2\int_a^b dx'\, K(x,x')\,f(x'), \tag{C.9}$$

$$\frac{\delta^2 F_4[f]}{\delta f(x)\,\delta f(x')} = 2K(x,x'). \tag{C.10}$$

The computation of functional derivatives according to eqns (C.6)–(C.8) for functionals such as *F* _{3}[*f*] that involve derivatives of *f*(*x*) proceeds by one or more integrations by parts. These in turn require *boundary conditions* to be imposed on the arbitrary perturbation δ*f*(*x*). For example, we might want to restrict attention to functions *f*(*x*) that satisfy the fixed end (Dirichlet) conditions *f*(*a*) = *f* _{a}, *f*(*b*) = *f* _{b}. The variations of a functional *F*[*f*] subject to these fixed end conditions can thus be examined by expanding *F*[*f* + δ*f*] according to eqn (C.6) with the arbitrary perturbation satisfying the *homogeneous* end conditions δ*f*(*a*) = δ*f*(*b*) = 0. As an explicit example, variation of the functional *F* _{3}[*f*] of eqn (C.3) subject to these conditions yields, after a single integration by parts, the first functional derivative

$$\frac{\delta F_3[f]}{\delta f(x)} = 2f(x) - 2f''(x).$$

A similar approach can be used to define and compute functional derivatives for functionals of multivariate functions. For example, variation of *F* _{5}[*f*] subject to fixed conditions at the boundary of the **r** domain leads to the functional derivative

$$\frac{\delta F_5[f]}{\delta f(\mathbf{r})} = 2f(\mathbf{r}) - 2\nabla^2 f(\mathbf{r}).$$

A variety of other useful functional differentiation formulas can be derived. A particularly important relation is the *chain rule*

$$\frac{\delta F}{\delta f(x)} = \int dx'\, \frac{\delta F}{\delta g(x')}\,\frac{\delta g(x')}{\delta f(x)}, \tag{C.15}$$

which applies when *F* depends on *f* through an intermediate function *g*. Noting that δ*f*(*x″*)/δ*f*(*x*) = δ(*x* − *x″*), the choice *F* = *f*(*x″*) in eqn (C.15) leads to

$$\int dx'\, \frac{\delta f(x'')}{\delta g(x')}\,\frac{\delta g(x')}{\delta f(x)} = \delta(x - x''), \tag{C.17}$$

which shows that δ*f*/δ*g* and δ*g*/δ*f* are functional inverses.
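The functional-inverse property has a transparent matrix analogue. In the sketch below, a linear map *M* is an arbitrary invertible stand-in for δ*g*/δ*f* (our illustrative construction, not from the text); its Jacobian and the Jacobian of the inverse map multiply to the identity, the discrete counterpart of δ(*x* − *x″*):

```python
import numpy as np

# Discrete illustration of eqn (C.17): for a linear transformation
# g_i = Σ_j M_ij f_j (a stand-in for g(x) = ∫ dx' M(x,x') f(x')),
# the Jacobians δg/δf = M and δf/δg = M⁻¹ compose to the identity
# matrix, the discrete analogue of the Dirac delta.
rng = np.random.default_rng(1)
n = 40
M = rng.normal(size=(n, n)) + n * np.eye(n)  # well-conditioned by construction
dg_df = M
df_dg = np.linalg.inv(M)

product = df_dg @ dg_df                      # ≈ identity matrix
```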

# C.3 Min–max problems

An important application of the calculus of functionals is to optimization problems. A typical problem involves finding the function *f*(*x*) belonging to some function space that minimizes or maximizes a prescribed functional *F*[*f*]. For example, in the classical density functional theory of inhomogeneous fluids (Rowlinson and Widom, 1989), *f* corresponds to a density field and *F* to a free energy functional. The free energy is minimized for the equilibrium configuration of the density. In Chapter 5, a similar variational principle is used to derive the self-consistent field theory (SCFT) of inhomogeneous polymeric fluids.

The theoretical basis for solving functional min–max problems is the Taylor expansion of eqn (C.6). The *first variation* δ*F* of a functional *F*[*f*] that is subjected to an arbitrary infinitesimal perturbation δ*f*(*x*) over *x* ∈ [*a, b*] is defined by

$$\delta F \equiv \int_a^b dx\, \frac{\delta F[f]}{\delta f(x)}\,\delta f(x). \tag{C.18}$$

The functional *F*[*f*] attains an extremum value, i.e. a maximum, minimum, or saddle point, when *f*(*x*) is adjusted to a function *f**(*x*) such that the first variation vanishes. Because the perturbation δ*f*(*x*) is arbitrary, this condition implies that the extremum function *f**(*x*) is determined by the vanishing of the first functional derivative:

$$\left. \frac{\delta F[f]}{\delta f(x)} \right|_{f=f^*} = 0. \tag{C.19}$$

Just as the extremum of an ordinary function *f*(*x*) is located by setting the first derivative f′(*x*) to zero, the extremum of a functional corresponds to the function *f**(*x*) that causes the first functional derivative to vanish. Equation (C.19) is commonly referred to as an *Euler–Lagrange equation* and may be an ordinary differential, a partial differential, or an integral equation to solve for *f**(*x*), depending on the form of the functional. For example, the Euler–Lagrange equation that arises from variation of the functional *F* _{3}[*f*] subject to the fixed end conditions *f*(*a*) = *f* _{a}, *f*(*b*) = *f* _{b} is the *ordinary* differential equation

$$\frac{d^2 f^*(x)}{dx^2} - f^*(x) = 0, \tag{C.20}$$

which has a unique solution *f**(*x*) that depends on the prescribed boundary conditions. Correspondingly, the extremum of the functional *F* _{5}[*f*] subject to Dirichlet or periodic boundary conditions on *f*(**r**) satisfies the *partial* differential equation

$$\nabla^2 f^*(\mathbf{r}) - f^*(\mathbf{r}) = 0. \tag{C.21}$$
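Assuming the square-gradient form of *F* _{3} given in eqn (C.3), the Euler–Lagrange equation *f″* = *f* with fixed ends has a closed-form solution built from hyperbolic sines. The sketch below (end values chosen arbitrarily) verifies that it satisfies the boundary conditions and the ODE:

```python
import numpy as np

# Sketch: the Euler-Lagrange equation f'' = f with fixed ends
# f(a) = fa, f(b) = fb is solved exactly by a combination of sinh terms.
a, b, fa, fb = 0.0, 1.0, 1.0, 2.0
x = np.linspace(a, b, 401)

f_star = (fa * np.sinh(b - x) + fb * np.sinh(x - a)) / np.sinh(b - a)

# Verify the ODE residual f'' - f ≈ 0 at interior points by
# central finite differences.
h = x[1] - x[0]
f2 = (f_star[2:] - 2 * f_star[1:-1] + f_star[:-2]) / h**2
residual = np.max(np.abs(f2 - f_star[1:-1]))
```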

The Euler–Lagrange equation (C.19) provides a condition for determining a function *f**(*x*) that corresponds to an extremum of a prescribed functional *F*[*f*]. In order to establish whether that extremum is a maximum, minimum, or saddle point, the second functional derivative must be analyzed. This analysis involves the solution of the eigenvalue problem

$$\int_a^b dx'\, \left. \frac{\delta^2 F[f]}{\delta f(x)\,\delta f(x')} \right|_{f=f^*} \psi_i(x') = \Lambda_i\, \psi_i(x), \tag{C.22}$$

where ψ_{i}(*x*) and Λ_{i} denote the eigenfunctions and eigenvalues, respectively.

If the eigenvalues Λ_{i} are all positive, then *f** represents a *local minimum* of *F*[*f*]. Correspondingly, if Λ_{i} < 0 for all *i*, *f** is a *local maximum* of *F*[*f*]. In the intermediate case of eigenvalues of mixed sign, we conclude that *f** corresponds to a *saddle point* of the functional. Establishing whether a particular extremum is a *global*, rather than local, minimum or maximum is a more difficult problem in optimization theory (Nocedal and Wright, 1999) that remains unsolved for arbitrary *F*[*f*], although physical intuition is often helpful in specific contexts.

As a final note, the above optimization scheme can be extended to include linear and nonlinear *constraints* through the introduction of Lagrange multipliers. Interested readers should consult the literature (Riley *et al.*, 1998; Weinstock, 1974).

# C.4 Functional integration

In addition to taking the derivative of a functional with respect to a function, it is also possible to define the integral of a functional over all functions belonging to some function space (Feynman and Hibbs, 1965; Simon, 1979; Zee, 2003). Such an integral is referred to as a *functional integral*, or more specifically a *path integral*, if the function *f*(*x*) corresponds to the trajectory *q*(*t*) of a particle at various times *t* or the configuration **r**(*s*) of a polymer molecule at various contour locations *s*.

A generic functional integral will be written in the form

$$I = \int \mathcal{D}f\, \exp\left(-F[f]\right), \tag{C.23}$$

where the notation ∫ *Df* is understood to represent an integral over all functions *f*(*x*) defined over *x* ∈ [*a, b*] belonging to some function space. The relevant function space is determined by smoothness and boundary conditions on *f*. For example, if we were interested in summing over all possible shapes of a polymer that is clamped at both ends, eqn (C.23) could be interpreted as an integral over all continuous and infinitely differentiable functions *f*(*x*) that satisfy *f*(*a*) = *f*(*b*) = 0.

How does one define such a functional integral? One approach is to *discretize* the function over the interval. In the clamped polymer example, a sensible strategy would be to sample *f*(*x*) at a set of *N* equally spaced interior points, *x* _{i} = *a* + *i*(*b* − *a*)/(*N* + 1), *i* = 1, 2, …, *N*. The function can thus be approximated by an *N*-vector **f** = (*f* _{1}, *f* _{2}, …, *f* _{N}) with components *f* _{i} ≡ *f*(*x* _{i}). For a prescribed *N*, the *N*-dimensional integral

$$I_N = \int_{-\infty}^{\infty} df_1 \int_{-\infty}^{\infty} df_2 \cdots \int_{-\infty}^{\infty} df_N\, \exp\left[-F(\mathbf{f})\right] \tag{C.24}$$

can be used to approximate *I*. In this equation we use the conventional notation *F*(**f**) of a *multivariate function* to indicate the discrete approximation to a functional *F*[*f*]. The formal transition from an ordinary multi-dimensional integral to an infinite-dimensional functional integral is through the limit lim_{N→∞} *I* _{N} = *I*. Depending on the form of the functional *F*[*f*], this limit may or may not exist. However, in the statistical mechanics of classical fields, we are normally interested in *average* quantities that can be expressed as the *ratio* of two functional integrals. In such cases the limiting procedure usually converges to a finite result for the ratio, even if the limits of the individual integrals do not exist.

A second way to interpret a functional integral such as eqn (C.23) is through a spectral or normal-mode representation of the function. For example, in the tethered polymer situation with *f*(*a*) = *f*(*b*) = 0, a Fourier sine series representation would be appropriate:

$$f(x) = \sum_{n=1}^{\infty} a_n \sin\!\left( \frac{n\pi (x-a)}{b-a} \right). \tag{C.25}$$

The functional integral over *f* can then be interpreted as an infinite-dimensional integral over the vector of Fourier coefficients **a** = (*a* _{1}, *a* _{2}, *a* _{3}, …) according to

$$I = \int da_1 \int da_2 \int da_3 \cdots \exp\left[-F(\mathbf{a})\right], \tag{C.26}$$

where *F*(**a**) denotes the functional *F*[*f*] expressed in terms of the Fourier coefficients.
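As a sketch of this spectral route (the clamped test function and the mode cutoff are illustrative choices, not from the text), the sine coefficients can be computed by quadrature and the truncated series checked against the original function:

```python
import numpy as np

# Spectral representation (C.25): expand a clamped function
# f(a) = f(b) = 0 in a Fourier sine series and confirm that the
# truncated series reconstructs it.
a, b, n_modes = 0.0, 1.0, 50
x = np.linspace(a, b, 2001)
f = x * (1 - x) * np.exp(x)          # arbitrary smooth clamped function

h = x[1] - x[0]
trap = lambda y: h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)
basis = lambda n: np.sin(n * np.pi * (x - a) / (b - a))

# a_n = (2/(b-a)) ∫ f(x) sin(nπ(x-a)/(b-a)) dx
coeffs = [2.0 / (b - a) * trap(f * basis(n)) for n in range(1, n_modes + 1)]
f_rec = sum(c * basis(n) for n, c in zip(range(1, n_modes + 1), coeffs))

err = np.max(np.abs(f - f_rec))      # truncation error, decays with n_modes
```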

With the exception of Gaussian integrals, very few functional integrals can be evaluated analytically. Two important Gaussian integral formulas that can be viewed as infinite-dimensional versions of eqns (B.12) and (B.13) are

$$\int \mathcal{D}f\, \exp\!\left( -\frac{1}{2} \int dx \int dx'\, f(x)\,A(x,x')\,f(x') \right) \propto (\det A)^{-1/2} \tag{C.27}$$

and

$$\int \mathcal{D}f\, \exp\!\left( -\frac{1}{2} \int dx \int dx'\, f(x)\,A(x,x')\,f(x') + \int dx\, J(x)\,f(x) \right) \propto (\det A)^{-1/2} \exp\!\left( \frac{1}{2} \int dx \int dx'\, J(x)\,A^{-1}(x,x')\,J(x') \right), \tag{C.28}$$

where the kernel *A*(*x; x′*) is assumed to be real, symmetric, and positive definite. The functional inverse of *A*, *A* ^{−1}, is defined in accordance with eqn (C.17) by

$$\int dx'\, A(x,x')\,A^{-1}(x',x'') = \delta(x - x'').$$

In the statistical field theory models considered in this monograph, *J* represents a microscopic density operator, and *A* ^{−1} is a pair potential function. The function *f* is an auxiliary potential that serves to decouple particle–particle interactions. In this context, eqns (C.27) and (C.28) are generically referred to as *Hubbard–Stratonovich transformations* (Chaikin and Lubensky, 1995).
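The content of these formulas is already present in the finite-dimensional matrix identity they generalize (cf. eqns (B.12) and (B.13)). The sketch below (the particular 2 × 2 matrix and source vector are arbitrary illustrative choices) checks the identity ∫ *d*^{N}**f** exp(−½ **f**ᵀ*A***f** + **J**ᵀ**f**) = (2π)^{N/2}(det *A*)^{−1/2} exp(½ **J**ᵀ*A*^{−1}**J**) by direct quadrature for *N* = 2:

```python
import numpy as np

# Finite-dimensional (N = 2) check of the Gaussian formulas behind
# eqns (C.27)-(C.28), for a real, symmetric, positive-definite A.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
J = np.array([0.3, -0.2])

# Direct quadrature of the two-dimensional Gaussian integral.
g = np.linspace(-10.0, 10.0, 501)
dg = g[1] - g[0]
U1, U2 = np.meshgrid(g, g, indexing="ij")
quad_form = A[0, 0] * U1**2 + 2 * A[0, 1] * U1 * U2 + A[1, 1] * U2**2
integrand = np.exp(-0.5 * quad_form + J[0] * U1 + J[1] * U2)
numeric = integrand.sum() * dg**2

# Closed form: (2π)^{N/2} (det A)^{-1/2} exp(½ Jᵀ A⁻¹ J), with N = 2.
exact = (2 * np.pi) * np.linalg.det(A) ** -0.5 \
        * np.exp(0.5 * J @ np.linalg.solve(A, J))
```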