## Marc Mézard and Andrea Montanari

Print publication date: 2009

Print ISBN-13: 9780198570837

Published to Oxford Scholarship Online: September 2009

DOI: 10.1093/acprof:oso/9780198570837.001.0001


# (p.541) Appendix A Symbols and notation

Source:
Information, Physics, and Computation
Publisher:
Oxford University Press

In this appendix, we summarize the conventions that we have adopted throughout the book for symbols and notation. Secs. A.1 and A.2 deal with equivalence relations and orders of growth. Sec. A.3 presents the notation used in combinatorics and probability. The table in Section A.4 gives the main mathematical notation, and that in Section A.5 gives the notation for information theory. The table in Section A.6 summarizes the notation used for factor graphs and graph ensembles. The table in Section A.7 focuses on the notation used in message passing, belief propagation and survey propagation, and the cavity method.

# A.1 Equivalence relations

As usual, the symbol ‘=’ denotes equality. We also use ‘≡’ for definitions and ‘≈’ for ‘numerically close to’. For instance, we may say that the Euler–Mascheroni constant is given by

(A.1)
$$\gamma \equiv \lim_{n\to\infty}\Big(\sum_{k=1}^{n}\frac{1}{k} - \log n\Big) \approx 0.5772\, .$$
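As a quick numerical illustration (this sketch is not part of the text), the partial sums $\sum_{k \le n} 1/k - \log n$ approach the Euler–Mascheroni constant:

```python
import math

# Numerical sketch: the Euler-Mascheroni constant as the limit of
# H_n - log(n), where H_n = 1 + 1/2 + ... + 1/n is the harmonic sum.
def gamma_approx(n: int) -> float:
    harmonic = sum(1.0 / k for k in range(1, n + 1))
    return harmonic - math.log(n)

print(gamma_approx(10**6))  # ≈ 0.5772
```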

When dealing with two random variables X and Y, we write $X \stackrel{\rm d}{=} Y$ if X and Y have the same distribution. For instance, given n + 1 i.i.d. Gaussian variables $X_0, \dots, X_n$ with zero mean and unit variance, we can write

(A.2)
$$X_0 \stackrel{\rm d}{=} \frac{1}{\sqrt{n}}\,(X_1 + \dots + X_n)\, .$$
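A small Monte Carlo sketch of this equality in distribution (not from the book; sample sizes are arbitrary), comparing the first two sample moments of the rescaled sum with those of a standard Gaussian:

```python
import math
import random

# Sketch of eq. (A.2): the rescaled sum (X_1 + ... + X_n)/sqrt(n) of i.i.d.
# standard Gaussians should match X_0 in distribution; here we compare the
# first two sample moments with those of a standard Gaussian.
random.seed(0)
n, samples = 50, 5000

values = [sum(random.gauss(0.0, 1.0) for _ in range(n)) / math.sqrt(n)
          for _ in range(samples)]

mean = sum(values) / samples
var = sum((v - mean) ** 2 for v in values) / samples

assert abs(mean) < 0.1        # close to E[X_0] = 0
assert abs(var - 1.0) < 0.1   # close to Var[X_0] = 1
```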

We have adopted several equivalence symbols to denote the asymptotic behavior of functions as their argument tends to some limit. For the sake of simplicity, we assume here that the argument is an integer n → ∞. The limit to be considered in each particular case should be clear from the context. We write $f(n) \doteq g(n)$ if f and g are equal ‘to the leading exponential order’ as n → ∞, i.e. if

(A.3)
$$\lim_{n\to\infty} \frac{1}{n} \log \frac{f(n)}{g(n)} = 0\, .$$

For instance, we may write

(A.4)
$$\binom{n}{\lfloor n/2 \rfloor} \doteq 2^n\, .$$
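A numerical sketch (not from the book) of definition (A.3), using the standard pair $f(n) = \binom{n}{\lfloor n/2 \rfloor}$ and $g(n) = 2^n$, which agree to the leading exponential order:

```python
import math

# Sketch: the quantity (1/n) log(f(n)/g(n)) from eq. (A.3) for
# f(n) = binom(n, n//2) and g(n) = 2^n; it decays like (log n)/n.
def exponential_ratio(n: int) -> float:
    log_f = math.log(math.comb(n, n // 2))
    return (log_f - n * math.log(2.0)) / n

print(exponential_ratio(10))     # visibly negative
print(exponential_ratio(10000))  # much closer to 0
```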

(p.542) We write instead $f(n) \sim g(n)$ if f and g are asymptotically equal ‘up to a constant’, i.e. if

(A.5)
$$\lim_{n\to\infty} \frac{f(n)}{g(n)} = C$$
for some constant C ≠ 0. For instance, we have
(A.6)
$$\binom{2n}{n} \sim \frac{4^n}{\sqrt{n}}$$
(here $C = 1/\sqrt{\pi}$).

Finally, the symbol ‘≃’ is reserved for asymptotic equality: $f(n) \simeq g(n)$ if

(A.7)
$$\lim_{n\to\infty} \frac{f(n)}{g(n)} = 1\, .$$

(p.543) For instance, we have

(A.8)
$$n! \simeq \sqrt{2\pi n}\, \Big(\frac{n}{e}\Big)^{n}\, .$$
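Stirling's formula can be checked numerically (a sketch, not part of the text): the ratio of the two sides tends to 1 as n grows:

```python
import math

# Sketch: the ratio n! / (sqrt(2*pi*n) * (n/e)^n) approaches 1 as n grows,
# illustrating asymptotic equality in the sense of eq. (A.7).
def stirling_ratio(n: int) -> float:
    approximation = math.sqrt(2.0 * math.pi * n) * (n / math.e) ** n
    return math.factorial(n) / approximation

print(stirling_ratio(5))    # roughly 1.017
print(stirling_ratio(100))  # roughly 1.0008
```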

The symbol ‘≅’ denotes equality up to a normalization constant. If p(·) and q(·) are two measures on the same finite space $\mathcal{X}$ (not necessarily normalized), we write $p(x) \cong q(x)$ if there exists a constant C > 0 such that

(A.9)
$$p(x) = C\, q(x)$$
for any x ∈ $\mathcal{X}$. The definition generalizes straightforwardly to infinite sets $\mathcal{X}$: the Radon–Nikodym derivative between p and q is a positive constant.

# A.2 Orders of growth

We have used a few symbols to denote the order of growth of functions when their arguments tend to some definite limit. For the sake of definiteness, we refer here to functions of an integer n → ∞. As above, the adaptation to any particular context should be straightforward.

We write f(n) = Θ(g(n)), and say that f(n) is of order g(n), if there exist two positive constants $C_1$ and $C_2$ such that

(A.10)
$$C_1\, g(n) \le f(n) \le C_2\, g(n)$$
for any n large enough. For instance, we have
(A.11)
$$\frac{n(n+1)}{2} = \Theta(n^2)\, .$$

We write instead f(n) = o(g(n)) if

(A.12)
$$\lim_{n\to\infty} \frac{f(n)}{g(n)} = 0\, .$$

For instance,

(A.13)
$$\log n = o(\sqrt{n})\, , \qquad \sqrt{n} = o(n)\, .$$

Finally, f(n) = O(g(n)) if there exists a constant C such that

(A.14)
$$|f(n)| \le C\, |g(n)|$$
for any n large enough. For instance,
(A.15)
$$n\,(1 + \sin n) = O(n)\, .$$

Note that both f(n) = Θ‎(g(n)) and f(n) = o(g(n)) imply f(n) = O(g(n)). As the last example shows, the converse is not necessarily true.
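The distinction can also be seen numerically. In this sketch (not from the book), f(n) = n(1 + sin n) is O(n) but neither Θ(n) nor o(n):

```python
import math

# Sketch: f(n) = n*(1 + sin n) satisfies f(n) = O(n), but the ratio
# f(n)/n = 1 + sin(n) neither tends to 0 (so f is not o(n)) nor stays
# bounded away from 0 (so f is not Theta(n)).
ratios = [1.0 + math.sin(n) for n in range(1, 100000)]

assert max(ratios) <= 2.0          # f(n) <= 2n: witnesses f(n) = O(n)
assert min(ratios) < 1e-4          # ratio comes arbitrarily close to 0
assert max(ratios[-1000:]) > 1.9   # ratio keeps returning near 2
```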

# A.3 Combinatorics and probability

The standard notation is used for multinomial coefficients. For any n ≥ 0, l ≥ 2, and $n_1, \dots, n_l \ge 0$ such that $n_1 + \cdots + n_l = n$, we have

(A.16)
$$\binom{n}{n_1, \dots, n_l} = \frac{n!}{n_1!\, \cdots\, n_l!}\, .$$

For binomial coefficients (i.e. for l = 2), the usual shorthand is

(A.17)
$$\binom{n}{k} \equiv \binom{n}{k,\, n-k} = \frac{n!}{k!\,(n-k)!}\, .$$

In combinatorics, certain quantities are most easily described in terms of their generating functions. Given a formal power series f(x), coeff{f(x), $x^n$} denotes the coefficient of the monomial $x^n$ in the series. More formally,

(A.18)
$$f(x) = \sum_{n} f_n\, x^n \quad \Longrightarrow \quad {\rm coeff}\{f(x), x^n\} \equiv f_n\, .$$

For instance,

(A.19)
$${\rm coeff}\{(1+x)^m,\, x^n\} = \binom{m}{n}\, .$$
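Coefficient extraction can be sketched in code (the helper names here are illustrative, not from the book): expanding $(1+x)^m$ by repeated polynomial multiplication recovers the binomial coefficients:

```python
import math

# Sketch: coeff{(1+x)^m, x^n} = binom(m, n), checked by expanding the
# generating function as an explicit coefficient list.
def poly_mul(p, q):
    """Multiply two polynomials stored as coefficient lists."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def coeff(series, n):
    """Return coeff{f(x), x^n} for a coefficient-list representation."""
    return series[n] if n < len(series) else 0

m = 8
f = [1]
for _ in range(m):
    f = poly_mul(f, [1, 1])  # multiply by (1 + x)

assert coeff(f, 3) == math.comb(m, 3)  # 56
assert sum(f) == 2 ** m                # evaluating at x = 1
```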

Some standard random variables are as follows:

• A Bernoulli p variable is a random variable X taking values in {0, 1} such that ℙ(X = 1) = p.

• B(n, p) denotes a binomial random variable with parameters n and p. This is defined as a random variable taking values in {0, …, n} and having a probability distribution

(A.20)
$$\mathbb{P}\{B(n,p) = k\} = \binom{n}{k}\, p^k\, (1-p)^{n-k}\, , \qquad k \in \{0, \dots, n\}\, .$$

(p.544)

• A Poisson random variable X with parameter λ takes non-negative integer values and has the following probability distribution:

(A.21)
$$\mathbb{P}\{X = k\} = e^{-\lambda}\, \frac{\lambda^k}{k!}\, , \qquad k \ge 0\, .$$

The parameter λ‎ is the mean of X.
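These distributions are simple to check numerically. A sketch (not from the book) verifying the normalization and the mean of the Poisson distribution on a truncated support:

```python
import math

# Sketch of eq. (A.21): the Poisson weights e^{-lam} lam^k / k! sum to 1
# and have mean lam; the truncation at k = 100 is numerically negligible
# for lam = 3.
lam = 3.0
pmf = [math.exp(-lam) * lam**k / math.factorial(k) for k in range(100)]

total = sum(pmf)
mean = sum(k * p for k, p in enumerate(pmf))

assert abs(total - 1.0) < 1e-9
assert abs(mean - lam) < 1e-9
```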

Finally, we have used the symbol $\delta_a$ for the Dirac ‘delta function’. This is in fact a measure that attributes unit mass to the point a. In formulae, for any set A,

(A.22)
$$\int_A \delta_a({\rm d}x) = \mathbb{I}(a \in A)\, .$$

# A.4 Summary of mathematical notation

| Symbol | Meaning |
| --- | --- |
| $=$ | Equal. |
| $\equiv$ | Defined as. |
| $\approx$ | Numerically close to. |
| $\stackrel{\rm d}{=}$ | Equal in distribution. |
| $\doteq$ | Equal to the leading exponential order. |
| $\sim$ | Asymptotically equal up to a constant. |
| $\cong$ | Equal up to a normalization constant (for probabilities; see Eq. (14.3)). |
| $\Theta(f)$ | Of the same order as $f$ (see Sec. A.2). |
| $o(f)$ | Grows more slowly than $f$ (see Sec. A.2). |
| $\operatorname{argmax} f(x)$ | The set of values of $x$ where the real-valued function $f$ reaches its maximum. |
| $\lfloor \cdot \rfloor$ | Integer part: $\lfloor x \rfloor$ is the largest integer $n$ such that $n \le x$. |
| $\lceil \cdot \rceil$ | $\lceil x \rceil$ is the smallest integer $n$ such that $n \ge x$. |
| $\mathbb{Z}$ | The set of integer numbers. |
| $\mathbb{R}$ | The set of real numbers. |
| $\beta \downarrow \beta_{\rm c}$ | $\beta$ goes to $\beta_{\rm c}$ through values $> \beta_{\rm c}$. |
| $\beta \uparrow \beta_{\rm c}$ | $\beta$ goes to $\beta_{\rm c}$ through values $< \beta_{\rm c}$. |
| $]a,b[$ | Open interval of real numbers $x$ such that $a < x < b$. |
| $]a,b]$ | Interval of real numbers $x$ such that $a < x \le b$. |
| $\mathbb{F}_2$ | The field of integers modulo 2. |
| $a \oplus b$ | The sum modulo 2 of the two integers $a$ and $b$. |
| $\mathbb{I}(\cdot)$ | Indicator function: $\mathbb{I}(A) = 1$ if the logical statement $A$ is true, and $\mathbb{I}(A) = 0$ if the statement $A$ is false. |
| $A \succeq 0$ | The matrix $A$ is positive semidefinite. |

# (p.545) A.5 Information theory

| Symbol | Meaning |
| --- | --- |
| $H_X$ | Entropy of the random variable $X$ (see Eq. (1.7)). |
| $I_{X,Y}$ | Mutual information of the random variables $X$ and $Y$ (see Eq. (1.25)). |
| $H(p)$ | Entropy of a Bernoulli variable with parameter $p$. |
| $\mathcal{M}(\mathcal{X})$ | Space of probability distributions over a finite set $\mathcal{X}$. |
| $\mathfrak{C}$ | Codebook. |
| ${\rm BMS}(1) \le {\rm BMS}(2)$ | Channel BMS(2) is physically degraded with respect to BMS(1). |
| $\mathfrak{B}$ | Bhattacharya parameter of a channel. |

# A.6 Factor graphs

| Symbol | Meaning |
| --- | --- |
| $\mathbb{G}_N(K,M)$ | Random $K$-factor graph with $M$ function nodes and $N$ variable nodes. |
| $\mathbb{G}_N(K,\alpha)$ | Random $K$-factor graph with $N$ variable nodes; each of the $\binom{N}{K}$ possible function nodes is present independently with probability $N\alpha\big/\binom{N}{K}$ (so that the mean number of function nodes is $M = N\alpha$). |
| $\mathbb{D}_N(\Lambda,P)$ | Degree-constrained random factor graph ensemble. |
| $\mathbb{T}_r(\Lambda,P)$ | Degree-constrained random tree factor graph ensemble. |
| $\mathbb{T}_r(K,\alpha)$ | Shorthand for the random tree factor graph ensemble with $\Lambda(x) = e^{K\alpha(x-1)}$, $P(x) = x^K$. |
| $\Lambda(x)$ | Degree profile of variable nodes. |
| $P(x)$ | Degree profile of function nodes. |
| $\lambda(x)$ | Edge-perspective degree profile of variable nodes. |
| $\rho(x)$ | Edge-perspective degree profile of function nodes. |
| ${\rm B}_{i,r}(F)$ | Neighbourhood of radius $r$ of variable node $i$. |
| ${\rm B}_{i\to a,t}(F)$ | Directed neighbourhood of depth $t$ of the edge $i \to a$. |

# A.7 Cavity and message-passing methods

| Symbol | Meaning |
| --- | --- |
| $\nu_{i\to a}(x_i)$ | BP message (variable to function node). |
| $\hat{\nu}_{a\to i}(x_i)$ | BP message (function to variable node). |
| $\Phi$ | Free entropy. |
| $\mathbb{F}(\nu)$ | Bethe free entropy (as a function of the messages). |
| $\mathbb{F}^{\rm e}(\nu)$ | Bethe energy (as a function of the min-sum messages). |
| $f^{\rm RS}$ | Bethe (RS) free-entropy density. |
| $Q_{i\to a}(\nu)$ | 1RSB cavity message/SP message (variable to function node). |
| $\hat{Q}_{a\to i}(\hat{\nu})$ | 1RSB cavity message/SP message (function to variable node). |
| $x$ | Parisi 1RSB parameter. |
| $\mathfrak{F}(x)$ | Free-entropy density of the auxiliary model counting BP fixed points. |
| $\Sigma(\phi)$ | Complexity. |
| $\mathbb{F}^{\rm RSB}(Q)$ | 1RSB cavity free entropy (Bethe free entropy of the auxiliary model, as a function of the messages). |
| $f^{\rm RSB}$ | 1RSB cavity free-entropy density. |
| $y$ | Zero-temperature Parisi 1RSB parameter ($y = \lim_{\beta\to\infty} \beta x$). |
| $\mathfrak{F}^{\rm e}(y)$ | Free-entropy density of the auxiliary model counting min-sum fixed points. |
| $\Sigma^{\rm e}(e)$ | Energetic complexity. |
| $\mathbb{F}^{\rm RSB,e}(Q)$ | Energetic 1RSB cavity free entropy (Bethe free entropy of the auxiliary model, as a function of the messages). |
| $f^{\rm RSB,e}$ | Energetic 1RSB cavity free-entropy density. |
(p.546)