Appendix A Symbols and notation
In this appendix, we summarize the conventions adopted throughout the book for symbols and notation. Secs. A.1 and A.2 deal with equivalence relations and orders of growth. Sec. A.3 presents the notation used in combinatorics and probability. The table in Sec. A.4 gives the main mathematical notation, and that in Sec. A.5 gives the notation for information theory. The table in Sec. A.6 summarizes the notation used for factor graphs and graph ensembles. The table in Sec. A.7 focuses on the notation used in message passing, belief propagation, survey propagation, and the cavity method.
A.1 Equivalence relations
As usual, the symbol ‘=’ denotes equality. We also use ‘≡’ for definitions and ‘≈’ for ‘numerically close to’. For instance, we may say that the Euler–Mascheroni constant is given by
$$\gamma \approx 0.5772\, .$$
When dealing with two random variables X and Y, we write $X\stackrel{\text{d}}{=}Y$ if X and Y have the same distribution. For instance, given n + 1 i.i.d. Gaussian variables X _{0},…,X _{n}, with zero mean and unit variance, we can write
$$X_0 \stackrel{\text{d}}{=} \frac{1}{\sqrt{n}}\left(X_1+\dots+X_n\right).$$
We have adopted several equivalence symbols to denote the asymptotic behavior of functions as their argument tends to some limit. For the sake of simplicity, we assume here that the argument is an integer n → ∞. The limit to be considered in each particular case should be clear from the context. We write $f(n)\doteq g(n)$ if f and g are equal ‘to the leading exponential order’ as n → ∞, i.e. if
$$\lim_{n\to\infty}\frac{1}{n}\log\frac{f(n)}{g(n)}=0\, .$$
For instance, we may write (for n even)
$$\binom{n}{n/2} \doteq 2^{n}\, .$$
We write instead f(n) ∼ g(n) if f and g are asymptotically equal ‘up to a constant’, i.e. if there exists a constant C > 0 such that
$$\lim_{n\to\infty}\frac{f(n)}{g(n)}=C\, .$$
Finally, the symbol ‘≃’ is reserved for asymptotic equality, i.e. it holds if
$$\lim_{n\to\infty}\frac{f(n)}{g(n)}=1\, .$$
For instance, we have
$$n! \simeq \sqrt{2\pi n}\left(\frac{n}{e}\right)^{n}.$$
The symbol ‘≅’ denotes equality up to a constant. If p(·) and q(·) are two measures on the same finite space $\mathcal{X}$ (not necessarily normalized), we write p(x) ≅ q(x) if there exists a constant C > 0 such that
$$p(x) = C\, q(x) \quad \text{for all } x\in\mathcal{X}\, .$$
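The exponential-order equality can be probed numerically. The following sketch (an illustration added here, not part of the book's text) checks the relation $\binom{n}{n/2} \doteq 2^{n}$ by evaluating the quantity $\frac{1}{n}\log[f(n)/g(n)]$ appearing in the definition:

```python
import math

# f(n) = C(n, n/2), g(n) = 2^n: compute (1/n) * log(f(n)/g(n)).
# math.log accepts arbitrarily large Python integers, so no overflow occurs.
def exp_rate_gap(n):
    return (math.log(math.comb(n, n // 2)) - n * math.log(2)) / n

print(exp_rate_gap(100))    # ~ -0.025
print(exp_rate_gap(10000))  # ~ -0.0005
```

The gap decays like $\log n / n$, so the limit in the definition is 0 even though the ratio $f(n)/g(n)$ itself vanishes as $n\to\infty$: $\doteq$ is strictly weaker than ∼ or ≃.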
A.2 Orders of growth
We have used a few symbols to denote the order of growth of functions when their arguments tend to some definite limit. For the sake of definiteness, we refer here to functions of an integer n → ∞. As above, the adaptation to any particular context should be straightforward.
We write f(n) = Θ(g(n)), and say that f(n) is of order g(n), if there exist two positive constants C _{1} and C _{2} such that
$$C_1\, g(n) \le f(n) \le C_2\, g(n)$$
for all n large enough.
We write instead f(n) = o(g(n)) if
$$\lim_{n\to\infty}\frac{f(n)}{g(n)}=0\, .$$
For instance, $n\log n = o(n^{2})$.
Finally, f(n) = O(g(n)) if there exists a constant C such that
$$|f(n)| \le C\, |g(n)|$$
for all n. For instance, the function f(n) taking the value n for odd n and the value 1 for even n satisfies f(n) = O(n).
Note that both f(n) = Θ(g(n)) and f(n) = o(g(n)) imply f(n) = O(g(n)). As the last example shows, the converse is not necessarily true.
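These definitions are easy to check numerically. The sketch below (an added illustration, not from the text) shows that $2n^{2}+n = \Theta(n^{2})$, since the ratio to $n^{2}$ stays between fixed constants, while $n\log n = o(n^{2})$, since the corresponding ratio tends to 0:

```python
import math

# Ratio f(n)/g(n) for f(n) = 2n^2 + n, g(n) = n^2: bounded between constants.
def theta_ratio(n):
    return (2 * n * n + n) / n**2

# Ratio f(n)/g(n) for f(n) = n log n, g(n) = n^2: tends to zero.
def little_o_ratio(n):
    return (n * math.log(n)) / n**2

for n in (10**3, 10**6):
    print(n, theta_ratio(n), little_o_ratio(n))
```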
A.3 Combinatorics and probability
The standard notation is used for multinomial coefficients. For any n ≥ 0, l ≥ 2, and n _{1},…, n _{l} ≥ 0 such that n _{1} + … + n _{l} = n, we have
$$\binom{n}{n_1\,\cdots\,n_l} = \frac{n!}{n_1!\cdots n_l!}\, .$$
For binomial coefficients (i.e. for l = 2), the usual shorthand is
$$\binom{n}{k} \equiv \frac{n!}{k!\,(n-k)!}\, .$$
In combinatorics, certain quantities are most easily described in terms of their generating functions. Given a formal power series f(x), coeff{f(x), x ^{n}} denotes the coefficient of the monomial x ^{n} in the series. More formally, if $f(x) = \sum_{n} f_n x^n$, then
$$\text{coeff}\{f(x), x^n\} \equiv f_n\, .$$
For instance,
$$\text{coeff}\{(1+x)^m, x^n\} = \binom{m}{n}\, .$$
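The coeff operator can be mimicked in a few lines of code. The sketch below (an added illustration; the helper names are ours) stores polynomials as coefficient lists, builds $(1+x)^m$ by repeated multiplication, and extracts coefficients:

```python
import math

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (p[i] is the coefficient of x^i)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def coeff(poly, n):
    """coeff{f(x), x^n} for a polynomial stored as a coefficient list."""
    return poly[n] if n < len(poly) else 0

# Build (1 + x)^m and compare its coefficients with binomial coefficients.
m = 8
f = [1]
for _ in range(m):
    f = poly_mul(f, [1, 1])
assert all(coeff(f, n) == math.comb(m, n) for n in range(m + 1))
```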
Some standard random variables are as follows:

• A Bernoulli p variable is a random variable X taking values in {0, 1} such that ℙ(X = 1) = p.

• B(n, p) denotes a binomial random variable with parameters n and p. This is defined as a random variable taking values in {0, …, n} and having a probability distribution
(A.20)$$\mathbb{P}\{B(n,p)=k\}=\binom{n}{k}\,p^{k}(1-p)^{n-k}\, .$$

• A Poisson random variable X with parameter λ takes integer values and has the following probability distribution:
(A.21)$$\mathbb{P}\{X=k\}=\frac{\lambda^{k}}{k!}\,e^{-\lambda}\, .$$
The parameter λ is the mean of X.
Finally, we have used the symbol δ_{a} for the Dirac ‘delta function’. This is in fact a measure that attributes unit mass to the point a. In formulae, for any set A,
$$\delta_a(A) = \mathbb{I}(a \in A)\, .$$
A.4 Summary of mathematical notation
=  Equal.
≡  Defined as.
≈  Numerically close to.
$\stackrel{\text{d}}{=}$  Equal in distribution.
$\doteq$  Equal to the leading exponential order.
∼  Asymptotically equal up to a constant.
≅  Equal up to a normalization constant (for probabilities; see Eq. (14.3)).
Θ(f)  Of the same order as f (see Sec. A.2).
o(f)  Grows more slowly than f (see Sec. A.2).
argmax f(x)  The set of values of x where the real-valued function f reaches its maximum.
⌊·⌋  Integer part: ⌊x⌋ is the largest integer n such that n ≤ x.
⌈·⌉  ⌈x⌉ is the smallest integer n such that n ≥ x.
ℤ  The set of integer numbers.
ℝ  The set of real numbers.
β ↓ β_{c}  β goes to β_{c} through values > β_{c}.
β ↑ β_{c}  β goes to β_{c} through values < β_{c}.
]a,b[  Open interval of real numbers x such that a < x < b.
]a,b]  Interval of real numbers x such that a < x ≤ b.
ℤ_{2}  The field of integers modulo 2.
a ⊕ b  The sum modulo 2 of the two integers a and b.
$\mathbb{I}$(·)  Indicator function: $\mathbb{I}$(A) = 1 if the logical statement A is true, and $\mathbb{I}$(A) = 0 if the statement A is false.
A ⪰ 0  The matrix A is positive semidefinite.
A.5 Information theory
H _{X}  Entropy of the random variable X (see Eq. (1.7)).
I _{XY}  Mutual information of the random variables X and Y (see Eq. (1.25)).
$\mathcal{H}(p)$  Entropy of a Bernoulli variable with parameter p.
$\mathcal{M}(\mathcal{X})$  Space of probability distributions over a finite set $\mathcal{X}$.
ℭ  Codebook.
≾  BMS(1) ≾ BMS(2): channel BMS(2) is physically degraded with respect to BMS(1).
ℬ  Bhattacharyya parameter of a channel.
A.6 Factor graphs
G _{N}(K,M)  Random K-factor graph with M function nodes and N variable nodes.
G _{N}(K,α)  Random K-factor graph with N variable nodes; each possible function node is present independently with a probability chosen so that the mean number of function nodes is Nα.
D _{N}(Λ,P)  Degree-constrained random factor graph ensemble.
T _{r}(Λ,P)  Degree-constrained random tree factor graph ensemble.
T _{r}(K,α)  Shorthand for the random tree factor graph ensemble T _{r}(Λ,P) with $\Lambda(x) = e^{K\alpha(x-1)}$ and $P(x) = x^{K}$.
Λ(x)  Degree profile of variable nodes.
P(x)  Degree profile of function nodes.
λ(x)  Edge-perspective degree profile of variable nodes.
ρ(x)  Edge-perspective degree profile of function nodes.
B_{i,r}(F)  Neighbourhood of radius r of variable node i.
B_{i→a,t}(F)  Directed neighbourhood of depth t of the edge i → a.
A.7 Cavity and messagepassing methods
ν _{i→a}(x_{i})  BP message (variable to function node).
$\hat{\nu}_{a\to i}(x_{i})$  BP message (function to variable node).
Φ  Free entropy.
F(ν)  Bethe free entropy (as a function of the messages).
F ^{e}(ν)  Bethe energy (as a function of the min-sum messages).
f_{RS}  Bethe (RS) free-entropy density.
Q_{i→a}(ν)  1RSB cavity message/SP message (variable to function node).
$\hat{Q}_{a\to i}(\hat{\nu})$  1RSB cavity message/SP message (function to variable node).
x  Parisi 1RSB parameter.
𝔉(x)  Free-entropy density of the auxiliary model counting BP fixed points.
Σ(φ)  Complexity.
F ^{RSB}(Q)  1RSB cavity free entropy (Bethe free entropy of the auxiliary model, as a function of the messages).
f^{RSB}  1RSB cavity free-entropy density.
y  Zero-temperature Parisi 1RSB parameter (y = lim_{β→∞} βx).
𝔉^{e}(y)  Free-entropy density of the auxiliary model counting min-sum fixed points.
Σ^{e}(e)  Energetic complexity.
F ^{RSB,e}(Q)  Energetic 1RSB cavity free entropy (Bethe free entropy of the auxiliary model, as a function of the messages).
f^{RSB,e}  Energetic 1RSB cavity free-entropy density.