Appendix BASIC FACTS
This appendix contains auxiliary information that has been mentioned or used in the text. Some of the sections have an independent interest because they contain developments of topics from the text. Thus, the question of non-contractivity of the PME in various norms, discussed in Section A.11, is an interesting and quite open problem.
A.1 Notations and basic facts
A.1.1 Points and sets
We will use notations that are rather standard in PDE texts, like [229, 261] or equivalent, which we assume known to the reader. As usual, R is the real line, (a, b) denotes an open interval, [a, b] a closed one, and R _{+} = (0, ∞). We denote the space dimension by d = 1, 2, …, according to physics usage. Points in R ^{d} for d > 1 are denoted by x = (x _{1}, …, x _{d}). For vectors u and v ∈ R ^{d} the scalar product is denoted by u · v, and sometimes by 〈u, v〉; e _{i} denotes the unit vector in the positive ith direction. We denote by B _{R}(x) the open ball of radius R in R ^{d} centred at x ∈ R ^{d}. The set of parts of X is denoted by 𝒫(X) = 2^{X}.
For a subset E of a metric space, Ē denotes its closure and ∂E its boundary. We denote the Lebesgue measure by dx = dx _{1} … dx _{d} and the Lebesgue measure of a measurable set E ⊂ R ^{d} by |E| or meas(E). The measure (volume) of the unit ball is given by ω_{d} = π^{d/2}/Γ((d/2) + 1).
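As a quick numerical aid (ours, not part of the text), the classical formula ω_{d} = π^{d/2}/Γ((d/2) + 1) can be evaluated with the standard library; the function name is an assumption for illustration.

```python
import math

def unit_ball_volume(d: int) -> float:
    """Volume of the unit ball in R^d: omega_d = pi^(d/2) / Gamma(d/2 + 1)."""
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1)
```

For instance, d = 1, 2, 3 recover the familiar values 2, π and 4π/3.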
We usually denote by Ω ⊂ R ^{d} the domain where the spatial variable lives. A regular domain is a domain whose boundary Γ = ∂Ω is locally a C ^{k,α} hypersurface for some k ≥ 1 and α ∈ (0, 1). Typically, Γ ∈ C ^{2,α}. But we will also consider the generality of domains with a Lipschitz boundary, which means that Γ can be viewed locally as the graph of a Lipschitz function after an appropriate rotation of the coordinate axes, and in addition Ω is locally on one side of Γ, cf. [277]. This generality allows for domains with corners, which are found in some applications. Unless mentioned to the contrary, the boundary will be assumed to be C ^{k} regular with k ≥ 2. For x ∈ Ω we define the distance to the boundary as d(x) = dist(x, ∂Ω) = inf{|x − y| : y ∈ Γ}.
We will often deal with space-time domains. Q is the cylinder Ω × R _{+} and for 0 < T < ∞ we write Q _{T} = Ω × (0, T) and Q ^{T} = Ω × (T, ∞). The lateral boundary of Q is denoted by Σ = ∂Ω × [0, ∞), while Σ_{T} = ∂Ω × [0, T].
A.1.2 Functions
The characteristic function of a set E is denoted by χ_{E}: its value is 1 for x ∈ E, 0 otherwise. We sometimes use the notations Dom(f) = D(f) and Im(f) = R(f) to denote the domain and range of a function respectively. If the domain is a set E we may write Im(f) = f(E).
The symbols (s)_{+}, (s)^{+} mean max{s, 0}, i.e., the positive part of the number s, and (s)_{−} = (s)^{−} = max{−s, 0}, the negative part. For a function f we have f _{+}(x) = max{f(x), 0} and f _{−}(x) = max{−f(x), 0}, so that f = f _{+} − f _{−} and |f| = f _{+} + f _{−}.
There will also be frequent use of cutoff functions. The basic cutoff function is a function ζ ∈ C ^{∞}(R ^{d}) which satisfies the following conditions: 0 ≤ ζ ≤ 1, ζ(x) = 1 if |x| ≤ 1, and ζ(x) = 0 if |x| ≥ 2. We will use its scalings: ζ_{r}(x) = ζ(x/r) for r > 0.
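A concrete construction of such a cutoff (a standard recipe, not the text's; function names are ours) uses the smooth function h(s) = e^{−1/s} for s > 0, h(s) = 0 for s ≤ 0:

```python
import math

def h(s: float) -> float:
    # Smooth transition ingredient: h(s) = exp(-1/s) for s > 0, else 0.
    return math.exp(-1.0 / s) if s > 0 else 0.0

def zeta(r: float) -> float:
    """A smooth radial cutoff in r = |x|: equals 1 for r <= 1, 0 for r >= 2."""
    num = h(2.0 - r)
    den = h(2.0 - r) + h(r - 1.0)
    return num / den if den > 0 else 0.0

def zeta_scaled(r: float, scale: float) -> float:
    # The scaling zeta_r(x) = zeta(|x|/r) used in the text.
    return zeta(r / scale)
```

The denominator never vanishes for r < 2, and for r ≥ 2 the numerator is zero, so the function is well defined and smooth.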
If f is a one-dimensional function, the expression lim_{x→a−} f(x) means lim f(x) as x → a with x < a. The meaning of lim_{x→a+} f(x) is similar.
We use the notations O and o in the sense of Landau.
A.1.3 Integrals and derivatives
Integrals without limits are understood to extend to the whole domain under consideration, Ω, Q or Q _{T}, depending on the context. We use different notations for partial derivatives, like u _{t} = ∂_{t} u = ∂u/∂t and so on, the first being most common in the literature, the second one being convenient to avoid confusion with subindexes. Especially in regularity theory, we use the notation D ^{α} u, where α = (α_{1}, …, α_{d}) is a multi-index, to denote the derivative of order |α| = ∑_{i} α_{i} which is taken α_{i} times with respect to the variable x _{i}. We usually write ∇u, sometimes ∇_{x} u, for the spatial gradient of a function. We also use the symbol ∮ to denote average, see Section 7.1.
A.1.4 Functional spaces
C(Ω), C ^{k}(Ω) and C ^{∞}(Ω) denote the spaces of continuous, k-times differentiable and infinitely differentiable functions in Ω; 𝒟(Ω) = C _{c}^{∞}(Ω) denotes the C ^{∞}-smooth functions with compact support in Ω, and 𝒟′(Ω) the space of distributions. We use C _{0}(Ω) for continuous functions that vanish on the boundary. For 0 < α < 1, C ^{α}(Ω̄) is the Banach space of functions which are uniformly Hölder continuous in Ω. In case they are only Hölder continuous in the interior we get the space C ^{α}(Ω), which is not a normed space but a metric space. Functions with Hölder continuous derivatives form the spaces C ^{k,α}(Ω̄) and C ^{k,α}(Ω). When α = 1 we get the Lipschitz spaces, like Lip(Ω). Note that the notation C ^{1}(Ω̄) for that space becomes inconsistent in that case, since the symbol is already in use for functions with one continuous derivative. Hence, Lip(Ω) is sometimes denoted as C ^{0,1}(Ω̄). The concept of modulus of continuity will be introduced in Section 7.5.1.
For 1 ≤ p ≤ ∞ we denote the usual Lebesgue spaces by L ^{p}(Ω) with norm ‖ · ‖_{p}, while H ^{1}(Ω) and ${H}_{0}^{1}(\Omega )$ are the usual Sobolev spaces; the subscript loc refers to local spaces. A general reference for Sobolev spaces is [4]. When dealing with functions in Sobolev spaces, derivatives mean distributional derivatives. As a rule, we will identify Lebesgue measurable real functions defined in Ω up to a set of measure zero. We will abridge the expression almost everywhere in the usual form as a.e. Embedding and compactness theorems (Sobolev embeddings and the like) are assumed as defined for instance in [4, 229, 372]. Let us recall the Rellich–Kondrachov theorem: let Ω be a bounded domain with C ^{1} boundary. Then, for 1 ≤ p < d, the embedding W ^{1,p}(Ω) ⊂ L ^{q}(Ω) is compact for every 1 ≤ q < p* = dp/(d − p).
Similar statements apply to functions defined in Q, Q _{T} or their closures. C ^{2,1}(Q) denotes those functions which are twice differentiable in the space variables and once in time. For a function u(x, t), we use the abbreviated notation u(t) to denote the function-valued map t ↦ u(·, t).
We will frequently use classes of nonnegative solutions. In that sense, L ^{p}(Ω)_{+} denotes the set of functions f ∈ L ^{p}(Ω) such that f ≥ 0. We will sometimes use weighted spaces, like ${L}_{\delta}^{1}(\Omega )$ in Section 6.6. The space H ^{−1}(Ω) is described and used in Section 6.7.
Spaces of vector-valued functions are used in the abstract settings, especially in Chapter 10. Care must be taken with some subtleties when the values are taken in an infinite-dimensional metric space X. Thus, not all absolutely continuous functions R → X are differentiable everywhere. Cf. in this respect the appendix of [128]. Let us only mention that, given a measure space (Ω, 𝒫, μ), a Banach space X has the Radon–Nikodým property with respect to μ if for every countably additive, μ-continuous vector measure ν of bounded variation with values in X, there is a Bochner integrable function g : Ω → X such that ν(E) = ∫_{E} g dμ for every μ-measurable set E. In that case, every absolutely continuous function f : [a, b] → X is also a.e. differentiable. By default μ is the Lebesgue measure in R ^{d}. Every reflexive Banach space has the Radon–Nikodým property, but L ^{1}(Ω) and L ^{∞}(Ω) do not.
A.1.5 Some integrals and constants
We list some of the integrals that enter the calculation of the best constants in the smoothing effect.

(i) EULER'S GAMMA FUNCTION is defined as
$$\Gamma(p)={\int}_{0}^{\infty}{t}^{p-1}{e}^{-t}\,dt,\qquad p>0.$$Stirling's formula gives the asymptotic behaviour$$\Gamma(p)\sim {(p/e)}^{p}{(2\pi /p)}^{1/2}\quad \text{as } p\to \infty .$$
(ii) EULER'S BETA FUNCTION is defined for p, q > 0 as
$$B(p,q)={\int}_{0}^{1}{t}^{p-1}{(1-t)}^{q-1}\,dt=2{\int}_{0}^{1}{s}^{2p-1}{(1-{s}^{2})}^{q-1}\,ds.$$It is related to the Gamma function by$$B(p,q)=\frac{\Gamma(p)\,\Gamma(q)}{\Gamma(p+q)},$$and, more generally, for every r > 0,$$B(p,q)=r{\int}_{0}^{1}{s}^{rp-1}{(1-{s}^{r})}^{q-1}\,ds=r{\int}_{0}^{\infty}\frac{{x}^{rq-1}}{{\left(1+{x}^{r}\right)}^{p+q}}\,dx.$$
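These identities are easy to check numerically. The sketch below (ours; it assumes only the standard library and uses a simple midpoint rule) compares the Gamma-quotient formula for B(p, q) with direct quadrature of the defining integral and of the r-parametrized form.

```python
import math

def beta(p: float, q: float) -> float:
    """Euler Beta via the identity B(p,q) = Gamma(p)Gamma(q)/Gamma(p+q)."""
    return math.gamma(p) * math.gamma(q) / math.gamma(p + q)

def beta_by_quadrature(p: float, q: float, n: int = 20000) -> float:
    # Midpoint rule for int_0^1 t^(p-1) (1-t)^(q-1) dt (fine for p, q >= 1).
    total = 0.0
    for k in range(n):
        t = (k + 0.5) / n
        total += t ** (p - 1) * (1 - t) ** (q - 1)
    return total / n

def beta_via_r(p: float, q: float, r: float, n: int = 20000) -> float:
    # Midpoint rule for r * int_0^1 s^(rp-1) (1-s^r)^(q-1) ds.
    total = 0.0
    for k in range(n):
        s = (k + 0.5) / n
        total += r * s ** (r * p - 1) * (1 - s ** r) ** (q - 1)
    return total / n
```

All three evaluations agree to quadrature accuracy, e.g., for p = 2.5, q = 1.5, r = 3.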
A.1.6 Various
We will devote the next sections to developing a number of less standard topics that are needed or convenient to read the book. Other notations and concepts are explained in the text as they occur.
A.2 Nonlinear operators
The theory of nonlinear operators in a Banach space is a main tool of the theory developed in Chapter 10. We recall that, in its most general nonlinear and possibly multivalued version, an operator A in a Banach space X is a map from a subset D(A) ⊂ X into the set of parts of X, 𝒫(X). We write A(x) or Ax for the image of x (it is a subset of X). We always take as D(A) the essential domain, D(A) = {x : A(x) ≠ ∅}. We denote by R(A) the range of A, a subset of X: R(A) = ∪_{x ∈ D(A)} A(x).
In this generality, it is often convenient to identify the operator with its graph, Γ(A), a subset of X × X. We say that an operator B extends an operator A if Γ(A) ⊂ Γ(B). This is an order relation. Thus, A ^{o} is extended by A. An operator is closed if and only if its graph is a closed subset of X × X. We say that B is the closure of A iff Γ(B) is the closure of Γ(A). The sum of two operators is defined as (A + B)(x) = {a + b : a ∈ A(x), b ∈ B(x)}, with domain D(A + B) = D(A) ∩ D(B).
The inverse A ^{−1} is easily understood in the sense of graphs, just interchanging the roles of domain and image: x ∈ A ^{−1}(y) if and only if y ∈ A(x).
We refer to Chapter 10 for the definitions of monotone and accretive operators and their variants. Use is made in that chapter of integrals of vector-valued maps f ∈ L ^{1}(0, T; X) where X is a Banach space. The integral is understood in the sense of Bochner with respect to Lebesgue measure in (0, T) ⊂ R; it means that the functions are strongly measurable and ∫_{0}^{T} ‖f(t)‖_{X} dt < ∞.
We point out that the family of resolvent operators associated to an operator A is defined as J_{λ} = (I + λA)^{−1}, λ > 0.
More on accretive definitions in Subsection 10.2.3. Monotone operators in Hilbert spaces are treated in Section 10.1. A very important class of maximal monotone operators is given by the subdifferentials of proper convex functions, that have been defined at the end of Section 10.1.
A.3 Maximal monotone graphs
We will study nonlinear parabolic equations like u _{t} = Δϕ(u) + f, and their elliptic counterparts, like −Δv + β(v) = f. To simplify, we may assume that ϕ is a continuous and monotone increasing function of its argument u ∈ R, and then β is its inverse function. Making the requirement of parabolicity on the first equation leads to the condition ϕ′(s) > 0 for all s. However, the second equation, which is used to solve the first, does not need such a strong requirement. It is then possible and useful to consider a greater generality in which ϕ and β can be any maximal monotone graph in R ^{2}.
For the concept and applications of maximal monotone graph (m.m.g. for short) we may refer the reader to Brezis' treatise [128], which covers the much more general theory of maximal monotone operators in Hilbert spaces. Let us remark that this generality has been introduced into nonlinear analysis because of its interest in modelling a number of physical applications, most notably to formulate variational inequalities.
Here is a summary of the main facts that we need: a m.m.g. ϕ in R ^{2} is the natural generalization of the concept of monotone nondecreasing real function to treat in an efficient way the cases where there are discontinuities; since we are dealing with monotone functions, they must be jump discontinuities. We want to fill in these ‘gaps’ for the benefit of obtaining existence of solutions of the equations where ϕ appears. Then, the function must become multivalued and contain vertical segments (corresponding to the jumps). The multivalued function ϕ is defined in a maximal interval D(ϕ) which is not necessarily R, and can be open or closed on either end. If one of the ends of D(ϕ) is finite and not included in D(ϕ), then there is a vertical asymptote at this end; if it is included, there is a semi-infinite vertical segment in the graph. A typical maximal monotone graph appearing in the nonlinear ODEs and PDEs of Mathematical Physics is the sign function, sign(s) = −1 for s < 0, sign(0) = [−1, 1], sign(s) = 1 for s > 0.
One of the main advantages of this generality, which will be used here, is the fact that the inverse of a m.m.g. is again a m.m.g.; actually, both graphs are symmetric with respect to the main bisectrix in R ^{2}.
The standard and somewhat awkward notation when using multivalued operators is set inclusion: when (a, b) is a point of the graph ϕ we write b ∈ ϕ(a) instead of b = ϕ(a), since in general ϕ(a) is not a singleton.
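A small illustration of these notions (ours, not from the text): for the sign graph, even though sign(0) is the whole interval [−1, 1], the resolvent (I + λ · sign)^{−1} is a single-valued map, the familiar soft-thresholding, and it is a contraction. The function name is an assumption for this sketch.

```python
def resolvent_sign(y: float, lam: float) -> float:
    """Resolvent J_lam = (I + lam*sign)^(-1) of the sign graph:
    the unique x with y in x + lam*sign(x), i.e. soft-thresholding."""
    if y > lam:
        return y - lam
    if y < -lam:
        return y + lam
    return 0.0  # the multivalued piece sign(0) = [-1,1] absorbs |y| <= lam
```

Note how the whole interval |y| ≤ λ is mapped to the single point x = 0: the vertical segment of the graph makes the resolvent well defined and 1-Lipschitz.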
A.3.1 Comparison of maximal monotone graphs
In the study of the filtration equation (GPME) we will be interested in comparing the concentrations of solutions of two equations with different nonlinearities ϕ. This final goal will be prepared with a result for elliptic equations. We introduce the following concepts.
Definition A.1 We say that a maximal monotone graph φ_{1} is weaker than another one φ_{2}, and we write φ_{1} ≺ φ_{2}, if they have the same domains, D(φ_{1}) = D(φ_{2}), and there is a contraction γ : R → R such that
By contraction we mean |γ(a) − γ(b)| ≤ |a − b|. This implies in particular that φ_{1} must have horizontal points (or horizontal intervals) at the same values of the argument as φ_{2}, and maybe some more. We also assume that φ_{1} does not contain vertical intervals (i.e., it is single-valued). Note that for smooth graphs condition (A.1) just means that
In the development of the corresponding elliptic theory we will need to rephrase this condition in terms of the inverse graphs β_{i} entering the equations (p.572) of the form
A.4 Measures
In the study of the initial value problem we use Radon measures as initial data. We recall that a Radon measure μ is in principle defined as a (real-valued) linear map on C _{c}(Ω), [455, 473], where Ω will be for us an open subset of R ^{d}. The Riesz representation theorem allows one to associate to a Radon measure a Borel measure, i.e., a real-valued map on sets, which we will also denote by μ. Actually, the measure is Borel regular and locally finite. Note the alternative notations for integrals with respect to a measure: ∫ f(x) μ(dx) = ∫ f(x)dμ(x). Both appear in the literature. The family of Borel subsets of X is denoted by ℬ(X).
The space of Radon measures on a separable metric space (or more generally a locally compact Hausdorff space) X is denoted by ℳ(X), the subset of positive measures by ℳ^{+}(X), the subset of finite measures by ℳ_{b}(X). Given a measure μ ∈ ℳ(X), we denote by μ = μ^{+} − μ^{−} the Hahn–Jordan decomposition of μ into nonnegative measures. We denote by 𝒫(X) the family of all probability measures, i.e., nonnegative measures with total mass 1.
Convergence of measures
The natural convergence in ℳ(X) is defined by the rule that μ_{n} → μ iff ∫ f dμ_{n} → ∫ f dμ for every f ∈ C _{c}(X).
In general, weak convergence needs some extra property. This is well known in probability. A family of probability measures μ_{i} on a metric space M is said to be tight if for every ɛ > 0 there exists a compact set K ⊂ M such that μ_{i}(M ∖ K) < ɛ for every i.
The same result holds if probability measures are replaced by nonnegative Radon measures with finite and fixed total mass.
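The phenomenon that tightness rules out is escape of mass to infinity. A toy illustration (ours): the point masses μ_{n} = δ_{n} on R each have total mass 1, but pairing them against any continuous compactly supported test function gives 0 in the limit, so no compact set captures the mass uniformly.

```python
def tent(x: float) -> float:
    # A continuous test function with compact support [-1, 1].
    return max(0.0, 1.0 - abs(x))

def pair_against_delta(f, c: float) -> float:
    """Integral of f against the Dirac mass delta_c: simply f(c)."""
    return f(c)
```

Here pair_against_delta(tent, n) = tent(n) vanishes for n ≥ 1 even though every δ_{n} is a probability measure; the family {δ_{n}} is not tight.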
BV functions
We will also need the space of functions of bounded variation, BV(Ω): it consists of the functions f ∈ L ^{1}(Ω) whose distributional gradient Df is a (vector-valued) Radon measure with finite total variation, defined as$$|Df|(\Omega )=\mathrm{sup}\left\{{\int}_{\Omega}f\,\mathrm{div}\,\varphi \,dx\ :\ \varphi \in {C}_{c}^{1}(\Omega ;{\mathbb{R}}^{d}),\ |\varphi |\le 1\right\}<\infty .$$
A.5 Marcinkiewicz spaces
Different classes of functional spaces are natural in the study of symmetrization, for instance the Lebesgue spaces L ^{p}(Ω). The Marcinkiewicz spaces also play a role. The Marcinkiewicz space M ^{p}(R ^{d}), 1 < p < ∞, is defined as the set of $f\in {L}_{loc}^{1}({\mathbb{R}}^{d})$ such that
Marcinkiewicz spaces will be important in our study of symmetrization, tied to the idea of ‘worst case strategy’ that plays an important role in our study of smoothing effects, [515]. They appear also in potential theory.
A.6 Some ideas of potential theory
Potential theory is usually done in dimensions d ≥ 3, while dimensions d = 1, 2 are a bit special and need a different treatment. Therefore, we restrict our considerations to d ≥ 3 in a first stage. Consider the fundamental solution of the Laplace equation on R ^{d}, d ≥ 3:
A.7 A lemma from measure theory
We show here a version of the result that says that a continuous function cannot have a derivative that is a measure supported in a set where the function takes a discrete set of values.
Lemma A.1 Let u(x) be a continuous function in a domain Q of R ^{n} and let t be one of the coordinates. If we assume that u _{t} is a bounded Radon measure and ${u}_{t}\text{}\in {L}_{loc}^{1}(\left\{u\ne 0\right\})$, then u _{t} is an integrable function.
Proof It is immediate to see that the measure μ = u _{t} can be split into the
A.8 Results for semiharmonic functions
The theory of the Cauchy problem for the PME exploits at several places the fact that nonnegative solutions satisfy an estimate of the form
Lemma A.2 Let g be any nonnegative, smooth, bounded and integrable function in R ^{d} such that Δ(g ^{p}) ≥ −K for some p > 0 and K ≥ 0. Then the bound (A.10) holds.
Proof Let f(x) = g ^{p}. Then, Δf ≥ −K. Therefore, the function
(i) In the latter case, 0 < p < 1, we can use (A.12) to estimate f at an arbitrary point x _{0} as follows:
(ii) For p > 1 we modify the calculation as follows: we pick a point x _{0} of maximum for g and estimate g(x _{0}) as follows:
(iii) Dimensional analysis shows that the exponents in formula (A.10) are correct. Actually, we only need to prove the formula for ‖g‖_{1} = 1, R = 1. ■
There is a local version of this result that we need in Chapters 12 and 18.
Lemma A.3 Let g be any nonnegative, smooth, bounded function in the ball B _{2} = B _{2R}(a) ⊂ R ^{d}, and assume that g ∈ L ^{1}(B _{2}) and Δ(g ^{p}) ≥ −K in B _{2}.

(i) We have
(A.15)$$\Vert g{\Vert}_{{L}^{\infty}({B}_{R}(0))}\le C(p,d)\left(\Vert g{\Vert}_{{L}^{1}({B}_{2R}(0))}{R}^{-d}+{K}^{1/p}{R}^{2/p}\right).$$ 
(ii) If ‖g‖_{1} is very small compared with R and K, the estimate takes the form
(A.16)$$\Vert g{\Vert}_{{L}^{\infty}({B}_{1})}\le C(p,d)\Vert g{\Vert}_{{L}^{1}({B}_{2})}^{\rho}{K}^{\sigma},$$with ρ = 2/(pd + 2) and σ = d/(pd + 2). The smallness condition is(A.17)$$\Vert g{\Vert}_{{L}^{1}}^{p}\le cK{R}^{dp+2}.$$
Proof If 0 < p ≤ 1, the proof is very similar to the previous one, replacing R ^{d} by B _{2R}(a). Indeed, part (i) can be repeated for every x _{0} with |x _{0}| ≤ r, with 0 < r < R, integrating in B = B _{r}(x _{0}) to get:
(ii) When p > 1 the technique of proof has to be changed. Actually, the result is implied by Theorem 9.20 of Gilbarg–Trudinger [261], which uses Aleksandrov's maximum principle. ■
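The exponents ρ and σ arise from optimizing the right-hand side of (A.15) in R. The following numerical sketch (ours; names and grid are assumptions) minimizes A R^{−d} + K^{1/p} R^{2/p} over R and checks that the minimum scales like A^{ρ} K^{σ} with ρ = 2/(pd + 2), σ = d/(pd + 2).

```python
import math

def min_bound(A: float, K: float, p: float, d: int) -> float:
    """Numerically minimize R -> A*R^(-d) + K^(1/p)*R^(2/p) over R > 0
    (the right-hand side of the local estimate, optimized in R)."""
    best = float("inf")
    for i in range(-400, 401):
        R = math.exp(i / 40.0)  # log-spaced grid, R in [e^-10, e^10]
        best = min(best, A * R ** (-d) + K ** (1.0 / p) * R ** (2.0 / p))
    return best
```

Scaling A by a factor c multiplies the minimum by c^{ρ}, and scaling K by c multiplies it by c^{σ}, consistent with the dimensional analysis.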
A.9 Three notes on the Giant and elliptic problems
We review here some approaches to the construction of the Giant, i.e., the positive self-similar solution of the PME in separated-variables form, u(x, t) = t ^{−α} f(x). As we have said, this is equivalent to solving the nonlinear elliptic problem Δf ^{m} + α f = 0, with f = 0 on ∂Ω. As explained in Section 5.9, it is best written in the form (5.70)
A.9.1 Nonlinear elliptic approach. Calculus of variations
For experts in elliptic equations, the typical approach to solving the semilinear elliptic equation (A.18) is to view the solution g as a critical point of the functional
Theorem A.4 The positive solution of (A.18) is the minimum of J in ${H}_{0}^{1}(\Omega )$.
Proof (i) J is well defined in ${H}_{0}^{1}(\Omega )$: simply observe that 1 + 1/m < 2 and use Sobolev embeddings.
(ii) J is bounded from below in ${H}_{0}^{1}(\Omega )$: in fact, using Poincaré's inequality we get
(iii) The infimum is negative, hence it cannot correspond to the trivial function. Take a family of functions of the form g _{s}(x) = s g _{1}(x) with some ${g}_{1}\in {H}_{0}^{1}(\Omega )$, g _{1} ≥ 0. Then
(iv) Along any minimizing sequence there is convergence in ${H}_{0}^{1}(\Omega )$ and the infimum is attained, hence it is a minimum.
Observe first that J(g _{n}) converges to J _{min}. Then ∇g _{n} is uniformly bounded in L ^{2}(Ω), hence a subsequence of g _{n} converges weakly in ${H}_{0}^{1}(\Omega )$ and strongly in L ^{2}(Ω) to some g ∈ ${H}_{0}^{1}(\Omega )$. In the limit we have, by the standard lower semicontinuity argument for the integral of the square of the gradient:
(v) The minimum satisfies equation (A.18).
Let g be the minimum. Consider the family g _{ɛ} = g + ɛφ, where $\phi \in {C}_{c}^{\infty}(\Omega )$ is any nonnegative test function and ɛ is a real number. Write J(g _{ɛ}) − J(g) ≥ 0 as
(vi) Any solution of (A.18) satisfies ∫|∇g|^{2} dx = α ∫ g^{(m+1)/m} dx, hence
(vii) The uniqueness of the positive solution in this kind of ‘nonlinear eigenvalue problem’ is a well-known result in the calculus of variations. It follows from a general result of functional analysis, the Krein–Rutman theorem. ■
Note The constant α > 0 in (20.15), (A.18) plays no role since it can be given any value after a rescaling. Indeed, if g is a solution of (A.18) and we put
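The elided rescaling can be reconstructed under the assumption (consistent with step (vi) above) that (A.18) reads Δg + α g^{1/m} = 0: if g solves it and we put g̃ = c g for a constant c > 0, then$$\Delta \tilde{g}+\tilde{\alpha}\,{\tilde{g}}^{1/m}=0,\qquad \tilde{\alpha}=\alpha \,{c}^{(m-1)/m},$$so that choosing $c={(\tilde{\alpha}/\alpha )}^{m/(m-1)}$ produces any prescribed value $\tilde{\alpha}>0$.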
A.9.2 Another dynamical proof of existence
We construct the Giant, i.e., a positive self-similar solution in separated-variables form, u(x, t) = t ^{−α} f(x), by a different method, based also on the properties of the evolution. As we have said, it is equivalent to solving the nonlinear elliptic problem
A.9.3 Another construction of the Giant
The Giant can also be obtained as the limit of the so-called fundamental solutions, i.e., the solutions u _{c}(x, t) of the problem with initial data
A.10 Optimality of the asymptotic convergence for the PME
We devote this section to a first exploration of the sharpness of the convergence rates in Theorem 18.1 for general classes of initial data. This is taken from [509], pp. 91–93.
Counterexample
Given any decreasing function ρ(t) → 0, there exists a solution of the Cauchy problem with integrable and nonnegative initial data of mass M > 0 such that
Construction
(i) We recall that the proof need only be done for M = 1 since the scaling transformation
(ii) We construct solutions u _{k} with initial data of the form
(iii) The iterative construction of the u _{k} starts as follows. We may take c _{1} as we like, e.g., c _{1} = 1, then r _{1} = (2ω)^{−1/n}, and find the solution u _{1}(x, t) with data u _{1}(x, 0) = c _{1}χ_{1}(x). Its mass is M _{1} = 1/2 for all times. As said above, for sufficiently large times we have
(iv) Iteration step. Assuming that we have constructed u _{2}, …, u _{k−1} by solving the equation with data (A.29), we proceed to choose c _{k} and a _{k} and construct u _{k} as follows. We can take any c _{k} > 0, and then find a _{k} large enough so that the support of the solution v _{k} with initial data v _{k}(x, 0) = χ_{k}(x − a _{k}) does not intersect the support of u _{k−1} until a time t _{k} > 2 t _{k−1} (and we can even estimate how far a _{k} must be located for large t _{k}, because we have a precise control of the support of u _{k−1} for large times, thanks to Theorem 18.8). Then, it is immediate to see that
Using ‖u _{k} − 𝒰(M)‖ ≥ ‖𝒰(M) − 𝒰(M _{k−1})‖ − ‖u _{k} − 𝒰(M _{k−1})‖ with M = 1, we get
(v) In the final step we take the limit
(vi) The construction can easily be modified so that the data u _{0} are radially symmetric, by defining χ_{k} to be the characteristic function of the annulus A _{k} = {x : a _{k} ≤ |x| ≤ a _{k} + r _{k}} and imposing that c _{k} times the volume of A _{k} equals 2^{−k}. The construction is repeated with the same care given to the choice of a _{k}, i.e., to the far location of the A _{k}.
(vii) For the L ^{1} part we just observe that, taking t _{k} large enough, we have at time t = t _{k} and in a very large ball B _{k} (as large as we please by the iteration construction) the equality u = u _{k} and the approximation
A.11 Noncontractivity of the PME flow in L ^{p} spaces
We complete here the analysis started in Section 4.5.2 about lack of contractivity of the PME flow in different L ^{p} spaces. This is the result that comes out of the blowup example.
Theorem A.5 The PME flow posed in the whole space is not contractive in the spaces L ^{p}(R ^{d}) if m ≥ 2 and p _{c} < p ≤ ∞ with
Proof We divide the proof into several steps for clarity. We start with the Cauchy problem.
1. Case L^{∞}

(i) We have seen in formula (4.48) an example of two nonnegative solutions whose pressures are ordered and differ by a constant C(t) for every fixed t, and this constant grows with time. When m = 2, pressure and density are proportional, so that this example exhibits a particular case of increase of the norm
$${d}_{\infty}\left({U}_{2},{U}_{1},t\right):=\Vert {U}_{2}(\cdot ,t)-{U}_{1}(\cdot ,t){\Vert}_{\infty}.$$ 
(ii) It can be objected that the example is constructed in the class of growing solutions and not in a more natural class, like bounded solutions. Such an objection is easily overcome by continuity. We construct two increasing families of solutions u _{in}(x, t), i = 1, 2, with nonnegative data u _{in}(x, 0) ∈ L ^{1}(R ^{d}) ∩ L ^{∞}(R ^{d}) such that u _{in}(x, 0) ↑ U _{i}(x, 0) a.e. The functions u _{in}(x, 0) can even be smooth and compactly supported. If we perform the construction in a suitable way we have
$${d}_{\infty}({u}_{2n},{u}_{1n},0)\le {d}_{\infty}({U}_{2},{U}_{1},0)=C(0),$$$${d}_{\infty}({u}_{2n},{u}_{1n},t)\ge {d}_{\infty}({U}_{2},{U}_{1},t)-\epsilon =C(t)-\epsilon .$$ 
(iii) The situation for m > 2 is even better, since ${U}_{i}=a{V}_{i}^{\text{\gamma}}$ with γ = 1/(m − 1) < 1. We now observe that, given C > 0, the function
$$f(v)={(v+C)}^{\text{\gamma}}-{v}^{\text{\gamma}},$$$${U}_{2}-{U}_{1}=a({V}_{2}^{\text{\gamma}}-{V}_{1}^{\text{\gamma}})$$$${d}_{\infty}({U}_{2},{U}_{1},t)={U}_{2}(0,t)-{U}_{1}(0,t)=a({C}_{2}^{1/(m-1)}-{C}_{1}^{1/(m-1)})/{(T-t)}^{\alpha},$$
The adaptation to the class of bounded and integrable solutions is done as before.
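The monotonicity used in step (iii) — that a fixed pressure gap C produces the largest density gap at v = 0 when γ < 1 — is easy to check numerically. This sketch is ours; the function name is an assumption.

```python
def gap(v: float, C: float, gamma: float) -> float:
    """f(v) = (v + C)^gamma - v^gamma: the density gap produced by a
    constant pressure gap C when density ~ pressure^gamma, 0 < gamma < 1."""
    return (v + C) ** gamma - v ** gamma
```

For 0 < γ < 1 the function is strictly decreasing in v ≥ 0, with maximum value f(0) = C^{γ}, which is why the supremum of U_2 − U_1 is attained where V_1 vanishes.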
2. Case L^{p}, p large

(i) We modify the argument of Case 1, with the problem posed in the whole space, by defining U _{2} as a modification of U _{1} in the two available parameters, T and C. In this case we take V _{1} as the blow-up solution with C = 0 and T = 1:
$${V}_{1}\left(x,t\right)=\frac{K{|x|}^{2}}{1-t},$$$${V}_{2}\left(x,t\right)=\frac{C{\left(T-t\right)}^{2\beta}+K{|x|}^{2}}{T-t},$$$${N}_{p}\left({U}_{2},{U}_{1},t\right)={\displaystyle {\int}_{{\mathbb{R}}^{d}}{\left({U}_{2}\left(x,t\right)-{U}_{1}\left(x,t\right)\right)}_{+}^{p}}\,dx
$$ 
(ii) In order to estimate it at t = 0, we calculate the point where both initial functions are equal:
$${|x(0)|}^{2}\approx C/(K\epsilon ),$$$${N}_{p}\left({U}_{2},{U}_{1},0\right)\le c\,{C}^{p\text{\gamma}}{|x(0)|}^{d}=c\,{C}^{p\text{\gamma}+d/2}{\epsilon}^{-d/2}.$$ 
(iii) We now estimate N _{p} at a time t _{1} = 1 − τ. We have the values of the pressure at the origin
$${V}_{2}\left(0,{t}_{1}\right)=\frac{C}{{\left(\tau +\epsilon \right)}^{\alpha (m-1)}},\qquad {V}_{1}\left(0,{t}_{1}\right)=0,$$$${|x({t}_{1})|}^{2}\approx \frac{C{\left(\tau +\epsilon \right)}^{2\beta}\tau}{\tau +\epsilon},$$$$\begin{array}{ll}{N}_{p}\left({U}_{2},{U}_{1},{t}_{1}\right)\ge & c\,\frac{{C}^{p\text{\gamma}}}{{\left(\tau +\epsilon \right)}^{p\alpha}}\cdot \frac{{C}^{d/2}{\left(\tau +\epsilon \right)}^{d\beta}{\tau}^{d/2}}{{\left(\tau +\epsilon \right)}^{d/2}}\\ \hfill =& c\,{C}^{p\text{\gamma}+d/2}\,\frac{{\left(\tau +\epsilon \right)}^{d\beta}{\tau}^{d/2}}{{\left(\tau +\epsilon \right)}^{pd\beta +d/2}},\end{array}$$so that the quotient N _{p}(U _{2}, U _{1}, 0)/N _{p}(U _{2}, U _{1}, t _{1}) is bounded by a constant times$$\frac{{(\tau +\epsilon )}^{(p-1)d\beta +d/2}}{{\tau}^{d/2}{\epsilon}^{d/2}}\to 0.$$In order to revert to the comparison of L ^{p} norms we replace U _{2} by the solution U _{3} with initial data
$${U}_{3}(0)=\mathrm{max}\{{U}_{1}(0),{U}_{2}(0)\},$$so that$$\Vert {U}_{3}(0)-{U}_{1}(0){\Vert}_{p}^{p}={N}_{p}({U}_{2},{U}_{1},0),$$while by comparison U _{3} ≥ U _{1} and U _{3} ≥ U _{2}, hence$$\Vert {U}_{3}({t}_{1})-{U}_{1}({t}_{1}){\Vert}_{p}^{p}\ge {N}_{p}({U}_{2},{U}_{1},{t}_{1})>{N}_{p}({U}_{2},{U}_{1},0).$$
3. The Dirichlet data
The approximation process that we have mentioned before can be carried out with solutions of Dirichlet or Neumann problems in expanding balls. We conclude that for some of these balls there is an example of non-contraction for the same m and p as in the Cauchy problem. Since the PME is invariant under scaling, the result is true for all balls.
For the case of a general domain, replace the balls of radius R → ∞ by scaled copies of the domain and argue in the same way as before.
A.11.1 Other contractivity properties

(i) Contractivity in H ^{−1}(Ω) is discussed in Sections 6.7 and 10.1.4. It applies to the GPME.

(ii) The PME semigroup in R is contractive with respect to all Wasserstein distances defined in Section 10.4. The focusing solutions are used in [514] to show that the PME semigroup in R ^{d} is not contractive in these Wasserstein metrics d _{p} if p is large enough, including p = ∞. However, the semigroup is contractive in the case p = 2 [152]. This has been used very elegantly by Toscani in proving sharp asymptotics [494].
See also [158] where the asymptotic complexity of the patterns of the GPME is studied.