## Juan Luis Vazquez

Print publication date: 2006

Print ISBN-13: 9780198569039

Published to Oxford Scholarship Online: September 2007

DOI: 10.1093/acprof:oso/9780198569039.001.0001


# (p.565) Appendix BASIC FACTS

Source:
The Porous Medium Equation
Publisher:
Oxford University Press

This appendix contains auxiliary information that has been mentioned or used in the text. Some of the sections have an independent interest because they develop topics of the text further. Thus, the question of non-contractivity of the PME in various norms, discussed in Section A.11, is an interesting and quite open problem.

# A.1 Notations and basic facts

## A.1.1 Points and sets

We will use notations that are rather standard in PDE texts, like [229, 261] or equivalent, which we assume known to the reader. As usual, R is the real line, (a, b) denotes an open interval, [a, b] a closed one, and R_+ = (0, ∞). We denote the space dimension by d = 1, 2, …, according to physics usage. Points in R^d for d > 1 are denoted by x = (x_1, …, x_d). For vectors u and v ∈ R^d the scalar product is denoted by u · v, and sometimes by 〈u, v〉; e_i denotes the unit vector in the positive i-th direction. We denote by B_R(x) the open ball of radius R in R^d centred at x ∈ R^d. The set of parts of X is denoted by 𝒫(X) = 2^X.

For a subset E of a metric space, E̅ denotes its closure and ∂E its boundary. We denote the Lebesgue measure by dx = dx_1 ⋯ dx_d and the Lebesgue measure of a measurable set E ⊂ R^d by |E| or meas(E). The measure (volume) of the unit ball is given by

$$\omega_d = \frac{\pi^{d/2}}{\Gamma(d/2 + 1)},$$
where Γ is Euler's gamma function. S^{d−1} denotes the unit sphere {x : |x| = 1} in R^d. Its element of area is denoted by dS or dσ. Its total area (i.e., its (d − 1)-dimensional measure) is d ω_d.
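These values are easy to confirm numerically. The following snippet (illustrative only, not part of the text) evaluates ω_d through Euler's gamma function and compares with the familiar low-dimensional values.

```python
import math

def unit_ball_volume(d):
    """Volume of the unit ball in R^d: omega_d = pi^(d/2) / Gamma(d/2 + 1)."""
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1)

# Familiar values: omega_1 = 2 (interval), omega_2 = pi (disc), omega_3 = 4*pi/3 (ball)
assert abs(unit_ball_volume(1) - 2) < 1e-12
assert abs(unit_ball_volume(2) - math.pi) < 1e-12
assert abs(unit_ball_volume(3) - 4 * math.pi / 3) < 1e-12

# The (d-1)-dimensional measure of the unit sphere S^{d-1} is d * omega_d:
# circumference 2*pi for d = 2, surface area 4*pi for d = 3
assert abs(2 * unit_ball_volume(2) - 2 * math.pi) < 1e-12
assert abs(3 * unit_ball_volume(3) - 4 * math.pi) < 1e-12
```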

We usually denote by Ω ⊂ R^d the domain where the spatial variable lives. A regular domain is a domain whose boundary Γ = ∂Ω is locally a C^{k,α} hypersurface for some k ≥ 1 and α ∈ (0, 1). Typically, Γ ∈ C^{2,α}. But we will also consider the generality of domains with a Lipschitz boundary, which means that Γ can be viewed locally as the graph of a Lipschitz function after an appropriate rotation of the coordinate axes, and in addition Ω lies locally on one side of Γ, cf. [277]. This generality allows for domains with corners, which are found in some applications. Unless mentioned to the contrary, the boundary will be assumed to be C^k regular (p.566) with k ≥ 2. For x ∈ Ω we define the distance to the boundary as

$$d(x) = d(x, \partial\Omega) = \inf\{|x - y| : y \in \partial\Omega\}.$$
For a compact set K ⊂ Ω we define
$$d(K, \partial\Omega) = \inf\{|x - y| : x \in K,\ y \in \partial\Omega\}.$$
These distances are always positive.

We will often deal with space-time domains. Q is the cylinder Ω × R_+, and for 0 < T < ∞ we write Q_T = Ω × (0, T) and Q^T = Ω × (T, ∞). The lateral boundary of Q is denoted by Σ = ∂Ω × [0, ∞), while Σ_T = ∂Ω × [0, T].

## A.1.2 Functions

The characteristic function of a set E is denoted by χ_E: its value is 1 for x ∈ E, and 0 otherwise. We sometimes use the notations Dom(f) = D(f) and Im(f) = R(f) for the domain and range of a function, respectively. If the domain is a set E we may write Im(f) = f(E).

The symbols (s)_+ and s_+ mean max{s, 0}, i.e., the positive part of the number s, and (s)_− = s_− = max{−s, 0}, the negative part. For a function we have

$$f_+(x) = \max\{f(x), 0\}, \qquad f_-(x) = \max\{-f(x), 0\},$$
so that f = f_+ − f_−. Sometimes, f^+ and f^− are used for convenience. The function sign, better called sign_0, is defined as
$$\operatorname{sign}_0(s) = \begin{cases} 1 & \text{if } s > 0,\\ 0 & \text{if } s = 0,\\ -1 & \text{if } s < 0. \end{cases}$$
Note that, strictly speaking, sign is a multivalued operator, cf. Section A.3. The function sign_0^+ is defined as
$$\operatorname{sign}_0^+(s) = \begin{cases} 1 & \text{if } s > 0,\\ 0 & \text{if } s \le 0, \end{cases}$$
and
$$\operatorname{sign}_0^-(s) = \begin{cases} -1 & \text{if } s < 0,\\ 0 & \text{if } s \ge 0. \end{cases}$$
We have sign_0(s) = sign_0^+(s) + sign_0^−(s). We will often write sign(s) instead of sign_0(s) if no confusion arises.

There will also be frequent use of cut-off functions. The basic cut-off function is a function ζ ∈ C^∞(R^d) which satisfies the following conditions: 0 ≤ ζ ≤ 1, ζ(x) = 1 if and only if |x| ≤ 1, and ζ(x) = 0 if |x| ≥ 2. We will use its scalings ζ_r(x) = ζ(x/r) for r > 0.
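A concrete radial profile with these properties can be built from the classical C^∞ transition function exp(−1/s); the construction below is one standard choice (ours, for illustration), written for the radial variable |x|.

```python
import math

def transition(t):
    """C-infinity function equal to 0 for t <= 0 and to 1 for t >= 1."""
    def f(s):
        return math.exp(-1.0 / s) if s > 0 else 0.0
    return f(t) / (f(t) + f(1.0 - t))

def zeta(r):
    """Radial cut-off profile: zeta(r) = 1 iff r <= 1, zeta(r) = 0 for r >= 2.
    In R^d one uses zeta(|x|), and the scalings zeta_r(x) = zeta(|x| / r)."""
    return transition(2.0 - abs(r))

assert zeta(0.5) == 1.0        # inside the unit ball
assert zeta(1.0) == 1.0        # equality holds exactly up to |x| = 1
assert zeta(3.0) == 0.0        # zero outside radius 2
assert 0.0 < zeta(1.5) < 1.0   # smooth transition region
```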

If f is a one-dimensional function, the expression lim_{x→a−} f(x) means the limit of f(x) as x → a with x < a. Similarly for lim_{x→a+} f(x).

We use the notations O and o in the sense of Landau.

## (p.567) A.1.3 Integrals and derivatives

Integrals without limits are understood to extend to the whole domain under consideration, Ω, Q or Q_T, depending on the context. We use different notations for partial derivatives, like u_t = ∂_t u = ∂u/∂t and so on, the first being most common in the literature, the second convenient to avoid confusion with subindexes. Especially in regularity theory, we use the notation D^α u, where α = (α_1, …, α_d) is a multi-index, to denote the derivative of order |α| = ∑_i α_i which is taken α_i times with respect to the variable x_i. We usually write ∇u, sometimes ∇_x u, for the spatial gradient of a function. We also use the symbol ∮ to denote average, see Section 7.1.

## A.1.4 Functional spaces

C(Ω), C^k(Ω) and C^∞(Ω) denote the spaces of continuous, k-times differentiable and infinitely differentiable functions in Ω; 𝒟(Ω) = C_c^∞(Ω) denotes the C^∞-smooth functions with compact support in Ω, and 𝒟′(Ω) the space of distributions. We use C_0(Ω) for continuous functions that vanish on the boundary. For 0 < α < 1, C^α(Ω̄) is the Banach space of functions which are uniformly Hölder continuous in Ω. In case they are Hölder continuous only locally in the interior we get the space C^α(Ω), which is not a normed space but a metric space. Functions with Hölder continuous derivatives form the spaces C^{k,α}(Ω̄) and C^{k,α}(Ω). When α = 1 we get the Lipschitz spaces, like Lip(Ω). Note that the notation C^1(Ω̄) for that space would be inconsistent, since the symbol is already in use for functions with one continuous derivative. Hence, Lip(Ω) is sometimes denoted by C^{0,1}(Ω̄). The concept of modulus of continuity will be introduced in Section 7.5.1.

For 1 ≤ p ≤ ∞ we denote the usual Lebesgue spaces by L^p(Ω) with norm | · |_p, while H^1(Ω) and H_0^1(Ω) are the usual Sobolev spaces; the subscript loc refers to local spaces. A general reference for Sobolev spaces is [4]. When dealing with functions in Sobolev spaces, derivatives mean distributional derivatives. As a rule, we will identify Lebesgue measurable real functions defined in Ω up to a set of measure zero. We will abridge the expression almost everywhere in the usual form as a.e. Embedding and compactness theorems (Sobolev embeddings and the like) are assumed as defined for instance in [4, 229, 372]. Let us recall the Rellich–Kondrachov theorem: Let Ω be a bounded domain with C^1 boundary. Then,

$$\begin{aligned} &\text{if } p < d, && W^{1,p}(\Omega) \subset L^{q}(\Omega) \quad \text{for all } q \in [1, p^*),\ \ p^* = \frac{dp}{d-p};\\ &\text{if } p = d, && W^{1,p}(\Omega) \subset L^{q}(\Omega) \quad \text{for all } q \in [1, \infty);\\ &\text{if } p > d, && W^{1,p}(\Omega) \subset C(\overline{\Omega}). \end{aligned}$$
All these injections are compact. In particular, W 1,p(Ω) ⊂ L p(Ω) with compact injection for all p ≥ 1. In Ω = R d, the above injections are compact in local topology (convergence on compact subsets).

(p.568) Similar statements apply to functions defined in Q, Q_T or their closures. C^{2,1}(Q) denotes the functions that are twice differentiable in the space variables and once in time. For a function u(x, t), we use the abbreviated notation u(t) to denote the function-valued map t ↦ u(·, t).

We will frequently use classes of non-negative solutions. In that sense, L^p(Ω)^+ denotes the set of functions f ∈ L^p(Ω) such that f ≥ 0. We will sometimes use weighted spaces, like L^1_δ(Ω) in Section 6.6. The space H^{−1}(Ω) is described and used in Section 6.7.

Spaces of vector-valued functions are used in the abstract settings, especially in Chapter 10. Care must be taken with some subtleties when the values are taken in an infinite-dimensional metric space X. Thus, not all absolutely continuous functions R → X are differentiable everywhere. Cf. in this respect the appendix of [128]. Let us only mention that, given a measure space (Ω, 𝒫, μ), a Banach space X has the Radon–Nikodým property with respect to μ if for every countably additive, μ-continuous vector measure ν of bounded variation with values in X, there is a Bochner integrable function g : Ω → X such that ν(E) = ∫_E g dμ for every μ-measurable set E. In that case, every absolutely continuous function f : [a, b] → X is also a.e. differentiable. By default μ is the Lebesgue measure in R^d. Every reflexive Banach space has the Radon–Nikodým property, but L^1(Ω) and L^∞(Ω) do not.

## A.1.5 Some integrals and constants

We list some of the integrals that enter the calculation of the best constants in the smoothing effect.

1. (i) EULER'S GAMMA FUNCTION is defined as

$$\Gamma(p) = \int_0^\infty s^{p-1} e^{-s}\, ds, \qquad p > 0.$$
We have Γ(p) = (p − 1) Γ(p − 1), and Γ(1/2) = √π. As p → ∞ we have
$$\Gamma(p) = \sqrt{2\pi}\, p^{\,p - 1/2}\, e^{-p} \left(1 + O(1/p)\right).$$
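The functional equation, the value at 1/2, and the asymptotic formula are easy to confirm numerically; a quick illustrative check using the standard library gamma function:

```python
import math

# Functional equation Gamma(p) = (p - 1) * Gamma(p - 1)
for p in (2.5, 7.0, 11.3):
    assert abs(math.gamma(p) - (p - 1) * math.gamma(p - 1)) < 1e-9 * math.gamma(p)

# Gamma(1/2) = sqrt(pi)
assert abs(math.gamma(0.5) - math.sqrt(math.pi)) < 1e-12

def stirling(p):
    """Leading term of Stirling's formula: sqrt(2*pi) * p^(p - 1/2) * e^(-p)."""
    return math.sqrt(2 * math.pi) * p ** (p - 0.5) * math.exp(-p)

# The relative error of the leading term decays like 1/(12 p)
assert abs(math.gamma(20.0) / stirling(20.0) - 1) < 0.01
```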

2. (ii) EULER'S BETA FUNCTION is defined for p, q > 0 as

$$B(p, q) = \int_0^1 s^{p-1} (1 - s)^{q-1}\, ds.$$
We have B(p, q) = B(q, p) and the basic relation
$$B(p, q) = \frac{\Gamma(p)\,\Gamma(q)}{\Gamma(p + q)},$$
as well as the equivalent expressions with parameter r > 0
$$B(p, q) = r \int_0^1 s^{rp - 1} \left(1 - s^r\right)^{q - 1} ds.$$
These expressions are usually found for the value r = 2.
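The symmetry and the relation to the gamma function can be verified by direct quadrature of the defining integral (a simple midpoint-rule check, illustrative only; taking p, q > 1 keeps the integrand bounded):

```python
import math

def beta_integral(p, q, n=200000):
    """Midpoint-rule approximation of B(p, q) = int_0^1 s^(p-1) (1-s)^(q-1) ds."""
    h = 1.0 / n
    return h * sum(((k + 0.5) * h) ** (p - 1) * (1 - (k + 0.5) * h) ** (q - 1)
                   for k in range(n))

def beta_gamma(p, q):
    """B(p, q) expressed through the gamma function."""
    return math.gamma(p) * math.gamma(q) / math.gamma(p + q)

assert abs(beta_gamma(2.5, 3.5) - beta_gamma(3.5, 2.5)) < 1e-15   # symmetry
assert abs(beta_integral(2.5, 3.5) - beta_gamma(2.5, 3.5)) < 1e-6  # basic relation
```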

## (p.569) A.1.6 Various

We will devote the next sections to developing a number of less standard topics that are needed or convenient to read the book. Other notations and concepts are explained in the text as they occur.

# A.2 Nonlinear operators

The theory of nonlinear operators in a Banach space is a main tool of the theory developed in Chapter 10. We recall that in its more general nonlinear and possibly multivalued version, an operator A in a Banach space X is a map from a subset D(A) ⊂ X into the set of parts of X, 𝒫(X). We write A(x) or Ax for the image of x (it is a subset of X). We always take as D(A) the essential domain, D(A) = {x : A(x) ≠ ∅}. We denote by R(A) the range of A, a subset of X:

$$R(A) = \bigcup_{x \in D(A)} A(x).$$
For x ∈ D(A) we denote by A^o x the element of minimal norm in Ax, and we have D(A^o) = D(A).

In this generality, it is often convenient to identify the operator with its graph, Γ(A), a subset of X × X. We say that an operator B extends an operator A if Γ(A) ⊂ Γ(B). This is an order relation. Thus, A^o is extended by A. An operator is closed if and only if its graph is a closed subset of X × X. We say that B is the closure of A iff Γ(B) is the closure of Γ(A). The sum of two operators is defined as

$$(A + B)(x) = A(x) + B(x) = \{a + b : a \in A(x),\ b \in B(x)\}$$
on the domain D(A + B) = D(A) ∩ D(B). There is no problem in defining λA for λ ∈ R. The composition A ∘ B = AB is defined as
$$(AB)(x) = \bigcup \{A(y) : y \in B(x) \cap D(A)\}$$
on the domain where that definition is not empty, D(AB) = {x : B(x) ∩ D(A) ≠ ∅}.

The inverse A −1 is easily understood in the sense of graphs, just changing the order of domain and image

$$\Gamma(A^{-1}) = \{(b, a) : (a, b) \in \Gamma(A)\}, \qquad \text{i.e.,} \quad x \in A^{-1}(y) \iff y \in A(x).$$
The ease in defining inverses is one of the strong points of using multivalued operators. Generally speaking, the inverse A^{−1} is also a multivalued operator, but there are cases in which A is multivalued and A^{−1} is single-valued.

We refer to Chapter 10 for the definitions of monotone and accretive operators and their variants. Use is made in that chapter of integrals of vector-valued maps f ∈ L^1(0, T; X), where X is a Banach space. The integral is understood in the sense of Bochner with respect to Lebesgue measure in (0, T) ⊂ R; it means that (p.570) the functions are strongly measurable and

$$\int_0^T \|f(t)\|_X\, dt < \infty,$$
cf. [529]. In this setting an absolutely continuous function need not be differentiable a.e., so this condition has to be added when needed.

We point out that the family of resolvent operators associated to an operator A is defined as

$$J_\lambda = (I + \lambda A)^{-1}, \qquad \lambda > 0.$$
They are in principle multivalued operators that come from solving equations of the form u + λAu ∋ f, which is equivalent to u ∈ J_λ f. Note that D(J_λ) = R(I + λA). For monotone or accretive operators, as defined in Chapter 10, the resolvent is a single-valued (non-strictly) contractive map.
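As a concrete illustration (ours, not from the text), take X = R and A = sign, the multivalued graph of Section A.3. Solving u + λ sign(u) ∋ f by cases gives the well-known soft-thresholding map, which is indeed single-valued and a non-strict contraction:

```python
def resolvent_sign(f, lam):
    """Resolvent J_lam = (I + lam*sign)^(-1): the unique u with f in u + lam*sign(u).
    Case analysis: u = f - lam if f > lam, u = f + lam if f < -lam, u = 0 if |f| <= lam."""
    if f > lam:
        return f - lam
    if f < -lam:
        return f + lam
    return 0.0

lam = 0.5
# J_lam is a (non-strict) contraction: |J(f) - J(g)| <= |f - g|
pairs = [(-2.0, 1.3), (0.2, -0.1), (3.0, 0.4), (0.3, 0.45)]
for f, g in pairs:
    assert abs(resolvent_sign(f, lam) - resolvent_sign(g, lam)) <= abs(f - g)

# Direct check of the inclusion f in u + lam*sign(u), here for f > lam
u = resolvent_sign(2.0, lam)
assert u == 1.5 and u + lam * (1.0 if u > 0 else -1.0) == 2.0
```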

More on accretive definitions in Subsection 10.2.3. Monotone operators in Hilbert spaces are treated in Section 10.1. A very important class of maximal monotone operators is given by the subdifferentials of proper convex functions, that have been defined at the end of Section 10.1.

# A.3 Maximal monotone graphs

We will study nonlinear parabolic equations like u t = Δϕ(u) + f, and their elliptic counterparts, like −Δv + β(v) = f. To simplify, we may assume that ϕ is a continuous and monotone increasing function of its argument u ∈ R, and then β is its inverse function. Making the requirement of parabolicity on the first equation leads to the condition ϕ′(s) > 0 for all s. However, the second equation, which is used to solve the first, does not need such a strong requirement. It is then possible and useful to consider a greater generality in which ϕ and β can be any maximal monotone graph in R 2.

For the concept and applications of maximal monotone graph (m.m.g. for short) we may refer the reader to Brezis' treatise [128], which covers the much more general theory of maximal monotone operators in Hilbert spaces. Let us remark that this generality has been introduced into nonlinear analysis because of its interest in modelling a number of physical applications, most notably to formulate variational inequalities.

Here is a summary of the main facts that we need: a m.m.g. ϕ in R 2 is the natural generalization of the concept of monotone non-decreasing real function to treat in an efficient way the cases where there are discontinuities; since we are dealing with monotone functions, they must be jump discontinuities. We want to fill in these ‘gaps’ for the benefit of obtaining existence of solutions of the equations where ϕ appears. Then, the function must become multivalued and contain vertical segments (corresponding to the jumps). The multivalued function ϕ is defined in a maximal interval D(ϕ) which is not necessarily R, and can be open or closed on either end. If one of the ends of D(ϕ) is finite and not included in D(ϕ), then there is a vertical asymptote at this end; if it is included, (p.571) there is a semi-infinite vertical segment in the graph. Typical maximal monotone graphs appearing in the nonlinear ODEs and PDEs of Mathematical Physics are the sign function

$$\operatorname{sign}(s) = \begin{cases} 1 & \text{if } s > 0,\\ [-1, 1] & \text{if } s = 0,\\ -1 & \text{if } s < 0, \end{cases}$$
its positive part, denoted by sign^+(s), where we modify the sign so that sign^+(s) = 0 for s < 0 and sign^+(0) = [0, 1]; the Stefan graph, defined by H(s) = cs + L sign^+(s) with constants c, L > 0; and the angle graph, A(s) = 0 for s > 0, A(0) = (−∞, 0], which is defined in D(A) = [0, ∞).

One of the main advantages of this generality, which will be used here, is the fact that the inverse of a m.m.g. is again a m.m.g.; actually, both graphs are symmetric with respect to the main bisectrix in R 2.

The standard and somewhat awkward notation when using multi-valued operators is set inclusion, so that when (a, b) is a point in the graph ϕ we write b ∈ ϕ(a) instead of b = ϕ(a), since generally ϕ(a) is not a singleton.

## A.3.1 Comparison of maximal monotone graphs

In the study of the filtration equation (GPME) we will be interested in comparing the concentrations of solutions of two equations with different nonlinearities ϕ. This final goal will be prepared with a result for elliptic equations. We introduce the following concepts.

Definition A.1 We say that a maximal monotone graph φ_1 is weaker than another one φ_2, and we write φ_1 ≺ φ_2, if they have the same domain, D(φ_1) = D(φ_2), and there is a contraction γ : R → R such that

(A.1)
$$\varphi_1(s) = \gamma(\varphi_2(s)) \quad \text{for every } s \in D(\varphi_2).$$

By contraction we mean |γ(a) − γ(b)| ≤ |a − b|. This implies in particular that φ_1 must have horizontal points (or horizontal intervals) at the same values of the argument as φ_2, and maybe some more. We also assume that φ_1 has no vertical intervals (i.e., it is single-valued). Note that for smooth graphs condition (A.1) just means that

(A.2)
$$\varphi_1'(s) \le \varphi_2'(s) \quad \text{for every } s,$$
which is easier to remember and to manipulate. We will see that φ′ is interpreted as the diffusivity in many parabolic problems, so that relation (A.2) can be phrased as: φ_1 is less diffusive than φ_2. This explains why the relation will be important in the evolution analysis.
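For instance (our toy example), φ_1(s) = s is weaker than φ_2(s) = s + s³: here φ_1′ = 1 ≤ 1 + 3s² = φ_2′, the contraction γ = φ_2^{−1} satisfies φ_1 = γ ∘ φ_2, and therefore the images contract:

```python
def phi1(s):
    """Weaker (less diffusive) nonlinearity: phi1'(s) = 1."""
    return s

def phi2(s):
    """Stronger nonlinearity: phi2'(s) = 1 + 3 s^2 >= phi1'(s)."""
    return s + s ** 3

# phi1 = gamma o phi2 with gamma = phi2^{-1}, a contraction, hence
# |phi1(a) - phi1(b)| <= |phi2(a) - phi2(b)| for all a, b.
samples = [x / 10.0 for x in range(-30, 31)]
for a in samples:
    for b in samples:
        assert abs(phi1(a) - phi1(b)) <= abs(phi2(a) - phi2(b)) + 1e-12
```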

In the development of the corresponding elliptic theory we will need to rephrase this condition in terms of the inverse graphs βi entering the equations (p.572) of the form

$$-\Delta v + \beta(v) \ni f.$$
It then means that there is a contraction γ: R → R such that
(A.3)
$$\beta_2(s) = \beta_1(\gamma(s)).$$
To be precise, we also have to specify the relation of the domains, D(β_1) = γ(D(β_2)). But, as a general rule, we will prefer to stick to comparisons of diffusivities, φ = β^{−1}.

# A.4 Measures

In the study of the initial value problem we use Radon measures as initial data. We recall that a Radon measure μ is in principle defined as a (real-valued) linear map on C_c(Ω), [455, 473], where Ω will be for us an open subset of R^d. The Riesz theorem allows one to associate to a Radon measure a Borel measure, i.e., a real-valued map on sets, which we will also denote by μ. Actually, the measure is Borel regular and locally finite. Note the alternative notations for integrals with respect to a measure: ∫ f(x) μ(dx) = ∫ f(x) dμ(x). Both appear in the literature. The family of Borel subsets of X is denoted by ℬ(X).

The space of Radon measures on a separable metric space (or more generally a locally compact Hausdorff space) X is denoted by ℳ(X), the subset of positive measures by ℳ^+(X), and the subset of finite measures by ℳ_b(X). Given a measure μ ∈ ℳ(X), we denote by μ = μ^+ − μ^− its Hahn–Jordan decomposition into non-negative measures. 𝒫(X) denotes the family of all probability measures, i.e., non-negative measures with total mass 1.

### Convergence of measures

The natural convergence in ℳ(X) is defined by the rule that μn → μ iff

$$\int f\, d\mu_n \longrightarrow \int f\, d\mu \quad \text{as } n \to \infty$$
for every f ∈ C_c(X). Technically, this is the weak-* convergence; its topology is described as σ(ℳ(X); C_c(X)), and it is also referred to as vague convergence. In the weak-* topology the usual compactness statement applies: bounded families contain convergent subsequences. The problem is that the limit measure can be defective, i.e., it can have less mass than the limit of the masses of the convergent family; the explanation is that some mass can escape to infinity or to the boundary. The problem is avoided by weak convergence, where we take test functions f ∈ C_b(X), the set of all bounded and continuous functions on X, the topology being denoted by σ(ℳ(X); C_b(X)). Weak convergence of measures, also called narrow convergence in probability theory, is stricter than vague convergence, and the total mass is conserved in the limit. It coincides with weak-* convergence if X is compact.

(p.573) In general, weak convergence needs some extra property. This is well-known in probability. A family of probability measures μi on a metric space M is said to be tight if for every ɛ > 0 there exists a compact K such that

$$\mu_i(M \setminus K) < \varepsilon \quad \text{for every } i.$$
Prokhorov's theorem gives a criterion for weak convergence: if a sequence of probability measures μ_n on a space M is tight, then there exists a subsequence μ_{n_k} which converges weakly to a limit probability measure μ.

The same result holds if probability measures are replaced by non-negative Radon measures with finite and fixed total mass.
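The standard example of mass escaping to infinity is μ_n = δ_n on M = R: the sequence converges vaguely to the zero measure, yet each μ_n has mass 1, so the family is not tight and there is no weak limit. A minimal sketch (ours):

```python
# For mu_n = delta_n (unit mass at the point n), integrating a test
# function f against mu_n is simply evaluating f at n.
def integrate_against_delta(f, n):
    return f(n)

def bump(x):
    """Compactly supported test function (vanishes for |x| >= 1)."""
    return max(0.0, 1.0 - abs(x))

def one(x):
    """Bounded continuous test function, constant 1."""
    return 1.0

# Vague convergence to 0: compactly supported test functions see nothing.
assert all(integrate_against_delta(bump, n) == 0.0 for n in range(2, 10))
# But the total mass stays 1: the limit is defective, the family not tight.
assert all(integrate_against_delta(one, n) == 1.0 for n in range(2, 10))
```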

### BV functions

We will also need the space of functions of bounded variation, BV(Ω): it consists of the functions u ∈ L^1(Ω) whose distributional gradient Du is a (vector-valued) Radon measure with finite total variation, defined as

$$|Du|(\Omega) = \sup\left\{\int_\Omega u \operatorname{div} \sigma\, dx \ :\ \sigma \in C_c^1(\Omega; \mathbb{R}^d),\ |\sigma| \le 1\right\}.$$
It is a Banach space normed by |u|_{BV} = |u|_1 + |Du|(Ω). We have W^{1,1}(Ω) ⊂ BV(Ω), and in fact BV(Ω) is the natural closure of W^{1,1}(Ω) in the sense that bounded sequences in W^{1,1}(Ω) converge in the weak-* topology of BV(Ω) after passing to a subsequence.

# A.5 Marcinkiewicz spaces

Different classes of functional spaces are natural in the study of symmetrization, for instance the Lebesgue spaces L^p(Ω). The Marcinkiewicz spaces also play a role. The Marcinkiewicz space M^p(R^d), 1 < p < ∞, is defined as the set of f ∈ L^1_{loc}(R^d) such that

(A.4)
$$\int_K |f(x)|\, dx \le C\, |K|^{(p-1)/p}$$
for all subsets K of finite measure, cf. [87]. The minimal C in (A.4) gives a norm in this space, i.e.,
(A.5)
$$\|f\|_{M^p(\mathbb{R}^d)} = \inf\{C > 0 : \text{(A.4) holds for all } K \text{ of finite measure}\}.$$
Since functions in L^p(R^d) satisfy inequality (A.4) with C = |f|_{L^p} (by Hölder's inequality), we conclude that L^p(R^d) ⊂ M^p(R^d) and |f|_{M^p} ≤ |f|_{L^p}. The Marcinkiewicz space is a particular case of Lorentz space, precisely L^{p,∞}(R^d), and is also called weak L^p space.
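The model example of a function in M^p but not in L^p is f(x) = |x|^{−d/p}. In dimension d = 1 and for intervals centred at the singularity (the worst case by rearrangement), the ratio in (A.4) is exactly the constant 2^{1/p}/(1 − 1/p); a rough numerical check (ours, for illustration):

```python
def int_abs_f(a, p, n=100000):
    """Midpoint approximation of int_{-a}^{a} |x|^(-1/p) dx (n even avoids x = 0)."""
    h = 2 * a / n
    return h * sum(abs(-a + (k + 0.5) * h) ** (-1.0 / p) for k in range(n))

p = 2.0
C = 2 ** (1 / p) / (1 - 1 / p)   # exact ratio for symmetric intervals
for a in (0.1, 1.0, 10.0):
    K = 2 * a                    # measure of the interval K = (-a, a)
    # inequality (A.4): int_K |f| <= C * |K|^((p-1)/p)
    assert int_abs_f(a, p) <= C * K ** ((p - 1) / p) + 1e-6
# yet f is not in L^p: |f|^p = 1/|x| is not integrable near the origin
```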

Marcinkiewicz spaces will be important in our study of symmetrization, tied to the idea of ‘worst case strategy’ that plays an important role in our study of smoothing effects, [515]. They appear also in potential theory.

# (p.574) A.6 Some ideas of potential theory

Potential theory is usually done in dimensions d ≥ 3, while dimensions d = 1, 2 are a bit special and need a different treatment. Therefore, we restrict our considerations to d ≥ 3 in a first stage. Consider the fundamental solution of the Laplace equation on R d, d ≥ 3:

(A.6)
$$E_d(x) = \frac{1}{(d-2)\, d\omega_d}\, |x|^{2-d}.$$
The Newtonian potential of a function f ∈ L^p(R^d), 1 ≤ p ≤ ∞, is defined by convolution with E_d:
(A.7)
$$N(f)(x) = (E_d * f)(x) = \int_{\mathbb{R}^d} E_d(x - y)\, f(y)\, dy.$$
It is known that the map f ↦ N(f) sends L^1(R^d) into the Marcinkiewicz space M^q(R^d) = L^{q,∞}(R^d) with q = d/(d − 2), and L^p(R^d) into C_b(R^d) if p > d/2. We have
$$-\Delta N(f) = f.$$
In the case of a bounded subdomain Ω ⊂ R^d, d ≥ 1, we use the Green function with zero boundary conditions, G = G_Ω(x, y), to define 𝒢(f) ∈ W_0^{1,1}(Ω) by
(A.8)
$$\mathcal{G}(f)(x) = \int_\Omega G_\Omega(x, y)\, f(y)\, dy,$$
and then −Δ𝒢(f) = f. Clearly, 0 ≤ G_Ω(x, y) ≤ E_d(x − y) if d ≥ 3.
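In dimension d = 1 on the interval Ω = (0, 1) the Green function is explicit, G(x, y) = x(1 − y) for x ≤ y (and symmetrically for x ≥ y), which makes the defining property −Δ𝒢(f) = f easy to test numerically (a small sketch of ours):

```python
def green_1d(x, y):
    """Green function of -d^2/dx^2 on (0, 1) with zero boundary values."""
    return x * (1 - y) if x <= y else y * (1 - x)

def green_potential(f, x, n=20000):
    """Midpoint approximation of G(f)(x) = int_0^1 G(x, y) f(y) dy."""
    h = 1.0 / n
    return h * sum(green_1d(x, (k + 0.5) * h) * f((k + 0.5) * h) for k in range(n))

# For f = 1 the solution of -u'' = 1, u(0) = u(1) = 0 is u(x) = x(1 - x)/2.
for x in (0.25, 0.5, 0.8):
    assert abs(green_potential(lambda s: 1.0, x) - x * (1 - x) / 2) < 1e-4
```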

# A.7 A lemma from measure theory

We show here a version of the result that says that a continuous function cannot have derivatives that are measures supported in sets where the function takes a discrete set of values.

Lemma A.1 Let u(x) be a continuous function in a domain Q of R^n and let t be one of the coordinates. If we assume that u_t is a bounded Radon measure and u_t ∈ L^1_{loc}({u ≠ 0}), then u_t is an integrable function.

Proof It is immediate to see that the measure μ = u_t can be split into the sum

$$\mu = f + \mu_0,$$
where f is the restriction of u_t to the open set {u ≠ 0}, hence an L^1_{loc} function by assumption, and μ_0 is the restriction to the closed set K ≔ {u = 0}, a measure in principle. We also have
$Display mathematics$
(p.575) We want to prove that μ_0 = 0. In order to do that, we select a function p = p_ɛ ∈ C^1(R) such that 0 ≤ p′_ɛ(s) ≤ 1 and
$Display mathematics$
It is clear that p(u) = u on the set K_ɛ = {|u| < ɛ}, a neighbourhood of K, so that ∂_t p(u) restricted to K is just μ_0. Moreover, it can be easily proved by approximation that
$$\partial_t p_\varepsilon(u) = p_\varepsilon'(u)\, u_t.$$
Take now a test function η ∈ C_c^1(Q). We have 〈∂_t p_ɛ(u), η〉 = −∫ p_ɛ(u) η_t dxdt, hence
$Display mathematics$
On the other hand, if G ɛ = {−ɛ < u < 0} ∪ {0 < u < ɛ} we get
$Display mathematics$
which goes to zero as ɛ → 0 since G ɛ tends to the empty set. Therefore,
$Display mathematics$
and we conclude that 〈μ0, η〉 = 0. Since $η ∈ C c 1 ( Q )$ is arbitrary, we get μ0 = 0. ■

# A.8 Results for semiharmonic functions

The theory of the Cauchy problem for the PME exploits at several places the fact that non-negative solutions satisfy an estimate of the form

$$\Delta v \ge -\frac{C}{t}, \qquad v = \frac{m}{m-1}\, u^{m-1}.$$
This property is technically called semi-subharmonicity of the pressure, and appears also in other nonlinear theories. It has some consequences for the size of the solution, a fact that we explore here. Some of them have been used in the proofs. Here is the technical result that we use in Chapter 9, Lemma 9.9.

Lemma A.2 Let g be any non-negative, smooth, bounded and integrable function in R^d such that

(A.9)
$$\Delta (g^p) \ge -K \quad \text{in } \mathbb{R}^d$$
for some p and K > 0. Then, g ∈ L^∞(R^d) and |g|_∞ depends only on p, K, d and |g|_1 in the form
(A.10)
$Display mathematics$
with ρ = 2/(2p + d) and σ = d/(2p + d).

(p.576) Proof Let f(x) = g^p. Then, Δf ≥ −K. Therefore, the function

(A.11)
$$h(x) = f(x) + \frac{K}{2d}\, |x - x_0|^2$$
is subharmonic in R^d for every x_0 ∈ R^d. Then, for every R > 0 we have
(A.12)
$$f(x_0) \le \fint_B f\, dx + \frac{K R^2}{2(d+2)},$$
where B = B_R(x_0) and ∮_B denotes the average on B. The argument will continue in a different way for p > 1 and for 0 < p ≤ 1.

(i) In the latter case, 0 < p ≤ 1, we can use (A.12) to estimate f at an arbitrary point x_0 as follows:

(A.13)
$$f(x_0) \le \left(\frac{|g|_1}{\omega_d R^d}\right)^{p} + \frac{K R^2}{2(d+2)}$$
(ω_d denotes the volume of the unit ball). Minimization of the last expression with respect to R > 0 gives
$Display mathematics$
which is equivalent to (A.10).

(ii) For p > 1 we modify the calculation: we pick a point x_0 of maximum for g and estimate g(x_0) as follows:

$Display mathematics$
putting y = g(x_0), we can write this expression in the form
$Display mathematics$
which after an elementary calculation gives
$Display mathematics$
Minimization of this expression in R gives (A.10).

(iii) Dimensional analysis shows that the exponents in formula (A.10) are correct. Actually, we only need to prove the formula for |g|_1 = 1, R = 1. ■

There is a local version of this result that we need in Chapters 12 and 18.

(p.577) Lemma A.3 Let g be any non-negative, smooth, bounded function in the ball B_2 = B_{2R}(a) ⊂ R^d, and assume that g ∈ L^1(B_2) and

(A.14)
$$\Delta (g^p) \ge -K \quad \text{in } B_2
for some p and K > 0. Then g ∈ L^∞(B_1) with B_1 = B_R(a), and |g|_∞ depends only on p, K, d, R and |g|_{L^1(B_2)}. More precisely,
1. (i) We have

(A.15)
$Display mathematics$

2. (ii) If |g|1 is very small compared with R and K the estimate takes the form

(A.16)
$Display mathematics$
with ρ = 2/(2p + d) and σ = d/(2p + d). The smallness condition is
(A.17)
$Display mathematics$

Proof If 0 < p ≤ 1, the proof is very similar to the previous one, replacing R^d by B_{2R}(a). Indeed, part (i) can be repeated for every x_0 with |x_0| ≤ r and 0 < r < R, integrating in B = B_r(x_0), to get:

$Display mathematics$
Minimization in 0 ≤ r ≤ R gives a bound for g(x_0). In particular, when |g|_1 is small enough the minimum takes place for 0 < r < R, and we obtain the stated result in this case by the same calculation as in the previous lemma. The smallness condition r_min < R is implied by (A.17).

(ii) When p > 1 the technique of proof has to be changed. Actually, the result is implied by Theorem 9.20 of Gilbarg–Trudinger [261], which uses Aleksandrov's maximum principle. ■

# A.9 Three notes on the Giant and elliptic problems

We review here some approaches to the construction of the Giant, i.e., the positive self-similar solution of the PME in separated-variables form, u(x, t) = t^{−α} f(x). As we have said, this is equivalent to solving the nonlinear elliptic problem Δf^m + αf = 0, with f = 0 on ∂Ω. As noted in Section 5.9, it is best written in the form (5.70)

(A.18)
$$\Delta g + \alpha\, g^{1/m} = 0 \quad \text{in } \Omega, \qquad g = 0 \ \text{on } \partial\Omega,$$
with α = 1/(m − 1) and g = f^m. Up to a constant, it is the same as (4.6). See also (20.16). We can view this equation as a nonlinear eigenvalue problem.

## (p.578) A.9.1 Nonlinear elliptic approach. Calculus of variations

For experts in elliptic equations, the typical approach to solving the semilinear elliptic equation (A.18) is to view the solution g as a critical point of the functional

(A.19)
$$J(g) = \frac12 \int_\Omega |\nabla g|^2\, dx - \frac{\alpha m}{m+1} \int_\Omega |g|^{(m+1)/m}\, dx,$$
defined in $H 0 1 ( Ω )$.

Theorem A.4 The positive solution of (A.18) is the minimum of J in $H 0 1 ( Ω )$.

Proof (i) J is well defined in $H 0 1 ( Ω )$: simply observe that 1 + 1/m < 2 and use Sobolev embeddings.

(ii) J is bounded from below in $H 0 1 ( Ω )$: in fact, using Poincaré's inequality we get

$Display mathematics$

(iii) The infimum is negative, hence it cannot correspond to the trivial function. Take a family of functions of the form g_s(x) = s g_1(x) with some g_1 ∈ H_0^1(Ω), g_1 ≥ 0. Then

$Display mathematics$
for some positive A, B. Hence J(g s) < 0 for some s near 0.

(iv) Along any minimizing sequence there is convergence in $H 0 1 ( Ω )$ and the infimum is taken, hence it is a minimum.

Observe first that J(g_n) converges to J_min. Then |∇g_n|_2 is uniformly bounded, hence g_n converges weakly in H_0^1(Ω) and strongly in L^2(Ω) to some g ∈ H_0^1(Ω). In the limit we have, by the standard argument of lower semi-continuity of the integral of the gradient squared:

$Display mathematics$
Note that (m + 1)/m < 2. But J min is the minimum, hence there must be equality. This implies that
$Display mathematics$
which means that g_n → g in H_0^1(Ω). [Explanation: we are using the lemma: if f_n → f weakly in L^2(Ω) and |f_n|_2 → |f|_2, then the convergence is strong. The proof consists in writing the difference
$Display mathematics$
and taking limits.]

(p.579) (v) The minimum satisfies equation (A.18).

Let g be the minimum. Consider the family g_ɛ = g + ɛφ, where φ ∈ C_c^∞(Ω) is any non-negative test function and ɛ is a real number. Write J(g_ɛ) − J(g) ≥ 0 as

$Display mathematics$
where g˜ɛ(x) is a value between g(x) and g(x) + ɛφ(x) (mean value theorem). Take now ɛ > 0 and pass to the limit ɛ → 0 to get
$Display mathematics$
When ɛ < 0 we get the converse inequality. Therefore, (A.18) holds in the sense of distributions. (In the calculus of variations this classical calculation is called ‘obtaining the Euler–Lagrange equation’.)

(vi) Any solution of (A.18) satisfies ∫|∇g|² dx = α ∫ |g|^{(m+1)/m} dx, hence

(A.20)
$$J(g) = -\frac{m-1}{2(m+1)} \int_\Omega |\nabla g|^2\, dx.$$
The absolute minimum corresponds therefore to the maximal stationary solution which is the positive one.

(vii) The uniqueness of the positive solution in this kind of ‘nonlinear eigenvalue problems’ is a well-known result in the calculus of variations. It comes from a general result of functional analysis, Krein–Rutman's theorem. ■

Note The constant α > 0 in (20.15), (A.18) plays no role since it can be given any value after a rescaling. Indeed, if g is a solution of (A.18) and we put

(A.21)
$$G(x) = \alpha^{m/(1-m)}\, g(x),$$
then G satisfies ΔG + G 1/m = 0. This is a curious property of some nonlinear problems, that linear eigenvalue problems do not have.
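The claim can be checked directly: using that (A.18) reads Δg + α g^{1/m} = 0 in Ω, set G = c g with a constant c > 0. Then

```latex
\[
\Delta G + G^{1/m}
  = c\,\Delta g + c^{1/m} g^{1/m}
  = -\,c\,\alpha\, g^{1/m} + c^{1/m} g^{1/m}
  = \bigl(c^{1/m} - c\,\alpha\bigr)\, g^{1/m},
\]
```

which vanishes exactly when c^{(1−m)/m} = α, i.e., c = α^{m/(1−m)}.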

## A.9.2 Another dynamical proof of existence

We construct the Giant, i.e., a positive self-similar solution in separated-variables form, u(x, t) = t^{−α} f(x), by a different method, based also on the properties of the evolution. As we have said, this is equivalent to solving the nonlinear elliptic problem

$$\Delta f^m + \alpha f = 0 \quad \text{in } \Omega,$$
with f = 0 on ∂Ω. The idea is to take a sequence of solutions with data
(A.22)
$$u_{0n}(x) = n, \qquad x \in \Omega;$$
(p.580) for each n we obtain a unique weak solution u_n(x, t) ≥ 0 of the PME. (Note: the reader may prefer to take u_{0n}(x) = nφ(x), where φ is a nice smooth and positive function in Ω that vanishes on the boundary. He is welcome.) The family {u_n}_n is monotone increasing in n (maximum principle). There exists a limit
$$U(x, t) = \lim_{n \to \infty} u_n(x, t),$$
and this limit is finite, since it satisfies the universal estimate U(x, t) ≤ C t^{−α}. The scaling transformation
$$(\mathcal{T}_k u)(x, t) = k\, u(x, k^{m-1} t), \qquad k > 0,$$
produces out of a solution of equation (20.1) with data u_0(x) another solution of the same equation with initial data (𝒯_k u)(x, 0) = k u_0(x). It thus transforms u_n into u_{nk}. In the limit it transforms U again into U. Therefore, U is scaling invariant:
$$U(x, t) = k\, U(x, k^{m-1} t)$$
for all x ∈ Ω and k, t > 0. In other words, setting k m−1 t = 1,
$$U(x, t) = t^{-1/(m-1)}\, U(x, 1) = t^{-\alpha} f(x), \qquad f(x) ≔ U(x, 1).$$
It is clear that g = f^m ∈ H_0^1(Ω) is a positive and bounded solution of the nonlinear eigenvalue problem (A.18).

## A.9.3 Another construction of the Giant

The Giant can also be obtained as the limit of the so-called fundamental solutions, i.e., the solutions u_c(x, t) of the problem with initial data

(A.23)
$$u_c(x, 0) = c\, \delta_a(x), \qquad c > 0,$$
where a is any point in Ω and δa(x) is Dirac's delta function with singularity located at a. Such solutions exist in the weak sense and the data are taken as initial traces in the sense of bounded measures. It can be proved that
(A.24)
$$\lim_{c \to \infty} u_c(x, t) = U(x, t) = t^{-\alpha} f(x).$$
The convergence to the Giant has been justified for the similar situation occurring for the equation of diffusion-absorption
(A.25)
$$u_t = \Delta u^m - u^q,$$
with 0 ≤ q ≤ m; cf. [162], Theorem 7.1. It is then enough to take q = 1 and make the change of variables v = u e^t, with a corresponding rescaling of time, to obtain the desired result for the PME. Note that, depending on the equation, other types of limit may occur. A classification of the four different types of limits of the fundamental solutions as c → ∞ which are possible for nonlinear heat equations has been performed by Vázquez and Véron in [518].
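For absorption exponent equal to one, the change of variables can be written down explicitly (a short computation of ours): if u_t = Δu^m − u and v = e^t u, then
$$v_t = e^{(1-m)t}\,\Delta v^m,$$
so that with the new time variable τ = (1 − e^{(1−m)t})/(m − 1) we recover the PME, v_τ = Δv^m, now posed on the finite time interval 0 ≤ τ < 1/(m − 1).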

# (p.581) A.10 Optimality of the asymptotic convergence for the PME

We devote this section to a first exploration of the sharpness of the convergence rates in Theorem 18.1 for general classes of initial data. This is taken from [509], pp. 91–93.

### Counterexample

Given any decreasing function ρ(t) → 0 as t → ∞, there exists a solution of the Cauchy problem with integrable and non-negative initial data of mass M > 0 such that

(A.26)
$$\limsup_{t\to\infty}\ \frac{t^{\alpha}\,\|u(\cdot,t)-\mathcal{U}(\cdot,t;M)\|_{\infty}}{\rho(t)} = \infty.$$
Moreover, we can also get
(A.27)
$$\limsup_{t\to\infty}\ \frac{\|u(\cdot,t)-\mathcal{U}(\cdot,t;M)\|_{1}}{\rho(t)} = \infty.$$
We can also require the solution to be radially symmetric in the space variable.

### Construction

(i) We recall that the proof need only be done for M = 1 since the scaling transformation

(A.28)
$Display mathematics$
reduces a solution of mass M > 0 to a solution of mass 1 if c = M^{−1/(m−1)}. We take an initial function of the form
$$u_0(x) = \sum_{k=1}^{\infty} c_k\,\chi_k(x - a_k),$$
where χ_k(x) is the characteristic function of the ball of radius r_k centred at 0. The sequences a_k, c_k and r_k have to be determined in a suitable way. In the first place, we impose the conditions c_k, r_k ≥ 0 and $c_k r_k^{\,n} = 2^{-k}/\omega$ (where ω is the volume of the unit ball). Then, $M = \omega \sum_{1}^{\infty} c_k r_k^{\,n} = 1$.
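The normalization of the total mass is a direct computation, a convergent geometric series:
$$M=\sum_{k=1}^{\infty}c_k\,|B_{r_k}(0)| = \omega\sum_{k=1}^{\infty}c_k r_k^{\,n} = \sum_{k=1}^{\infty}2^{-k} = 1.$$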

(ii) We construct solutions u k with initial data of the form

(A.29)
$$u_k(x,0) = \sum_{j=1}^{k} c_j\,\chi_j(x - a_j),$$
and we proceed to choose c_k and a_k in an iterative way. In any case the mass of u_k is M_k = 1 − 2^{−k}, and we observe that (by the main convergence result) for every ɛ > 0 there must be a time t_k(ɛ) (which depends also on the precise choice of the initial data) such that
$$t^{\alpha}\,\|u_k(\cdot,t)-\mathcal{U}(\cdot,t;M_k)\|_{\infty} \le \varepsilon$$
(p.582) for all t ≥ t_k(ɛ). We now recall that 𝒰(0, t; M) = c_M t^{−α}, so that the difference between t^α 𝒰(0, t; M) and t^α 𝒰(0, t; M′) is constant in time, and in fact it can be estimated from below by
$$k_1\,(M - M'),$$
with the same constant k_1 > 0 for all 1 ≥ M > M′ ≥ 1/2.
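Both facts used here, conservation of mass and the law 𝒰(0, t; M) = c_M t^{−α} with c_M increasing in M, can be observed on the explicit source-type (ZKB) solution. A numerical sketch for dimension one and m = 2, where the closed formula is simple (the name `zkb` and the discretization are our choices):

```python
import numpy as np

# 1D source-type (ZKB) solution of u_t = (u^m)_xx for m = 2:
#   U(x,t) = t^{-alpha} (C - kcoef x^2 t^{-2 beta})_+^{1/(m-1)},
# with beta = 1/(d(m-1)+2) = 1/3, alpha = d*beta = 1/3, kcoef = (m-1)beta/(2m).
m = 2
beta = alpha = 1.0 / 3.0
kcoef = (m - 1) * beta / (2 * m)                      # = 1/12

def zkb(x, t, C):
    s = C - kcoef * x**2 * t**(-2.0 * beta)
    return t**(-alpha) * np.clip(s, 0.0, None)**(1.0 / (m - 1))

x = np.linspace(-100.0, 100.0, 400001)
dx = x[1] - x[0]
for C in (0.5, 1.0):
    # the mass is conserved in time ...
    masses = [zkb(x, t, C).sum() * dx for t in (1.0, 2.0, 5.0)]
    assert max(masses) - min(masses) < 1e-6
    # ... and t^alpha U(0,t) = C^{1/(m-1)} is the constant c_M of the text
    for t in (1.0, 2.0, 5.0):
        assert abs(t**alpha * zkb(np.zeros(1), t, C)[0] - C) < 1e-10
```

A larger C gives a larger mass, so c_M = C^{1/(m−1)} is indeed increasing in M.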

(iii) The iterative construction of the u_k starts as follows. We may take c_1 as we like, e.g., c_1 = 1, then r_1 = (2ω)^{−1/n}, and find the solution u_1(x, t) with data u_1(x, 0) = c_1 χ_1(x). Its mass is M_1 = 1/2 for all times. As said above, for sufficiently large times we have

$Display mathematics$
We can also find t_1 such that ρ(t_1) < (1/2) k_1 (M − M_1) = k_1/4. Using the estimate for the difference of source-type solutions and the triangle inequality, and taking ɛ small enough (ɛ ≤ k_1/4), we get for all t ≥ t_1
(A.30)
$Display mathematics$
(A.31)
$Display mathematics$

(iv) Iteration step. Assuming that we have constructed u_2, …, u_{k−1} by solving the equation with data (A.29), we proceed to choose c_k and a_k and construct u_k as follows. We can take any c_k > 0, and then find a_k large enough so that the support of the solution v_k with initial data v_k(x, 0) = c_k χ_k(x − a_k) does not intersect the support of u_{k−1} until a time t_k > 2 t_{k−1} (and we can even estimate how far a_k must be located for large t_k, because we have a precise control of the support of u_{k−1} for large times, thanks to Theorem 18.8). Then, it is immediate to see that

$$u_k(x,t) = u_{k-1}(x,t) + v_k(x,t)$$
for all x ∈ R^n and 0 ≤ t ≤ t_k (i.e., superposition holds as long as the supports are disjoint). In particular, this means that for all 0 ≤ t ≤ t_{k−1} we also have u_k(0, t) = u_{k−2}(0, t), and by iteration we conclude that
$Display mathematics$
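The disjoint-support superposition used in this step can be illustrated numerically: two bumps evolved separately and jointly coincide as long as their supports do not meet (finite propagation). A minimal sketch, assuming a simple explicit finite-difference scheme for u_t = (u^m)_{xx} in one dimension (names and parameters are our choices):

```python
import numpy as np

# Explicit scheme for u_t = (u^m)_xx on (0,4); the end values stay at zero.
def pme_solve(u0, m, dx, dt, steps):
    u = u0.copy()
    for _ in range(steps):
        w = u**m
        u[1:-1] += dt / dx**2 * (w[2:] - 2.0 * w[1:-1] + w[:-2])
    return u

m = 2
x = np.linspace(0.0, 4.0, 401)
dx = x[1] - x[0]
dt, steps = 2e-5, 5000                     # final time 0.1, stable for this data
bump = lambda c: np.clip(1.0 - ((x - c) / 0.3)**2, 0.0, None)
uA, uB = bump(1.0), bump(3.0)              # supports [0.7,1.3] and [2.7,3.3]

sA = pme_solve(uA, m, dx, dt, steps)
sB = pme_solve(uB, m, dx, dt, steps)
sAB = pme_solve(uA + uB, m, dx, dt, steps)

# while the supports remain disjoint, the joint solution is the sum
assert np.abs(sAB - (sA + sB)).max() < 1e-12
```

Pushing the final time much further, the supports eventually meet and the identity fails; this is why the construction keeps delaying the times t_k.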
We now remark that t_k can be delayed as much as we like (at the price of taking a_k far away). If we choose t_k large enough, the main asymptotic theorem implies the behaviour
$Display mathematics$
We want the error to be less than k_1 (1 − M_k)/2 = 2^{−(k+1)} k_1. We also require that ρ(t_k) ≤ 2^{−(2k+1)} k_1. Using again the triangle inequality:

(p.583) |u_k − 𝒰(M)| ≥ |𝒰(M) − 𝒰(M_{k−1})| − |u_k − 𝒰(M_{k−1})|, with M = 1, we get

$Display mathematics$

(v) In the final step we take the limit

$$u(x,t) = \lim_{k\to\infty} u_k(x,t).$$
By what was said before we may conclude that for t ≤ t_k we have u(0, t) = u_k(0, t), so that
$Display mathematics$
This concludes the proof of the L^∞-estimate.

(vi) The construction can easily be modified so that the data u_0 are radially symmetric, by defining χ_k to be the characteristic function of the annulus A_k = {x : a_k ≤ |x| ≤ a_k + r_k} and imposing that c_k times the volume of A_k equals 2^{−k}. The construction is repeated with the same attention given to the a_k, i.e., to the far location of the A_k.

(vii) For the L^1 part we just observe that, taking t_k large enough, at time t = t_k and in a very large ball B_k (as large as we please by the iteration construction) we have the equality u = u_k and the approximation

$Display mathematics$
since the mass of u contained outside this ball is known (2^{−k}), while that of 𝒰 is zero there. The result follows since the t_k have been chosen as before, so that ρ(t_k) 2^k → 0. ■

# A.11 Non-contractivity of the PME flow in L^p spaces

We complete here the analysis, started in Section 4.5.2, of the lack of contractivity of the PME flow in different L^p spaces. This is the result that comes out of the blow-up example.

Theorem A.5 The PME flow posed in the whole space is not contractive in the spaces L^p(R^d) if m ≥ 2 and p_c < p ≤ ∞, with

(A.32)
$$p_c = 2 + \frac{d(m-1)}{2}.$$
Neither is the flow in a bounded domain with homogeneous Dirichlet or Neumann boundary conditions.

Proof We divide the proof into several steps for clarity. We start with the Cauchy problem.

### (p.584) 1. Case L^∞

1. (i) We have seen in formula (4.48) an example of two non-negative solutions whose pressures are ordered and differ by a constant C(t) at every fixed time t, and this constant grows with time. When m = 2, pressure and density are proportional, so this example shows a particular case of increase of the norm

$$\|U_1(\cdot,t) - U_2(\cdot,t)\|_{L^\infty(\mathbf{R}^d)}.$$

2. (ii) It can be objected that the example is constructed in the class of growing solutions and not in a more natural class, like bounded solutions. Such an objection is easily overcome by continuity. We construct two increasing families of solutions u_{in}(x, t), i = 1, 2, with non-negative data u_{in}(x, 0) ∈ L^1(R^d) ∩ L^∞(R^d) such that u_{in}(x, 0) ↑ U_i(x, 0) a.e. The functions u_{in}(x, 0) can even be smooth and compactly supported. If we perform the construction in a suitable way we have

$Display mathematics$
On the other hand, we know by the local regularity theory that the families of solutions u_{1n}, u_{2n} are locally uniformly Hölder continuous as long as they are locally bounded, and this happens for 0 < t < T − ɛ. We conclude that u_{in}(x, t) → U_i(x, t) locally uniformly, so that given ɛ > 0 we have
$Display mathematics$
for all large n and 0 < t < T − ɛ. Since C(t) > C(0) the proof is complete in this case.

3. (iii) The situation for m > 2 is even better since $U_i = a V_i^{\gamma}$ with γ = 1/(m − 1) < 1. We now observe that given C > 0, the function

$$F(v) = (v + C)^{\gamma} - v^{\gamma}$$
is decreasing in v for 0 ≤ v < ∞. This implies that, since V_2 = V_1 + C(t), for every fixed time the maximum of the difference
$$U_2(\cdot,t) - U_1(\cdot,t) = a\left[(V_1 + C(t))^{\gamma} - V_1^{\gamma}\right]$$
is attained at the minimum value of V_1, i.e., at x = 0. But then we have
$$U_2(0,t) - U_1(0,t) = a\,C(t)^{\gamma},$$
and this goes to infinity as t → T.
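The monotonicity invoked here, for the function F(v) = (v + C)^γ − v^γ with γ = 1/(m − 1) < 1 and C > 0, is elementary:
$$F'(v)=\gamma\Bigl[(v+C)^{\gamma-1}-v^{\gamma-1}\Bigr]<0 \qquad (v>0),$$
since γ − 1 < 0 makes s ↦ s^{γ−1} strictly decreasing.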

The adaptation to the class of bounded and integrable solutions is done as before.

### (p.585) 2. Case L^p, p large

1. (i) We modify the argument of Case 1, with the problem posed in the whole space, by defining U_2 as a modification of U_1 in the two available parameters, T and C. In this case we take V_1 as the blow-up solution with C = 0 and T = 1,

$$V_1(x,t) = \frac{K|x|^2}{1-t},$$
where K is an unimportant universal constant, and
$$V_2(x,t) = \frac{K|x|^2}{T-t} + C\,(T-t)^{-d(m-1)\beta},$$
where C > 0 and T = 1 + ɛ with ɛ small. Let us define
$$N_p(t) = \bigl\| \bigl(U_2 - U_1\bigr)_+(\cdot,t) \bigr\|_{L^p(\mathbf{R}^d)}$$
for all 0 < t < 1. We are going to prove that this quantity increases with time for all large p.

2. (ii) In order to estimate it at t = 0 we calculate the point where both initial functions are equal as

$$|x_0|^2 \approx \frac{C}{K\varepsilon},$$
where we have used the fact that T = 1 + ɛ ≈ 1. In this interval the maximum value of the difference U_2(x, 0) − U_1(x, 0) is of order C^γ, γ = 1/(m − 1), and we have
$Display mathematics$

3. (iii) We now estimate N_p at a time t_1 = 1 − τ. We have the values of the pressure at the origin

$$V_1(0,t_1) = 0, \qquad V_2(0,t_1) = C\,(\varepsilon+\tau)^{-d(m-1)\beta}.$$
Moreover, V_1(x, t_1) ≤ V_2(0, t_1)/2 in a ball of radius
$Display mathematics$
so that
$Display mathematics$
(p.586) We therefore need the inequality
$Display mathematics$
This holds for p − 1 > 1/(2β), i.e., p > 2 + d(m − 1)/2, with τ ∼ ɛ.
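With the Barenblatt exponent β = 1/(d(m − 1) + 2), the threshold can be made explicit:
$$p-1>\frac{1}{2\beta}=\frac{d(m-1)+2}{2}\ \Longleftrightarrow\ p>2+\frac{d(m-1)}{2},$$
which is the value of p_c appearing in Theorem A.5.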

In order to revert to the comparison of L^p norms, we replace U_2 by the solution U_3 with initial data

$$U_3(x,0) = \max\{U_1(x,0),\,U_2(x,0)\}.$$
Then, we have
$Display mathematics$
On the other hand, the observation that U_3 ≥ U_2 and U_3 ≥ U_1 leads to the inequality
$Display mathematics$
The proof of the increase of the L^p norm of the difference is thus complete, modulo approximation with bounded solutions if we want to prove the result in that class.

### 3. The Dirichlet data

The approximation process that we have mentioned before can be done with solutions of Dirichlet or Neumann problems in expanding balls. We conclude that for some of these balls there is an example of non-contraction for the same m and p as in the Cauchy problem. Since the PME is invariant under scaling, the result is true for all balls.

For the case of a general domain, replace the balls of radius R → ∞ by scaled copies of the domain and argue in the same way as before.

### Open problems

1. (1) What is the best bound for p_c in the above result? Is p_c = 2? Is p_c = 1?

2. (2) Extend the result to the range 1 < m < 2.

## A.11.1 Other contractivity properties

1. (i) Contractivity in H −1(Ω) is discussed in Sections 6.7 and 10.1.4. It applies to the GPME.

2. (ii) The PME semigroup in R is contractive with respect to all Wasserstein distances defined in Section 10.4. The focusing solutions are used in [514] to show that the PME semigroup in R^d is not contractive in these Wasserstein (p.587) metrics d_p if p is large enough, including p = ∞. However, the semigroup is contractive in the case p = 2 [152]. This has been used very elegantly by Toscani in proving sharp asymptotics [494].

See also [158] where the asymptotic complexity of the patterns of the GPME is studied.