Analysis III ETH
Mikaela Iacobelli
Contents

1 Preliminaries
1.1 Partial differential equations
1.2 What is a well-posed problem?
1.3 Initial and boundary conditions
1.4 Classification properties of PDEs
1.5 Modelling a stock market
2 Method of characteristics
2.1 First order equations
2.2 Quasilinear equations
2.3 Method for first order linear PDEs
2.4 Existence and uniqueness questions
2.5 Examples of existence and uniqueness
2.6 The existence and uniqueness theorem
5 Separation of variables
5.1 Heat equation with Dirichlet boundary conditions
5.2 Wave equation with Neumann boundary conditions
5.3 Inhomogeneous PDEs
5.4 Uniqueness with the energy method
6 Elliptic equations
6.1 Classification of linear second order PDEs
6.2 Laplace's and Poisson's equations
6.3 Basic properties of elliptic problems
6.4 Harmonic functions
7 Maximum principles
7.1 Weak maximum principle
7.2 Mean value principle
7.3 Strong maximum principle
7.4 Maximum principle for Poisson's equation
7.5 Boundary conditions
7.6 Maximum principle for parabolic equations

Notation
CHAPTER 1
PRELIMINARIES
1.2. What is a well-posed problem?
It is not obvious that a given model is consistent, in the sense that it leads to a
solvable PDE. Furthermore we wish the solution to be unique and to be stable
under small perturbation of the data.
By “problem” we mean a PDE supplemented with initial or boundary condi-
tions. A problem is well-posed if it satisfies the following criteria:
1. There exists a solution of the problem (existence).
2. The solution is unique (uniqueness).
3. A small change in the equation and/or in the side conditions gives rise to a small change in the solution (stability).
If one or more of these conditions do not hold, then the problem is said to be
ill-posed.
about u in time it makes sense to couple this equation with an information about
the concentration of the pollutant at time zero. Hence we consider the initial value
problem
  ut + c ux = 0 ,  (x, t) ∈ R × R+
  u(x, 0) = g(x) ,  x ∈ R ,
where the function g > 0 represents the concentration of the pollutant at time
zero.
The second equation expresses two boundary conditions (the string is fixed at
position 0 and L) and the last two equations express the initial conditions: they
tell us what happens at time zero in terms of the deflection f (x) and of the speed
g(x).
Remark 1.3.3. The domain of the PDE is defined only in the interior of the interval
because the function u may not be differentiable on the boundary.
Definition 1.3.4. We say that the solution of a PDE is strong if all the derivatives
of the solution that appear in the PDE exist and are continuous. Otherwise the
solution is said to be weak.
Weak solutions have points in their domain where the derivatives do not exist
(or are not continuous), so a weak solution cannot directly be plugged into the
equation.
Remark 1.3.5. There is no universal meaning for weak solution, the definition
depends on the type of PDE and we will see this later when studying conservation
laws.
1.4. Classification properties of PDEs
where f(x) and the coefficients a^(m)_{i1,...,im} are functions of the variable x = (x1, . . . , xn). Namely, a PDE is linear if every summand consists of a function multiplied by u or by one of its derivatives. Equivalently, if u and v solve (1.4.1), then u − v solves (1.4.1) with f = 0.
Remark 1.4.5. Observe that the general form of a linear PDE of the second order for an unknown function u in two independent variables x, y is
  a(x, y)uxx + b(x, y)uyy + 2c(x, y)uxy + d(x, y)ux + e(x, y)uy + f(x, y)u = g(x, y) .
• ut = ux + u2 is not linear;
Remark 1.4.9. We can denote with L[u] any linear operator acting on u, as in
(1.4.1). For example, given the linear operator L[u] = ut − ux , the transport
equation can be written as L[u] = 0.
Example 1.4.10. • ut + ux = 0 is linear, homogeneous;
Note that the functions a, b, c may depend also on u but not on ux , uy . The general
form of a quasilinear PDE of the 2nd order is instead
1.5. Modelling a stock market
where r > 0 is the growth rate. We now analyze the evolution of X(t) using the
Merton model. We assume that, given a time step τ > 0, the value at time t + τ
is given by X(t + τ ) = X(t) ± δ, for some δ > 0. We choose to add or subtract
δ with probability 1/2 each. Let us define p(x, t) := Prob(X(t) = x). Then the
equation for X(t + τ ) gives
p(x, t + τ) = (1/2) p(x + δ, t) + (1/2) p(x − δ, t) .
Rearranging the terms (subtracting p(x, t) from both sides and dividing by τ), we thus obtain that
  [p(x, t + τ) − p(x, t)]/τ = (δ²/(2τ)) · [p(x + δ, t) − 2p(x, t) + p(x − δ, t)]/δ² ,
and letting τ, δ → 0 with δ²/(2τ) → k > 0, the left-hand side converges to pt and the right-hand side to k pxx.
As a result, the probability density p must fulfill the heat equation, whose solution
is well-known and it is given by
p(x, t) = (1/√(4πkt)) e^(−x²/(4kt)) .
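As a quick sanity check (a short verification, not in the original text), one can differentiate this kernel directly:
  pt = p · ( −1/(2t) + x²/(4kt²) ) ,   px = p · ( −x/(2kt) ) ,   pxx = p · ( −1/(2kt) + x²/(4k²t²) ) ,
so that pt = k pxx, i.e. p indeed solves the heat equation for t > 0.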
Summary
CHAPTER 2
METHOD OF CHARACTERISTICS
In this chapter we present an approach to solve first order quasilinear PDEs known
as method of characteristics. This method relies on a powerful geometrical inter-
pretation of first order PDEs, reducing them to a system of ODEs.
is the solution surface and it is indeed a surface in R3 with normal vector at a point
(x, y, u(x, y)) given by (ux , uy , −1) (see Figure 2.1). Hence, observe that equation
(2.1.2) relates the graph surface to its normal, or equivalently to its tangent plane,
which is the plane orthogonal to the normal. In fact, the tangent plane at each
point of the surface graph(u) is the plane spanned by the vectors (1, 0, ux ) and
(0, 1, uy ). Indeed note that (1, 0, ux ) and (0, 1, uy ) are linearly independent and
orthogonal to the normal (ux , uy , −1). See Figure 2.2 for a representation in one
dimension less.
The point of this discussion is that a first order PDE can be seen geometrically
as a relation between the solution surface and its tangent plane.
[Figures 2.1 and 2.2: the solution surface graph(u) over the (x, y)-plane and, one dimension lower, the graph of a function f with tangent vector (1, f′) and normal vector (f′, −1).]
2.2. Quasilinear equations
or equivalently as
∂/∂x [ u(x, y) e^(−c0 x) ] = c1(x, y) e^(−c0 x) .
Integrating both sides over an interval of the form [x0(y), x], we have
  u(x, y) e^(−c0 x) − u(x0(y), y) e^(−c0 x0(y)) = ∫_{x0(y)}^{x} c1(ξ, y) e^(−c0 ξ) dξ .
This means that once we prescribe the value of u on the curve {(x0 (y), y) : y ∈ R},
we can reconstruct the value of u everywhere (see Figure 2.3).
Depending on the initial conditions, we may have one solution, no solutions or
infinitely many solutions. Let us look into some specific examples.
Figure 2.3: Once we prescribe the value of u on the curve (x0(y), y), for example at the intersection with the dotted line, we can reconstruct the value of u everywhere on the dotted line.
Figure 2.4: Figures illustrating the case of one solution in three dimensions. Here the initial datum is just one point in the plane {y = ȳ}, which implies both existence and uniqueness of the solution.
14
2.2. Quasilinear equations
Figure 2.5: The exponential curves of the right figure are what the PDE wants the solution to be; the initial datum here is u(x, 0) = ax. For this reason, in this case we have no solution at all.
Thus we only need to impose that u(x0(0), 0) e^(−c0 x0(0)) = 1. In particular, we can take any curve y ↦ (x0(y), y) and choose the value of u on this curve as we want, provided that u(x0(0), 0) = e^(c0 x0(0)). For example, we can consider the curve y ↦ (0, y) and set x0(0) = 0, u(0, y) = 1 + Ay². Then
  u(x, y) = e^(c0 x) (1 + Ay²)
Figure 2.6: Once we prescribe the value of u at one point on the curve {y = 0}, then we obtain the value of u at all other points on the curve {y = 0}. If the value prescribed at {y = 0} is compatible, then we have infinitely many solutions, otherwise no solutions.
ux (x, y) = c0 u(x, y)
So for all y ∈ R we take x0 (y) ∈ R, defining a curve y 7→ (x0 (y), y). Then the
value of u at each point (x0 (y), y) defines u along all the horizontal line passing
through that point.
• In the first example the initial condition was y = u(0, y) and we have a
unique solution.
• In the third example the initial condition u(x, 0) = ec0 x is compatible with
the PDE but it leaves “too much choice”. In fact we have infinitely many
solutions of the PDE.
The moral of the story is that boundary conditions and initial conditions are very
important. We need to be careful to impose appropriate conditions in order to
obtain a well-posed PDE.
2.3. Method for first order linear PDEs
a(x, y)ux (x, y) + b(x, y)uy (x, y) = c0 (x, y)u(x, y) + c1 (x, y) . (2.3.1)
The idea is to assign the value of the solution u along a parametric curve and then
“propagate” this value along “characteristic curves”.
So, given a curve s 7→ (x0 (s), y0 (s)), we prescribe the value of u along such
curve as
u(x0 (s), y0 (s)) = ũ0 (s)
for all s ∈ R, for some function ũ0 of one variable. Hence, if we obtain a solution
u, the parametric curve in R3
Γ = Γ(s) = (x0 (s), y0 (s), ũ0 (s))
is contained in graph(u). We say that Γ is the initial curve.
Now observe that equation (2.3.1) can be rewritten as
(a, b, c0 u + c1 ) · (ux , uy , −1) = 0 .
Namely we ask the vector v := (a, b, c0 u + c1) to be orthogonal to the normal vector (ux, uy, −1) and thus tangent to the surface graph(u). As a result, if we integrate the vector field v, i.e., we consider the ODE ż(t) = v(z(t)), then the curve z is contained in the surface graph(u) (see Figure 2.7).
In other words, to find a solution to (2.3.1), we look for a surface S ⊆ R3
(which will then be graph(u)) such that at each point (x, y, u) ∈ S we have that
(a(x, y), b(x, y), c(x, y, u)) ∈ T(x,y,u) S ,
where c(x, y, u) := c0 (x, y)u(x, y) + c1 (x, y).
In order to do this, for all s, we consider the curve (x(t, s), y(t, s), ũ(t, s)) given
by solving the following system of ODEs
dx(t, s)/dt = a(x(t, s), y(t, s))
dy(t, s)/dt = b(x(t, s), y(t, s))                                   (2.3.2)
dũ(t, s)/dt = c0(x(t, s), y(t, s)) ũ(t, s) + c1(x(t, s), y(t, s)) ,
with initial conditions
  x(0, s) = x0(s) ,   y(0, s) = y0(s) ,   ũ(0, s) = ũ0(s) ,
Figure 2.7: The initial curve Γ and the construction of the solution surface.
i.e., we require the initial point to be Γ(s). The solutions to this system of ODEs
(as one varies s) are the characteristic equations associated to the PDE (2.3.1) in
consideration.
Note that, by definition, the characteristic curves t 7→ (x(t, s), y(t, s), ũ(t, s))
have tangent vector
(a(x(t, s), y(t, s)), b(x(t, s), y(t, s)), c(x(t, s), y(t, s), ũ(t, s))) .
dx(t)/dt = a(x(t), y(t))
dy(t)/dt = b(x(t), y(t))
and consider also the curve t 7→ u(x(t), y(t)). Then, applying the chain rule, we
get
d/dt [u(x(t), y(t))] = ẋ ux + ẏ uy = a ux + b uy = c(x(t), y(t), u(x(t), y(t))) .
In other words, u along the curve (x(t), y(t)) coincides with the solution of the
ODE
dũ(t)/dt = c(x(t), y(t), ũ(t)) ,
provided of course that they start from the same initial condition.
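As a computational aside (not part of the original notes), the reduction to ODEs can also be exploited numerically: one integrates the system (2.3.2) for many values of s with any standard ODE scheme and reads off u along the resulting curves. The sketch below uses a plain forward Euler step and the data of the example that follows (a = b = 1, c0 = 0, c1 = 1, ũ0(s) = 2s³); the step size, the range of s and the comparison with the exact solution are illustrative choices, not part of the notes.

```python
import numpy as np

def integrate_characteristics(a, b, c0, c1, x0, y0, u0, s_vals, t_max=1.0, dt=1e-3):
    """Integrate the characteristic system (2.3.2) with forward Euler.

    Returns arrays x, y, u of shape (len(s_vals), n_steps + 1): one characteristic
    curve (x(t, s), y(t, s), u~(t, s)) per value of the parameter s.
    """
    n_steps = int(t_max / dt)
    x = np.empty((len(s_vals), n_steps + 1))
    y = np.empty_like(x)
    u = np.empty_like(x)
    x[:, 0] = [x0(s) for s in s_vals]
    y[:, 0] = [y0(s) for s in s_vals]
    u[:, 0] = [u0(s) for s in s_vals]
    for k in range(n_steps):
        # dx/dt = a(x, y), dy/dt = b(x, y), du~/dt = c0(x, y) u~ + c1(x, y)
        x[:, k + 1] = x[:, k] + dt * a(x[:, k], y[:, k])
        y[:, k + 1] = y[:, k] + dt * b(x[:, k], y[:, k])
        u[:, k + 1] = u[:, k] + dt * (c0(x[:, k], y[:, k]) * u[:, k] + c1(x[:, k], y[:, k]))
    return x, y, u

# Data of Example 2.3.2 below: u_x + u_y = 1, u(x, 0) = 2 x^3.
a = lambda x, y: np.ones_like(x)
b = lambda x, y: np.ones_like(x)
c0 = lambda x, y: np.zeros_like(x)
c1 = lambda x, y: np.ones_like(x)

s_vals = np.linspace(-2.0, 2.0, 9)
x, y, u = integrate_characteristics(a, b, c0, c1,
                                    x0=lambda s: s, y0=lambda s: 0.0,
                                    u0=lambda s: 2 * s**3, s_vals=s_vals)

# Along each characteristic, u~(t, s) should agree with the exact solution
# u(x, y) = 2 (x - y)^3 + y evaluated at (x(t, s), y(t, s)).
exact = 2 * (x - y) ** 3 + y
print(np.max(np.abs(u - exact)))  # agrees up to discretization and rounding error
```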
Example 2.3.2. Consider the following Cauchy problem
ux + uy = 1 ,   u(x, 0) = 2x³ .
We parameterize the initial condition with the curve Γ(s) = (x0 (s), y0 (s), ũ0 (s)) =
(s, 0, 2s3 ).
In the notation of (2.3.1), here we have a = 1, b = 1, c0 = 0 and c1 = 1. Hence,
following the procedure described above, we obtain the ODE system
dx(t, s)/dt = a(x(t, s), y(t, s)) = 1
dy(t, s)/dt = b(x(t, s), y(t, s)) = 1
dũ(t, s)/dt = c0(x(t, s), y(t, s)) ũ(t, s) + c1(x(t, s), y(t, s)) = 1 ,
together with initial conditions
  x(0, s) = x0(s) = s ,   y(0, s) = y0(s) = 0 ,   ũ(0, s) = ũ0(s) = 2s³ .
Since we are looking for a solution in (x, y) coordinates, we have to find the inverse map (t, s) ↦ (x, y) to find u(x, y). In this case it is very easy, indeed
  x(t, s) = s + t ,   y(t, s) = t   ⟹   t = y ,   s = x − t = x − y .
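To complete the computation (this last step is a natural consequence of the system above): the third characteristic equation gives ũ(t, s) = ũ0(s) + t = 2s³ + t, so substituting t = y and s = x − y we obtain
  u(x, y) = 2(x − y)³ + y .
One checks directly that ux + uy = 6(x − y)² − 6(x − y)² + 1 = 1 and u(x, 0) = 2x³.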
Summary
• First order PDEs relate the solution surface to its tangent plane.
We now discuss some conditions that guarantee local existence and uniqueness.
The question is: under which conditions does there exist a unique integral surface for (2.2.1) that contains the initial curve Γ?
To solve our Cauchy problem, we need to solve the characteristic equations
using the points we selected on Γ as an initial condition for the system of ODEs
(2.3.2). Assuming that the coefficients of the ODEs are smooth (a and b have to
be C 1 ), we can apply the Cauchy–Lipschitz theorem for ODEs that guarantees
local existence in time and uniqueness of the solution. Hence, for all s ∈ R
there exists some time interval Is = (s − ε, s + ε) ⊆ R such that the solution
t 7→ (x(t, s), y(t, s), ũ(t, s)) exists uniquely for all t ∈ Is .
Once the ODE system is solved, we have an expression for ũ in the variables (t, s). The fundamental relation between ũ(t, s) and u(x, y) (the desired solution) is given by ũ(t, s) = u(x(t, s), y(t, s)). Some difficulties may arise in the inversion of the transformation from (t, s) to (x, y), because the mapping x = x(t, s), y = y(t, s) may not be invertible. Thanks to the implicit function theorem we know that this
(ii) If the characteristics t 7→ (x(t, s), y(t, s)) intersect the Cauchy curve Γ more
than once, then global existence may fail. This is because the characteristic
equation is well-posed for a single initial condition. Think about the fact
that a characteristic curve “carries” with it a charge of information from its
intersection point with Γ. If a characteristic curve intersects Γ more than
once these two “information charges” might be in conflict (see Figure 2.9).
(iii) If the vector field (a, b) vanishes at some point, then the corresponding PDE
may only have a solution outside of a neighborhood of this point.
(iv) If the characteristics intersect within the domain of interest, then existence
can break down at the intersection points.
(i) either the characteristic curves coincide before the projection and thus there
are infinitely many solutions;
(ii) or these curves do not coincide, which means that graph(u) should take
different values on π(Γ), which is impossible.
Figure 2.9: Projection of a characteristic curve crossing π(Γ) twice. The value of u at (x0(s0), y0(s0)) may not be uniquely defined since it should both be equal to u0(s0) and to u(x(t0, s0), y(t0, s0)).
2.5. Examples of existence and uniqueness
Example 2.5.2. Let us now consider the same PDE as in the previous example,
but with a different initial condition, namely
ux + uy = 1 ,   u(x, x) = h(x) ,
for a function h in one variable. As a parametrization of the initial curve we choose Γ(s) := (s, s, h(s)). Hence we have the following characteristic equations
  dx(t, s)/dt = 1 ,  x(0, s) = s   ⟹   x(t, s) = t + s
  dy(t, s)/dt = 1 ,  y(0, s) = s   ⟹   y(t, s) = t + s
  dũ(t, s)/dt = 1 ,  ũ(0, s) = h(s)   ⟹   ũ(t, s) = h(s) + t .
Figure 2.10: A plot of the characteristics in Example 2.5.1 and of the projection of the initial curve π(Γ); along each characteristic the solution equals g(s) + t.
In this case the relation between (t, s) and (x(t, s), y(t, s)) cannot be inverted. This
can be seen evaluating the determinant
det ( ∂x/∂t   ∂y/∂t
      ∂x/∂s   ∂y/∂s ) = det ( 1   1
                              1   1 ) = 0 .
Note that the projection of the initial curve is the diagonal {x = y}, but this is
also the projection of a characteristic curve. In the case where h(x) = x + c for a constant c ∈ R, we obtain ũ(t, s) = s + t + c. Then it is not necessary to invert the mapping (x(t, s), y(t, s)) because
  u(x, y) = (x + y)/2 + c + f((x − y)/2)
is a solution for every differentiable function f that vanishes at the origin. However, for any other choice of h the problem has no solutions.
If we assume s > 0, we note that s, t act as polar coordinates, so we can invert the relation above to obtain
  s = √(x² + y²) ,   t = arctan(y/x) .
This gives us the solution
  u(x, y) = ψ(√(x² + y²)) e^(arctan(y/x)) .
2.6. The existence and uniqueness theorem
[Figure: the characteristics of the previous example in the (x, y)-plane; along each of them u = ψ(s)e^t.]
(ii) The projection of a characteristic may intersect π(Γ) more than once, in
which case the value of u may not be uniquely determined.
We have the following local existence and uniqueness result.
Theorem 2.6.1. Consider the Cauchy problem
  a(x, y, u) ux + b(x, y, u) uy = c(x, y, u) ,
  Γ(s) = (x0(s), y0(s), u0(s)) .
Assume that there exists s0 ∈ R such that the transversality condition holds at (0, s0), i.e.,
  det ( ∂x/∂t (0, s0)   ∂y/∂t (0, s0)
        ∂x/∂s (0, s0)   ∂y/∂s (0, s0) ) ≠ 0 .
Then there exists a unique solution u of the Cauchy problem defined in a neigh-
borhood of (x0 (s0 ), y0 (s0 )).
Sketch of the proof. We first solve the characteristic equations for s close to s0 . By
existence and uniqueness for ODEs, we know that there exists a unique solution
(x(t, s), y(t, s), ũ(t, s)) defined for (t, s) close to (0, s0 ). Thanks to the transver-
sality condition, we know that we can apply the implicit function theorem and
the map (t, s) ↦ (x(t, s), y(t, s)) is invertible close to (0, s0). This allows us to find a formula for u as u(x, y) = ũ(t(x, y), s(x, y)) in a neighborhood of (x(0, s0), y(0, s0)) = (x0(s0), y0(s0)).
Summary
• We need to express the solution for u in terms of (x, y), but the map (t, s) ↦ (x, y) may not be invertible.
CHAPTER 3
CONSERVATION LAWS AND SHOCK WAVES
In this chapter we study an important class of first order PDEs called conser-
vation laws, which are PDEs that prescribe conserved quantities such as mass,
electric charge, number of cars (in traffic dynamics), number of people (in crowd
dynamics), etc. We see some examples (as Burgers’ equation with various initial
data) and how we can apply the method of characteristics to solve conservation
laws. However, solutions of conservation laws may develop discontinuities even
for smooth initial data, for which reason we need to introduce the notion of weak
solution.
We recall that the local existence theorem for first order quasilinear PDEs
states that, under suitable conditions, one can find local solutions to first order
quasilinear PDEs using the method of characteristics. We see in some examples
that, even if a classical solution ceases to exist, the phenomenon (say for example
the traffic flow) that we are modelling certainly does not. Therefore we broaden
our definition of solution to allow us to make predictions about the phenomenon
under study after the time when a classical solution ceases to exist.
Example 3.1.2. The easiest example of conservation law is the transport equation
uy + cux = 0 , (3.1.3)
for a constant c ∈ R, i.e., c(u) = c in this case. Note that uy + cux = uy + (cu)x ,
thus the flux is f (u) = cu and f 0 (u) = c(u) = c.
If u ≥ 0, u can represent the concentration of a pollutant in a river at time
y and position x (see also Example 1.3.1). The constant c ∈ R represents the
velocity of the river: if c > 0 the flow is from the left to right, if c < 0 the flow
goes from right to left. Moreover, the total amount of pollutant in an interval [a, b]
at time y is
∫_a^b u(x, y) dx .
The initial value problem (or Cauchy problem) for equation (3.1.3) is
  uy + c ux = 0 ,  (x, y) ∈ R × (0, ∞) ,
  u(x, 0) = g(x) ,  x ∈ R ,
and the associated characteristic equations are
  dx(t, s)/dt = c ,  x(0, s) = s   ⟹   x(t, s) = ct + s
  dy(t, s)/dt = 1 ,  y(0, s) = 0   ⟹   y(t, s) = t
  dũ(t, s)/dt = 0 ,  ũ(0, s) = g(s)   ⟹   ũ(t, s) = g(s) .
Now we need to invert the function (t, s) ↦ (x(t, s), y(t, s)), which we can do and we obtain
  y(t, s) = t ,  x(t, s) = ct + s   ⟹   t(x, y) = y ,  s(x, y) = x − cy .
As a result, we get u(x, y) = ũ(t(x, y), s(x, y)) = g(x − cy).
Figure 3.1: A travelling wave. On the left the initial condition g(x) at y = 0. On the right, the solution g(x − cy) when c > 0 after time y > 0.
3.2. Example: Burgers' equation

Burgers' equation is the conservation law
  uy + u ux = 0 ,   u(x, 0) = h(x) ,   (3.2.1)
which models the flow of a mass with concentration u(x, y), where the speed of the flow depends on the concentration. The variable y has the physical interpretation of a time and h(x) is the initial condition, so the concentration of mass at time y = 0.
Remark 3.2.2. Burgers’ equation is in the form uy + c(u)ux = 0 with c(u) = u,
thus it is equivalent to the equation
uy + (u²/2)x = 0 .
In particular the flux is f (u) = u2 /2 and the wave speed is c(u) = f 0 (u) = u.
Since (3.2.1) is a first order equation, we can use the method of characteristics.
The parameterized initial condition is Γ(s) = (s, 0, h(s)) and the characteristic
equations are given by
dx(t, s)/dt = ũ(t, s) ,  x(0, s) = s   ⟹   x(t, s) = s + h(s)t
dy(t, s)/dt = 1 ,  y(0, s) = 0   ⟹   y(t, s) = t
dũ(t, s)/dt = 0 ,  ũ(0, s) = h(s)   ⟹   ũ(t, s) = h(s) .
Inverting the function (t, s) ↦ (x(t, s), y(t, s)) = (s + h(s)t, t) as in the example of the transport equation is not possible; we just obtain that y = t and x = s + h(s)y. Therefore the solution of the PDE is implicitly given by
  u(s + h(s)y, y) = h(s) .
Note that the initial value of u (namely h) determines the slope of the characteristic curves. Now, recalling that x = s + h(s)y and ũ(t, s) = h(s), we have that s = x − ũy. As a result, the solution can be written implicitly also as
  u(x, y) = h(x − u(x, y) y) .
Remark 3.2.3. This last implicit solution does not come unexpected (looking back
at the solution of the transport equation) and it is actually a very general formula.
Indeed, if you are solving a PDE in the form
uy + c(u) ux = 0   for (x, y) ∈ R × (0, ∞) ,
u(x, 0) = u0(x)   for x ∈ R ,
then the solution is given implicitly by u(x, y) = u0(x − c(u(x, y)) y).
Remark 3.2.4. Let us verify the transversality condition for (3.2.1) at a point (0, s).
We have that
det ( ∂x/∂t   ∂y/∂t
      ∂x/∂s   ∂y/∂s ) = det ( u   1
                              1   0 ) = −1 ≠ 0 .
Therefore all the points of the initial curve Γ are not degenerate and, if h is
continuously differentiable, Theorem 2.6.1 ensures that the conservation law has a
unique solution on some time interval [0, yc ) (the subscript c stands for “critical”),
where yc > 0 is sufficiently small.
Let us now determine the critical time yc when the “classical” (or strong)
solution breaks down.
Recall that x = s + h(s)y. Let us fix y = ȳ and look at the map
s 7→ s + h(s)ȳ = x(ȳ, s) .
Assume also that, for all s ∈ R, 1 + h0 (s)ȳ > 0. This implies that the map s 7→
x(ȳ, s) is strictly increasing, thus there exists its unique inverse map. Therefore,
for ȳ fixed, we can invert the relation s 7→ s + h(s)ȳ provided that 1 + h0 (s)ȳ > 0.
If we assume for instance that h0 is globally bounded, then 1 + h0 (s)ȳ > 0 for
ȳ ≥ 0 small enough. What is the first value ȳ > 0 for which we cannot invert the
relation s 7→ s + h(s)ȳ? This is given by the first ȳ for which there exists s ∈ R
such that 1 + h0 (s)y = 0. If we denote by yc such a ȳ, we can say that
yc = inf { −1/h′(s) : s ∈ R, h′(s) < 0 } .   (3.2.2)
At time yc , there is a problem with the solution u. To see it, we can differentiate
the relation u(s + h(s)y, y) = h(s) with respect to s to get
ux(s + h(s)y, y) [1 + h′(s)y] = h′(s) .
Thus
  ux(s + h(s)y, y) = h′(s) / (1 + h′(s)y) ,
which shows that the derivative of u explodes when we take s and yc such that
1+h0 (s)yc = 0. Hence yc is the critical time after which there is no smooth solution
to the problem.
Remark 3.2.5. Note that the formula (3.2.2) is specific to Burgers’ equation.
Remark 3.3.2. Solutions of conservation laws are constant along their characteris-
tics, which are straight lines. Indeed, for each s ∈ R, the characteristic through a
point (s, 0) is the line in the (x, y)-plane going through (s, 0) with slope 1/c(u0 (s))
and on this line u is equal to the constant u0 (s).
Remark 3.3.3. If (c(u0(s)))s < 0 for some s, that is, if c(u0(s)) is decreasing at some point, then there exists a time when the characteristics cross. Heuristically you can think about the latter condition as when a faster characteristic starts from a point behind a slower characteristic. If c(u0(s)) is never decreasing in s, there are no singularities; however, such data are exceptional.
[Figure: two characteristics starting from (s1, 0) and (s2, 0) on the initial curve Γ(s) ≡ {y = 0}, carrying the values u ≡ h(s1) and u ≡ h(s2); at the height ȳ where they cross, the map s ↦ s + ȳ h(s) is not invertible.]
3.5. Rankine-Hugoniot condition
• if u satisfies the classical formulation, then it satisfies also the integral for-
mulation;
Thus, in our notion of weak solution, we relax the requirement of a global clas-
sical solution and we allow solutions that are a combination of classical solutions
on each Di with possibly jumps between them. We now see a very important
condition that has to be verified in order to have a global weak solution.
In the limit as a → σ(y)⁻ and b → σ(y)⁺, the integral vanishes and we have
  f⁺ − f⁻ = σ′(y)(u⁺ − u⁻)   ⟹   σ′(y) = (f⁺ − f⁻) / (u⁺ − u⁻) ,   (RH)
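As a quick illustration (anticipating the example worked out later in this chapter): for Burgers' equation the flux is f(u) = u²/2, so a shock separating the states u⁻ = 1 (on the left) and u⁺ = 0 (on the right) must travel with speed
  σ′(y) = (f⁺ − f⁻)/(u⁺ − u⁻) = (0 − 1/2)/(0 − 1) = 1/2 ,
which is exactly the slope of the shock line x = (y + α)/2 appearing below.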
[Figure: the (x, y)-plane divided by the shock curve Γ = (σ(y), y) into the regions D⁻ (where u = u⁻) and D⁺ (where u = u⁺).]
Note that in this case c(u) = u and u0(x) = e^(−x²). Observe that c(u0(s)) = e^(−s²) is decreasing for s > 0. Hence this conservation law has a classical solution for y ∈ [0, yc) where
  yc = min_{s>0} ( −1/(c′(u0(s)) u0′(s)) ) = min_{s>0} e^(s²)/(2s) .
Let y∗(s) := e^(s²)/(2s); then lim_{s→0} y∗(s) = lim_{s→+∞} y∗(s) = +∞. Therefore y∗ is minimized when
  dy∗/ds = 0   ⟺   e^(s²) (1 − 1/(2s²)) = 0   ⟺   s² = 1/2 .
Therefore s = 1/√2 is the unique critical point of y∗ in (0, ∞). We conclude that yc = y∗(1/√2) = e^(1/2)/√2. Then, for y < yc the solution is
  u(x, y) = u0(s) = e^(−s²) ,
where s is the unique solution of x = s + c(u0(s)) y = s + e^(−s²) y .
Since the initial condition h(x) is not monotonously increasing, the solution de-
velops a singularity at time y = yc , where
yc = inf_{(c(u0(s)))s < 0} ( −1/(c′(u0(s)) u0′(s)) ) .
Note that c(u0(s)) = c(h(s)) = h(s); however, the initial datum h(x) is not differentiable. Nevertheless c(u0(s)) = 1 − s/α is decreasing for s ∈ (0, α) and we expect u to become discontinuous at time
  yc = inf_{s∈(0,α)} ( −1/(c(u0(s)))s ) = inf_{s∈(0,α)} {α} = α .
Therefore, for y < yc = α, the solution is u(x, y) = h(s), where (x, y) lies on
the characteristic through (s, 0). Since h is defined piecewise, we have to consider
three cases:
[Figure: the piecewise linear initial datum u0(x) = h(x), which drops from 1 to 0 on the interval [0, α].]
which is a weak solution (or shock wave). In conclusion, we constructed the following weak solution:
  u(x, y) = 1 ,                 if x ≤ y, y ∈ [0, α)
  u(x, y) = (α − x)/(α − y) ,   if y ≤ x ≤ α, y ∈ [0, α)
  u(x, y) = 0 ,                 if x ≥ α, y ∈ [0, α)
  u(x, y) = 1 ,                 if x < (y + α)/2, y ∈ [α, ∞)
  u(x, y) = 0 ,                 if x > (y + α)/2, y ∈ [α, ∞) .
[Figure: the characteristics in the (x, y)-plane: lines y = x − s for s ≤ 0 (where u ≡ 1), the fan y = α(x − s)/(α − s) for s ∈ (0, α) (where u = (α − x)/(α − y)), and vertical lines x = s for s ≥ α (where u ≡ 0); the characteristics meet at the critical time yc = α.]
Example 3.6.2. We refer to [Pin05, Example 2.15]. In this example we see how,
by allowing for weak solutions, we lose uniqueness. Consider the Burgers’ equation
uy + u ux = 0 ,   u(x, 0) = h(x) ,
Since (c(h(s)))s = h′(s) ≥ 0, there is no critical time yc > 0 where the characteristics intersect. On the contrary, the characteristics diverge. In this situation we talk
Let us now look at the case when α → 0. Then h(x) becomes the step function
  h(x) = 0 if x ≤ 0 ,   h(x) = 1 if x > 0 .
If a conservation law does not have a unique weak solution, then how can we
select the “right” one? The answer comes from the following entropy condition.
[Figure: the rarefaction-wave solution (3.6.1), with the constant states u ≡ 0 and u ≡ 1 separated by the fan where u = x/y.]
One can see that the solution (3.6.1) satisfies the entropy condition trivially because it has no shocks, while (3.6.2) does not satisfy the entropy condition, since c(u⁺) = 1, c(u⁻) = 0 and γ′ = 1/2.
[Figure: the non-entropic weak solution (3.6.2), with the states u ≡ 0 and u ≡ 1 separated by the shock line γ = {x = y/2}.]
CHAPTER 4
ONE DIMENSIONAL WAVE EQUATION
In this section we study the one dimensional wave equation (which is the archetype
of hyperbolic equation, see Section 6.1) on the real line. We use the reduction to
the canonical form to show that the general solution of the one dimensional wave
equation can be decomposed as superposition of a forward and a backward trav-
eling wave. We also introduce the d’Alembert’s formula that gives us an explicit
solution to the Cauchy problem.
Usually real life applications of the wave equation take place on a finite interval
of times. In that case, we would need to deal with boundary conditions but for
now we consider the simplified setting in absence of boundary conditions, in order
to make some general considerations.
ut = ∂/∂t [w(ξ(x, t), η(x, t))] = wξ(ξ(x, t), η(x, t)) ξt(x, t) + wη(ξ(x, t), η(x, t)) ηt(x, t) = wξ ξt + wη ηt
and
  ux = ∂/∂x [w(ξ(x, t), η(x, t))] = wξ(ξ(x, t), η(x, t)) ξx(x, t) + wη(ξ(x, t), η(x, t)) ηx(x, t) = wξ ξx + wη ηx .
  0 = utt − c² uxx = c² [wξξ − 2wξη + wηη − wξξ − 2wξη − wηη] = −4c² wξη .
Thus we have wξη = 0. Note that wξη = ∂wξ /∂η = 0. This implies that wξ is
independent of η, therefore we can write it as wξ (ξ, η) = f (ξ), for some function
f : R → R. Then we integrate and we get
w(ξ, η) = ∫_0^ξ f(α) dα + G(η)
with G(η) = w(0, η). If we define F(ξ) = ∫_0^ξ f(α) dα, we can write the general solution for the equation wξη = 0 as follows:
  w(ξ, η) = F(ξ) + G(η) ,  that is,  u(x, t) = F(x + ct) + G(x − ct) .   (4.1.2)
• F (x + ct) is a wave moving to the left with velocity c > 0, thus a backward
wave.
Remark 4.1.1. Equation (4.1.2) shows that any solution of the one dimensional
wave equation is the sum of two traveling waves.
Remark 4.1.2. Observe that the functions F (x + ct) and G(x − ct) are constant
along lines of the form x + ct = α ∈ R and x − ct = β ∈ R, respectively. Those
lines are called characteristics. Hence, for the wave equation the characteristics
are straight lines in the (x, t)-plane with slopes ±1/c. As for first order PDEs, the
“information” is propagated along these curves.
Figure 4.1: On the left, the characteristics where F and G are constant. On the right, the backward wave F(x + ct).
We saw that (4.1.2) is valid for F, G ∈ C 2 (R). Let us now extend the validity
of this equation. Consider F, G real piecewise continuous functions. Let us ap-
proximate F and G by two sequences of C 2 functions {Fn }n∈N , {Gn }n∈N , namely
we demand that
(i) Fn , Gn ∈ C 2 for all n ∈ N;
Remark 4.1.3. Assume that u is a smooth function except at (x0, t0). Then either F is not smooth at x0 + ct0 or G is not smooth at x0 − ct0. Note that there are two characteristics passing through (x0, t0), which are x − ct = x0 − ct0 and x + ct = x0 + ct0. Thus, for any time t1 ≠ t0, u is smooth except at one or two points x± that satisfy
  x± = x0 ± c(t1 − t0) .
The singularities of solutions of the wave equation travel only along characteristics, which is a typical feature of hyperbolic equations.
4.2. The Cauchy problem and d'Alembert's formula

Consider the Cauchy problem
  utt − c² uxx = 0 ,  (x, t) ∈ R × (0, ∞) ,
  u(x, 0) = f(x) ,  ut(x, 0) = g(x) ,  x ∈ R ,   (4.2.1)
where f and g represent respectively the amplitude and the velocity at time t = 0.
A solution to the above Cauchy problem can be thought as the amplitude of the
vibration of an infinite string. A classical solution for the Cauchy problem is a
function u that is twice continuously differentiable for all t ∈ R+ and solving
(4.2.1).
Since the general solution to (4.1.1) is given by (4.1.2), we need to find F and
G using the initial conditions. By u(x, 0) = f (x), we deduce
On the other hand, subtracting the second equation from the first equation, we
have
  2G(x) = f(x) − (1/c) ∫_0^x g(y) dy − [F(0) − G(0)] .
Therefore, the solution of (4.2.1) is given by d'Alembert's formula
  u(x, t) = (1/2) [f(x + ct) + f(x − ct)] + (1/(2c)) ∫_{x−ct}^{x+ct} g(y) dy .
Remark 4.2.1. The value of the solution at (x, t) is only influenced by the values
of f and g in [x − ct, x + ct].
Remark 4.2.2. For the wave equation in higher dimension there are formulas similar
to the d’Alembert’s one, but they are more complicated and they go beyond the
scope of these notes.
Example 4.2.3. Consider the Cauchy problem (4.2.1) with c = 1 and initial
conditions given by
f(x) = 0 if |x| > 1 ,  f(x) = 1 − x² if |x| ≤ 1 ;      g(x) = 0 if |x| > 1 ,  g(x) = 1 if −1 ≤ x ≤ 1 .
4.3. Domain of dependence and region of influence
[Figure: the regions of the (x, t)-plane determined by the characteristics through (−1, 0) and (1, 0) in Example 4.2.3, with u ≡ 1/2 in the uppermost region and u ≡ 0 outside the region of influence of [−1, 1].]
[Figure: the characteristic triangle ∆(x0,t0) with vertex (x0, t0) and base [x0 − ct0, x0 + ct0] on the x-axis: the domain of dependence.]
Now we can ask ourselves the dual question: which are the points in the half
plane t > 0 influenced by the initial data on a fixed interval [a, b]? The set of
points influenced by the values of f and g in [a, b] is the region of influence of the
interval [a, b]. From d’Alembert’s formula and the previous discussion, we discover
that the points in [a, b] influence the value of u at a given point (x0 , t0 ) if and only
if [x0 − ct0 , x0 + ct0 ] ∩ [a, b] 6= ∅. Hence, the initial conditions along [a, b] influence
those points (x, t) that satisfy
x − ct ≤ b and x + ct ≥ a ,
[Figure: the region of influence of the interval [a, b], bounded by the characteristics x + ct = a and x − ct = b.]
4.4. The Cauchy problem for the nonhomogeneous wave equation

Consider the nonhomogeneous Cauchy problem
  utt − c² uxx = F(x, t) ,  (x, t) ∈ R × (0, ∞) ,
  u(x, 0) = f(x) ,  ut(x, 0) = g(x) ,  x ∈ R .
This Cauchy problem models, for example, the vibration of an ideal string subject to an external force F(x, t). As in the homogeneous case, f and g are given functions that represent the shape and the vertical velocity of the string at time zero.
As for the homogeneous case, we wish to have an analogous derivation of
d’Alembert’s formula. To do this, one integrates over the characteristic triangle
∆(x0 ,t0 ) of a generic point (x0 , t0 ) and obtains
∬_{∆(x0,t0)} F(x, t) dx dt = ∬_{∆(x0,t0)} (utt − c² uxx) dx dt .
Remark 4.4.2. The value of u at (x0 , t0 ) is given by the value of the data f, g, F
on the whole characteristic triangle. Note that for F = 0 this formula coincides
with d’Alembert’s formula obtained above.
Now, recalling that cos(α + β) − cos(α − β) = −2 sin(α) sin(β) for all angles α and β, we get that
  u(x, t) = x + (1/16) [(x + 2t)² − (x − 2t)²][(x + 2t)² + (x − 2t)²] + (sin x / 2) ∫_0^t sin(2(t − τ)) dτ
          = x + x t (x² + 4t²) + (sin x / 4) [cos(2(t − τ))]_{τ=0}^{τ=t}
          = x + x³ t + 4x t³ + (1/4) sin x (1 − cos(2t)) .
4
Remark 4.4.4. Note that u is an odd function of x. Is this a coincidence?
Hence, w can be found using d’Alembert’s formula for the homogeneous problem
and the final solution is given by u = v + w.
Since in our case F = F (t) = 2 cos t − t sin t, we look for a function v = v(t)
depending only on t that solves vtt = 2 cos t − t sin t (note that vxx = 0 because
v does not depend on x). Let us choose as a particular solution v(t) = t sin t. Of
course this solution is not unique because we did not impose any initial condition
for v and vt at time 0. Now define w(x, t) = u(x, t) − v(t) with associated PDE
wtt − wxx = 0 ,  (x, t) ∈ R × (0, ∞)
w(x, 0) = u(x, 0) − v(0) = x e^x ,  x ∈ R
wt(x, 0) = ut(x, 0) − vt(0) = 0 ,  x ∈ R ,
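For completeness (this last step is simply d'Alembert's formula applied to w, with c = 1 and zero initial velocity):
  w(x, t) = (1/2) [ (x + t) e^(x+t) + (x − t) e^(x−t) ] ,
and therefore
  u(x, t) = t sin t + (1/2) [ (x + t) e^(x+t) + (x − t) e^(x−t) ] .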
where we used that f, g, F are even. Thus v satisfies the same wave equation
with the same boundary conditions as u and therefore v = u, by uniqueness
Theorem 4.4.6.
The odd case (i.e., f (x) = −f (−x), g(x) = −g(−x) and F (x, t) = −F (−x, t))
and the periodic case (i.e., f (x) = f (x + L), g(x) = g(x + L) and F (x, t) =
F (x + L, t) for some L > 0) can be solved analogously defining v(x, t) = −u(−x, t)
and v(x, t) = u(x + L, t), respectively.
Let us now see how we can apply the previous theorem to solve a particular
wave equation with an extra boundary condition. Consider the problem
utt − c² uxx = 0 ,  (x, t) ∈ (0, ∞) × (0, ∞)
u(x, 0) = f(x) ,  x > 0
ut(x, 0) = g(x) ,  x > 0
u(0, t) = 0 ,  t ≥ 0 .
In order to fulfill the boundary condition u(0, t) = 0, we extend f and g in an odd way as
  f̃(x) := f(x) if x ≥ 0 ,  f̃(x) := −f(−x) if x < 0 ;      g̃(x) := g(x) if x ≥ 0 ,  g̃(x) := −g(−x) if x < 0 .
The solution u is odd in x because f˜ and g̃ are odd, therefore u satisfies u(0, t) = 0.
Indeed u(x, t) = −u(−x, t) implies that u(0, t) = −u(0, t) and thus u(0, t) = 0.
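Written out (a direct consequence of d'Alembert's formula applied to f̃ and g̃, stated here for convenience):
  u(x, t) = (1/2) [f(x + ct) + f(x − ct)] + (1/(2c)) ∫_{x−ct}^{x+ct} g(y) dy      for x ≥ ct ,
  u(x, t) = (1/2) [f(x + ct) − f(ct − x)] + (1/(2c)) ∫_{ct−x}^{x+ct} g(y) dy      for 0 ≤ x < ct .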
CHAPTER 5
SEPARATION OF VARIABLES
5.1. Heat equation with Dirichlet boundary conditions

Consider the heat equation with homogeneous Dirichlet boundary conditions
  ut = k uxx ,  (x, t) ∈ (0, L) × (0, ∞) ,
  u(0, t) = u(L, t) = 0 ,  t > 0 ,
  u(x, 0) = f(x) ,  x ∈ [0, L] .
This problem describes the evolution of the temperature u(x, t) of a metal bar (of length L) over time, knowing that the initial temperature is equal to f. The boundary
conditions are telling us that the boundary of the metal bar is kept at zero, see
Figure 5.1. This Cauchy problem is also called an initial boundary problem (and
it is homogeneous).
Remark 5.1.1. In order to have compatibility between boundary and initial con-
ditions we assume that f (0) = f (L) = 0.
Figure 5.1: The boundary conditions for the heat equation modelled on a metal bar of length L.
Let us now solve this problem using the method of separation of variables. The
first step consists in seeking for a solution that has the form of a product solution,
or separate solution, i.e.,
u(x, t) = X(x)T (t) ,
where X : [0, L] → R, T : [0, ∞) → R. Note that at this step we are not asking
that u satisfies the initial condition, but only the boundary conditions.
Plugging this into the heat equation, we get
T′(t) X(x) − k X″(x) T(t) = 0   ⟺   T′(t)/(k T(t)) = X″(x)/X(x) .
Note that the term on the left-hand side only depends on t, while the term on the
right hand side only depends on x. Therefore, the only possibility is that these
two functions are equal to a constant −λ, namely
T′(t)/(k T(t)) = X″(x)/X(x) = −λ .
These ODEs are only coupled by the separation constant −λ. Moreover note that
u satisfies the boundary conditions u(0, t) = u(L, t) = 0 if and only if u(0, t) =
X(0)T (t) = 0 and u(L, t) = X(L)T (t) = 0 for all t > 0. These conditions are
fulfilled either if T (t) = 0 (which gives a trivial solution) or if X(0) = X(L) = 0,
which represents the interesting case.
Let us now first consider the ODE in X
X″(x) = −λ X(x) ,  x ∈ (0, L) ,   X(0) = X(L) = 0 .   (5.1.1)
The boundary value problem (5.1.1) has nontrivial solutions only for λ = λn = (nπ/L)², n ≥ 1, given by Xn(x) = sin(√λn x) = sin(nπx/L); the corresponding equation for T, T′(t) = −k λn T(t), gives Tn(t) = e^(−k λn t). By linearity, any (suitably convergent) superposition of the products sin(√λn x) e^(−k λn t) is still a solution to the heat equation that satisfies the boundary conditions.
At this point we can consider the initial condition. If f(x) admits the following Fourier expansion
  f(x) = Σ_{n=1}^∞ Cn sin(√λn x) ,
then a natural candidate for a solution is
  u(x, t) = Σ_{n=1}^∞ Cn sin(√λn x) e^(−k λn t) .
But how to obtain the coefficients Cn from f? Fix m ∈ N and multiply the expansion for f(x) by sin(mπx/L) = sin(√λm x), obtaining
  f(x) sin(√λm x) = Σ_{n=1}^∞ Cn sin(√λn x) sin(√λm x) .
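Integrating this identity over [0, L] and using the orthogonality relations ∫_0^L sin(√λn x) sin(√λm x) dx = (L/2) δnm (the standard step, spelled out here for completeness) gives
  Cm = (2/L) ∫_0^L f(x) sin(√λm x) dx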
for all m ∈ N. In particular the coefficients are uniquely determined by the initial
condition f .
Imposing u(x, 0) = f(x), we note (as before) that the coefficients Bn are the Fourier coefficients of f, which can be obtained as follows: first of all, by integration by parts we observe that
  ∫_a^b x sin(nx) dx = [−x cos(nx)/n]_a^b + ∫_a^b cos(nx)/n dx = [−x cos(nx)/n + sin(nx)/n²]_a^b ,
and
  ∫_a^b x² sin(nx) dx = [−x² cos(nx)/n]_a^b + ∫_a^b 2x cos(nx)/n dx
                      = [−x² cos(nx)/n + 2x sin(nx)/n²]_a^b − ∫_a^b 2 sin(nx)/n² dx
                      = [−x² cos(nx)/n + 2x sin(nx)/n² + 2 cos(nx)/n³]_a^b ,
Figure 5.2: The boundary conditions for the heat equation of Example 5.1.2.
5.2. Wave equation with Neumann boundary conditions
At this stage we do not take into account the initial conditions. Differentiating in
x and t we get utt = X(x)T 00 (t) and uxx = X 00 (x)T (t). Hence, plugging into the
equation, we obtain
X(x) T″(t) = c² X″(x) T(t)   ⟺   T″(t)/(c² T(t)) = X″(x)/X(x) = −λ ,
for some λ ∈ R. Therefore we have the following ODEs
  X″(x) = −λ X(x) ,  X′(0) = X′(L) = 0 ,
  T″(t) = −c² λ T(t) .
for some γn , δn ∈ R.
In conclusion, the general solution for the one dimensional wave equation with
Neumann boundary conditions can be written as
u(x, t) = Σ_{n=0}^∞ Xn(x) Tn(t)
        = (A0 + B0 t)/2 + Σ_{n=1}^∞ cos(nπx/L) [ An cos(nπct/L) + Bn sin(nπct/L) ] .   (5.2.1)
Since
  ∫_0^L cos(mπx/L) dx = L if m = 0 ,  and 0 if m ≥ 1 ,
and
  ∫_0^L cos(nπx/L) cos(mπx/L) dx = L/2 if n = m ≠ 0 ,  and 0 if n ≠ m ,
we obtain
  Am = (2/L) ∫_0^L f(x) cos(mπx/L) dx .
The same procedure can be implemented to find the coefficients Bm, since we have that
  g(x) = ut(x, 0) = B0/2 + Σ_{n=1}^∞ (nπc/L) Bn cos(nπx/L) ,
and we get
  B0 = (2/L) ∫_0^L g(x) dx ,   Bm = (2/(cmπ)) ∫_0^L g(x) cos(mπx/L) dx   for m ≥ 1 .
Thanks to the arguments above, we can write the solution u as in (5.2.1). To find
the coefficients of this expression, we impose the initial conditions
1 + cos(3πx) + 16 cos(20πx) = u(x, 0) = A0/2 + Σ_{n=1}^∞ An cos(nπx) .
Integrating the left hand and right hand side against cos(nπx) we find immediately
that A0 = 2, A3 = 1, A20 = 16 and Am = 0 for m 6= 0, 3, 20. On the other hand,
using that g(x) = 0, we obtain that Bm = 0 for all m ∈ N. Thus the solution to
the Cauchy problem is
Recall that, using the separation of variables for the homogeneous heat equation
with Dirichlet boundary condition, the admissible solutions for the ODE in X are
Xn = αn sin(nπx/L) ,  n ∈ N .
Now, instead of solving also the ODE for T(t), we write a general solution as
  u(x, t) = Σ_{n=1}^∞ Tn(t) sin(nπx/L) .
Thus, we are now left with the problem of finding Tn. Assume that, for every t ∈ R+, cn(t) is the n-th Fourier coefficient of the inhomogeneity h(·, t), namely
  cn(t) = (2/L) ∫_0^L h(x, t) sin(nπx/L) dx .
Then we can express h(x, t) as follows
  h(x, t) = Σ_{n=1}^∞ cn(t) sin(nπx/L) .
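If, for instance, the equation is the inhomogeneous heat equation ut − k uxx = h(x, t) with the Dirichlet conditions recalled above (a representative case, spelled out here for illustration), then plugging u(x, t) = Σ_{n=1}^∞ Tn(t) sin(nπx/L) into the equation and matching the coefficients of sin(nπx/L) yields, for every n, the ODE
  Tn′(t) + k (nπ/L)² Tn(t) = cn(t) ,
to be solved with Tn(0) equal to the n-th Fourier sine coefficient of the initial datum u(x, 0).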
Remark 5.3.1. As for the homogeneous case, if the boundary conditions are of
Neumann’s type, we obtain an expansion in term of cosines and there is a summand
for n = 0. Moreover, in the case of the inhomogeneous wave equation, we have a
second order ODE for Tn , complemented with two initial conditions (for Tn (0) and
Tn0 (0)) that are linked to the Fourier expansion of u(x, 0) = f (x) and ut (x, 0) =
g(x).
Example 5.3.2. Consider the inhomogeneous wave equation with Neumann bound-
ary conditions
utt − uxx = 4π² cos(2πx) t ,  (x, t) ∈ (0, 1) × R+
ux(0, t) = ux(1, t) = 0
u(x, 0) = 1 + cos(2πx)
ut(x, 0) = 3 cos(2πx) .
From the method of separation of variables for the homogeneous wave equation
with Neumann boundary conditions we have that the admissible solutions for the
ODE in X are
Xn (x) = cos(nπx) , n ≥ 0.
Hence we try to look for solutions of the form
u(x, t) = Σ_{n=0}^∞ Tn(t) cos(nπx) .
and, as before, we look for solutions of the form u(x, t) = Σ_{n=1}^∞ Tn(t) sin(nπx). Plugging this into the equation we have
  utt − uxx = Σ_{n=1}^∞ [Tn″(t) + n²π² Tn(t)] sin(nπx) = sin(mπx) sin(ωt) .
Hence Tn″(t) + n²π² Tn(t) = 0 for n ≠ m, while Tm″(t) + m²π² Tm(t) = sin(ωt), which admits the particular solution sin(ωt)/(m²π² − ω²), provided ω ≠ mπ. Using the initial conditions Tm(0) = 0 and Tm′(0) = 0, we obtain
  Tm(t) = (1/(ω² − m²π²)) [ (ω/(mπ)) sin(mπt) − sin(ωt) ] ,
and the solution u(x, t) is finally given by
  u(x, t) = (1/(ω² − m²π²)) [ (ω/(mπ)) sin(mπt) − sin(ωt) ] sin(mπx) .
Remark 5.3.4. We are assuming ω ≠ mπ to avoid degeneracy. To deal with the case ω = mπ, we can think of it as the limit case ω ≠ mπ, ω → mπ. Then
  lim_{ω→mπ} u(x, t) = (1/(2mπ)) [ sin(mπt)/(mπ) − t cos(mπt) ] sin(mπx) .
Therefore E(t) is constant, and, since E(0) = 0, it follows that E(t) = 0 for all t.
By looking at the definition of E(t), we realize that E(t) = 0 for all t implies that
wx (x, t) = wt (x, t) = 0 for all x, t, thus w is constant too. Using that w(x, 0) = 0
for all x, we then get w(x, t) = 0. Thus u1 ≡ u2 , which proves uniqueness.
CHAPTER 6
ELLIPTIC EQUATIONS
In this chapter we study Laplace’s and Poisson’s equations, which are the archetype
of elliptic equations. We examine the main properties of elliptic equations, the link
between solutions of Laplace’s equation and harmonic functions.
The term auxx + 2buxy + cuyy is the leading term, or principal part, because the
behaviour of the PDE is determined by a, b and c.
Remark 6.1.1. The coefficients a, b and c depends on x, y, i.e., a = a(x, y), b =
b(x, y) and c = c(x, y).
As we already said, being able to properly classify the PDE we wish to inves-
tigate allows us to choose the correct method (if it exists!) to tackle the PDE.
Knowing the “type” of the equation allows one to use the relevant methods to
solve it, which can be quite different depending on the type of the equation.
You probably encountered conic sections and quadratic forms, which are usu-
ally classified into parabolic, elliptic and hyperbolic, based on the discriminant
b2 − 4ac. The same can be done for a second order PDE at a given point.
Given a point (x0, y0), consider the value
  δ[L](x0, y0) := b(x0, y0)² − a(x0, y0) c(x0, y0) .
The equation (or the operator L) is said to be, at the point (x0, y0):
• hyperbolic if δ[L](x0, y0) > 0;
• parabolic if δ[L](x0, y0) = 0;
• elliptic if δ[L](x0, y0) < 0.
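For example (a direct computation with this definition): for the heat equation ut = k uxx, written in the variables (x, t), the principal part is k uxx, so a = k, b = c = 0 and δ[L] = 0, i.e., the equation is parabolic; for the wave equation utt − uxx = 0 one gets δ[L] = 0 − (−1)(1) = 1 > 0, hence hyperbolic; for Laplace's equation uxx + uyy = 0 one gets δ[L] = −1 < 0, hence elliptic.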
Remark 6.1.2. Since there is the convention that the xy term is 2b, then the
discriminant becomes (2b)2 − 4ac = 4(b2 − ac) = 4δ[L] (and the 4 can be dropped).
Remark 6.1.3. This classification describes a local property. However, we often
study PDEs with constant coefficients, where the classification is global.
• The wave equation utt − uxx = 0 is hyperbolic (we use variables (x, t) instead
of (x, y)).
Similarly to what happens with second order algebraic equations, we can use
a nondegenerate change of variables to reduce the equation to a simpler form.
Definition 6.1.5. A transformation (x, y) 7→ (ξ, η) = (ξ(x, y), η(x, y)) is a change
of coordinates near a point (x0 , y0 ) if
det ( ∂x ξ   ∂y ξ
      ∂x η   ∂y η ) ≠ 0   at (x, y) = (x0, y0) .
Any second order PDE can be transformed in the so-called canonical form
by using a change of coordinates u(x, y) 7→ w(ξ, η) = w(ξ(x, y), η(x, y)). The
canonical forms are:
• hyperbolic: wξη + d̃ wξ + ẽ wη + f̃ w = g̃;
• parabolic: wξξ + d̃ wξ + ẽ wη + f̃ w = g̃;
• elliptic: wξξ + wηη + d̃ wξ + ẽ wη + f̃ w = g̃.
Example 6.1.6. Consider the wave equation utt − c2 uxx = 0 for t ≥ 0. Let us
apply the transformation (
ξ = x + ct
η = x − ct .
This gives us u(x, t) = w(ξ, η) = w(x + ct, x − ct). Plugging this into the wave
equation gives 0 = utt − c2 uxx = −4c2 wξη . Dividing by −4c2 , we get wξη = 0,
which is in hyperbolic canonical form.
Remark 6.3.2. Laplace’s and Poisson’s equations are second order linear PDEs.
Laplace’s equation is also homogeneous.
Remark 6.3.3. The linearity of Laplace’s operator implies that a linear combination
of harmonic functions is a harmonic function.
Definition 6.3.4. Let D ⊂ R2 an open set and let ∂D be the boundary of D. Let
ν be the unit outward normal to ∂D. Then we can consider the following Dirichlet
problem for Poisson’s equation
(
∆u(x, y) = ρ(x, y) , (x, y) ∈ D
(6.3.1)
u(x, y) = g(x, y) , (x, y) ∈ ∂D .
On the other hand, the Neumann problem for Poisson’s equation reads as follows
(
∆u(x, y) = ρ(x, y) , (x, y) ∈ D
(6.3.2)
∂ν u(x, y) = g(x, y) , (x, y) ∈ ∂D .
Finally we can consider the problem of the third kind for Poisson’s equation, that
is (
∆u(x, y) = ρ(x, y) , (x, y) ∈ D
(6.3.3)
u(x, y) + α∂ν u(x, y) = g(x, y) , (x, y) ∈ ∂D ,
where α and g are given functions.
[Figure: the Dirichlet problem (u = g on ∂D) and the Neumann problem (∂ν u = g on ∂D, where ν is the outward normal) for ∆u = ρ in D.]
We can now ask if a solution to those problem exists. Consider the Neumann
problem, which can model the distribution of the temperature u(x, y) in the domain
D at an equilibrium configuration. This means that the heat flux through the
boundary must be balanced by the temperature production inside the domain.
This simple consideration is encoded in the following lemma.
Lemma 6.3.5. A necessary condition for the existence of a solution to the Neu-
mann problem (6.3.2) is
∫_∂D g(x(s), y(s)) ds = ∫_D ρ(x, y) dx dy ,
Remark 6.3.6. If u is a solution of Laplace's equation ∆u = 0, then we have that
  ∫_∂A ∂n u = ∫_A div(∇u) = ∫_A ∆u = 0
for every open subset A ⊂ D, where n is the outward unit normal to ∂A.
An other natural question to ask is if the Cauchy problem for Laplace’s equation
is well-posed, i.e., if a solution exists, if it is unique and it is stable with respect to
the initial conditions. We recall that the Cauchy problem for Laplace’s equation
is
∆u = 0 ,
(x, y) ∈ R × (0, ∞)
u(x, 0) = f (x)
uy (x, 0) = g(x) ,
where y here plays the role of time (see also the wave Equation (4.2.1)).
Consider Laplace’s equation in the half-plane x ∈ R, y > 0. The following
counterexample to well-posedness is due to Hadamard. Consider the following
Cauchy problem
∆u(x, y) = 0 ,
x ∈ R, y > 0
u(x, 0) = 0
uy (x, 0) = sin(nx)/n ,
which implies that Y 00 (y) = n2 Y (y). From the Dirichlet conditions u(x, 0) = 0 it
follows that Y (0) = 0, while by the Neumann condition uy (x, 0) = sin(nx)/n we
have
sin(nx)/n = uy(x, 0) = sin(nx) Y′(0)   ⟹   Y′(0) = 1/n .
Hence, solving the problem for Y we get Y(y) = sinh(ny)/n² and thus we obtain the solution of the Cauchy problem
  u(x, y) = (1/n²) sin(nx) sinh(ny) .
Now, setting un(x, y) = (1/n²) sin(nx) sinh(ny), we realize that in the limit n → ∞ both un(x, 0) and (un)y(x, 0) tend to zero (the initial conditions describe an arbitrary
small perturbation of the trivial solution u = 0). On the other hand, the solution
is not bounded in the half-plane y > 0. Indeed, for any a > 0, we have
sup_{x∈R} |un(x, a)| = sup_{x∈R} (1/n²) |sin(nx)| sinh(na) = (1/n²) sinh(na) = (1/(2n²)) (e^(na) − e^(−na)) → ∞   as n → ∞ .
Thus, the Cauchy problem for Laplace’s equation is not stable and this implies
that it is not well-posed with respect to the initial conditions.
In the next example we construct an initial datum for which there is no solution
to the Cauchy problem using the Hadamard counterexample.
Example 6.3.7. Consider as before the functions un (x, y) = sin(nx) sinh(ny)/n2
and define
u^N(x, y) := Σ_{n=1}^N un(x, y)/n ,
for which it holds u^N(x, 0) = 0 and
  u^N_y(x, 0) = Σ_{n=1}^N (un)y(x, 0)/n = Σ_{n=1}^N (1/n²) sin(nx) .
Remark 6.3.8. These examples demonstrate the difference between elliptic and
hyperbolic problems on the upper half-plane.
For example:
6.4. Harmonic functions
• For n = 0, u(x, y) = 1.
CHAPTER 7
MAXIMUM PRINCIPLES
Proof. Consider the function uε (x, y) = u(x, y)+ε(x2 +y 2 ), with ε > 0. Assume by
contradiction that uε attains a local maximum at (x, y) ∈ D. Then, ∆uε (x, y) ≤ 0.
On the other hand, since u is harmonic, we have that
∆uε (x, y) = ∆u(x, y) + 4ε = 4ε > 0 ,
which is a contradiction. This proves that uε attains its maximum on the boundary, max_D uε = max_∂D uε. Thus, since u ≤ uε and D is bounded, we get
  max_D u ≤ max_D uε = max_∂D uε = max_∂D (u + ε(x² + y²)) ≤ max_∂D u + ε max_∂D (x² + y²) ,
and letting ε → 0 we conclude that max_D u ≤ max_∂D u. Since the reverse inequality is obvious, max_D u = max_∂D u.
min_D u = min_∂D u .
Proof. Note that ∆(−u) = −∆u = 0, hence we can apply Theorem 7.1.1 to −u and obtain
  min_D u = − max_D (−u) = − max_∂D (−u) = min_∂D u .
and compute
  V′(r) = (1/(2π)) ∫_0^{2π} d/dr [ u(x0 + r cos θ, y0 + r sin θ) ] dθ
        = (1/(2π)) ∫_0^{2π} [ ux(x0 + r cos θ, y0 + r sin θ) cos θ + uy(x0 + r cos θ, y0 + r sin θ) sin θ ] dθ
        = (1/(2πr)) ∫_{∂Br(x0,y0)} ∂ν u = (1/(2πr)) ∫_{Br(x0,y0)} ∆u = 0 .
Hence V is constant; since V(r) → u(x0, y0) as r → 0, we conclude that V(r) = u(x0, y0) for every admissible r, which is the mean value property.
7.3. Strong maximum principle
Choose R > 0 smaller than the distance from γ to ∂D and define inductively
a sequence of points {xi }N i=0 ⊂ γ and radii Ri < R such that xi+1 ∈ ∂BRi (xi )
for any i = 1, . . . , N − 1 and xN = x. Note that one can take Ri = R for each
i = 0, . . . , N − 2 and then RN −1 ≤ R such that xN ∈ ∂BRN −1 (xN −1 ).
Then inside each ball we apply inductively the mean value theorem, Theo-
rem 7.2.1. More precisely, by the mean value theorem applied at x0 we have
max_D u = u(x0) = (1/(2πR)) ∫_{∂BR(x0)} u ≤ (1/(2πR)) ∫_{∂BR(x0)} max_D u = max_D u .
This implies that u = maxD u on ∂BR (x0 ). Therefore, since x1 ∈ ∂BR (x0 ), also x1
is a point of maximum for u. Hence we can repeat the argument above (using the
mean value theorem) to deduce that u = maxD u on ∂BR (x1 ), hence x2 ∈ ∂BR (x1 )
is a maximum for u, and iterating we get that x = xN is a maximum for u as well.
In particular u(x0 ) = maxD u = u(x). By arbitrariness of x ∈ D, this proves that
u = maxD u is constant in D.
Remark 7.3.2. Given a point (x0 , y0 ) ∈ D and a radius r > 0, consider the curve
γ(θ) = (x0 + r cos θ, y0 + r sin θ).
Let us define F(γ(θ)) = ∇u(γ(θ)) · ν_{∂Br(x0,y0)}(γ(θ)). Then we have
  (1/(2π)) ∫_0^{2π} F(γ(θ)) dθ = (1/(2π)) ∫_0^{2π} (1/|γ′(θ)|) F(γ(θ)) |γ′(θ)| dθ = (1/(2πr)) ∫_0^{2π} F(γ(θ)) |γ′(θ)| dθ = (1/(2πr)) ∫_γ F ,
where the second-last equality follows from the fact that |γ′(θ)| = r, because γ′(θ) = (−r sin θ, r cos θ), and the last equality is the definition of integral along a curve.
Proof. Assume by contradiction that there exist two solutions u1 , u2 . Then define
u := u1 − u2, which solves
  ∆u = 0 in D ,   u = 0 on ∂D .
From the weak maximum principle Theorem 7.1.1 we get that the maximum and
the minimum of u are zero, which implies u ≡ 0 and thus u1 ≡ u2 .
• Dirichlet: u = g on ∂D.
It may be referred also as condition of first type or as a fixed boundary condition.
For example the following would be considered Dirichlet conditions:
(c) In fluid dynamics, the no-slip condition for viscous fluids states that at a solid
boundary the fluid has zero velocity relative to the boundary.
• Neumann: ∂ν u = g on ∂D, where ν is the outer normal vector to D.
This is also called the second type boundary condition; it specifies the values that the normal derivative of the solution takes on the boundary of the domain.
An application in thermodynamics is a prescribed heat flux from a surface, which
serves as boundary condition. For example, a perfect insulator has no flux, while
an electrical component may be dissipating at a known power.
• Robin or third type boundary condition: u + α∂ν u = g on ∂D, where α ∈ R
and g is given function.
Robin boundary conditions are also called impedance boundary conditions from
their application in electromagnetic problems.
7.6. Maximum principle for parabolic equations
CHAPTER 8
LAPLACE'S EQUATION IN RECTANGULAR AND CIRCULAR DOMAINS

[Figure: the rectangle (a, b) × (c, d) with ∆u = 0 inside, u = f on {x = a}, u = g on {x = b} and u = 0 on the two horizontal sides.]
X″(x)/X(x) = −Y″(y)/Y(y) .
Since the function on the left only depends on x, while the one on the right only depends on y, the only possibility is that they are both constant:
  X″(x)/X(x) = −Y″(y)/Y(y) = λ ∈ R .
Hence, for Y we have the ODE Y″(y) = −λY(y) and, by the analysis we did in Section 5.1 (and in general in Chapter 5), we know that equations of this type have three solutions depending on the sign of λ. By the condition Y(c) = Y(d) = 0, we deduce that λ must be positive:
  λ = λn = (nπ/(d − c))² ,  n ∈ N, n ≥ 1 ,
where we renamed the coefficients. Now the only task left is to determine the coefficients An and Bn. To do so we exploit the boundary conditions. Taking x = a, since sinh(0) = 0, we get
  u(a, y) = Σ_{n=1}^∞ Bn sinh(√λn (a − b)) sin(√λn (y − c)) = f(y) ,
from which we deduce that the Bn are the Fourier coefficients of f divided by sinh(√λn (a − b)). The same reasoning applies to the other boundary condition in order to determine An.
[Figure: splitting of the Dirichlet problem as u = u1 + u2, with ∆u1 = ∆u2 = 0; u1 takes the data f and g on the vertical sides and vanishes on the horizontal ones, while u2 takes the data h and k on the horizontal sides and vanishes on the vertical ones.]
8.3. Laplace's equation with Neumann boundary conditions in rectangular domains
[Figure: the rectangle (a, b) × (c, d) with ∆u = 0 inside and Neumann data ux = f on {x = a}, ux = g on {x = b}, uy = k on {y = d} and uy = h on {y = c}.]
Note that, by splitting the problem, the existence condition for the Neumann
problem might not be satisfied anymore for u1 and u2 . To overcome this problem,
we use the trick of adding a harmonic polynomial. Consider for instance α(x2 −y 2 )
for some α ∈ R and add it to u. This yields the new harmonic function v =
u + α(x2 − y 2 ). If we now split v = v1 + v2 as we did above for u, then the problems
for v1 and v2 are
∆v1 = 0 in R ,                       ∆v2 = 0 in R ,
(v1)x = f + 2αa on {a} × [c, d] ,    (v2)x = 0 on {a} × [c, d] ,
(v1)x = g + 2αb on {b} × [c, d] ,    (v2)x = 0 on {b} × [c, d] ,
(v1)y = 0 on [a, b] × {d} ,          (v2)y = k − 2αd on [a, b] × {d} ,
(v1)y = 0 on [a, b] × {c} ,          (v2)y = h − 2αc on [a, b] × {c} .
8.4. Two explicit examples
Since there is only one nonzero boundary condition, there is no need to split the problem as in Section 8.2. We look for a solution of the form u(x, y) = Σ_{n∈N} Xn(x) Yn(y), where each term Xn(x)Yn(y) is harmonic. Hence
Remark 8.4.2. Note that the general form would have the coefficient An in front of the term sin(nx). However we can absorb this constant inside Cn and Dn, obtaining exactly the formula above.
From the condition u(x, π) = 0, we obtain Cn = 0 for all n ∈ N+. Then, from u(x, 0) = 1, we have
  1 = u(x, 0) = Σ_{n=1}^∞ sin(nx)[Dn sinh(−nπ)] = Σ_{n=1}^∞ αn sin(nx) ,
Thus
  αm = (2/π) ∫_0^π sin(mx) dx = (2/π) [ −cos(mx)/m ]_0^π = (2/π) · (1 − cos(mπ))/m
     = 4/(πm) if m is odd ,   0 if m is even .
Therefore, since αm = Dm sinh(−mπ), we have
  Dm = 4/(πm sinh(−mπ)) if m is odd ,   Dm = 0 if m is even .
Example 8.4.3. Consider now the Laplace’s equation on R with Neumann bound-
ary conditions
∆u = 0 in R = [0, π] × [0, π] ,
uy(x, π) = x − π/2 ,
ux(0, y) = ux(π, y) = uy(x, 0) = 0 .
Let us verify the necessary condition to solve elliptic Neumann problems, that is ∫_∂R ∂ν u = 0. In our case we have
  ∫_∂R ∂ν u = ∫_0^π (x − π/2) dx = 0 = ∫_R ∆u ,
as desired. Hence we can proceed looking for a solution via the method of separation of variables
  u(x, y) = Σ_{n∈N} Xn(x) Yn(y) .
The harmonicity condition leads to
  Xn″(x) = −λn Xn(x) ,  Xn′(0) = Xn′(π) = 0 ,
  Yn″(y) = λn Yn(y) .
Therefore we obtain Xn(x) = cos(nx) and Yn(y) = An cosh(ny) + Bn cosh(n(y − π)) for all n ∈ N. Then, the general solution is
  u(x, y) = Σ_{n=0}^∞ cos(nx) [An cosh(ny) + Bn cosh(n(y − π))] .
and
  x − π/2 = uy(x, π) = Σ_{n=0}^∞ An n sinh(nπ) cos(nx) = Σ_{n=0}^∞ βn cos(nx) ,
  ⟹   Am = −4/(πm³ sinh(mπ)) if m is odd ,   Am = 0 if m is even, m ≠ 0 .
Remark 8.4.4. One could also have Dirichlet conditions on some parts of the bound-
ary and Neumann conditions on other parts of the boundary. In this case you need
to choose the right bases in terms of sin, cos and sinh, cosh.
8.5. Polar coordinates

Writing x = r cos θ and y = r sin θ, any function u(x, y) can be expressed in polar coordinates via a function w(r, θ) such that w(r, θ) = u(x(r, θ), y(r, θ)). Then the Laplacian in polar coordinates reads
  ∆u = wrr + (1/r) wr + (1/r²) wθθ .
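As a quick check of this formula (a short verification, not in the original text), take u(x, y) = x² − y², which is harmonic; in polar coordinates w(r, θ) = r² cos(2θ), and indeed
  wrr + (1/r) wr + (1/r²) wθθ = 2 cos(2θ) + 2 cos(2θ) − 4 cos(2θ) = 0 .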
Now assume that u is a harmonic function and that w only depends on the
variable r, that is w = w(r), then
0 = ∆u = w″(r) + (1/r) w′(r) .
By defining v(r) := w′(r), we get v′(r) = −v(r)/r and thus
  v′(r)/v(r) = −1/r   ⟺   d/dr log|v(r)| = −d/dr log(r)   ⟺   log|v(r)| = −log(r) + c ,
for some constant c ∈ R. Hence we obtain that w′(r) = v(r) = e^c/r if v(r) > 0 and w′(r) = v(r) = −e^c/r if v(r) < 0. Integrating with respect to r we get
  w(r) = ± ∫_1^r (e^c/s) ds + w(1) = c1 log(r) + c2 ,
8.6. Laplace’s equation in circular domains
Note that the conditions Θ(0) = Θ(2π), Θ′(0) = Θ′(2π) come from the fact that we want u to be a classical solution inside D, so it should be at least C². Hence we impose that Θ and Θ′ are periodic in [0, 2π]. Observe that, since Θ″ = −λΘ, automatically also Θ″ is periodic. The second ODE, the one for R(r), is an Euler equation; its solution, R(r) = Cn rⁿ + Dn r⁻ⁿ for λ = n² (n ≥ 1) and R(r) = C0 + D0 log r for n = 0, gives the two parameter family of solutions. However the functions r⁻ⁿ and log r are singular at 0 inside the domain D, so we discard them. Thus the general solution is given by
w(r, θ) = C0 + Σ_{n=1}^∞ rⁿ [An cos(nθ) + Bn sin(nθ)] .
Remark 8.6.1. The same method as above can be applied to domains that are
discs, circles, rings or sectors of a circle/ring. However, in the cases where the
domain is only a sector of a disc or a ring, then Θ is not necessarily periodic
anymore. Moreover, in cases where the origin is not in the domain, we do not have
to discard the terms with r−n and log r.
Example 8.6.2. Let B1 = {x2 + y 2 ≤ 1} be the unit disc in R2 . We want to solve
the following Dirichlet problem
∆u = 0 in B1 ,   u = y² on ∂B1 .
Using polar coordinates and defining w(r, θ) = u(r cos θ, r sin θ), we can rewrite the problem as
  wrr + (1/r) wr + (1/r²) wθθ = 0 ,   (r, θ) ∈ (0, 1) × (0, 2π) ,
  w(1, θ) = sin² θ = 1/2 − (1/2) cos(2θ) .
As seen before, we then get that the general solution has the form
  w(r, θ) = C0 + Σ_{n=1}^∞ rⁿ [An cos(nθ) + Bn sin(nθ)] .
Imposing the boundary condition at r = 1, we get
  1/2 − (1/2) cos(2θ) = w(1, θ) = C0 + Σ_{n=1}^∞ [An cos(nθ) + Bn sin(nθ)] ,
from which we deduce that C0 = 1/2, A2 = −1/2 and all the other coefficients are zero. Thus, the final solution is
  w(r, θ) = 1/2 − (1/2) r² cos(2θ)   ⟹   u(x, y) = (1 − x² + y²)/2 .
Example 8.6.3. Let us consider the problem
∆u = 0 on D = {(x, y) ∈ R² : 1 ≤ √(x² + y²) ≤ 2} ,
u(x, y) = 3x/2 on {√(x² + y²) = 2} ,
u(x, y) = y on {√(x² + y²) = 1} .
In polar coordinates the boundary conditions read
  w(2, θ) = 3 cos θ ,   w(1, θ) = sin θ .
If we write w(r, θ) = R(r)Θ(θ), the ODEs for R and Θ are the same as before, but now the boundary conditions for R have changed. The general solution is
  w(r, θ) = E + F log r + Σ_{n=1}^∞ [An rⁿ cos(nθ) + Bn rⁿ sin(nθ) + Cn r⁻ⁿ cos(nθ) + Dn r⁻ⁿ sin(nθ)] .
This implies that E + F log(2) = 0, 2ⁿBn + 2⁻ⁿDn = 0 for all n ≥ 1, 2A1 + C1/2 = 3, and 2ⁿAn + 2⁻ⁿCn = 0 for all n ≥ 2. Combining all these pieces of information we get
  E = 0 ,  E + F log(2) = 0   ⟹   E = F = 0 ,
  A1 + C1 = 0 ,  2A1 + (1/2)C1 = 3   ⟹   A1 = 2 ,  C1 = −2 ,
  B1 + D1 = 1 ,  2B1 + (1/2)D1 = 0   ⟹   B1 = −1/3 ,  D1 = 4/3 ,
and for n ≥ 2
  An + Cn = 0 ,  2ⁿAn + 2⁻ⁿCn = 0   ⟹   An = Cn = 0 ,
  Bn + Dn = 0 ,  2ⁿBn + 2⁻ⁿDn = 0   ⟹   Bn = Dn = 0 .
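Collecting these coefficients (the remaining step of the example, written out for completeness), the solution is
  w(r, θ) = (2r − 2/r) cos θ + (−r/3 + 4/(3r)) sin θ ,
or, back in Cartesian coordinates,
  u(x, y) = 2x − 2x/(x² + y²) − y/3 + 4y/(3(x² + y²)) .
One can check directly that u = 3x/2 on the outer circle r = 2 and u = y on the inner circle r = 1.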
Hence Rn(r) = Cn r^(nπ/γ) + Dn r^(−nπ/γ) and the general solution in this case is given by
  w(r, θ) = Σ_{n=1}^∞ [ An sin(nπθ/γ) r^(nπ/γ) + Bn sin(nπθ/γ) r^(−nπ/γ) ]
and the coefficients An and Bn are found by expanding the boundary conditions w(1, θ) and w(2, θ) over the interval θ ∈ [0, γ] using the Fourier basis {sin(nπθ/γ)}.
Remark 8.6.5. If the sector is D = {(r, θ) : r ∈ [0, 2), θ ∈ (0, γ)} with boundary
conditions w(r, 0) = w(r, γ) = 0 for all r ∈ (0, 2), then the general solution is of
the form
  w(r, θ) = Σ_{n=1}^∞ An sin(nπθ/γ) r^(nπ/γ) ,
since the negative powers of r are singular at the origin and should be discarded.
8.7. A "real life" example

[Figure: two grounded plates at x = 0 and x = d, with the potential U0 sin(2πx/d) prescribed on the segment between them at y = 0.]
We know that the electric potential satisfies Laplace’s equation in the region be-
tween plates (since there is no charge in there). Therefore we want to solve the
following Dirichlet problem
∆u = 0 ,
in (0, d) × R+
u(x, 0) = U0 sin(2πx/d) ,
u(0, y) = u(d, y) = 0 .
Note that there is an additional implicit boundary condition: we would like the
potential to go to zero in the “open” spatial direction, that in formulas translates
to
lim_{y→∞} u(x, y) = 0 .   (8.7.1)
Let us suppose that u(x, y) = Σ_{n∈N} Xn(x) Yn(y), with Xn(x)Yn(y) harmonic for all n ∈ N. This leads to the ODEs
  Xn″(x) = −λn Xn(x) ,  Xn(0) = Xn(d) = 0 ,
  Yn″(y) = λn Yn(y) .
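A natural completion (the remaining steps are the standard ones, summarized here): the boundary conditions force λn = (nπ/d)² and Xn(x) = sin(nπx/d), while the decay condition (8.7.1) selects Yn(y) = e^(−nπy/d); matching u(x, 0) = U0 sin(2πx/d) leaves only the n = 2 term, so that
  u(x, y) = U0 sin(2πx/d) e^(−2πy/d) ,
which is harmonic, vanishes on the plates x = 0 and x = d, and decays as y → ∞.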
Bibliography