
Notes Analysis 3 - ETH Zürich

Mikaela Iacobelli

January 12, 2023


Contents

1 Preliminaries 3
1.1 Partial differential equations . . . . . . . . . . . . . . . . . . . . . . 3
1.2 What is a well-posed problem? . . . . . . . . . . . . . . . . . . . . . 5
1.3 Initial and boundary conditions . . . . . . . . . . . . . . . . . . . . 5
1.4 Classification properties of PDEs . . . . . . . . . . . . . . . . . . . 6
1.5 Modelling a stock market . . . . . . . . . . . . . . . . . . . . . . . . 9

2 Method of characteristics 11
2.1 First order equations . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2 Quasilinear equations . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.3 Method for first order linear PDEs . . . . . . . . . . . . . . . . . . 17
2.4 Existence and uniqueness questions . . . . . . . . . . . . . . . . . . 20
2.5 Examples of existence and uniqueness . . . . . . . . . . . . . . . . . 23
2.6 The existence and uniqueness theorem . . . . . . . . . . . . . . . . 26

3 Conservation laws and shock waves 29


3.1 What are (scalar) conservation laws? . . . . . . . . . . . . . . . . . 29
3.2 Example: Burgers’ equation . . . . . . . . . . . . . . . . . . . . . . 31
3.3 Critical time for conservation laws . . . . . . . . . . . . . . . . . . . 33
3.4 Notion of weak solutions . . . . . . . . . . . . . . . . . . . . . . . . 34
3.5 Rankine-Hugoniot condition . . . . . . . . . . . . . . . . . . . . . . 35
3.6 The entropy condition . . . . . . . . . . . . . . . . . . . . . . . . . 37

4 One dimensional wave equation 43


4.1 Canonical form and general solution . . . . . . . . . . . . . . . . . . 43
4.2 The Cauchy problem and d’Alembert’s formula . . . . . . . . . . . 46
4.3 Domain of dependence and region of influence . . . . . . . . . . . . 49
4.4 The Cauchy problem for the nonhomogeneous wave equation . . . . 50
4.5 Symmetry of the wave equation . . . . . . . . . . . . . . . . . . . . 54


5 Separation of variables 57
5.1 Heat equation with Dirichlet boundary conditions . . . . . . . . . . 57
5.2 Wave equation with Neumann boundary conditions . . . . . . . . . 62
5.3 Inhomogeneous PDEs . . . . . . . . . . . . . . . . . . . . . . . . . . 66
5.4 Uniqueness with the energy method . . . . . . . . . . . . . . . . . . 69

6 Elliptic equations 71
6.1 Classification of linear second order PDEs . . . . . . . . . . . . . . 71
6.2 Laplace’s and Poisson’s equations . . . . . . . . . . . . . . . . . . . 73
6.3 Basic properties of elliptic problems . . . . . . . . . . . . . . . . . . 73
6.4 Harmonic functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 76

7 Maximum principles 79
7.1 Weak maximum principle . . . . . . . . . . . . . . . . . . . . . . . 79
7.2 Mean value principle . . . . . . . . . . . . . . . . . . . . . . . . . . 80
7.3 Strong maximum principle . . . . . . . . . . . . . . . . . . . . . . . 81
7.4 Maximum principle for Poisson’s equation . . . . . . . . . . . . . . 82
7.5 Boundary conditions . . . . . . . . . . . . . . . . . . . . . . . . . . 83
7.6 Maximum principle for parabolic equations . . . . . . . . . . . . . . 84

8 Laplace’s equation in rectangular and circular domains 87


8.1 Boundary condition on two opposite sides . . . . . . . . . . . . . . 87
8.2 Laplace’s equation with Dirichlet boundary conditions in rectangular domains . . . . . 89
8.3 Laplace’s equation with Neumann boundary conditions in rectangular domains . . . . . 91
8.4 Two explicit examples . . . . . . . . . . . . . . . . . . . . . . . . . 92
8.5 Polar coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
8.6 Laplace’s equation in circular domains . . . . . . . . . . . . . . . . 96
8.7 A “real life” example . . . . . . . . . . . . . . . . . . . . . . . . . . 100

Notation

x The variables in bold denote vectors, e.g. x = (x1 , x2 , x3 ) ∈ R3 .

C^k A function is C^k if it is k times continuously differentiable.

CHAPTER 1
PRELIMINARIES

1.1. Partial differential equations


Definition 1.1.1. An ordinary differential equation, or ODE, is an equation in-
volving functions of one independent variable and one or more of their derivatives.
Example 1.1.2. An example of an ODE is Newton’s second law, that is

    m d²x(t)/dt² = F(x(t)) .

The unknown function x(t) = (x1(t), x2(t), x3(t)) ∈ R3 represents the position of a
particle at time t. Moreover m ∈ R+ is the mass of the particle and F : R3 → R3
is the force field. Note that dx(t)/dt and d²x(t)/dt² represent respectively the
velocity and the acceleration of the particle.
Definition 1.1.3. A partial differential equation, or PDE, is an equation involving
an unknown function of more than one variable and certain of its partial deriva-
tives.
Hereafter u denotes the real-valued solution (i.e., the unknown) of a given PDE
and it is usually a function of points x = (x1 , x2 , . . . , xn ) ∈ Rn , typically denoting
a position in space. Sometimes the function u also depends on a parameter t ∈ R,
denoting the time. We also use x, y, z to denote independent variables (instead of
x1 , x2 , x3 , . . .).
Notation. We write

    u_{xk} = ∂u/∂xk

to denote the partial derivative of u with respect to xk. Analogously we use

    ut = ∂u/∂t   and   u_{xk xl} = ∂²u/(∂xk ∂xl) .


We recall the following theorem.


Theorem 1.1.4 (Schwarz). Given a function u twice continuously differentiable at a
point, the order of its second partial derivatives at that point is irrelevant. Namely,
if u depends on two variables x, y and is twice continuously differentiable at a point
x, then uxy (x) = uyx (x).
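As a quick sanity check, Schwarz's theorem can be verified symbolically for a particular smooth function (a sketch using the sympy library; the test function below is an arbitrary choice):

```python
import sympy as sp

x, y = sp.symbols("x y")
# An arbitrary smooth test function of two variables.
u = sp.exp(x * y) * sp.sin(x + y**2)

# Mixed second partials taken in either order.
u_xy = sp.diff(u, x, y)
u_yx = sp.diff(u, y, x)

# For a smooth function the two orders agree.
assert sp.simplify(u_xy - u_yx) == 0
```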
Definition 1.1.5. The gradient of a function u = u(x, y, z) is defined as
∇u := (ux , uy , uz )
and the Laplacian of u is
∆u := uxx + uyy + uzz .
We now see some examples that motivate the use of partial differential equations:
when we want to study a physical system, we want to understand the state of that
system at any point in space and at any time.
Example 1.1.6. Suppose that u(x, y, z, t) is the temperature at the point (x, y, z)
and at time t. We know that this state changes over time, so we consider the
quantity ut , which measures this change with respect to time. However, u also
changes with respect to the position, so we consider partial derivatives of u with
respect to x, y, z. Surely we need to relate the variations in space and in time and
it turns out that the heat flow over time may be described by
ut = uxx + uyy + uzz = ∆u . (Heat equation)
Example 1.1.7. Another fundamental equation that we will encounter is Laplace’s
equation, which records diffusion effects in equilibrium and is described by the
PDE
∆u = 0 . (Laplace’s equation)
Example 1.1.8. The wave equation, given by
utt = c2 ∆u , (Wave equation)
superficially resembles the heat equation, but it supports solutions with a com-
pletely different behaviour and can be used to describe the propagation of a wave
in a fluid.
Example 1.1.9. The Burgers’ equation
ut = uux (Burgers’ equation)
can model the flow of a viscous fluid or traffic flow, and it is a prototypical
example of a conservation law (see Chapter 3).


1.2. What is a well-posed problem?


In general we study equations that originate from a physical or engineering prob-
lem, so we follow the scheme

real life problem model PDE.

It is not obvious that a given model is consistent, in the sense that it leads to a
solvable PDE. Furthermore we wish the solution to be unique and to be stable
under small perturbations of the data.
By “problem” we mean a PDE supplemented with initial or boundary condi-
tions. A problem is well-posed if it satisfies the following criteria:

1. The problem has a solution (existence).

2. The solution is unique (uniqueness).

3. A small change in the equation and/or in the side conditions gives rise to a
small change in the solution (stability).

If one or more of these conditions do not hold, then the problem is said to be
ill-posed.

1.3. Initial and boundary conditions


As you may recall from studying ODEs, there may be no solutions or there may
be many solutions for a given ODE (if the initial conditions are not appropriate).
The same is true for PDEs. Indeed PDEs have in general infinitely many solutions
and in order to obtain a unique solution we must supplement the equation with
additional conditions. What kind of conditions? This depends on the type of
PDE under study. We will focus entirely on PDEs coupled with a set of initial
conditions consisting in prescribing the unknown u and/or some of its partial
derivatives on a hypersurface of the domain. In R2 this simply means that we fix
some initial data on a curve, and in R3 on a surface (which will be a plane most
of the time). This is what we call a Cauchy problem.

Example 1.3.1. Consider the transport equation

ut + cux = 0 , (Transport equation)

for u : R × R+ → R, (x, t) ↦ u(x, t), and c ∈ R a constant. If u ≥ 0, u can represent
the concentration of a pollutant in a river at time t and position x. The constant
c ∈ R represents the velocity of the river. In order to have complete information


about u in time, it makes sense to couple this equation with information about
the concentration of the pollutant at time zero. Hence we consider the initial value
problem

    ut + cux = 0 ,     (x, t) ∈ R × R+ ,
    u(x, 0) = g(x) ,   x ∈ R ,

where the function g > 0 represents the concentration of the pollutant at time
zero.
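Although the solution formula is derived only later (via the method of characteristics in Chapter 2), one can already check symbolically that the transported profile g(x − ct) solves this Cauchy problem. A sketch using sympy:

```python
import sympy as sp

x, t, c = sp.symbols("x t c")
g = sp.Function("g")  # an arbitrary initial concentration profile

# Candidate solution: the initial profile transported with speed c.
u = g(x - c * t)

# The PDE residual u_t + c u_x vanishes identically...
residual = sp.diff(u, t) + c * sp.diff(u, x)
assert sp.simplify(residual) == 0
# ...and the initial condition u(x, 0) = g(x) holds.
assert u.subs(t, 0) == g(x)
```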

Example 1.3.2. In the case of the one-dimensional vibrating string, we consider
the wave equation with four side conditions, namely

    utt = uxx ,              (x, t) ∈ (0, L) × R+
    u(0, t) = u(L, t) = 0 ,  t ≥ 0
    u(x, 0) = f(x) ,         x ∈ [0, L]
    ut(x, 0) = g(x) ,        x ∈ [0, L] .

The second equation expresses two boundary conditions (the string is fixed at
position 0 and L) and the last two equations express the initial conditions: they
tell us what happens at time zero in terms of the deflection f (x) and of the speed
g(x).
Remark 1.3.3. The domain of the PDE is defined only in the interior of the interval
because the function u may not be differentiable on the boundary.

Definition 1.3.4. We say that the solution of a PDE is strong if all the derivatives
of the solution that appear in the PDE exist and are continuous. Otherwise the
solution is said to be weak.

Weak solutions have points in their domain where the derivatives do not exist
(or are not continuous), so a weak solution cannot directly be plugged into the
equation.
Remark 1.3.5. There is no universal meaning of “weak solution”: the definition
depends on the type of PDE, as we will see later when studying conservation
laws.

1.4. Classification properties of PDEs


Definition 1.4.1. The order of a PDE is the order of the highest order partial
derivative of the unknown appearing within it.

Example 1.4.2. • ux + u² uyy = ex uxy has order 2;


• uxyz = xy² + zu has order 3;

• utt = uxx + f (x, t) has order 2.


Remark 1.4.3. Here we mostly work with PDEs of first and second order.
Definition 1.4.4. A PDE is linear if it is of the form

    a^(0) u + Σ_{i1=1}^{n} a^(1)_{i1} u_{x_{i1}} + Σ_{i1,i2=1}^{n} a^(2)_{i1,i2} u_{x_{i1} x_{i2}} + · · · =: L[u] = f(x) ,    (1.4.1)

where f(x) and the coefficients a^(m)_{i1,...,im} are functions of the variable
x = (x1, . . . , xn). Namely, a PDE is linear if every summand consists of a function
of x multiplied by u or by one of its derivatives. Equivalently, if u and v solve
(1.4.1), then u − v solves (1.4.1) with f = 0.
Remark 1.4.5. Observe that the general form of a linear PDE of the first order for
an unknown function u in two independent variables x, y is

a(x, y)ux + b(x, y)uy + c(x, y)u = d(x, y) ,

while the general form of a linear PDE of the 2nd order is

a(x, y)uxx + b(x, y)uyy + 2c(x, y)uxy + d(x, y)ux + e(x, y)uy + f (x, y)u = g(x, y) .

Example 1.4.6. • xy ux + sin²(y) uxy − ex uyy = 2xy³ is linear;

• u ux = 2 is not linear;

• ut = ux + u² is not linear;

• utt = uxxxx is linear.


Definition 1.4.7. We say that a linear PDE of the form defined in (1.4.1) is
homogeneous if f(x) = 0. When f ≢ 0, we say that the PDE is inhomogeneous
and the function f(x) is the inhomogeneity.
An important property of linear homogeneous PDEs is that, given two solutions
u1 and u2 , any linear combination of u1 and u2 is a solution as well. Moreover,
solutions of a linear homogeneous PDE generate different solutions of an associated
inhomogeneous PDE. Let us see how in the following theorem.
Theorem 1.4.8. Let L[u] = f (x) be a linear inhomogeneous PDE and L[u] = 0
be the corresponding homogeneous PDE. Let u1 , u2 be solutions of L[u] = 0 and up
be a solution of L[u] = f (x). Then, for all α, β ∈ R, we have that αu1 + βu2 is a
solution of L[u] = 0 and αu1 + βu2 + up is a solution of L[u] = f (x).
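The theorem can be illustrated concretely with the heat operator L[u] = ut − uxx (a symbolic sketch using sympy; the particular choices of u1, u2, up and of the inhomogeneity f(x) = x are ours, not from the text):

```python
import sympy as sp

x, t, a, b = sp.symbols("x t alpha beta")

def L(u):
    # Heat operator L[u] = u_t - u_xx, a linear operator.
    return sp.diff(u, t) - sp.diff(u, x, 2)

u1 = sp.exp(-t) * sp.sin(x)          # solves L[u] = 0
u2 = sp.exp(-4 * t) * sp.sin(2 * x)  # solves L[u] = 0
up = -x**3 / 6                       # particular solution of L[u] = x

assert sp.simplify(L(u1)) == 0
assert sp.simplify(L(u2)) == 0
assert sp.simplify(L(up) - x) == 0
# Theorem 1.4.8: alpha*u1 + beta*u2 solves L[u] = 0, and adding up
# gives a solution of the inhomogeneous equation L[u] = x.
assert sp.simplify(L(a * u1 + b * u2)) == 0
assert sp.simplify(L(a * u1 + b * u2 + up) - x) == 0
```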


Remark 1.4.9. We can denote with L[u] any linear operator acting on u, as in
(1.4.1). For example, given the linear operator L[u] = ut − ux , the transport
equation can be written as L[u] = 0.
Example 1.4.10. • ut + ux = 0 is linear, homogeneous;

• uxx + uyy = x² + y² is linear, inhomogeneous;

• (ux)² + (uy)² = 1 is nonlinear.


As you may be able to guess, many PDEs are not linear and nonlinear equations
are often further classified into subclasses according to the type of nonlinearity.
We will sometimes handle nonlinear PDEs that still have a special structure called
quasilinearity. The main reason to make all these distinctions lies in the tools
available to solve each type of equation. For example, the method of characteristics
allows us to solve first order quasilinear PDEs.
Definition 1.4.11. We say that a PDE is quasilinear if it is linear in its highest
order derivative term.
Remark 1.4.12. The general form of a quasilinear PDE of the first order for an
unknown u(x, y) depending on two variables is

a(x, y, u)ux + b(x, y, u)uy + c(x, y, u) = 0 .

Note that the functions a, b, c may depend also on u but not on ux , uy . The general
form of a quasilinear PDE of the 2nd order is instead

a(x, y, u, ux , uy )uxx +b(x, y, u, ux , uy )uyy +2c(x, y, u, ux , uy )uxy +d(x, y, u, ux , uy ) = 0 .

Example 1.4.13. • u ux + x² uy = u³ is quasilinear, of the first order;

• ux + u uy = 0 is quasilinear, of the first order;

• u ux + u² uy + u = e⁴ is quasilinear, of the first order;

• ux uy = x² y is not quasilinear, but is of the first order;

• ut = uy uxx + u² uyy + (ux)² is quasilinear, of the second order;

• (uxy)² = xu + uy is not quasilinear and it is of the second order.


Remark 1.4.14. A nonlinear equation is quasilinear if it is linear in its highest
order terms. A quasilinear equation is thus nonlinear, but with the good property
of being linear with respect to the highest order terms, which can be thought of
as “dominant”.


Example 1.4.15. • ux + utt + 2u = ex is of the 2nd order, nonhomogeneous
and linear;

• ux + utt + 3u ux = ex is of the 2nd order, nonlinear, but quasilinear (utt
appears linearly);

• uxx + (ut)² + eu = 0 is of the 2nd order, nonlinear, but quasilinear;

• (uxx)² + ut + eu = 0 is of the 2nd order and nonlinear, because uxx appears
nonlinearly;

• sin(u) + ux + uyy = 0 is of the 2nd order and quasilinear;

• u + ux + sin(uyy) = 0 is of the 2nd order, nonlinear;

• u sin(ux) + uyy = 0 is of the 2nd order, nonlinear, but quasilinear;

• (ux)² + (uy)² + uxy = 0 is of the 2nd order, quasilinear;

• the Monge-Ampère equation det(D²u) = f (where D²u denotes the Hessian
matrix (D²u)ij := u_{xi xj}) is of 2nd order, nonlinear, since the determinant
is multilinear and linear only for 1 × 1 matrices;

• |∇u| = f is of the first order, nonlinear;

• ut + div(~v u) = ∆u + f is of the 2nd order, nonhomogeneous and linear for
any given constant vector ~v = (v1 , . . . , vn ).

Remark 1.4.16. Divergence, Laplacian and gradient are linear operators.

1.5. Modelling a stock market


We conclude this introduction with a modelling example.

Example 1.5.1. Let us model a stock market as follows:

    Y(t) = price of an asset ,    Y(0) = 1 .

Asset prices grow and decay exponentially, so we prefer to look at

    X(t) = log(Y(t)) − rt ,    X(0) = 0 ,


where r > 0 is the growth rate. We now analyze the evolution of X(t) using the
Merton model. We assume that, given a time step τ > 0, the value at time t + τ
is given by X(t + τ) = X(t) ± δ, for some δ > 0. We choose to add or subtract
δ with probability 1/2 each. Let us define p(x, t) := Prob(X(t) = x). Then the
equation for X(t + τ) gives

    p(x, t + τ) = (1/2) p(x + δ, t) + (1/2) p(x − δ, t) .

Rearranging the terms, we thus obtain that

    (p(x, t + τ) − p(x, t))/τ = (δ²/(2τ)) · (p(x + δ, t) + p(x − δ, t) − 2p(x, t))/δ² .

Assuming that δ² = 2kτ for some k > 0 and taking the limit τ → 0, we get

    pt(x, t) = k pxx(x, t) .

As a result, the probability density p must fulfill the heat equation, whose solution
is well known and given by

    p(x, t) = (1/√(4πkt)) e^{−x²/(4kt)} .
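One can verify symbolically that this kernel indeed solves pt = k pxx and carries total probability 1 (a sketch using sympy):

```python
import sympy as sp

x = sp.symbols("x", real=True)
t, k = sp.symbols("t k", positive=True)

# The heat kernel obtained at the end of the derivation.
p = sp.exp(-x**2 / (4 * k * t)) / sp.sqrt(4 * sp.pi * k * t)

# p solves the heat equation p_t = k p_xx ...
assert sp.simplify(sp.diff(p, t) - k * sp.diff(p, x, 2)) == 0
# ... and, as a probability density, integrates to 1 for every t > 0.
assert sp.simplify(sp.integrate(p, (x, -sp.oo, sp.oo))) == 1
```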

Summary

• PDEs are normally used together with boundary conditions.

• Well-posedness stands for the existence and uniqueness of a stable solution.

• Linearity stands for linear in u, while quasilinear means linear in the
highest order derivatives of u.

• A linear PDE is homogeneous if f ≡ 0 in (1.4.1), i.e., every term contains
u or one of its derivatives.
CHAPTER 2
METHOD OF CHARACTERISTICS

In this chapter we present an approach to solving first order quasilinear PDEs,
known as the method of characteristics. This method relies on a powerful geometrical
interpretation of first order PDEs, reducing them to a system of ODEs.

2.1. First order equations


A first order PDE for an unknown function u(x1 , . . . , xn ) can be written in general
form as
F (x1 , . . . , xn , u, ux1 , . . . , uxn ) = 0 , (2.1.1)

where F is a given function of 2n + 1 variables. For our purposes, we consider
real-valued functions u(x, y) of two variables, for which the equation (2.1.1) reduces
to
F (x, y, u, ux , uy ) = 0 . (2.1.2)

Given a solution u to this equation, the graph of u, defined as

graph(u) := {(x, y, u(x, y)) : (x, y) ∈ R²} ⊆ R³,

is the solution surface and it is indeed a surface in R3 with normal vector at a point
(x, y, u(x, y)) given by (ux , uy , −1) (see Figure 2.1). Hence, observe that equation
(2.1.2) relates the graph surface to its normal, or equivalently to its tangent plane,
which is the plane orthogonal to the normal. In fact, the tangent plane at each
point of the surface graph(u) is the plane spanned by the vectors (1, 0, ux ) and
(0, 1, uy ). Indeed note that (1, 0, ux ) and (0, 1, uy ) are linearly independent and
orthogonal to the normal (ux , uy , −1). See Figure 2.2 for a representation in one
dimension less.
The point of this discussion is that a first order PDE can be seen geometrically
as a relation between the solution surface and its tangent plane.


Figure 2.1: Graph of the solution surface.

Figure 2.2: The tangent to the graph of a function f : R → R is given by the
vector (1, f′).


2.2. Quasilinear equations


First order quasilinear equations are nonlinear PDEs, where the nonlinearity is
confined to the unknown function u, while the derivatives of u appear linearly (see
Definition 1.4.11). Thus, the general form of a first order quasilinear PDE (in two
variables) is
a(x, y, u)ux + b(x, y, u)uy = c(x, y, u) , (2.2.1)
where a, b, c are functions of three variables.

Example 2.2.1. Consider the first order PDE

    ux(x, y) = c0 u(x, y) + c1(x, y) ,

where c0 ∈ R is a constant. Fixing y ∈ R, this becomes a first order ODE, which
can be written as

    [ux(x, y) − c0 u(x, y)] e^{−c0 x} = c1(x, y) e^{−c0 x} ,

or equivalently as

    ∂x [u(x, y) e^{−c0 x}] = c1(x, y) e^{−c0 x} .

Integrating both sides over an interval of the form [x0(y), x], we have

    u(x, y) e^{−c0 x} − u(x0(y), y) e^{−c0 x0(y)} = ∫_{x0(y)}^{x} c1(ξ, y) e^{−c0 ξ} dξ .

Therefore we obtain that

    u(x, y) = e^{c0 x} [ u(x0(y), y) e^{−c0 x0(y)} + ∫_{x0(y)}^{x} c1(ξ, y) e^{−c0 ξ} dξ ] .    (2.2.2)

This means that once we prescribe the value of u on the curve {(x0(y), y) : y ∈ R},
we can reconstruct the value of u everywhere (see Figure 2.3).
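Formula (2.2.2) can be checked symbolically for a concrete choice of the data (a sympy sketch; the values c0 = 2, c1(x, y) = y and x0(y) = 0, with prescribed data u(0, y) = h(y), are arbitrary sample choices):

```python
import sympy as sp

x, y, xi = sp.symbols("x y xi")
h = sp.Function("h")   # prescribed data u(0, y) = h(y)
c0 = 2                 # sample constant
c1 = y                 # sample inhomogeneity c1(x, y) = y (independent of x)

# Formula (2.2.2) with the choice x0(y) = 0:
u = sp.exp(c0 * x) * (h(y) + sp.integrate(c1 * sp.exp(-c0 * xi), (xi, 0, x)))

# u solves u_x = c0*u + c1(x, y) and matches the data on the line x = 0.
assert sp.simplify(sp.diff(u, x) - c0 * u - c1) == 0
assert sp.simplify(u.subs(x, 0) - h(y)) == 0
```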
Depending on the initial conditions, we may have one solution, no solutions or
infinitely many solutions. Let us look into some specific examples.

• Let us require as initial condition u(0, y) = y for all y ∈ R. Then we can
set x0(y) = 0, for which u(x0(y), y) = y, and the solution of the equation is
given by

    u(x, y) = e^{c0 x} [ y + ∫_0^x c1(ξ, y) e^{−c0 ξ} dξ ] .

Hence the solution is unique (see Figure 2.4).


Figure 2.3: Once we prescribe the value of u on the curve (x0(y), y), for example at
the intersection with the dotted line, we can reconstruct the value of u everywhere
on the dotted line.

Figure 2.4: Figures illustrating the case of one solution in three dimensions. Here
the initial datum is just one point in the plane {y = ȳ}, which implies both
existence and uniqueness of the solution.


Figure 2.5: The exponential curves of the right figure are what the PDE wants the
solution to be. For this reason, in this case we have no solution at all.

• Let us assume that c1 ≡ 0, so that the general solution is given by

    u(x, y) = e^{c0 x} [ u(x0(y), y) e^{−c0 x0(y)} ] = e^{c0 x} T(y) ,

where T(y) := u(x0(y), y) e^{−c0 x0(y)}. If we prescribe u(x, 0) = ax as initial
condition for a nonzero constant a ∈ R, then T(y) has to satisfy T(0) =
u(x, 0) e^{−c0 x} = a x e^{−c0 x} for all x ∈ R, which is patently impossible (see
Figure 2.5).

• Assume as before that c1 ≡ 0, but now require as initial condition u(x, 0) =
e^{c0 x} for all x ∈ R. Plugging y = 0 in (2.2.2), we then obtain

    e^{c0 x} = u(x, 0) = e^{c0 x} u(x0(0), 0) e^{−c0 x0(0)} .

Thus we only need to impose that u(x0(0), 0) e^{−c0 x0(0)} = 1. In particular, we
can take any curve y ↦ (x0(y), y) and choose the value of u on this curve as
we want, provided that u(x0(0), 0) = e^{c0 x0(0)}. For example, we can consider
the curve y ↦ (0, y) and set x0(0) = 0, u(0, y) = 1 + Ay². Then

    u(x, y) = e^{c0 x} (1 + Ay²)

is a solution for every A ∈ R (see Figure 2.6).


Figure 2.6: Once we prescribe the value of u at one point on the curve {y = 0}
then we obtain the value of u at all other points on the curve {y = 0}. If the
value prescribed at {y = 0} is compatible then we have infinitely many solutions,
otherwise no solutions.

To summarize, if we consider the PDE

    ux(x, y) = c0 u(x, y)

for a constant c0 ∈ R, then the general formula for a solution is

    u(x, y) = e^{c0 x} u(x0(y), y) e^{−c0 x0(y)} .

So for all y ∈ R we take x0(y) ∈ R, defining a curve y ↦ (x0(y), y). Then the
value of u at each point (x0(y), y) defines u along the whole horizontal line passing
through that point.

• In the first example the initial condition was u(0, y) = y and we have a
unique solution.

• In the second case, the condition u(x, 0) = ax for a constant a ≠ 0 is not
compatible with the PDE and we have no solutions.

• In the third example the initial condition u(x, 0) = e^{c0 x} is compatible with
the PDE but it leaves “too much choice”. In fact we have infinitely many
solutions of the PDE.

The moral of the story is that boundary conditions and initial conditions are very
important. We need to be careful to impose appropriate conditions in order to
obtain a well-posed PDE.


2.3. Method for first order linear PDEs


Consider a general first order linear equation in two independent variables, namely

a(x, y)ux (x, y) + b(x, y)uy (x, y) = c0 (x, y)u(x, y) + c1 (x, y) . (2.3.1)
The idea is to assign the value of the solution u along a parametric curve and then
“propagate” this value along “characteristic curves”.
So, given a curve s 7→ (x0 (s), y0 (s)), we prescribe the value of u along such
curve as
u(x0 (s), y0 (s)) = ũ0 (s)
for all s ∈ R, for some function ũ0 of one variable. Hence, if we obtain a solution
u, the parametric curve in R3
Γ = Γ(s) = (x0 (s), y0 (s), ũ0 (s))
is contained in graph(u). We say that Γ is the initial curve.
Now observe that equation (2.3.1) can be rewritten as
(a, b, c0 u + c1 ) · (ux , uy , −1) = 0 .
Namely we ask the vector ~v := (a, b, c0 u + c1 ) to be orthogonal to the normal
vector (ux , uy , −1) and thus tangent to the surface graph(u). As a result, if we
integrate the vector field ~v, i.e., we consider the ODE d~z/dt = ~v(~z), then the curve ~z
is contained in the surface graph(u) (see Figure 2.7).
In other words, to find a solution to (2.3.1), we look for a surface S ⊆ R3
(which will then be graph(u)) such that at each point (x, y, u) ∈ S we have that
    (a(x, y), b(x, y), c(x, y, u)) ∈ T_{(x,y,u)} S ,

where c(x, y, u) := c0(x, y) u + c1(x, y).
In order to do this, for all s, we consider the curve (x(t, s), y(t, s), ũ(t, s)) given
by solving the following system of ODEs

    dx(t, s)/dt = a(x(t, s), y(t, s))
    dy(t, s)/dt = b(x(t, s), y(t, s))                                         (2.3.2)
    dũ(t, s)/dt = c0(x(t, s), y(t, s)) ũ(t, s) + c1(x(t, s), y(t, s)) ,

with initial conditions

    x(0, s) = x0(s)
    y(0, s) = y0(s)
    ũ(0, s) = ũ0(s) ,


Figure 2.7: The initial curve Γ and the construction of the solution surface.

i.e., we require the initial point to be Γ(s). The equations in (2.3.2) are the
characteristic equations associated to the PDE (2.3.1), and their solutions (as one
varies s) are the characteristic curves.
Note that, by definition, the characteristic curves t ↦ (x(t, s), y(t, s), ũ(t, s))
have tangent vector

    (a(x(t, s), y(t, s)), b(x(t, s), y(t, s)), c(x(t, s), y(t, s), ũ(t, s))) .

Hence we define the surface S as the union of these curves, namely

    S := {(x(t, s), y(t, s), ũ(t, s)) : (t, s) ∈ R²} ,

which is a parameterized representation of the solution surface graph(u) in the
variables (t, s). Then we shall reexpress (whenever possible) the surface in terms
of (x, y) as

    u(x(t, s), y(t, s)) = ũ(t, s) .
Remark 2.3.1. If one does not want to think of the method of characteristics
through the geometric interpretation above, one can reason as follows. Let u be a
solution of (2.3.1). Moreover let t ↦ (x(t), y(t)) be a curve solving

    dx(t)/dt = a(x(t), y(t))
    dy(t)/dt = b(x(t), y(t))

and consider also the curve t ↦ u(x(t), y(t)). Then, applying the chain rule, we
get

    d/dt [u(x(t), y(t))] = ẋ ux + ẏ uy = a ux + b uy = c(x(t), y(t), u(x(t), y(t))) .

In other words, u along the curve (x(t), y(t)) coincides with the solution of the
ODE

    dũ(t)/dt = c(x(t), y(t), ũ(t)) ,
provided of course that they start from the same initial condition.
Example 2.3.2. Consider the following Cauchy problem

    ux + uy = 1
    u(x, 0) = 2x³ .

We parameterize the initial condition with the curve Γ(s) = (x0(s), y0(s), ũ0(s)) =
(s, 0, 2s³).
In the notation of (2.3.1), here we have a = 1, b = 1, c0 = 0 and c1 = 1. Hence,
following the procedure described above, we obtain the ODE system

    dx(t, s)/dt = a(x(t, s), y(t, s)) = 1
    dy(t, s)/dt = b(x(t, s), y(t, s)) = 1
    dũ(t, s)/dt = c0(x(t, s), y(t, s)) ũ(t, s) + c1(x(t, s), y(t, s)) = 1 ,

together with initial conditions

    x(0, s) = x0(s) = s
    y(0, s) = y0(s) = 0
    ũ(0, s) = ũ0(s) = 2s³ .

Therefore the characteristic curves are given by

    x(t, s) = s + t
    y(t, s) = t
    ũ(t, s) = 2s³ + t .

Since we are looking for a solution in (x, y) coordinates, we have to find the inverse
map (t, s) ↦ (x, y) to find u(x, y). In this case it is very easy, indeed

    x(t, s) = s + t                 t = y
    y(t, s) = t          =⇒        s = x − t = x − y .


Hence the solution to the PDE is given by

    u(x, y) = ũ(t(x, y), s(x, y)) = ũ(y, x − y) = 2(x − y)³ + y .
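It is straightforward to confirm this solution symbolically (a sketch using sympy):

```python
import sympy as sp

x, y = sp.symbols("x y")
# Solution found by the method of characteristics in Example 2.3.2.
u = 2 * (x - y)**3 + y

# The PDE u_x + u_y = 1 holds...
assert sp.simplify(sp.diff(u, x) + sp.diff(u, y) - 1) == 0
# ...as does the initial condition u(x, 0) = 2x^3.
assert sp.expand(u.subs(y, 0)) == 2 * x**3
```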

Remark 2.3.3. There is no unique way to parameterize the initial condition. We
could have defined Γ(s) = (s⁴, 0, 2s¹²), which gives the same initial conditions as
before. The parameterized solution surface is then given by the relation

    u(x(t, s), y(t, s)) = ũ(t, s) = 2s¹² + t .

Summary

• First order PDEs relate the solution surface to its tangent plane.

• They can be solved using the method of characteristics.

• The initial curve is a parametrization of the initial conditions and it is
used to obtain the characteristic equations.

• The characteristic curves span the solution surface.

2.4. Existence and uniqueness questions

We now discuss some conditions that guarantee local existence and uniqueness.
The question is: under which conditions does there exist a unique integral surface
for (2.2.1) that contains the initial curve Γ?
To solve our Cauchy problem, we need to solve the characteristic equations
using the points we selected on Γ as initial conditions for the system of ODEs
(2.3.2). Assuming that the coefficients of the ODEs are smooth (a and b have to
be C¹), we can apply the Cauchy–Lipschitz theorem for ODEs, which guarantees
local existence in time and uniqueness of the solution. Hence, for all s ∈ R there
exists some time interval Is = (−εs, εs) around t = 0 such that the solution
t ↦ (x(t, s), y(t, s), ũ(t, s)) exists and is unique for all t ∈ Is.
Once the ODE system is solved, we have an expression for ũ in the variables
(t, s). The fundamental relation between ũ(t, s) and u(x, y) (the desired solution)
is given by ũ(t, s) = u(x(t, s), y(t, s)). Some difficulties may arise in the inversion
of the transformation from (t, s) to (x, y), because the mapping x = x(t, s),
y = y(t, s) may not be invertible. Thanks to the implicit function theorem we know that this


map is locally invertible if the following transversality condition holds

    det ( ∂x/∂t   ∂y/∂t
          ∂x/∂s   ∂y/∂s ) ≠ 0 .

Note that this condition at (0, s) becomes

    det ( a(x0(s), y0(s), u0(s))   b(x0(s), y0(s), u0(s))
          dx0(s)/ds                dy0(s)/ds              ) ≠ 0 ,

because x(0, s) = x0(s), y(0, s) = y0(s), ∂x/∂t = a and ∂y/∂t = b.
Asking that the transversality condition is verified means that the vectors (a, b)
and (dx0/ds, dy0/ds) are transverse, i.e., they are not parallel, see Figure 2.8.
Since (a(x0(s), y0(s), u0(s)), b(x0(s), y0(s), u0(s))) is the tangent vector to the
characteristic t ↦ (x(t, s), y(t, s)) at t = 0, while (dx0/ds, dy0/ds) is the tangent
vector to π(Γ) := {(x0(s), y0(s), 0) ∈ R3} at s, the transversality condition means
that π(Γ) and t ↦ (x(t, s), y(t, s)) are transverse at t = 0.
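For Example 2.3.2 the transversality condition is easy to check explicitly: there a = b = 1 and the initial curve is (x0(s), y0(s)) = (s, 0). A sympy sketch:

```python
import sympy as sp

s = sp.symbols("s")
# Data from Example 2.3.2: a = b = 1, initial curve (x0, y0) = (s, 0).
a, b = 1, 1
x0, y0 = s, sp.Integer(0)

# Transversality matrix evaluated at (0, s): first row (a, b),
# second row (dx0/ds, dy0/ds).
J = sp.Matrix([[a, b],
               [sp.diff(x0, s), sp.diff(y0, s)]])

# det = a*(dy0/ds) - b*(dx0/ds) = -1 != 0, so the map (t, s) -> (x, y)
# is locally invertible and the Cauchy problem has a unique local solution.
assert J.det() == -1
```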
So far we discussed local obstructions (i.e., obstacles to the local existence of
a solution), but we can also encounter obstacles to global existence. Indeed global
existence (i.e., existence of a solution in all of the domain of interest) can fail for
several reasons:
(i) In general ODEs only have local solutions and solutions can blow up in finite
time. Similarly, solutions of the Cauchy problem can blow up if you move far
enough away from Γ.

(ii) If the characteristics t 7→ (x(t, s), y(t, s)) intersect the Cauchy curve Γ more
than once, then global existence may fail. This is because the characteristic
equation is well-posed for a single initial condition. Think about the fact
that a characteristic curve “carries” with it a charge of information from its
intersection point with Γ. If a characteristic curve intersects Γ more than
once these two “information charges” might be in conflict (see Figure 2.9).

(iii) If the vector field (a, b) vanishes at some point, then the corresponding PDE
may only have a solution outside of a neighborhood of this point.

(iv) If the characteristics intersect within the domain of interest, then existence
can break down at the intersection points.


Alternatively it may happen that the characteristic curves (as curves in R³) all
project onto the same curve in the (x, y)-plane. In this case:

(i) either the characteristic curves coincide before the projection and thus there
are infinitely many solutions;

(ii) or these curves do not coincide, which means that graph(u) should take
different values on π(Γ), which is impossible.

Figure 2.8: Projection of characteristic curve crossing π(Γ) transversally.

Figure 2.9: Projection of a characteristic curve crossing π(Γ) twice. The value of
u at (x0 (s0 ), y0 (s0 )) may not be uniquely defined since it should both be equal to
u0 (s0 ) and to u(x(t0 , s0 ), y(t0 , s0 )).


2.5. Examples of existence and uniqueness


Later we will see the exact statement of the existence theorem. Before that, let us
see some other examples.
Example 2.5.1. Consider the following Cauchy problem
ux + uy = 1
u(x, −x) = g(x) ,
where g is any function in one variable. We parameterize the initial curve by
choosing
Γ(s) := (s, −s, g(s)) .
The characteristic equations are then given by
dx(t, s)/dt = 1 ,  x(0, s) = s  =⇒  x(t, s) = t + s
dy(t, s)/dt = 1 ,  y(0, s) = −s  =⇒  y(t, s) = t − s
dũ(t, s)/dt = 1 ,  ũ(0, s) = g(s)  =⇒  ũ(t, s) = g(s) + t .
In this case the relation between the variables x and y and the variables t and s is
just

t = (x + y)/2 ,  s = (x − y)/2 .

Therefore ũ(t, s) = u(x(t, s), y(t, s)) gives us the solution u(x, y) = g((x − y)/2) + (x + y)/2 .

Example 2.5.2. Let us now consider the same PDE as in the previous example,
but with a different initial condition, namely
ux + uy = 1
u(x, x) = h(x) ,
for a function h in one variable. As a parametrization of the initial curve we choose
Γ(s) := (s, s, h(s)). Hence we have the following characteristic equations
dx(t, s)/dt = 1 ,  x(0, s) = s  =⇒  x(t, s) = t + s
dy(t, s)/dt = 1 ,  y(0, s) = s  =⇒  y(t, s) = t + s
dũ(t, s)/dt = 1 ,  ũ(0, s) = h(s)  =⇒  ũ(t, s) = h(s) + t .

Figure 2.10: A plot of the characteristics in Example 2.5.1.

In this case the relation between (t, s) and (x(t, s), y(t, s)) cannot be inverted. This
can be seen evaluating the determinant
 
det( ∂x/∂t  ∂y/∂t ; ∂x/∂s  ∂y/∂s ) = det( 1  1 ; 1  1 ) = 0 .
Note that the projection of the initial curve is the diagonal {x = y}, but this is
also the projection of a characteristic curve. In the case where h(x) = x + c for
a constant c ∈ R, we obtain ũ(t, s) = s + t + c. Then it is not necessary to invert
the mapping (x(t, s), y(t, s)), because u(x, y) = (x + y)/2 + c + f((x − y)/2) is a
solution for every differentiable function f that vanishes at the origin. However,
for any other choice of h the problem has no solutions.

Example 2.5.3. Consider the following Cauchy problem


2ux + uy = 1 − u
u(x, e^x + x/2) = x ,  for x ∈ R .

Hence we can choose

Γ(s) = (x0(s), y0(s), ũ0(s)) = (s, e^s + s/2, s)

and the characteristic equations are

dx(t, s)/dt = 2 ,  x(0, s) = s  =⇒  x(t, s) = 2t + s
dy(t, s)/dt = 1 ,  y(0, s) = e^s + s/2  =⇒  y(t, s) = e^s + s/2 + t
dũ(t, s)/dt = 1 − ũ(t, s) ,  ũ(0, s) = s .

Let us solve the equation for ũ:

dũ/dt + ũ = 1  =⇒  (dũ/dt + ũ) e^t = e^t  =⇒  d/dt (ũ e^t) = e^t .
Therefore we have

ũ(t, s) e^t − ũ(0, s) = ∫_0^t e^τ dτ = e^t − 1

=⇒  ũ(t, s) = e^(−t) [ũ(0, s) + e^t − 1] = e^(−t) s + 1 − e^(−t) .          (2.5.1)

From the relations x = 2t + s and 2y = s + 2e^s + 2t, we get 2y = 2e^s + x. Therefore,
since e^s > 0, it must hold 2y > x and s = ln(y − x/2). Going back to the relation
x = 2t + s, we then get t = (x − s)/2 = (1/2)(x − ln(y − x/2)). Plugging these relations in
(2.5.1) and using that u(x, y) = ũ(t(x, y), s(x, y)), we obtain

u(x, y) = e^(−(x − ln(y − x/2))/2) [ln(y − x/2) − 1] + 1 = e^(−x/2) √(y − x/2) [ln(y − x/2) − 1] + 1 .
Remark 2.5.4. Computing the Jacobian for the initial curve we get

det( 2  1 ; 1  e^s + 1/2 ) = 2e^s > 0  for all s ∈ R .

Therefore, we expect local existence of a unique solution at each point where
s = ln(y − x/2) is well defined, that is in the half plane {(x, y) : 2y > x}.
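A numerical spot check of this closed-form solution (a sketch; the sample point (x, y) = (3/10, 2) lies in the half plane 2y > x and is chosen arbitrarily):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Closed-form solution of Example 2.5.3, valid on the half plane {2y > x}
u = sp.exp(-x/2) * sp.sqrt(y - x/2) * (sp.log(y - x/2) - 1) + 1

# Residual of the PDE 2*u_x + u_y = 1 - u at a sample point
pde_res = (2*sp.diff(u, x) + sp.diff(u, y) - (1 - u)).subs({x: sp.Rational(3, 10), y: 2})
# Residual of the initial condition u(x, e^x + x/2) = x
ic_res = (u.subs(y, sp.exp(x) + x/2) - x).subs(x, sp.Rational(3, 10))
print(sp.N(pde_res), sp.N(ic_res))  # both ≈ 0
```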
Example 2.5.5. We refer to [Pin05, Example 2.6]. Consider the PDE
−y ux + x uy = u
u(x, 0) = ψ(x) .

We parameterize the initial curve as

Γ(s) = (x0 (s), y0 (s), ũ0 (s)) = (s, 0, ψ(s)) ,


so the system of ODEs is

dx(t, s)/dt = −y ,  x(0, s) = s
dy(t, s)/dt = x ,  y(0, s) = 0
dũ(t, s)/dt = ũ(t, s) ,  ũ(0, s) = ψ(s) .
Note that

d²x/dt² = −dy/dt = −x  and  d²y/dt² = dx/dt = −y .

Hence we obtain that

d²x/dt² = −x  =⇒  x(t, s) = f1(s) cos(t) + f2(s) sin(t)
d²y/dt² = −y  =⇒  y(t, s) = g1(s) cos(t) + g2(s) sin(t)
dũ/dt = ũ  =⇒  ũ(t, s) = ũ(0, s) e^t = ψ(s) e^t .
Using that x(0, s) = s, y(0, s) = 0, (dx/dt)(0, s) = −y(0, s) = 0, (dy/dt)(0, s) = x(0, s) = s
and ũ(0, s) = ψ(s), we obtain that

x(t, s) = s cos(t)
y(t, s) = s sin(t) .

If we assume s > 0, we note that s, t act as polar coordinates, so we can invert the
relation above to obtain

s = √(x² + y²) ,  t = arctan(y/x) .

This gives us the solution

u(x, y) = ψ(√(x² + y²)) e^(arctan(y/x)) .
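One can check this solution symbolically on the half plane x > 0, where arctan(y/x) is the correct polar angle (a sketch with a generic ψ):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.symbols('y')
psi = sp.Function('psi')  # generic initial datum

# Solution of Example 2.5.5, restricted to x > 0
u = psi(sp.sqrt(x**2 + y**2)) * sp.exp(sp.atan(y / x))

pde = sp.simplify(-y*sp.diff(u, x) + x*sp.diff(u, y) - u)  # residual of the PDE
ic = sp.simplify(u.subs(y, 0) - psi(x))                    # residual of u(x, 0) = psi(x)
print(pde, ic)
```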

2.6. The existence and uniqueness theorem


As we have seen in the previous section, existence and uniqueness of solutions is a
delicate issue. In fact:
(i) The projection of the characteristics may not be transversal to the initial
curve, in which case we are not able to express t and s in terms of x and y.


Figure 2.11: A plot of the characteristics in Example 2.5.5.

(ii) The projection of a characteristic may intersect π(Γ) more than once, in
which case the value of u may not be uniquely determined.
We have the following local existence and uniqueness result.
Theorem 2.6.1. Consider the Cauchy problem

a(x, y, u)ux + b(x, y, u)uy = c(x, y, u)
Γ(s) = (x0(s), y0(s), u0(s)) .

Assume that there exists s0 ∈ R such that the transversality condition holds at
(0, s0), i.e.,

det( ∂x/∂t(0, s0)  ∂y/∂t(0, s0) ; ∂x/∂s(0, s0)  ∂y/∂s(0, s0) ) ≠ 0 .

Then there exists a unique solution u of the Cauchy problem defined in a
neighborhood of (x0(s0), y0(s0)).
Sketch of the proof. We first solve the characteristic equations for s close to s0 . By
existence and uniqueness for ODEs, we know that there exists a unique solution
(x(t, s), y(t, s), ũ(t, s)) defined for (t, s) close to (0, s0 ). Thanks to the transver-
sality condition, we know that we can apply the implicit function theorem and


the map (t, s) ↦ (x(t, s), y(t, s)) is invertible close to (0, s0). So this allows
us to find a formula for u as u(x, y) = ũ(t(x, y), s(x, y)) in a neighborhood of
(x(0, s0), y(0, s0)) = (x0(s0), y0(s0)).

Summary

• The method of characteristics can be used to solve first order quasilinear
PDEs.

• The initial condition is described by the initial curve.

• The characteristics are expressed in terms of (t, s) instead of (x, y).

• We need to express the solution for u in terms of (x, y), but the map
(t, s) ↦ (x, y) may not be invertible.

CHAPTER 3
CONSERVATION LAWS AND SHOCK WAVES

In this chapter we study an important class of first order PDEs called conser-
vation laws, which are PDEs that prescribe conserved quantities such as mass,
electric charge, number of cars (in traffic dynamics), number of people (in crowd
dynamics), etc. We see some examples (such as Burgers’ equation with various
initial data) and how we can apply the method of characteristics to solve conservation
laws. However, solutions of conservation laws may develop discontinuities even
for smooth initial data, for which reason we need to introduce the notion of weak
solution.
We recall that the local existence theorem for first order quasilinear PDEs
states that, under suitable conditions, one can find local solutions to first order
quasilinear PDEs using the method of characteristics. We see in some examples
that, even if a classical solution ceases to exist, the phenomenon (say for example
the traffic flow) that we are modelling certainly does not. Therefore we broaden
our definition of solution to allow us to make predictions about the phenomenon
under study after the time when a classical solution ceases to exist.

3.1. What are (scalar) conservation laws?


Conservation laws are PDEs describing the evolution of a conserved quantity.

Definition 3.1.1. A scalar conservation law for a function u : R × [0, +∞) → R


of one spatial variable x ∈ R and one time variable y ∈ [0, +∞) is a PDE of the
form
uy + f (u)x = 0 , (3.1.1)
for a given function f : R → R, called the flux function. Equivalently, (3.1.1) can
be written as
uy + c(u)ux = 0 , (3.1.2)
where c(u) = f 0 (u).


Example 3.1.2. The easiest example of conservation law is the transport equation

uy + cux = 0 , (3.1.3)
for a constant c ∈ R, i.e., c(u) = c in this case. Note that uy + cux = uy + (cu)x ,
thus the flux is f (u) = cu and f 0 (u) = c(u) = c.
If u ≥ 0, u can represent the concentration of a pollutant in a river at time
y and position x (see also Example 1.3.1). The constant c ∈ R represents the
velocity of the river: if c > 0 the flow goes from left to right, if c < 0 the flow
goes from right to left. Moreover, the total amount of pollutant in an interval [a, b]
at time y is
∫_a^b u(x, y) dx .

The initial value problem (or Cauchy problem) for equation (3.1.3) is
uy + cux = 0 ,  (x, y) ∈ R × (0, ∞)
u(x, 0) = g(x) ,  x ∈ R ,

where g > 0 is the concentration of the pollutant at time 0.


Let us solve this problem using the method of characteristics. Note that (3.1.3)
is in the form (2.3.1). We can choose as initial curve Γ(s) = (x0 (s), y0 (s), ũ0 (s)) =
(s, 0, g(s)). Hence the characteristic ODEs are

dx(t, s)/dt = c ,  x(0, s) = s  =⇒  x(t, s) = ct + s
dy(t, s)/dt = 1 ,  y(0, s) = 0  =⇒  y(t, s) = t
dũ(t, s)/dt = 0 ,  ũ(0, s) = g(s)  =⇒  ũ(t, s) = g(s) .
Now we need to invert the function (t, s) ↦ (x(t, s), y(t, s)), which we can do and
we obtain

y(t, s) = t ,  x(t, s) = ct + s  =⇒  t(x, y) = y ,  s(x, y) = x − cy .

As a result, we get

u(x, y) = ũ(t(x, y), s(x, y)) = g(s(x, y)) = g(x − cy) ,

which is a so-called traveling wave, see Figure 3.1.
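A one-line symbolic verification of the traveling wave (a sketch with a generic C¹ profile g):

```python
import sympy as sp

x, y, c = sp.symbols('x y c')
g = sp.Function('g')

u = g(x - c*y)  # traveling wave

transport = sp.simplify(sp.diff(u, y) + c*sp.diff(u, x))  # residual of u_y + c u_x = 0
initial = sp.simplify(u.subs(y, 0) - g(x))                # residual of u(x, 0) = g(x)
print(transport, initial)  # 0 0
```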


Figure 3.1: A travelling wave. On the left the initial condition g(x) at y = 0. On
the right, the solution g(x − cy) when c > 0 after time y > 0.

3.2. Example: Burgers’ equation


Example 3.2.1. Let us consider the Burgers’ equation
uy + uux = 0
u(x, 0) = h(x) ,          (3.2.1)

which models the flow of a mass with concentration u(x, y), where the speed of the
flow depends on the concentration. The variable y has the physical interpretation
of a time and h(x) is the initial condition, so the concentration of mass at time
y = 0.
Remark 3.2.2. Burgers’ equation is in the form uy + c(u)ux = 0 with c(u) = u,
thus it is equivalent to the equation
 
uy + (u²/2)x = 0 .

In particular the flux is f(u) = u²/2 and the wave speed is c(u) = f′(u) = u.
Since (3.2.1) is a first order equation, we can use the method of characteristics.
The parameterized initial condition is Γ(s) = (s, 0, h(s)) and the characteristic
equations are given by

dx(t, s)/dt = ũ(t, s) ,  x(0, s) = s  =⇒  x(t, s) = s + h(s)t
dy(t, s)/dt = 1 ,  y(0, s) = 0  =⇒  y(t, s) = t
dũ(t, s)/dt = 0 ,  ũ(0, s) = h(s)  =⇒  ũ(t, s) = h(s) .

Inverting the function (t, s) ↦ (x(t, s), y(t, s)) = (s + h(s)t, t) as in the example of
the transport equation is not possible; we just obtain that y = t and x = s + h(s)y.
Therefore the solution of the PDE is implicitly given by

u(s + h(s)y, y) = ũ(t, s) = h(s) .

Note that the initial value of u (namely h) determines the slope of the characteristic
equations.
Now, recalling that x = s + h(s)y and ũ(t, s) = h(s), we have that s = x − ũy.
As a result, the solution can be written implicitly also as

u(x, y) = h(x − uy) .

Remark 3.2.3. This last implicit solution does not come unexpected (looking back
at the solution of the transport equation) and it is actually a very general formula.
Indeed, if you are solving a PDE in the form
(
uy + c(u)ux = 0 for (x, y) ∈ R × (0, ∞)
u(x, 0) = u0 (x) for x ∈ R ,

then u satisfies the implicit equation

u(x, y) = u0 (x − c(u(x, y))y) .
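This implicit equation also suggests a way to evaluate the solution numerically. The sketch below (not from the notes; the function name is ours) computes u(x, y) by fixed-point iteration on u = u0(x − c(u)y), which converges for y below the critical time, and then checks the PDE by finite differences:

```python
import numpy as np

def eval_solution(u0, c, x, y, tol=1e-13, max_iter=500):
    """Evaluate u(x, y) from the implicit relation u = u0(x - c(u) y)."""
    u = u0(x)  # initial guess: the value transported from y = 0
    for _ in range(max_iter):
        u_new = u0(x - c(u) * y)
        if abs(u_new - u) < tol:
            break
        u = u_new
    return u_new

# Burgers' equation: c(u) = u, with the smooth datum u0(x) = exp(-x^2)
u0 = lambda x: np.exp(-x**2)
c = lambda u: u

x0, y0, h = 0.4, 0.3, 1e-5  # y0 is well below the critical time of this datum
u = eval_solution(u0, c, x0, y0)
ux = (eval_solution(u0, c, x0 + h, y0) - eval_solution(u0, c, x0 - h, y0)) / (2*h)
uy = (eval_solution(u0, c, x0, y0 + h) - eval_solution(u0, c, x0, y0 - h)) / (2*h)
print(abs(uy + u*ux))  # residual of u_y + u u_x = 0, ≈ 0
```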

Remark 3.2.4. Let us verify the transversality condition for (3.2.1) at a point (0, s).
We have that

det( ∂x/∂t  ∂y/∂t ; ∂x/∂s  ∂y/∂s ) = det( u  1 ; 1  0 ) = −1 ≠ 0 .
Therefore all the points of the initial curve Γ are not degenerate and, if h is
continuously differentiable, Theorem 2.6.1 ensures that the conservation law has a
unique solution on some time interval [0, yc ) (the subscript c stands for “critical”),
where yc > 0 is sufficiently small.
Let us now determine the critical time yc when the “classical” (or strong)
solution breaks down.
Recall that x = s + h(s)y. Let us fix y = ȳ and look at the map

s 7→ s + h(s)ȳ = x(ȳ, s) .

Assuming that h ∈ C 1 , we can compute


(d/ds)(s + h(s)ȳ) = 1 + h′(s)ȳ .

Assume also that, for all s ∈ R, 1 + h′(s)ȳ > 0. This implies that the map s ↦
x(ȳ, s) is strictly increasing, thus there exists its unique inverse map. Therefore,
for ȳ fixed, we can invert the relation s ↦ s + h(s)ȳ provided that 1 + h′(s)ȳ > 0.
If we assume for instance that h′ is globally bounded, then 1 + h′(s)ȳ > 0 for
ȳ ≥ 0 small enough. What is the first value ȳ > 0 for which we cannot invert the
relation s ↦ s + h(s)ȳ? This is given by the first ȳ for which there exists s ∈ R
such that 1 + h′(s)ȳ = 0. If we denote by yc such a ȳ, we can say that

yc = inf { −1/h′(s) : s ∈ R, h′(s) < 0 } .          (3.2.2)
At time yc , there is a problem with the solution u. To see it, we can differentiate
the relation u(s + h(s)y, y) = h(s) with respect to s to get

ux(s + h(s)y, y) [1 + h′(s)y] = h′(s) .

Thus

ux(s + h(s)y, y) = h′(s) / (1 + h′(s)y) ,

which shows that the derivative of u explodes when we take s and yc such that
1 + h′(s)yc = 0. Hence yc is the critical time after which there is no smooth solution
to the problem.
Remark 3.2.5. Note that the formula (3.2.2) is specific to Burgers’ equation.

3.3. Critical time for conservation laws


Theorem 3.3.1. Consider any scalar conservation law

uy + c(u)ux = 0 ,  (x, y) ∈ R × (0, ∞)
u(x, 0) = u0(x) ,  x ∈ R ,

where c, u0 ∈ C¹(R) and c ◦ u0 : R → R is bounded with bounded derivative. Moreover
define

yc := inf { −1/((d/ds)c(u0(s))) : s ∈ R, (d/ds)c(u0(s)) < 0 }
    = inf { −1/(c′(u0(s)) u0′(s)) : s ∈ R, (d/ds)c(u0(s)) < 0 } ,

with the standard convention that yc = ∞ if (d/ds)c(u0(s)) ≥ 0 for all s ∈ R.
Then, if yc > 0, there exists a unique solution to the PDE above in [0, yc) and
u satisfies the implicit equation

u(x, y) = u0(x − c(u(x, y))y) .


Remark 3.3.2. Solutions of conservation laws are constant along their characteris-
tics, which are straight lines. Indeed, for each s ∈ R, the characteristic through a
point (s, 0) is the line in the (x, y)-plane going through (s, 0) with slope 1/c(u0 (s))
and on this line u is equal to the constant u0 (s).
Remark 3.3.3. If (d/ds)c(u0(s)) < 0 for some s, then there exists a time when the
characteristics cross. Heuristically you can think of the latter condition as a faster
characteristic starting from a point behind a slower characteristic. If s ↦ c(u0(s))
is nondecreasing, there are no singularities; however such data are exceptional.

Figure 3.2: Crossing characteristics.

3.4. Notion of weak solutions


In the context of conservation laws, we say that u is a classical (or strong) solution
if u satisfies the classical formulation of conservation laws (3.1.1), or equivalently
(3.1.2). To introduce a notion of weak solution, we use the so-called integral
formulation of conservation laws, that is
∫_a^b u(x, y2) dx − ∫_a^b u(x, y1) dx = − ∫_{y1}^{y2} [f(u(b, y)) − f(u(a, y))] dy ,          (3.4.1)

for all a < b, y1 < y2 , where f (u) is the flux function.


The classical formulation (3.1.1) (or (3.1.2)) makes no sense if u is not contin-
uously differentiable, but the integral formulation (3.4.1) still makes sense even if
u is not continuous. Moreover we have that


• if u satisfies the classical formulation, then it satisfies also the integral for-
mulation;

• if u satisfies the integral formulation and it is regular, then it is a classical


solution.

Let us suppose that the domain of definition of u is D and D is divided into


subdomains Di for i = 1, . . . , n. Assume that u(x, y) is continuously differentiable
in each Di for all i = 1, . . . , n.

Definition 3.4.1. We say that u(x, y) is a weak solution on D = D1 ∪ · · · ∪ Dn if u
satisfies the original PDE (3.1.1) (or (3.1.2)) in each Di for i = 1, . . . , n and the
integral form (3.4.1) on D. The boundaries between the regions Di are curves
called shocks.

Thus, in our notion of weak solution, we relax the requirement of a global clas-
sical solution and we allow solutions that are a combination of classical solutions
on each Di with possibly jumps between them. We now see a very important
condition that has to be verified in order to have a global weak solution.

3.5. Rankine-Hugoniot condition


Given u as in Definition 3.4.1, let x = σ(y) be a smooth curve in the (x, y)-plane
across which u is discontinuous. Assume that u, ux, uy have one-sided limits as
x → σ(y)+ and as x → σ(y)−. Choosing a < σ(y) < b, the formula (3.4.1)
becomes

f(u(a, y)) − f(u(b, y)) = d/dy ∫_a^σ(y) u(x, y) dx + d/dy ∫_σ(y)^b u(x, y) dx
= ∫_a^σ(y) uy(x, y) dx + σ′(y) u(σ(y)−, y) + ∫_σ(y)^b uy(x, y) dx − σ′(y) u(σ(y)+, y) .

In the limit as a → σ(y)− and b → σ(y)+, the integrals vanish and we have

f+ − f− = σ′(y)(u+ − u−)  =⇒  σ′(y) = (f+ − f−)/(u+ − u−) ,          (RH)

where u± = limx→σ(y)± u(x, y) and f ± = f (u± ). The condition (RH) is the


Rankine-Hugoniot condition and, if a curve x = σ(y) satisfies (RH), then we
say that it is a shock wave.


Figure 3.3: A solution with a jump discontinuity along a curve.

Example 3.5.1. Consider the following Burgers’ equation

uy + uux = 0 ,  (x, y) ∈ R × (0, ∞)
u(x, 0) = e^(−x²) ,  x ∈ R .

Note that in this case c(u) = u and u0(x) = e^(−x²). Observe that c(u0(s)) = e^(−s²)
is decreasing for s > 0. Hence this conservation law has a classical solution for
y ∈ [0, yc) where

yc = min_{s>0} ( −1/(c′(u0(s)) u0′(s)) ) = min_{s>0} e^(s²)/(2s) .

Let y∗(s) := e^(s²)/(2s); then lim_{s→0} y∗(s) = lim_{s→+∞} y∗(s) = +∞. Therefore y∗ is
minimized when

dy∗/ds = 0  ⇐⇒  e^(s²) (1 − 1/(2s²)) = 0  ⇐⇒  s² = 1/2 .

Therefore s = 1/√2 is the unique critical point of y∗ in (0, ∞). We conclude that
yc = y∗(1/√2) = e^(1/2)/√2. Then, for y < yc the solution is

u(x, y) = u0(s) = e^(−s²) ,

where s is the unique solution of x = s + c(u0(s))y = s + e^(−s²) y .
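A quick numerical confirmation of this critical time, minimizing y∗ on a grid (a sketch):

```python
import numpy as np

# y_*(s) = exp(s^2) / (2s), the blow-up time of the characteristic starting at s > 0
s = np.linspace(1e-3, 4.0, 400001)
y_star = np.exp(s**2) / (2*s)

i = np.argmin(y_star)
print(s[i], y_star[i])  # ≈ 1/sqrt(2) ≈ 0.7071 and e^(1/2)/sqrt(2) ≈ 1.1658
```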


3.6. The entropy condition


Example 3.6.1. We refer to [Pin05, Example 2.14]. Consider the Burgers’ equation

uy + uux = 0
u(x, 0) = h(x) ,

with initial condition h defined as

h(x) = 1 ,         if x ≤ 0
       1 − x/α ,   if 0 ≤ x ≤ α
       0 ,         if α ≤ x .

Since the initial condition h(x) is not monotonically increasing, the solution
develops a singularity at time y = yc , where

yc = inf { −1/(c′(u0(s)) u0′(s)) : (d/ds)c(u0(s)) < 0 } .

Note that c(u0(s)) = c(h(s)) = h(s); however the initial datum h(x) is not
differentiable. Nevertheless c(u0(s)) = 1 − s/α is decreasing for s ∈ (0, α) and we
expect u to become discontinuous at time

yc = inf_{s∈(0,α)} { −1/((d/ds)c(u0(s))) } = inf_{s∈(0,α)} {α} = α .

The method of characteristics gives the system of ODEs

xt = a = ũ  =⇒  x(t, s) = s + t h(s)
yt = b = 1  =⇒  y(t, s) = t
ũt = 0  =⇒  ũ(t, s) = h(s) .

Therefore, for y < yc = α, the solution is u(x, y) = h(s), where (x, y) lies on
the characteristic through (s, 0). Since h is defined piecewise, we have to consider
three cases:

(i) If s ≤ 0, then h(s) = 1 and the characteristics have equation x = s + h(s)y =
s + y. Therefore u(x, y) = h(s) = 1 for s ≤ 0, which holds if and only if
x ≤ y (see Figure 3.4).

(ii) If s ≥ α, then h(s) = 0 and the characteristics have equation x = s + h(s)y =
s. Therefore u(x, y) = h(s) = 0 for s ≥ α, which holds if and only if x ≥ α
(see Figure 3.4).


(iii) Finally for 0 ≤ s ≤ α, we have h(s) = 1 − s/α and the characteristics
have equation x = s + h(s)y = s + (1 − s/α)y, which is equivalent to s =
α(x − y)/(α − y). Therefore

u(x, y) = h(s) = 1 − s/α = 1 − (x − y)/(α − y) = (α − x)/(α − y)

if 0 ≤ s ≤ α, which holds if and only if y ≤ x ≤ α.

Figure 3.4: Initial condition of Example 3.6.1.

Note that at time yc = α the characteristics intersect. Indeed, from time y = yc = α
on, a shock appears in the graph of u, jumping from 1 to 0. To find the shock curve
x = γ(y), we impose the Rankine-Hugoniot condition, so γ satisfies

γ′(y) = (f(u+) − f(u−))/(u+ − u−) = (1/2) · ((u+)² − (u−)²)/(u+ − u−) = (u+ + u−)/2 = 1/2 .

Since γ′(y) = 1/2, we deduce that γ is a linear function and, by the condition that
the shock starts at the point (α, α), we get

γ(y) = α + (y − α)/2
for y ≥ α. Therefore, for y ≥ α, we can write the solution for u as
u(x, y) = 1 ,  if x < γ(y) ;   0 ,  if x > γ(y) ,

which is a weak solution (or shock wave). In conclusion, we constructed the
following weak solution

u(x, y) = 1 ,                  if x ≤ y, y ∈ [0, α)
          (α − x)/(α − y) ,    if y ≤ x ≤ α, y ∈ [0, α)
          0 ,                  if x ≥ α, y ∈ [0, α)
          1 ,                  if x < (y + α)/2, y ∈ [α, ∞)
          0 ,                  if x > (y + α)/2, y ∈ [α, ∞) .

Figure 3.5: Intersection of characteristic curves in Example 3.6.1.
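The integral formulation (3.4.1) can be tested numerically on this shock solution. The sketch below (our own check, with α = 1) uses a box [a, b] × [y1, y2] that the shock crosses:

```python
import numpy as np

alpha = 1.0
gamma = lambda y: alpha + (y - alpha) / 2      # shock curve, for y >= alpha
u = lambda x, y: 1.0 if x < gamma(y) else 0.0  # weak solution past the critical time
f = lambda v: v**2 / 2                         # Burgers' flux

def trapezoid(fun, lo, hi, n=200000):
    xs = np.linspace(lo, hi, n + 1)
    vals = np.array([fun(t) for t in xs])
    return (hi - lo) / n * (vals[0]/2 + vals[1:-1].sum() + vals[-1]/2)

a, b, y1, y2 = 0.0, 3.0, 1.0, 2.0

# change of "mass" in [a, b] between times y1 and y2 ...
lhs = trapezoid(lambda x: u(x, y2), a, b) - trapezoid(lambda x: u(x, y1), a, b)
# ... must equal minus the net flux difference at the endpoints
rhs = -trapezoid(lambda y: f(u(b, y)) - f(u(a, y)), y1, y2)
print(lhs, rhs)  # both ≈ 0.5
```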

Example 3.6.2. We refer to [Pin05, Example 2.15]. In this example we see how,
by allowing for weak solutions, we lose uniqueness. Consider the Burgers’ equation

uy + uux = 0
u(x, 0) = h(x) ,

with initial condition h defined as

h(x) = 0 ,     if x ≤ 0
       x/α ,   if 0 ≤ x ≤ α
       1 ,     if x ≥ α .

Since (d/ds)c(h(s)) = h′(s) ≥ 0, there is no critical time yc > 0 at which the
characteristics intersect. On the contrary, the characteristics diverge. In this
situation we talk about expansion waves. The characteristics are given by

x(t, s) = s + h(s)y = s ,             if s ≤ 0
                      s(1 + y/α) ,   if 0 ≤ s ≤ α
                      s + y ,        if s ≥ α .

Thus, inverting the relation in order to write s in terms of x, y, we have

s = x ,            if {s ≤ 0} = {x ≤ 0}
    αx/(α + y) ,   if {s ∈ [0, α]} = {0 ≤ αx/(α + y) ≤ α} = {0 ≤ x ≤ y + α}
    x − y ,        if {s ≥ α} = {x − y ≥ α} = {x ≥ y + α} .

Since u(x, y) = h(s), this yields

u(x, y) = 0 ,            if x ≤ 0
          x/(α + y) ,    if 0 ≤ x ≤ y + α
          1 ,            if x ≥ y + α .

Let us now look at the case when α → 0. Then h(x) becomes the step function

h(x) = 0 ,  if x ≤ 0
       1 ,  if x > 0 .

By taking the solution u above and letting α → 0, we obtain

u(x, y) = 0 ,    if x ≤ 0
          x/y ,  if 0 < x < y          (3.6.1)
          1 ,    if x ≥ y ,

which is a classical solution.
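The middle branch can be checked directly: u = x/y solves Burgers' equation in the wedge 0 < x < y (a one-line symbolic sketch):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

u = x / y  # rarefaction fan connecting the states u = 0 and u = 1
res = sp.simplify(sp.diff(u, y) + u*sp.diff(u, x))  # residual of u_y + u u_x = 0
print(res)  # 0
```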


In principle, we can still find a solution with a shock, because u = 0 for x < 0
and u = 1 for x > y. A shock should be a curve (γ(y), y) satisfying γ(0) = 0 and
γ′(y) = (u+ + u−)/2 = 1/2 by the Rankine-Hugoniot condition, which leads to
γ(y) = y/2. Therefore another solution would be

u(x, y) = 0 ,  if x ≤ y/2 ;   1 ,  if x ≥ y/2 .          (3.6.2)

If a conservation law does not have a unique weak solution, then how can we
select the “right” one? The answer comes from the following entropy condition.


Figure 3.6: The expansion (or rarefaction) wave of Example 3.6.2.

Definition 3.6.3. A weak solution satisfies the entropy condition if characteristics
only enter shock waves but do not emanate from them, i.e., a shock wave x = γ(y)
satisfies the entropy condition if c(u+) < γ′ < c(u−), or equivalently f′(u+) < γ′ <
f′(u−), where f is the flux and f′(u) = c(u).

One can see that the solution (3.6.1) satisfies the entropy condition trivially
because it has no shocks, while (3.6.2) does not satisfy the entropy condition, since
c(u+ ) = 1, c(u− ) = 0 and γ 0 = 1/2.

Figure 3.7: Characteristics emerging from the shock.

CHAPTER 4
ONE DIMENSIONAL WAVE EQUATION

In this section we study the one dimensional wave equation (which is the archetype
of hyperbolic equations, see Section 6.1) on the real line. We use the reduction to
the canonical form to show that the general solution of the one dimensional wave
equation can be decomposed as the superposition of a forward and a backward
traveling wave. We also introduce d’Alembert’s formula, which gives us an explicit
solution to the Cauchy problem.
Usually real life applications of the wave equation take place on a finite spatial
interval. In that case, we would need to deal with boundary conditions, but for
now we consider the simplified setting in absence of boundary conditions, in order
to make some general considerations.

4.1. Canonical form and general solution


The homogeneous one dimensional wave equation is a hyperbolic second order
differential equation of the form

utt − c2 uxx = 0 , (x, t) ∈ R × (0, ∞) , (4.1.1)

where c ∈ R represents the wave speed.


This is called the wave equation because it describes waves well. Wave propagation
appears in a huge plethora of different physical situations: water wave propagation,
sound waves, seismic waves and light waves. It arises in acoustics,
electromagnetism, and fluid dynamics.
Given (4.1.1), we introduce the new variables ξ(x, t) = x+ct and η(x, t) = x−ct
and we set w(ξ, η) = u(x(ξ, η), t(ξ, η)). Using the chain rule on the function
u(x, t) = w(ξ(x, t), η(x, t)), we obtain


ut = ∂/∂t [w(ξ(x, t), η(x, t))]
= wξ (ξ(x, t), η(x, t)) · ξt (x, t) + wη (ξ(x, t), η(x, t)) · ηt (x, t) = wξ ξt + wη ηt


and

ux = ∂/∂x [w(ξ(x, t), η(x, t))]
= wξ (ξ(x, t), η(x, t)) · ξx (x, t) + wη (ξ(x, t), η(x, t)) · ηx (x, t) = wξ ξx + wη ηx .

Since ξx = ηx = 1 and ξt = c, ηt = −c, we have

ut (x, t) = c[wξ (x + ct, x − ct) − wη (x + ct, x − ct)]


ux (x, t) = wξ (x + ct, x − ct) + wη (x + ct, x − ct) .

Differentiating again with respect to x and t the above expressions, we get



utt = ∂/∂t [c(wξ − wη)] = c²(wξξ − 2wξη + wηη)
uxx = ∂/∂x [wξ + wη] = wξξ + 2wξη + wηη .
Plugging in the wave equation we obtain

0 = utt − c2 uxx = c2 [wξξ − 2wξη + wηη − wξξ − 2wξη − wηη ] = −4c2 wξη .

Thus we have wξη = 0. Note that wξη = ∂wξ /∂η = 0. This implies that wξ is
independent of η, therefore we can write it as wξ (ξ, η) = f (ξ), for some function
f : R → R. Then we integrate and we get
w(ξ, η) = ∫_0^ξ f(α) dα + G(η)

with G(η) = w(0, η). If we define F(ξ) = ∫_0^ξ f(α) dα, we can write the general
solution for the equation wξη = 0 as follows
solution for the equation wξη = 0 as follows

w(ξ, η) = F (ξ) + G(η) ,

for F, G ∈ C²(R) = {f : R → R : f′ and f″ exist and are continuous}. Thus, in
the original variables, the general solution of the one dimensional wave equation
is
u(x, t) = F (x + ct) + G(x − ct) . (4.1.2)
If u solves (4.1.1) then there exist F, G ∈ C 2 (R) such that (4.1.2) holds. Conversely
any two functions F, G ∈ C 2 (R) give a solution of (4.1.1) via the formula (4.1.2).
Note that
• G(x − ct) represents a wave moving to the right with velocity c > 0 and thus
we say that it is a forward wave;


• F (x + ct) is a wave moving to the left with velocity c > 0, thus a backward
wave.
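That every such superposition solves (4.1.1) can be verified symbolically (a sketch with generic C² profiles):

```python
import sympy as sp

x, t, c = sp.symbols('x t c')
F, G = sp.Function('F'), sp.Function('G')

# General solution (4.1.2): backward wave plus forward wave
u = F(x + c*t) + G(x - c*t)

res = sp.simplify(sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2))  # residual of (4.1.1)
print(res)  # 0
```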
Remark 4.1.1. Equation (4.1.2) shows that any solution of the one dimensional
wave equation is the sum of two traveling waves.
Remark 4.1.2. Observe that the functions F (x + ct) and G(x − ct) are constant
along lines of the form x + ct = α ∈ R and x − ct = β ∈ R, respectively. Those
lines are called characteristics. Hence, for the wave equation the characteristics
are straight lines in the (x, t)-plane with slopes ±1/c. As for first order PDEs, the
“information” is propagated along these curves.

Figure 4.1: On the left, the characteristics where F and G are constant. On the
right, the backward wave F (x + ct).

We saw that (4.1.2) is valid for F, G ∈ C 2 (R). Let us now extend the validity
of this equation. Consider F, G real piecewise continuous functions. Let us ap-
proximate F and G by two sequences of C 2 functions {Fn }n∈N , {Gn }n∈N , namely
we demand that
(i) Fn , Gn ∈ C 2 for all n ∈ N;

(ii) Fn → F at all continuity points of F ;

(iii) Gn → G at all continuity points of G.


Then the function
un (x, t) = Fn (x + ct) + Gn (x − ct)
is a solution of (4.1.1) in the classical sense. Sending n to infinity, we obtain the
function u(x, t) = F (x + ct) + G(x − ct), which is not necessarily smooth enough
to be a “classical” or “strong” solution, but we can say that u is a generalized
solution of the wave equation.


Remark 4.1.3. Assume that u is a smooth function except at (x0 , t0 ). Then either
F is not smooth at x0 + ct0 or G is not smooth at x0 − ct0 . Note that there
are two characteristics passing through (x0 , t0 ), which are x − ct = x0 − ct0 and
x + ct = x0 + ct0 . Thus, for any time t1 6= t0 , u is smooth except at one or two
points x± that satisfy

x− − ct1 = x0 − ct0 , x+ + ct1 = x0 + ct0 .

The singularities of solutions of the wave equation are traveling only along char-
acteristics, which is a typical feature of hyperbolic equations.

4.2. The Cauchy problem and d’Alembert’s formula


The Cauchy problem for the homogeneous one dimensional wave equation is given
by

utt − c²uxx = 0 ,  (x, t) ∈ R × (0, ∞)
u(x, 0) = f(x)          (4.2.1)
ut(x, 0) = g(x) ,

where f and g represent respectively the amplitude and the velocity at time t = 0.
A solution to the above Cauchy problem can be thought of as the amplitude of the
vibration of an infinite string. A classical solution for the Cauchy problem is a
function u that is twice continuously differentiable for all t ∈ R+ and solving
(4.2.1).
Since the general solution to (4.1.1) is given by (4.1.2), we need to find F and
G using the initial conditions. By u(x, 0) = f (x), we deduce

f (x) = u(x, 0) = F (x) + G(x) .

While, thanks to ut(x, 0) = g(x), we get

g(x) = ut(x, 0) = ∂/∂t [F(x + ct) + G(x − ct)]|_{t=0} = c[F′(x) − G′(x)] ,

which implies that

F(x) − G(x) = (1/c) ∫_0^x g(y) dy + [F(0) − G(0)] .

Hence we have the system

F(x) + G(x) = f(x)
F(x) − G(x) = (1/c) ∫_0^x g(y) dy + [F(0) − G(0)] .

46
4.2. The Cauchy problem and d’Alembert’s formula

Adding these two equations we get

    2F (x) = f (x) + (1/c) ∫_0^x g(y) dy + [F (0) − G(0)] .

On the other hand, subtracting the second equation from the first equation, we have

    2G(x) = f (x) − (1/c) ∫_0^x g(y) dy − [F (0) − G(0)] .
Therefore, the solution of (4.2.1) is given by

    u(x, t) = F (x + ct) + G(x − ct)
            = (1/2) f (x + ct) + (1/2c) ∫_0^{x+ct} g(y) dy + [F (0) − G(0)]/2
            + (1/2) f (x − ct) − (1/2c) ∫_0^{x−ct} g(y) dy − [F (0) − G(0)]/2 ,

which implies the so-called d'Alembert's formula

    u(x, t) = [f (x + ct) + f (x − ct)]/2 + (1/2c) ∫_{x−ct}^{x+ct} g(y) dy .          (4.2.2)

Remark 4.2.1. The value of the solution at (x, t) is only influenced by the values
of f and g in [x − ct, x + ct].
Remark 4.2.2. For the wave equation in higher dimensions there are formulas similar to d'Alembert's, but they are more complicated and go beyond the scope of these notes.

Example 4.2.3. Consider the Cauchy problem (4.2.1) with c = 1 and initial
conditions given by
    f (x) = 1 − x² if |x| ≤ 1 , and 0 if |x| > 1 ;        g(x) = 1 if |x| ≤ 1 , and 0 if |x| > 1 .

Using d'Alembert's formula we can for example compute the solution u of (4.2.1) at the point (1, 1/2) and we obtain that

    u(1, 1/2) = [f (1 + 1/2) + f (1 − 1/2)]/2 + (1/2) ∫_{1/2}^{3/2} g(y) dy = 3/8 + (1/2) ∫_{1/2}^{1} dy = 5/8 .

Now observe that, since f is not C¹, u is also not C¹. However we claim that u is continuous, even if g is not continuous. Indeed, since f is continuous,
47
Chapter 4. One dimensional wave equation

(f (x + t) + f (x − t))/2 is continuous as well, thus we just need to check that the second term in (4.2.2) is continuous. Given a sequence of points (xk , tk ) → (x, t), we have

    | (1/2) ∫_{xk−tk}^{xk+tk} g(y) dy − (1/2) ∫_{x−t}^{x+t} g(y) dy |
        ≤ (1/2) | ∫_{xk−tk}^{x−t} g(y) dy | + (1/2) | ∫_{xk+tk}^{x+t} g(y) dy | → 0 as k → ∞ ,

from which we deduce that u is continuous.
Still u is not C 1 , but can we say something about the singularities of u? We
look at the points of singularity of the initial data f and g and we look at their
evolution along the characteristics. Indeed, as observed above, the singularities of
the solution propagate along the characteristics.
In our case, singularities are at the points −1, 1. This means that singularities
can only live on the curves
{x + t = 1} ∪ {x − t = 1} ∪ {x + t = −1} ∪ {x − t = −1} ,
i.e., {x ± t = 1, −1}. In particular, we can at least say that u is a generalized
solution.
Example 4.2.4. Consider the Cauchy problem (4.2.1) with c = 2 and initial data

    f (x) = g(x) = 1 if |x| ≤ 1 , and 0 if |x| > 1 .

We can compute the value of the solution u at (0, 1/4) using d'Alembert's formula and we get

    u(0, 1/4) = [f (1/2) + f (−1/2)]/2 + (1/4) ∫_{−1/2}^{1/2} g(s) ds = 5/4 .
Moreover, again thanks to d'Alembert's formula, we are able to understand the large time behaviour of the solution u. Indeed, fixing x̄ ∈ R and letting t go to infinity, we have

    lim_{t→∞} u(x̄, t) = lim_{t→∞} [ (f (x̄ + 2t) + f (x̄ − 2t))/2 + (1/4) ∫_{x̄−2t}^{x̄+2t} g(y) dy ] .

For x̄ fixed and t sufficiently large, we have x̄ + 2t > 1 and x̄ − 2t < −1, which means that f (x̄ + 2t) = f (x̄ − 2t) = 0 and ∫_{x̄−2t}^{x̄+2t} g(y) dy = ∫_{−1}^{1} dy = 2. Hence we get

    lim_{t→∞} u(x, t) = 1/2    for all x ∈ R .
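Both the value u(0, 1/4) = 5/4 and the long time limit 1/2 can be checked numerically; a small sketch (Python with NumPy; the midpoint-rule quadrature is our own choice):

```python
import numpy as np

c = 2.0
f = lambda x: np.where(np.abs(x) <= 1, 1.0, 0.0)
g = f   # same initial data as in the example

def u(x, t, n=40001):
    # d'Alembert's formula; the integral of g is computed with the midpoint rule
    h = 2 * c * t / n
    y = x - c * t + (np.arange(n) + 0.5) * h
    return 0.5 * (f(x + c * t) + f(x - c * t)) + g(y).sum() * h / (2 * c)

print(u(0.0, 0.25))   # approximately 1.25 = 5/4
print(u(0.0, 50.0))   # close to 0.5 for large times
```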

48
4.3. Domain of dependence and region of influence

[Figure: regions of the (x, t)-plane delimited by the characteristics x + 2t = ±1 and x − 2t = ±1, which emanate from the points −1 and 1 on the x-axis: u ≡ 1/2 in the region above them, and u ≡ 0 to the far left and far right.]

Figure 4.2: Long time behaviour of u.

4.3. Domain of dependence and region of influence


Let us consider the one dimensional homogeneous wave equation as in (4.2.1).
How do f and g influence the value of u at a given point (x0 , t0 )? How fast does the information propagate?
The answer to the second question is suggested by the decomposition of the solution into the sum of two traveling waves as in (4.1.2): the information propagates with finite speed c.
To answer the first question we recall that, by d'Alembert's formula (4.2.2),
the value of u at the point (x0 , t0 ) is determined by the values of f at the boundaries
of the interval [x0 − ct0 , x0 + ct0 ] and by the value of g on all the interval. Hence
we say that the interval [x0 − ct0 , x0 + ct0 ] is the domain of dependence of u at
the point (x0 , t0 ). If we change the initial data at points outside the interval, the
value of the solution at the point (x0 , t0 ) does not change.
Now fix (x0 , t0 ) and consider in the (x, t)-plane the characteristic lines (re-
member, information propagates along characteristics) passing through the point
(x0 , t0 ), i.e.,
x − ct = x0 − ct0 , x + ct = x0 + ct0 .
These two lines intersect the x-axis at the points (x0 − ct0 , 0) and (x0 + ct0 , 0)
respectively. The triangle ∆(x0 ,t0 ) formed by these lines and the interval [x0 − ct0 , x0 + ct0 ] is called the characteristic triangle, see Figure 4.3.
Remark 4.3.1. If the initial conditions are smooth on [x0 −ct0 , x0 +ct0 ], the solution
itself is smooth in the characteristic triangle ∆(x0 ,t0 ) .

49
Chapter 4. One dimensional wave equation

(x0 , t0 ) Characteristic
triangle ∆(x0 ,t0 )

x0 − ct0 x0 + ct0 x
Domain of dependence

Figure 4.3: The characteristic triangle.

Now we can ask ourselves the dual question: which points in the half plane t > 0 are influenced by the initial data on a fixed interval [a, b]? The set of points influenced by the values of f and g in [a, b] is the region of influence of the interval [a, b]. From d'Alembert's formula and the previous discussion, we discover that the points in [a, b] influence the value of u at a given point (x0 , t0 ) if and only if [x0 − ct0 , x0 + ct0 ] ∩ [a, b] ≠ ∅. Hence, the initial conditions along [a, b] influence
those points (x, t) that satisfy

x − ct ≤ b and x + ct ≥ a ,

see Figure 4.4.


Remark 4.3.2. If f = 0 and g = 0 outside [a, b], then the solution u is identically
zero to the left of x + ct = a and to the right of x − ct = b.

4.4. The Cauchy problem for the nonhomogeneous wave


equation
The general nonhomogeneous one dimensional wave equation has the following
form

    utt − c² uxx = F (x, t) ,   (x, t) ∈ R × (0, ∞)
    u(x, 0) = f (x) ,           x ∈ R                     (4.4.1)
    ut (x, 0) = g(x) ,          x ∈ R .

50
4.4. The Cauchy problem for the nonhomogeneous wave equation

x + ct = a x − ct = b

Region of influence
of [a, b]

a b x

Figure 4.4: The region of influence.

This Cauchy problem models, for example, the vibration of an ideal string subject
to an external force F (x, t). As in the homogeneous case, f and g are given
functions that represent the shape and the vertical velocity of the string at time
zero.
As for the homogeneous case, we wish to have an analogous derivation of
d’Alembert’s formula. To do this, one integrates over the characteristic triangle
∆(x0 ,t0 ) of a generic point (x0 , t0 ) and obtains
    ∬_{∆(x0 ,t0 )} F (x, t) dx dt = ∬_{∆(x0 ,t0 )} (utt − c² uxx ) dx dt .

Then, after a series of computations, one gets the following theorem.

Theorem 4.4.1. The solution of the Cauchy problem (4.4.1) is given by


    u(x, t) = [f (x + ct) + f (x − ct)]/2 + (1/2c) ∫_{x−ct}^{x+ct} g(y) dy
            + (1/2c) ∫_0^t ∫_{x−c(t−τ)}^{x+c(t−τ)} F (ξ, τ) dξ dτ ,

which is d’Alembert’s formula for the nonhomogeneous wave equation.

Remark 4.4.2. The value of u at (x0 , t0 ) is given by the value of the data f, g, F
on the whole characteristic triangle. Note that for F = 0 this formula coincides
with d’Alembert’s formula obtained above.

51
Chapter 4. One dimensional wave equation

Example 4.4.3. Consider the following problem

    utt − 4uxx = sin(x) ,   (x, t) ∈ R × (0, ∞)
    u(x, 0) = x ,           x ∈ R
    ut (x, 0) = x³ ,        x ∈ R .

Applying d'Alembert's formula with c = 2, we get

    u(x, t) = [(x + 2t) + (x − 2t)]/2 + (1/4) ∫_{x−2t}^{x+2t} s³ ds + (1/4) ∫_0^t ∫_{x−2(t−τ)}^{x+2(t−τ)} sin ξ dξ dτ
            = x + (1/16) [s⁴]_{s=x−2t}^{s=x+2t} − (1/4) ∫_0^t [cos ξ]_{ξ=x−2(t−τ)}^{ξ=x+2(t−τ)} dτ
            = x + (1/16) [s⁴]_{s=x−2t}^{s=x+2t} − (1/4) ∫_0^t ( cos(x + 2(t − τ)) − cos(x − 2(t − τ)) ) dτ .

Now, recalling that cos(α + β) − cos(α − β) = −2 sin(α) sin(β) for all angles α and β, we get that

    u(x, t) = x + (1/16) [(x + 2t)² − (x − 2t)²] [(x + 2t)² + (x − 2t)²] + (sin x / 2) ∫_0^t sin(2(t − τ)) dτ
            = x + xt(x² + 4t²) + (sin x / 4) [cos(2(t − τ))]_{τ=0}^{τ=t}
            = x + x³ t + 4xt³ + (1/4) sin x (1 − cos(2t)) .
Remark 4.4.4. Note that u is an odd function of x. Is this a coincidence?
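The computation above (and the observation of Remark 4.4.4) can be confirmed symbolically; a sketch with SymPy:

```python
import sympy as sp

x, t = sp.symbols('x t')
u = x + x**3 * t + 4 * x * t**3 + sp.sin(x) * (1 - sp.cos(2 * t)) / 4

# u solves u_tt - 4 u_xx = sin(x) with u(x,0) = x and u_t(x,0) = x^3
assert sp.simplify(sp.diff(u, t, 2) - 4 * sp.diff(u, x, 2) - sp.sin(x)) == 0
assert sp.simplify(u.subs(t, 0) - x) == 0
assert sp.simplify(sp.diff(u, t).subs(t, 0) - x**3) == 0

# u is odd in x, as observed in Remark 4.4.4
assert sp.simplify(u.subs(x, -x) + u) == 0
```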

Example 4.4.5. Consider the nonhomogeneous wave equation given by

    utt − uxx = 2 cos t − t sin t ,   (x, t) ∈ R × (0, ∞)
    u(x, 0) = x e^x ,                x ∈ R
    ut (x, 0) = 0 ,                  x ∈ R .

We could solve it using d’Alembert’s formula. However, if we can find a particular


solution v of the given nonhomogeneous equation, it is possible to reduce the
nonhomogeneous problem to a homogeneous one. This eliminates the need to
perform the double integral in d’Alembert’s formula. This technique is very useful
when F is simple, for example when F depends only on x or only on t.

52
4.4. The Cauchy problem for the nonhomogeneous wave equation

Suppose that we find a particular solution v, then we consider w := u − v. Since the wave equation is linear, by the superposition principle w solves the following homogeneous problem

    wtt − c² wxx = 0 ,               (x, t) ∈ R × (0, ∞)
    w(x, 0) = f (x) − v(x, 0) ,      x ∈ R
    wt (x, 0) = g(x) − vt (x, 0) ,   x ∈ R .

Hence, w can be found using d’Alembert’s formula for the homogeneous problem
and the final solution is given by u = v + w.
Since in our case F = F (t) = 2 cos t − t sin t, we look for a function v = v(t)
depending only on t that solves vtt = 2 cos t − t sin t (note that vxx = 0 because
v does not depend on x). Let us choose as a particular solution v(t) = t sin t. Of
course this solution is not unique because we did not impose any initial condition
for v and vt at time 0. Now define w(x, t) = u(x, t) − v(t) with associated PDE
    wtt − wxx = 0 ,                        (x, t) ∈ R × (0, ∞)
    w(x, 0) = u(x, 0) − v(0) = x e^x ,     x ∈ R
    wt (x, 0) = ut (x, 0) − vt (0) = 0 ,   x ∈ R ,
which is a homogeneous wave equation. Applying d'Alembert's formula we get

    w(x, t) = [(x + t)e^{x+t} + (x − t)e^{x−t}]/2 = (e^x/2) ((x + t)e^t + (x − t)e^{−t})
            = x e^x (e^t + e^{−t})/2 + t e^x (e^t − e^{−t})/2
            = x e^x cosh t + t e^x sinh t .

Recalling that u(x, t) = w(x, t) + v(t), this gives

    u(x, t) = x e^x cosh t + t e^x sinh t + t sin t .
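As a sanity check, one can verify with SymPy that this u indeed solves the original problem (a verification sketch, not part of the notes):

```python
import sympy as sp

x, t = sp.symbols('x t')
u = x * sp.exp(x) * sp.cosh(t) + t * sp.exp(x) * sp.sinh(t) + t * sp.sin(t)

# the PDE, the initial amplitude, and the initial velocity
assert sp.simplify(sp.diff(u, t, 2) - sp.diff(u, x, 2)
                   - (2 * sp.cos(t) - t * sp.sin(t))) == 0
assert sp.simplify(u.subs(t, 0) - x * sp.exp(x)) == 0
assert sp.simplify(sp.diff(u, t).subs(t, 0)) == 0
```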
We now prove a uniqueness theorem for the nonhomogeneous one dimensional
wave equation (4.4.1).
Theorem 4.4.6. The problem (4.4.1) has a unique solution.
Proof. The existence of a solution is given by d’Alembert’s formula, i.e., Theo-
rem 4.4.1. For the uniqueness, suppose that u1 and u2 are solutions of (4.4.1).
Then we define the difference w = u1 − u2 , which solves the equation

    wtt − c² wxx = (u1 )tt − c² (u1 )xx − [(u2 )tt − c² (u2 )xx ] = 0 ,   (x, t) ∈ R × (0, ∞)
    w(x, 0) = u1 (x, 0) − u2 (x, 0) = f (x) − f (x) = 0 ,                x ∈ R
    wt (x, 0) = (u1 )t (x, 0) − (u2 )t (x, 0) = g(x) − g(x) = 0 ,        x ∈ R .

Hence, by d'Alembert's formula (4.2.2), this implies w(x, t) = 0 and thus u1 = u2 , as desired.

53
Chapter 4. One dimensional wave equation

4.5. Symmetry of the wave equation


Let us now introduce another property of the wave equation, which shows that the symmetry in Example 4.4.3 was not a coincidence (see Remark 4.4.4).
Theorem 4.5.1. Given a general nonhomogeneous wave equation, if the initial
data f , g and the inhomogeneity F are even (resp. odd, periodic) with respect to
x, then the solution is even (resp. odd, periodic) as well.
Proof. Let us first consider the case in which f , g and F are even with respect to
x, i.e., f (x) = f (−x), g(x) = g(−x), F (x, t) = F (−x, t). Let us define v(x, t) :=
u(−x, t), then we want to prove that v = u, namely that u is even with respect to
x. We note that
• vt (x, t) = ut (−x, t), vtt (x, t) = utt (−x, t);
• vx (x, t) = −ux (−x, t), vxx (x, t) = uxx (−x, t).
Therefore it holds

    vtt − c² vxx = utt (−x, t) − c² uxx (−x, t) = F (−x, t) = F (x, t) ,   (x, t) ∈ R × (0, ∞)
    v(x, 0) = u(−x, 0) = f (−x) = f (x) ,                                 x ∈ R
    vt (x, 0) = ut (−x, 0) = g(−x) = g(x) ,                               x ∈ R ,

where we used that f, g, F are even. Thus v satisfies the same wave equation with the same initial conditions as u and therefore v = u, by the uniqueness Theorem 4.4.6.
The odd case (i.e., f (x) = −f (−x), g(x) = −g(−x) and F (x, t) = −F (−x, t))
and the periodic case (i.e., f (x) = f (x + L), g(x) = g(x + L) and F (x, t) =
F (x + L, t) for some L > 0) can be solved analogously defining v(x, t) = −u(−x, t)
and v(x, t) = u(x + L, t), respectively.
Let us now see how we can apply the previous theorem to solve a particular wave equation with an extra boundary condition. Consider the problem

    utt − c² uxx = 0 ,   (x, t) ∈ (0, ∞) × (0, ∞)
    u(x, 0) = f (x) ,    x > 0
    ut (x, 0) = g(x) ,   x > 0
    u(0, t) = 0 ,        t ≥ 0 .
In order to fulfill the boundary condition u(0, t) = 0, we extend f and g in an odd way as

    f̃(x) := f (x) if x ≥ 0 , and −f (−x) if x < 0 ;       g̃(x) := g(x) if x ≥ 0 , and −g(−x) if x < 0 .

54
4.5. Symmetry of the wave equation

Then we solve the equation

    utt − c² uxx = 0 ,   (x, t) ∈ R × (0, ∞)
    u(x, 0) = f̃(x) ,     x ∈ R
    ut (x, 0) = g̃(x) ,   x ∈ R .

The solution u is odd in x because f˜ and g̃ are odd, therefore u satisfies u(0, t) = 0.
Indeed u(x, t) = −u(−x, t) implies that u(0, t) = −u(0, t) and thus u(0, t) = 0.
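Numerically the mechanism is easy to observe: with odd extensions, both terms of d'Alembert's formula cancel at x = 0 for every t. A small sketch (Python with NumPy; the sample data f, g below are our own illustrative choices, compatible with f(0) = g(0) = 0):

```python
import numpy as np

c = 1.0
f = lambda x: x * (2 - x)      # sample data on x >= 0 (illustrative only)
g = lambda x: x * np.exp(-x)

f_odd = lambda x: np.where(x >= 0, f(x), -f(-x))
g_odd = lambda x: np.where(x >= 0, g(x), -g(-x))

def u0(t, n=20000):
    # d'Alembert's formula evaluated at x = 0 with the odd extensions
    h = 2 * c * t / n
    y = -c * t + (np.arange(n) + 0.5) * h   # symmetric midpoints around 0
    return 0.5 * (f_odd(c * t) + f_odd(-c * t)) + g_odd(y).sum() * h / (2 * c)

# u(0, t) vanishes for every t: the boundary condition holds automatically
```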

55
CHAPTER 5
SEPARATION OF VARIABLES

We now introduce the method of separation of variables to solve linear partial


differential equations with boundary and/or initial conditions. Let us directly
present the method in the case of the heat equation.

5.1. Heat equation with Dirichlet boundary conditions


The heat equation is a linear second order PDE that describes how the distribution
of heat evolves over time in a medium. Heat flows from places where it is higher
towards places where it is lower (by the second law of thermodynamics). This
equation was derived and solved by Joseph Fourier in 1822.
The heat equation in R3 is ut = k∆u, where k ∈ R is the diffusivity of the
medium and the function u = u(t, x, y, z) represents the temperature at point
(x, y, z) at time t.
The equation says that the rate ut at which the material at a point (x, y, z) heats up (or cools down) is proportional to how much hotter (or cooler) the surrounding
material is. The heat equation arises in the modeling of a number of phenomena,
for example
• in financial mathematics, in the modeling of options;
• in probability theory it is connected with the study of the Brownian motion;
• in physics for modeling particle diffusion.
Consider the Cauchy problem associated to the one dimensional heat equation

    ut − kuxx = 0 ,           (x, t) ∈ (0, L) × (0, ∞)
    u(0, t) = u(L, t) = 0 ,   t > 0
    u(x, 0) = f (x) ,         x ∈ (0, L) ,

where k ∈ R+ is the constant of diffusivity. This heat equation describes the


diffusion of heat in a one dimensional structure (for example a metal bar of length

57
Chapter 5. Separation of variables

L) over time, knowing that the initial temperature is equal to f . The boundary
conditions are telling us that the boundary of the metal bar is kept at zero, see
Figure 5.1. This Cauchy problem is also called an initial boundary value problem (and it is homogeneous).
Remark 5.1.1. In order to have compatibility between boundary and initial con-
ditions we assume that f (0) = f (L) = 0.

u(0, t) = 0 u(L, t) = 0

ut = kuxx

L x
u(x, 0) = f (x)

Figure 5.1: The boundary conditions for the heat equation modelled on a metal
bar of length L.

Let us now solve this problem using the method of separation of variables. The first step consists in seeking a solution that has the form of a product solution, or separated solution, i.e.,
u(x, t) = X(x)T (t) ,
where X : [0, L] → R, T : [0, ∞) → R. Note that at this step we are not asking
that u satisfies the initial condition, but only the boundary conditions.
Plugging this into the heat equation, we get

    T′(t)X(x) − kX″(x)T (t) = 0   ⟺   T′(t)/(kT (t)) = X″(x)/X(x) .

Note that the term on the left-hand side only depends on t, while the term on the right-hand side only depends on x. Therefore, the only possibility is that these two functions are equal to a constant −λ, namely

    T′(t)/(kT (t)) = X″(x)/X(x) = −λ .

58
5.1. Heat equation with Dirichlet boundary conditions

We are now left with two ODEs

    X″(x) = −λX(x) ,   x ∈ (0, L)
    T′(t) = −kλT (t) ,  t > 0 .

These ODEs are only coupled by the separation constant −λ. Moreover note that
u satisfies the boundary conditions u(0, t) = u(L, t) = 0 if and only if u(0, t) =
X(0)T (t) = 0 and u(L, t) = X(L)T (t) = 0 for all t > 0. These conditions are
fulfilled either if T (t) = 0 (which gives a trivial solution) or if X(0) = X(L) = 0,
which represents the interesting case.
Let us now first consider the ODE in X

    X″(x) = −λX(x) ,   x ∈ (0, L)                          (5.1.1)
    X(0) = X(L) = 0 .

We have to distinguish different cases:


• If λ < 0, the solution has the form

    X(x) = α cosh(√(−λ) x) + β sinh(√(−λ) x) .

From the boundary condition X(0) = 0, since sinh(0) = 0 and cosh(0) = 1, we get 0 = X(0) = α. Thus X(x) = β sinh(√(−λ) x). From the boundary condition X(L) = 0, we obtain 0 = X(L) = β sinh(√(−λ) L). Observe that sinh only vanishes at 0, thus sinh(√(−λ) L) ≠ 0 and we get β = 0. As a result, the only solution compatible with the boundary conditions in this case is the trivial one.

• If λ = 0, the solution is X(x) = α + βx. Similarly as before, thanks to the


boundary conditions X(0) = X(L) = 0 we get α = β = 0. Again the only
possible solution is the trivial one.

• If λ > 0, the solution for X is

    X(x) = α cos(√λ x) + β sin(√λ x) .

From the boundary condition X(0) = 0 we get α = 0 and X(x) = β sin(√λ x). Hence, in order for X to satisfy X(L) = 0, we must have √λ L = nπ for some n ∈ N, because the sine vanishes only at integer multiples of π. Therefore we get

    λ = (nπ/L)² .

Thus the solutions compatible with the boundary conditions are X(x) = β sin(nπx/L) for every n ∈ N.

59
Chapter 5. Separation of variables

Hence we obtain that the set of solutions of (5.1.1) is an infinite sequence of functions

    Xn (x) = sin(nπx/L) ,   n ∈ N .
Now let us consider the problem for T , which is

    T′(t) = −kλT (t) ,   t ∈ (0, ∞) .

The general solution for this equation is T (t) = B e^{−kλt}. Since (5.1.1) has nontrivial solutions only for λn := (nπ/L)² with n ∈ N, we are interested only in the solutions T of the form

    Tn (t) = Bn e^{−kλn t} ,

for some n ∈ N.
We thus obtain a sequence of separated solutions to the heat equation given by

    un (x, t) = Xn (x)Tn (t) = Bn sin(√λn x) e^{−kλn t} .
Since the heat equation is linear, by the superposition principle (see Theorem 1.4.8) any finite linear combination

    u(x, t) = Σ_{n=1}^{N} Bn sin(√λn x) e^{−kλn t}

is still a solution to the heat equation that satisfies the boundary conditions.
At this point we can consider the initial condition. If f (x) admits the following Fourier expansion

    f (x) = Σ_{n=1}^{∞} Cn sin(√λn x) ,

then a natural candidate for a solution is

    u(x, t) = Σ_{n=1}^{∞} Cn sin(√λn x) e^{−kλn t} .

But how to obtain the coefficients Cn from f ? Fix m ∈ N and multiply the expansion for f (x) by sin(mπx/L) = sin(√λm x), obtaining

    f (x) sin(√λm x) = Σ_{n=1}^{∞} Cn sin(√λn x) sin(√λm x) .

Integrating over [0, L] we thus get

    ∫_0^L f (x) sin(√λm x) dx = Σ_{n=1}^{∞} Cn ∫_0^L sin(√λn x) sin(√λm x) dx = (L/2) Cm ,

60
5.1. Heat equation with Dirichlet boundary conditions

where we used that

    ∫_0^L sin(√λn x) sin(√λm x) dx = L/2 if m = n , and 0 if m ≠ n .

As a result, we have that

    Cm = (2/L) ∫_0^L f (x) sin(√λm x) dx

for all m ∈ N. In particular the coefficients are uniquely determined by the initial condition f .

Example 5.1.2. Consider the Cauchy problem

    ut − uxx = 0 ,            (x, t) ∈ [0, π] × [0, ∞)
    u(0, t) = u(π, t) = 0 ,   t ≥ 0
    u(x, 0) = f (x) ,         x ∈ [0, π] ,

where f (x) = πx − x². This is a one dimensional heat equation with Dirichlet boundary conditions in [0, π]. Therefore we know that the general solution is

    u(x, t) = Σ_{n=1}^{∞} Bn sin(nx) e^{−n²t} .

Imposing u(x, 0) = f (x), we note (as before) that the coefficients Bn are the Fourier coefficients of f , which can be obtained as follows: first of all, by integration by
parts we observe that
    ∫_a^b x sin(nx) dx = [−x cos(nx)/n]_a^b + ∫_a^b cos(nx)/n dx = [−x cos(nx)/n + sin(nx)/n²]_a^b ,

and

    ∫_a^b x² sin(nx) dx = [−x² cos(nx)/n]_a^b + ∫_a^b (2x/n) cos(nx) dx
                        = [−x² cos(nx)/n + 2x sin(nx)/n²]_a^b − ∫_a^b (2/n²) sin(nx) dx
                        = [−x² cos(nx)/n + 2x sin(nx)/n² + 2 cos(nx)/n³]_a^b ,

61
Chapter 5. Separation of variables

so that the coefficients Bn are

    Bn = (2/π) ∫_0^π f (x) sin(nx) dx = (2/π) ∫_0^π (πx − x²) sin(nx) dx
       = 2 (−π cos(nπ)/n) − (2/π) (−π² cos(nπ)/n + 2 cos(nπ)/n³ − 2/n³)
       = 4/(πn³) − 4 cos(nπ)/(πn³)
       = 0 if n = 2j for some j ∈ N , and 8/(πn³) if n = 2j − 1 for some j ∈ N .

The solution is thus given by

    u(x, t) = Σ_{j=1}^{∞} 8/(π(2j − 1)³) sin((2j − 1)x) e^{−(2j−1)² t} .
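The coefficient formula can be double-checked numerically; the sketch below (Python with NumPy; the midpoint-rule quadrature is our own choice) approximates Bn = (2/π) ∫_0^π (πx − x²) sin(nx) dx and compares it with the closed form:

```python
import numpy as np

f = lambda x: np.pi * x - x**2

def B(n, m=100000):
    # midpoint-rule approximation of (2/pi) * int_0^pi f(x) sin(nx) dx
    h = np.pi / m
    x = (np.arange(m) + 0.5) * h
    return 2 / np.pi * np.sum(f(x) * np.sin(n * x)) * h

# odd n: B_n = 8/(pi n^3); even n: B_n = 0
print(B(1), 8 / np.pi)           # both approximately 2.546
print(B(2))                      # approximately 0
print(B(3), 8 / (27 * np.pi))    # both approximately 0.094
```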


Figure 5.2: The boundary conditions for the heat equation of Example 5.1.2.

5.2. Wave equation with Neumann boundary conditions


So far we discussed the method of separation of variables for the heat equation
with Dirichlet boundary conditions. However, later we will encounter three types
of boundary conditions:
• Dirichlet: u(0, t) = u(L, t) = 0.
• Neumann: ux (0, t) = ux (L, t) = 0.
• Mixed type, or Robin: α0 u(0, t)+β0 ux (0, t) = γ0 and αL u(L, t)+βL ux (L, t) =
γL .

62
5.2. Wave equation with Neumann boundary conditions

As an example with Neumann boundary conditions, we now present the method of separation of variables applied to the one dimensional wave equation. Consider the problem

    utt − c² uxx = 0 ,            (x, t) ∈ [0, L] × [0, ∞)
    ux (0, t) = ux (L, t) = 0 ,   t > 0
    u(x, 0) = f (x) ,             x ∈ [0, L]
    ut (x, 0) = g(x) ,            x ∈ [0, L] .

As before, we look for a solution u not identically zero of the form

u(x, t) = X(x)T (t) .

At this stage we do not take into account the initial conditions. Differentiating in x and t we get utt = X(x)T″(t) and uxx = X″(x)T (t). Hence, plugging into the equation, we obtain

    X(x)T″(t) = c² X″(x)T (t)   ⟺   T″(t)/(c² T (t)) = X″(x)/X(x) = −λ ,

for some λ ∈ R. Therefore we have the following ODEs

    X″(x) = −λX(x) ,   X′(0) = X′(L) = 0
    T″(t) = −c² λT (t) .

The general solution for the ODE in X is

    X(x) = α cosh(√(−λ) x) + β sinh(√(−λ) x) ,   if λ < 0
    X(x) = α + βx ,                              if λ = 0
    X(x) = α cos(√λ x) + β sin(√λ x) ,           if λ > 0 .

Since we need to impose the Neumann boundary conditions X′(0) = X′(L) = 0, let us compute X′, which is

    X′(x) = √(−λ) ( α sinh(√(−λ) x) + β cosh(√(−λ) x) ) ,   if λ < 0
    X′(x) = β ,                                             if λ = 0
    X′(x) = √λ ( −α sin(√λ x) + β cos(√λ x) ) ,             if λ > 0 .

Let us now consider the three cases separately:

• If λ < 0, then X′(0) = 0 implies β = 0. Thus, from X′(L) = 0, we get α sinh(√(−λ) L) = 0 and then α = 0. Therefore X(x) = 0, which means that in this case we do not have nontrivial solutions.
63
Chapter 5. Separation of variables

• If λ = 0, the only nontrivial solution is given by X(x) = X0 (x) = α.

• If λ > 0, imposing the boundary conditions we find β = 0 and sin(√λ L) = 0. Thus √λ L = nπ for some n ∈ N. As a result, λn = (nπ/L)² are eigenvalues for every n ∈ N and the corresponding eigenfunctions are Xn (x) = αn cos(√λn x).
Let us now consider the ODE for T , that is T″(t) = −c² λT (t). If λ = 0, we get T0 (t) = γ0 + δ0 t for some γ0 , δ0 ∈ R. On the other hand, for λ = λn > 0 the ODE is Tn″(t) = −c² λn Tn , with solution

    Tn (t) = γn cos(ct√λn ) + δn sin(ct√λn ) ,

for some γn , δn ∈ R.
In conclusion, the general solution for the one dimensional wave equation with Neumann boundary conditions can be written as

    u(x, t) = Σ_{n=0}^{∞} Xn (x)Tn (t)
            = (A0 + B0 t)/2 + Σ_{n=1}^{∞} cos(nπx/L) [ An cos(nπct/L) + Bn sin(nπct/L) ] .   (5.2.1)

The factor 1/2 in the first term is just for convenience.


Remark 5.2.1. Note that in this case we have a cosines expansion instead of a sines
one as for the heat equation, because of the Neumann boundary conditions.
To find An we exploit that the function at time 0 is equal to f (x), namely

    f (x) = u(x, 0) = A0 /2 + Σ_{n=1}^{∞} An cos(nπx/L) .

Fix m ∈ N; if we multiply the equation above by cos(mπx/L) and we integrate over [0, L], we get

    ∫_0^L f (x) cos(mπx/L) dx = (A0 /2) ∫_0^L cos(mπx/L) dx + Σ_{n=1}^{∞} An ∫_0^L cos(nπx/L) cos(mπx/L) dx .

Since

    ∫_0^L cos(mπx/L) dx = L if m = 0 , and 0 if m ≥ 1

64
5.2. Wave equation with Neumann boundary conditions

and

    ∫_0^L cos(nπx/L) cos(mπx/L) dx = L/2 if n = m ≠ 0 , and 0 if n ≠ m ,

we obtain

    Am = (2/L) ∫_0^L f (x) cos(mπx/L) dx .

The same procedure can be implemented to find the coefficients Bm , since we have that

    g(x) = ut (x, 0) = B0 /2 + Σ_{n=1}^{∞} (nπc/L) Bn cos(nπx/L) ,

and we get

    B0 = (2/L) ∫_0^L g(x) dx ,    Bm = (2/(cmπ)) ∫_0^L g(x) cos(mπx/L) dx   for m ≥ 1 .

Therefore the problem is formally solved.

Example 5.2.2. Consider the wave equation

    utt − 9uxx = 0 ,                                   (x, t) ∈ [0, 1] × [0, ∞)
    ux (0, t) = ux (1, t) = 0 ,                        t ≥ 0
    u(x, 0) = f (x) = 1 + cos(3πx) + 16 cos(20πx) ,    x ∈ [0, 1]
    ut (x, 0) = g(x) = 0 ,                             x ∈ [0, 1] .

Thanks to the arguments above, we can write the solution u as in (5.2.1). To find the coefficients of this expression, we impose the initial conditions

    1 + cos(3πx) + 16 cos(20πx) = u(x, 0) = A0 /2 + Σ_{n=1}^{∞} An cos(nπx) .

Integrating the left hand and right hand side against cos(nπx) we find immediately that A0 = 2, A3 = 1, A20 = 16 and Am = 0 for m ≠ 0, 3, 20. On the other hand, using that g(x) = 0, we obtain that Bm = 0 for all m ∈ N. Thus the solution to the Cauchy problem is

    u(x, t) = 1 + cos(3πx) cos(9πt) + 16 cos(20πx) cos(60πt) .
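One can verify directly that this expression solves the Cauchy problem, including the Neumann conditions; a SymPy sketch:

```python
import sympy as sp

x, t = sp.symbols('x t')
u = (1 + sp.cos(3 * sp.pi * x) * sp.cos(9 * sp.pi * t)
       + 16 * sp.cos(20 * sp.pi * x) * sp.cos(60 * sp.pi * t))

# PDE with c^2 = 9
assert sp.simplify(sp.diff(u, t, 2) - 9 * sp.diff(u, x, 2)) == 0
# Neumann boundary conditions at x = 0 and x = 1
ux = sp.diff(u, x)
assert sp.simplify(ux.subs(x, 0)) == 0 and sp.simplify(ux.subs(x, 1)) == 0
# initial conditions
f = 1 + sp.cos(3 * sp.pi * x) + 16 * sp.cos(20 * sp.pi * x)
assert sp.simplify(u.subs(t, 0) - f) == 0
assert sp.simplify(sp.diff(u, t).subs(t, 0)) == 0
```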

65
Chapter 5. Separation of variables

5.3. Inhomogeneous PDEs


Let us consider the following inhomogeneous heat equation

    ut − kuxx = h(x, t) ,     (x, t) ∈ [0, L] × R+
    u(0, t) = u(L, t) = 0 ,   t ∈ R+
    u(x, 0) = f (x) ,         x ∈ [0, L] .

Recall that, using the separation of variables for the homogeneous heat equation with Dirichlet boundary conditions, the admissible solutions for the ODE in X are

    Xn = αn sin(nπx/L) ,   n ∈ N .

Now, instead of solving also the ODE for T (t), we write a general solution as

    u(x, t) = Σ_{n=1}^{∞} Tn (t) sin(nπx/L) ,

where Tn is arbitrary. Computing the term ut − kuxx , we get

    ut − kuxx = Σ_{n=1}^{∞} [ Tn′(t) + k(nπ/L)² Tn (t) ] sin(nπx/L) .

Thus, we are now left with the problem of finding Tn . Assume that, for every t ∈ R+ , cn (t) is the n-th Fourier coefficient of the inhomogeneity h(·, t), namely

    cn (t) = (2/L) ∫_0^L h(x, t) sin(nπx/L) dx .

Then we can express h(x, t) as follows

    h(x, t) = Σ_{n=1}^{∞} cn (t) sin(nπx/L) .

As a result, the equation ut − kuxx = h is equivalent to

    Σ_{n=1}^{∞} [ Tn′(t) + k(nπ/L)² Tn (t) ] sin(nπx/L) = Σ_{n=1}^{∞} cn (t) sin(nπx/L) .

Hence we need to impose

    Tn′(t) + k(nπ/L)² Tn (t) = cn (t)
66
5.3. Inhomogeneous PDEs

and the initial condition

    f (x) = u(x, 0) = Σ_{n=1}^{∞} Tn (0) sin(nπx/L) ,

which leads to the ODE system

    Tn′(t) + k(nπ/L)² Tn (t) = cn (t)
    Tn (0) = (2/L) ∫_0^L f (x) sin(nπx/L) dx .
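For reference, each of these first order linear ODEs can be solved explicitly by variation of constants (a standard ODE formula, not derived in these notes); writing λn = (nπ/L)², the closed form reads

```latex
T_n(t) = T_n(0)\, e^{-k \lambda_n t} + \int_0^t e^{-k \lambda_n (t - s)}\, c_n(s)\, \mathrm{d}s .
```

Differentiating this expression gives back Tn′(t) + kλn Tn (t) = cn (t), which also shows directly that each Tn exists and is unique.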
Thanks to the local existence and uniqueness theorem for ODEs, we have the
existence of a unique solution Tn (t) for every n ∈ N. Therefore, the formal solution
of our inhomogeneous Cauchy problem is

    u(x, t) = Σ_{n=1}^{∞} Tn (t) sin(nπx/L) .

Remark 5.3.1. As for the homogeneous case, if the boundary conditions are of Neumann type, we obtain an expansion in terms of cosines and there is a summand for n = 0. Moreover, in the case of the inhomogeneous wave equation, we have a
second order ODE for Tn , complemented with two initial conditions (for Tn (0) and
Tn0 (0)) that are linked to the Fourier expansion of u(x, 0) = f (x) and ut (x, 0) =
g(x).
Example 5.3.2. Consider the inhomogeneous wave equation with Neumann boundary conditions

    utt − uxx = 4π² cos(2πx) t ,   (x, t) ∈ (0, 1) × R+
    ux (0, t) = ux (1, t) = 0
    u(x, 0) = 1 + cos(2πx)
    ut (x, 0) = 3 cos(2πx) .

From the method of separation of variables for the homogeneous wave equation
with Neumann boundary conditions we have that the admissible solutions for the
ODE in X are
Xn (x) = cos(nπx) , n ≥ 0.
Hence we try to look for solutions of the form

    u(x, t) = Σ_{n=0}^{∞} Tn (t) cos(nπx) .

67
Chapter 5. Separation of variables

Imposing that u(x, t) satisfies utt − uxx = 4π² cos(2πx) t, we obtain

    utt − uxx = Σ_{n=0}^{∞} [ Tn″(t) + n²π² Tn (t) ] cos(nπx) = 4π² cos(2πx) t .

Hence we require that

    T2″(t) + 4π² T2 (t) = 4π² t ,   for n = 2
    Tn″(t) + n²π² Tn (t) = 0 ,      for n ≠ 2 .

Since u(x, 0) = 1 + cos(2πx) and ut (x, 0) = 3 cos(2πx) contain only summands of the form cos(nπx) for n = 0 and n = 2, we need to consider separately the cases n = 0, 2. In particular we have the following three cases:

case n = 0:     T0″(t) = 0 ,  T0 (0) = 1 ,  T0′(0) = 0    ⟹   T0 (t) = 1

case n ≠ 0, 2:  Tn″(t) + n²π² Tn (t) = 0 ,  Tn (0) = 0 ,  Tn′(0) = 0    ⟹   Tn (t) = 0

case n = 2:     T2″(t) + 4π² T2 (t) = 4π² t ,  T2 (0) = 1 ,  T2′(0) = 3
                ⟹   T2 (t) = c1 sin(2πt) + c2 cos(2πt) + t ,

which gives T2 (0) = c2 = 1 and T2′(0) = 2πc1 + 1 = 3, i.e., c1 = 1/π. Finally, we thus obtain

    u(x, t) = 1 + ( (1/π) sin(2πt) + cos(2πt) + t ) cos(2πx) .
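Again this can be verified symbolically (a SymPy sketch):

```python
import sympy as sp

x, t = sp.symbols('x t')
u = 1 + (sp.sin(2 * sp.pi * t) / sp.pi + sp.cos(2 * sp.pi * t) + t) * sp.cos(2 * sp.pi * x)

# PDE, initial amplitude, and initial velocity of Example 5.3.2
assert sp.simplify(sp.diff(u, t, 2) - sp.diff(u, x, 2)
                   - 4 * sp.pi**2 * sp.cos(2 * sp.pi * x) * t) == 0
assert sp.simplify(u.subs(t, 0) - (1 + sp.cos(2 * sp.pi * x))) == 0
assert sp.simplify(sp.diff(u, t).subs(t, 0) - 3 * sp.cos(2 * sp.pi * x)) == 0
```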
Example 5.3.3. Consider the inhomogeneous wave equation

    utt − uxx = sin(mπx) sin(ωt) ,   (x, t) ∈ (0, 1) × (0, ∞)
    u(0, t) = u(1, t) = 0
    u(x, 0) = 0
    ut (x, 0) = 0 ,

for some m ∈ N, ω ∈ R. We consider solutions of the form u(x, t) = X(x)T (t), as before. The admissible solutions for the ODE in X are

    Xn (x) = sin(nπx) ,   n ≥ 1 ,

68
5.4. Uniqueness with the energy method

and, as before, we look for solutions of the form u(x, t) = Σ_{n=1}^{∞} Tn (t) sin(nπx). Plugging this into the equation we have

    utt − uxx = Σ_{n=1}^{∞} [ Tn″(t) + n²π² Tn (t) ] sin(nπx) = sin(mπx) sin(ωt) .

The ODEs for Tn are given by

    Tn″(t) + n²π² Tn (t) = 0 ,            if n ≠ m
    Tm″(t) + m²π² Tm (t) = sin(ωt)
    Tn (0) = 0 ,                          for all n
    Tn′(0) = 0 ,                          for all n .

Thus, for n ≠ m, we have Tn (t) = 0, while for n = m we get

    Tm (t) = am cos(mπt) + bm sin(mπt) + cm sin(ωt) ,

provided ω ≠ mπ. Using the initial conditions Tm (0) = 0 and Tm′(0) = 0, we obtain

    Tm (t) = 1/(ω² − m²π²) ( (ω/(mπ)) sin(mπt) − sin(ωt) ) ,

and the solution u(x, t) is finally given by

    u(x, t) = 1/(ω² − m²π²) ( (ω/(mπ)) sin(mπt) − sin(ωt) ) sin(mπx) .
Remark 5.3.4. We are assuming ω ≠ mπ to avoid degeneracy. To deal with the case ω = mπ, we can think of it as the limit case ω → mπ with ω ≠ mπ. Then

    lim_{ω→mπ} u(x, t) = (1/(2mπ)) ( sin(mπt)/(mπ) − t cos(mπt) ) sin(mπx) .

Note that, if ω ≠ mπ for all m ∈ N, the solution is bounded. In other words, a bounded periodic force with time frequency ω different from the frequencies of the homogeneous solutions produces bounded oscillations. On the other hand, for ω = mπ for some m ∈ N, the solution is unbounded. This is called the resonance effect (see the collapse of the Tacoma Narrows Bridge).
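The limit can be checked numerically by letting ω approach mπ in the non-resonant coefficient (a sketch with m = 1; the sample time t = 0.7 is an arbitrary choice of ours):

```python
import numpy as np

m = 1

def T(t, w):
    # non-resonant coefficient T_m(t) for the forcing sin(w t)
    return (w / (m * np.pi) * np.sin(m * np.pi * t) - np.sin(w * t)) / (w**2 - (m * np.pi)**2)

def T_resonant(t):
    # the limit w -> m*pi computed in the remark
    return (np.sin(m * np.pi * t) / (m * np.pi) - t * np.cos(m * np.pi * t)) / (2 * m * np.pi)

t0 = 0.7
for eps in (1e-2, 1e-3, 1e-4):
    print(abs(T(t0, m * np.pi + eps) - T_resonant(t0)))   # shrinks roughly linearly in eps
```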

5.4. Uniqueness with the energy method


One of the main applications of the energy method is the proof of uniqueness for
solutions of initial boundary value problems. This method is based on the physical

69
Chapter 5. Separation of variables

principle of conservation of energy, although the quantity we refer to as “energy” may differ from the actual physical energy of the system. We illustrate the method in the following example.
Consider the inhomogeneous wave equation

    utt − c² uxx = F (x, t) ,   (x, t) ∈ [0, L] × R+
    ux (0, t) = a(t)
    ux (L, t) = b(t)
    u(x, 0) = f (x)
    ut (x, 0) = g(x) .

Let u1 , u2 be solutions and set w := u1 − u2 . Then w solves

    wtt − c² wxx = 0 ,   (x, t) ∈ [0, L] × R+
    wx (0, t) = wx (L, t) = 0
    w(x, 0) = 0
    wt (x, 0) = 0 .

Let us define the energy function

    E(t) := ∫_0^L [ (wt (x, t))² + c² (wx (x, t))² ] dx .

By taking the derivative of E(t) we obtain

    d/dt E(t) = ∫_0^L (2wt wtt + 2c² wx wxt ) dx
              = 2 ∫_0^L (wt wtt − c² wxx wt ) dx + [2c² wx wt ]_0^L = 0 ,

where we integrated by parts in x and used the equation together with the Neumann boundary conditions.

Therefore E(t) is constant, and, since E(0) = 0, it follows that E(t) = 0 for all t.
By looking at the definition of E(t), we realize that E(t) = 0 for all t implies that
wx (x, t) = wt (x, t) = 0 for all x, t, thus w is constant too. Using that w(x, 0) = 0
for all x, we then get w(x, t) = 0. Thus u1 ≡ u2 , which proves uniqueness.
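The conservation of E(t) is easy to observe numerically on an explicit solution with Neumann boundary conditions, e.g. u(x, t) = cos(πx) cos(cπt) on [0, 1] (a sketch; the midpoint-rule quadrature is our own choice):

```python
import numpy as np

c = 2.0
m = 4000
h = 1.0 / m
xm = (np.arange(m) + 0.5) * h   # midpoints for the spatial quadrature

def energy(t):
    # E(t) = int_0^1 u_t^2 + c^2 u_x^2 dx for u = cos(pi x) cos(c pi t)
    ut = -c * np.pi * np.cos(np.pi * xm) * np.sin(c * np.pi * t)
    ux = -np.pi * np.sin(np.pi * xm) * np.cos(c * np.pi * t)
    return np.sum(ut**2 + c**2 * ux**2) * h

E = [energy(t) for t in np.linspace(0.0, 3.0, 7)]
# E is constant in t (equal to c^2 pi^2 / 2 for this particular solution)
```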

70
CHAPTER 6
ELLIPTIC EQUATIONS

In this chapter we study Laplace's and Poisson's equations, which are the archetypes of elliptic equations. We examine the main properties of elliptic equations and the link between solutions of Laplace's equation and harmonic functions.

6.1. Classification of linear second order PDEs


Assuming that uxy = uyx (which is always the case in these notes), any general
linear second order PDE in two independent variables has the form

L[u] = auxx + 2buxy + cuyy + dux + euy + f u = g .

The term auxx + 2buxy + cuyy is the leading term, or principal part, because the
behaviour of the PDE is determined by a, b and c.
Remark 6.1.1. The coefficients a, b and c depend on x and y, i.e., a = a(x, y), b =
b(x, y) and c = c(x, y).
As we already said, being able to properly classify the PDE we wish to investigate
allows us to choose the correct method (if one exists!) to tackle it: the relevant
solution methods can be quite different depending on the type of the equation.
You probably encountered conic sections and quadratic forms, which are usually
classified into parabolic, elliptic and hyperbolic types based on the discriminant
b² − 4ac. The same can be done for a second order PDE at a given point.
Given a point (x0 , y0 ), consider the value

  δ(L)(x0, y0) := b²(x0, y0) − a(x0, y0) c(x0, y0) .

At the point (x0 , y0 ) the PDE is said to be


• hyperbolic if δ(L)(x0 , y0 ) > 0;

• parabolic if δ(L)(x0 , y0 ) = 0;


• elliptic if δ(L)(x0 , y0 ) < 0.

Remark 6.1.2. Since, by convention, the coefficient of the mixed term uxy is 2b, the
discriminant becomes (2b)² − 4ac = 4(b² − ac) = 4δ(L) (and the factor 4 can be dropped).
Remark 6.1.3. This classification describes a local property. However, we often
study PDEs with constant coefficients, where the classification is global.

Example 6.1.4. Consider the following PDEs:

• The wave equation utt − uxx = 0 is hyperbolic (we use variables (x, t) instead
of (x, y)).

• The heat equation ut − uxx = 0 is parabolic.

• Laplace’s equation uxx + uyy = 0 is elliptic (here we use (x, y) intended as


spatial variables).
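The classification rule is mechanical enough to state in a few lines of code; the snippet below is a direct transcription of the criterion above (the function name `classify` is ours), applied to the leading coefficients of the three model equations of Example 6.1.4.

```python
def classify(a, b, c):
    """Type of a u_xx + 2b u_xy + c u_yy + (lower order) = g at a point."""
    delta = b * b - a * c
    if delta > 0:
        return "hyperbolic"
    if delta == 0:
        return "parabolic"
    return "elliptic"

# Leading coefficients a, b, c of the three model equations (with y playing
# the role of t for the evolution equations).
wave = classify(1.0, 0.0, -1.0)     # u_tt - u_xx: a = 1, b = 0, c = -1
heat = classify(-1.0, 0.0, 0.0)     # u_t - u_xx: no second-order t term
laplace = classify(1.0, 0.0, 1.0)   # u_xx + u_yy
```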

Similarly to what happens with second order algebraic equations, we can use
a nondegenerate change of variables to reduce the equation to a simpler form.

Definition 6.1.5. A transformation (x, y) 7→ (ξ, η) = (ξ(x, y), η(x, y)) is a change
of coordinates near a point (x0, y0) if

  det ( ∂x ξ   ∂y ξ
        ∂x η   ∂y η ) ≠ 0   at (x, y) = (x0, y0) .

Any second order PDE can be transformed into the so-called canonical form
by using a change of coordinates u(x, y) 7→ w(ξ, η) = w(ξ(x, y), η(x, y)). The
canonical forms are:

• hyperbolic: wξη + d̃ wξ + ẽ wη + f̃ w = g̃;

• parabolic: wξξ + d̃ wξ + ẽ wη + f̃ w = g̃;

• elliptic: wξξ + wηη + d̃ wξ + ẽ wη + f̃ w = g̃.

Example 6.1.6. Consider the wave equation utt − c² uxx = 0 for t ≥ 0. Let us
apply the transformation

  ξ = x + ct ,   η = x − ct .

This gives us u(x, t) = w(ξ, η) = w(x + ct, x − ct). Plugging this into the wave
equation gives 0 = utt − c² uxx = −4c² wξη . Dividing by −4c², we get wξη = 0,
which is in hyperbolic canonical form.
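The identity utt − c²uxx = −4c² wξη can be checked numerically with central finite differences for an arbitrary smooth test function; here w(ξ, η) = sin(ξ)e^η and the evaluation point are assumptions made only for the test.

```python
import math

c = 2.0                                            # arbitrary wave speed
w = lambda xi, eta: math.sin(xi) * math.exp(eta)   # arbitrary smooth test function

def u(x, t):
    # u(x, t) = w(xi, eta) with xi = x + ct, eta = x - ct
    return w(x + c * t, x - c * t)

h, x0, t0 = 1e-4, 0.4, 0.3

# central second differences for u_tt and u_xx at (x0, t0)
utt = (u(x0, t0 + h) - 2 * u(x0, t0) + u(x0, t0 - h)) / h ** 2
uxx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h ** 2

# mixed central difference for w_{xi eta} at (xi0, eta0)
xi0, eta0 = x0 + c * t0, x0 - c * t0
w_xi_eta = (w(xi0 + h, eta0 + h) - w(xi0 + h, eta0 - h)
            - w(xi0 - h, eta0 + h) + w(xi0 - h, eta0 - h)) / (4 * h ** 2)

residual = abs((utt - c ** 2 * uxx) - (-4 * c ** 2 * w_xi_eta))   # ~0
```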


6.2. Laplace’s and Poisson’s equations


Poisson’s equation ∆u = f and its homogeneous counterpart, Laplace’s equation
∆u = 0, have a very prominent role in applied sciences. For example, the tempera-
ture of a homogeneous and isotropic body at equilibrium is a solution of Laplace’s
equation. In this case we can say that Laplace’s equation describes the stationary
case (independent of time) of the diffusion equation. Other examples are:

• The equilibrium position of a perfectly elastic membrane solves ∆u = 0.

• Poisson’s equation plays an essential role in the theory of conservative fields


(electric field, magnetic field, gravitational field, etc). If u is the electrostatic
potential, then Poisson’s equation ∆u = f represents the link between the
potential u and the charge density −f .

6.3. Basic properties of elliptic problems


We study some basic models involving the Laplacian, including models for heat
conduction, elasticity, electromagnetism, and gravitation. We consider u = u(x, y)
for (x, y) ∈ D, where D is an open subset of R².

Definition 6.3.1. We say that u is harmonic if it solves Laplace’s equation, i.e.,


∆u(x, y) = uxx + uyy = 0. The nonhomogeneous version of Laplace’s equation is
Poisson’s equation ∆u(x, y) = ρ(x, y).

Remark 6.3.2. Laplace’s and Poisson’s equations are second order linear PDEs.
Laplace’s equation is also homogeneous.
Remark 6.3.3. The linearity of Laplace’s operator implies that a linear combination
of harmonic functions is a harmonic function.

Definition 6.3.4. Let D ⊂ R² be an open set and let ∂D be the boundary of D. Let
ν be the unit outward normal to ∂D. Then we can consider the following Dirichlet
problem for Poisson's equation

  ∆u(x, y) = ρ(x, y) ,   (x, y) ∈ D
  u(x, y) = g(x, y) ,    (x, y) ∈ ∂D .        (6.3.1)

On the other hand, the Neumann problem for Poisson's equation reads as follows

  ∆u(x, y) = ρ(x, y) ,     (x, y) ∈ D
  ∂ν u(x, y) = g(x, y) ,   (x, y) ∈ ∂D .      (6.3.2)


Finally, we can consider the problem of the third kind for Poisson's equation, that is

  ∆u(x, y) = ρ(x, y) ,                 (x, y) ∈ D
  u(x, y) + α ∂ν u(x, y) = g(x, y) ,   (x, y) ∈ ∂D ,     (6.3.3)

where α and g are given functions.

Figure 6.1: Dirichlet and Neumann problems.

We can now ask whether a solution to these problems exists. Consider the Neumann
problem, which can model the distribution of the temperature u(x, y) in the domain
D at an equilibrium configuration. This means that the heat flux through the
boundary must be balanced by the heat production inside the domain.
This simple consideration is encoded in the following lemma.
Lemma 6.3.5. A necessary condition for the existence of a solution to the
Neumann problem (6.3.2) is

  ∫_∂D g(x(s), y(s)) ds = ∫_D ρ(x, y) dx dy ,

where (x(s), y(s)) is a parametrization of ∂D.

Proof. Recall the identity ∆u = div(∇u). Then Poisson's equation reads as
div(∇u) = ρ. If u is a solution of the Neumann problem, using Gauss' theorem
we have

  ∫_D ρ = ∫_D ∆u = ∫_D div(∇u) = ∫_∂D ∇u · ν = ∫_∂D ∂ν u = ∫_∂D g .

Therefore ∫_D ρ = ∫_∂D g as desired.
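The compatibility condition of Lemma 6.3.5 can be illustrated with a concrete choice (ours, for the test only): for u(x, y) = x² + 2y² on the unit disc, ρ = ∆u = 6, and on the unit circle, with ν = (x, y), the normal derivative is ∂ν u = 2x² + 4y². Both integrals below should equal 6π; the quadrature resolutions are arbitrary.

```python
import math

Ntheta, Nr = 4000, 400          # arbitrary quadrature resolutions

# boundary integral of g = grad u . nu = 2x^2 + 4y^2 over the unit circle
boundary_integral = 0.0
for i in range(Ntheta):
    th = 2 * math.pi * i / Ntheta
    x, y = math.cos(th), math.sin(th)
    boundary_integral += (2 * x * x + 4 * y * y) * (2 * math.pi / Ntheta)

# area integral of rho = 6 over the disc, in polar coordinates (dA = r dr dtheta)
area_integral = 0.0
for j in range(Nr):
    r = (j + 0.5) / Nr          # midpoint rule in the radial variable
    area_integral += 6 * r * (1.0 / Nr) * (2 * math.pi)

exact = 6 * math.pi             # value of both sides of the identity
```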


Remark 6.3.6. If u is a solution of Laplace's equation ∆u = 0, then we have that
∫_∂A ∂n u = ∫_A div(∇u) = ∫_A ∆u = 0 for every open subset A ⊂ D, where n is the
outward unit normal to ∂A.
Another natural question to ask is whether the Cauchy problem for Laplace's
equation is well-posed, i.e., whether a solution exists, is unique, and is stable with
respect to the initial conditions. We recall that the Cauchy problem for Laplace's
equation is

  ∆u = 0 ,        (x, y) ∈ R × (0, ∞)
  u(x, 0) = f(x)
  uy(x, 0) = g(x) ,

where y here plays the role of time (see also the wave Equation (4.2.1)).
Consider Laplace’s equation in the half-plane x ∈ R, y > 0. The following
counterexample to well-posedness is due to Hadamard. Consider the following
Cauchy problem

∆u(x, y) = 0 ,
 x ∈ R, y > 0
u(x, 0) = 0

uy (x, 0) = sin(nx)/n ,

for some n ∈ N. We look for solutions of the form

u(x, y) = sin(nx)Y (y) .

Then, from ∆u = 0, we have

  0 = uxx + uyy = −n² sin(nx) Y(y) + sin(nx) Y''(y) ,

which implies that Y''(y) = n² Y(y). From the Dirichlet condition u(x, 0) = 0 it
follows that Y(0) = 0, while by the Neumann condition uy(x, 0) = sin(nx)/n we
have

  sin(nx)/n = uy(x, 0) = sin(nx) Y'(0)   =⇒   Y'(0) = 1/n .

Hence, solving the problem for Y we get Y(y) = sinh(ny)/n² and thus we obtain
the solution of the Cauchy problem

  u(x, y) = (1/n²) sin(nx) sinh(ny) .

Now, setting un(x, y) = (1/n²) sin(nx) sinh(ny), we realize that in the limit n → ∞
both un(x, 0) and ∂y un(x, 0) tend to zero (the initial conditions describe an
arbitrarily small perturbation of the trivial solution u = 0). On the other hand, the
solution is not bounded in the half-plane y > 0. Indeed, for any a > 0, we have

  sup_{x∈R} |un(x, a)| = sup_{x∈R} (1/n²) |sin(nx)| sinh(na) = (1/n²) sinh(na)
                       = (1/(2n²)) (e^{na} − e^{−na}) → ∞   as n → ∞ .

Thus, the Cauchy problem for Laplace's equation is not stable with respect to
the initial conditions, and hence it is not well-posed.
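The failure of stability is easy to quantify numerically: the sketch below (our illustration; the height a = 1 is an arbitrary choice) compares the size of the initial datum, sup_x |∂y un(x, 0)| = 1/n, with sup_x |un(x, a)| = sinh(na)/n².

```python
import math

height = 1.0                   # arbitrary height y = a > 0
ns = [1, 5, 10, 15]
datum_size = [1.0 / n for n in ns]                        # sup_x |d/dy u_n(x, 0)|
sup_at_a = [math.sinh(n * height) / n ** 2 for n in ns]   # sup_x |u_n(x, a)|

# the initial data shrink while the solutions at height a blow up
shrinking = all(d2 < d1 for d1, d2 in zip(datum_size, datum_size[1:]))
blowing_up = all(s2 > s1 for s1, s2 in zip(sup_at_a, sup_at_a[1:]))
```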
In the next example we use Hadamard's counterexample to construct an initial
datum for which the Cauchy problem has no solution.
Example 6.3.7. Consider as before the functions un(x, y) = sin(nx) sinh(ny)/n²
and define

  u^N(x, y) := Σ_{n=1}^N un(x, y)/n ,

for which it holds u^N(x, 0) = 0 and

  u^N_y(x, 0) = Σ_{n=1}^N ∂y un(x, 0)/n = Σ_{n=1}^N sin(nx)/n² .

Moreover, u^N is a solution of Laplace's equation by linearity. However, as N
goes to infinity, we do not have existence for the Cauchy problem

  ∆u(x, y) = 0 ,   (x, y) ∈ R × (0, ∞)
  u(x, 0) = 0
  uy(x, 0) = Σ_{n=1}^∞ sin(nx)/n² ,

because the solution would be given by u^∞ := Σ_{n=1}^∞ un(x, y)/n, which is not
a convergent series. However, note that the initial conditions u^∞(x, 0) = 0 and
u^∞_y(x, 0) = Σ_{n=1}^∞ sin(nx)/n² make perfect sense.

Remark 6.3.8. These examples demonstrate the difference between elliptic and
hyperbolic problems on the upper half-plane.

6.4. Harmonic functions


Let us now compute some harmonic functions. We call a harmonic polynomial of
degree n a harmonic function P(x, y) of the form

  P(x, y) = Σ_{0≤i+j≤n} aij x^i y^j .

For example:


• For n = 0, u(x, y) = 1.

• For n = 1, u(x, y) = x and u(x, y) = y, thus in general u(x, y) = ax + by for
any a, b ∈ R.

• For n = 2, u(x, y) = xy and u(x, y) = x² − y².

• For n = 3, u(x, y) = x³ − 3xy² and u(x, y) = y³ − 3x²y.
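Harmonicity of these polynomials can be spot-checked with a finite-difference Laplacian; since the fourth derivatives of a cubic vanish, the central differences are exact up to rounding. This is our own sketch, and the sample point is arbitrary.

```python
def laplacian_fd(f, x, y, h=1e-3):
    """Central-difference Laplacian; exact (up to rounding) for cubic polynomials."""
    return ((f(x + h, y) - 2 * f(x, y) + f(x - h, y))
            + (f(x, y + h) - 2 * f(x, y) + f(x, y - h))) / h ** 2

p1 = lambda x, y: x ** 3 - 3 * x * y ** 2       # harmonic, degree 3
p2 = lambda x, y: y ** 3 - 3 * x ** 2 * y       # harmonic, degree 3
p3 = lambda x, y: x ** 2 - y ** 2               # harmonic, degree 2
q = lambda x, y: x ** 2 + y ** 2                # NOT harmonic: Laplacian is 4

harmonic_vals = [laplacian_fd(p, 0.7, -1.2) for p in (p1, p2, p3)]
non_harmonic_val = laplacian_fd(q, 0.7, -1.2)   # ~4
```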

CHAPTER 7
MAXIMUM PRINCIPLES

The maximum principle is a fundamental property of solutions to certain PDEs of
elliptic or parabolic type. Maximum principles are based on the observation that,
if a C² function u attains its maximum over an open set D at a point x0 ∈ D,
then

  Du(x0) = 0 ,   D²u(x0) ≤ 0 ,

where D²u is the Hessian matrix. To use this observation, we need to work with
solutions that are at least C².

7.1. Weak maximum principle


First we identify circumstances under which a function must attain its maximum
(or minimum) on the boundary.
Theorem 7.1.1 (Weak maximum principle). Let D be a bounded domain and let
u(x, y) ∈ C²(D) ∩ C(D̄) be a harmonic function in D. Then the maximum of u in
D̄ is achieved on the boundary ∂D, namely

  max_{D̄} u = max_{∂D} u .

Proof. Consider the function uε(x, y) = u(x, y) + ε(x² + y²), with ε > 0. Assume by
contradiction that uε attains a local maximum at (x, y) ∈ D. Then ∆uε(x, y) ≤ 0.
On the other hand, since u is harmonic, we have that

  ∆uε(x, y) = ∆u(x, y) + 4ε = 4ε > 0 ,

which is a contradiction. This proves that uε takes its maximum on the boundary,
max_{D̄} uε = max_{∂D} uε. Thus, since u ≤ uε and D is bounded, we get

  max_{D̄} u ≤ max_{D̄} uε = max_{∂D} uε = max_{∂D} (u + ε(x² + y²))
            ≤ max_{∂D} u + ε max_{∂D} (x² + y²) = max_{∂D} u + εc .

Letting ε → 0, it follows that max_{D̄} u ≤ max_{∂D} u; the reverse inequality is
obvious, since ∂D ⊂ D̄.
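A quick grid-based sanity check of the weak maximum principle (the harmonic function, the square domain, and the grid size are our arbitrary choices): the maximum of u(x, y) = x² − y² over all sample points of [0, 1]² is attained at a boundary point.

```python
u = lambda x, y: x * x - y * y          # harmonic: u_xx + u_yy = 2 - 2 = 0
N = 50                                  # arbitrary grid resolution
pts = [(i / N, j / N) for i in range(N + 1) for j in range(N + 1)]
bnd = [(x, y) for (x, y) in pts if x in (0.0, 1.0) or y in (0.0, 1.0)]

max_all = max(u(x, y) for x, y in pts)  # max over the closed square
max_bnd = max(u(x, y) for x, y in bnd)  # max over boundary grid points only
```

Here both maxima equal 1, attained at the boundary corner (1, 0).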

79
Chapter 7. Maximum principles

Corollary 7.1.2. Under the same assumptions of Theorem 7.1.1, we have

  min_{D̄} u = min_{∂D} u .

Proof. Note that ∆(−u) = −∆u = 0, hence we can apply Theorem 7.1.1 to −u
and obtain

  min_{D̄} u = − max_{D̄} (−u) = − max_{∂D} (−u) = min_{∂D} u .

Remark 7.1.3. The boundedness of D in Theorem 7.1.1 and Corollary 7.1.2 is
necessary. Indeed, if one takes D = R² \ B1, then u(x, y) = log(x² + y²) is
harmonic in D, u|_∂D = 0, but sup_D u = ∞ ≠ 0.

7.2. Mean value principle


Theorem 7.2.1 (Mean value principle). Consider a harmonic function u on D
and let BR(x0, y0) ⊂ D be a ball of radius R. Then

  u(x0, y0) = (1/(2πR)) ∫_{∂BR(x0,y0)} u(x(s), y(s)) ds
            = (1/(2π)) ∫_0^{2π} u(x0 + R cos θ, y0 + R sin θ) dθ .      (7.2.1)

Proof. Given r ∈ (0, R), set

  V(r) = (1/(2π)) ∫_0^{2π} u(x0 + r cos θ, y0 + r sin θ) dθ ,

and compute

  V'(r) = (1/(2π)) ∫_0^{2π} (d/dr) u(x0 + r cos θ, y0 + r sin θ) dθ
        = (1/(2π)) ∫_0^{2π} [ux(x0 + r cos θ, y0 + r sin θ) cos θ
                              + uy(x0 + r cos θ, y0 + r sin θ) sin θ] dθ
        = (1/(2πr)) ∫_{∂Br(x0,y0)} ∂ν u = (1/(2πr)) ∫_{Br(x0,y0)} ∆u = 0 .

As a result, V(R) = lim_{r→0+} V(r) = u(x0, y0), which gives exactly what we want.


Remark 7.2.2. The inverse implication is also true, namely a smooth function that
satisfies the mean value property in some domain D is harmonic in D.
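The mean value property lends itself to a direct numerical test: averaging a harmonic function over equally spaced points of a circle reproduces its value at the center. The function, center, radius and sample count below are our arbitrary choices.

```python
import math

u = lambda x, y: x ** 3 - 3 * x * y ** 2    # harmonic: the real part of (x + iy)^3
x0, y0, R, N = 0.3, -0.4, 0.25, 512         # arbitrary center, radius, sample count

# average of u over N equally spaced points of the circle of radius R
circle_avg = sum(
    u(x0 + R * math.cos(2 * math.pi * j / N),
      y0 + R * math.sin(2 * math.pi * j / N))
    for j in range(N)
) / N

center_value = u(x0, y0)    # the mean value principle says these agree
```

Because the integrand is a trigonometric polynomial in θ, this equally spaced average is exact up to rounding.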


7.3. Strong maximum principle


Theorem 7.3.1 (Strong maximum principle). Let u be a harmonic function in
D, an open connected subset of R2 . If u attains its maximum (or its minimum) at
an interior point of D, then u is constant.

Proof. Let x0 ∈ D be a maximum point for u. Let x be another point, connected
to x0 by a curve γ.

Figure 7.1: Proof of the Strong maximum principle

Choose R > 0 smaller than the distance from γ to ∂D and define inductively
a sequence of points {xi}_{i=0}^N ⊂ γ and radii Ri ≤ R such that xi+1 ∈ ∂BRi(xi)
for any i = 0, . . . , N − 1 and xN = x. Note that one can take Ri = R for each
i = 0, . . . , N − 2 and then RN−1 ≤ R such that xN ∈ ∂BRN−1(xN−1).
Then inside each ball we apply inductively the mean value theorem, Theorem 7.2.1.
More precisely, by the mean value theorem applied at x0 we have

  max_D u = u(x0) = (1/(2πR)) ∫_{∂BR(x0)} u ≤ (1/(2πR)) ∫_{∂BR(x0)} max_D u = max_D u .

This implies that u = max_D u on ∂BR(x0). Therefore, since x1 ∈ ∂BR(x0), also x1
is a point of maximum for u. Hence we can repeat the argument above (using the
mean value theorem) to deduce that u = max_D u on ∂BR(x1), hence x2 ∈ ∂BR(x1)
is a maximum point for u, and iterating we get that x = xN is a maximum point
for u as well. In particular u(x0) = max_D u = u(x). By arbitrariness of x ∈ D, this
proves that u ≡ max_D u is constant in D.

Remark 7.3.2. Given a point (x0, y0) ∈ D and a radius r > 0, consider the curve
γ(θ) = (x0 + r cos θ, y0 + r sin θ).

Figure 7.2: Mean value principle on the circle

Then the computation of V'(r) in the proof of Theorem 7.2.1 can be rewritten as

  V'(r) = (1/(2π)) ∫_0^{2π} ∇u(γ(θ)) · ν_{∂Br(x0,y0)}(γ(θ)) dθ .

Let us define F(γ(θ)) := ∇u(γ(θ)) · ν_{∂Br(x0,y0)}(γ(θ)). Then we have

  V'(r) = (1/(2π)) ∫_0^{2π} F(γ(θ)) dθ = (1/(2π)) ∫_0^{2π} (1/|γ'(θ)|) F(γ(θ)) |γ'(θ)| dθ
        = (1/(2πr)) ∫_0^{2π} F(γ(θ)) |γ'(θ)| dθ = (1/(2πr)) ∫_γ F ,

where the second-to-last equality follows from the fact that |γ'(θ)| = r, because
γ'(θ) = (−r sin θ, r cos θ), and the last equality is the definition of the integral
along a curve.

7.4. Maximum principle for Poisson’s equation


Now we examine some important consequences of the maximum principles. Let
us assume that the domain D is bounded; then we have the following theorem.

Theorem 7.4.1. Given a bounded domain D ⊂ R², the Dirichlet problem

  ∆u = f ,   in D
  u = g0 ,   on ∂D

has at most one solution u ∈ C²(D) ∩ C(D̄).


Proof. Assume by contradiction that there exist two solutions u1, u2. Then define
u := u1 − u2, which solves

  ∆u = 0 ,   in D
  u = 0 ,    on ∂D .

From the weak maximum principle, Theorem 7.1.1, we get that the maximum and
the minimum of u are zero, which implies u ≡ 0 and thus u1 ≡ u2.

Remark 7.4.2. In the previous theorem the boundedness condition on D is
necessary. The reason is the same as for the weak maximum principle. Indeed,
let us consider a harmonic function u in R² \ B1(0), with u = 1 on ∂B1(0). Then
u1 = 1 is a solution to this problem, but also u2 = 1 + log(x² + y²) is a solution.
Hence there is no uniqueness.

Theorem 7.4.3. Let D ⊂ R² be a bounded domain. Let u1, u2 ∈ C²(D) ∩ C(D̄)
solve ∆u1 = 0, ∆u2 = 0 with Dirichlet boundary conditions u1 = g1 on ∂D and
u2 = g2 on ∂D. Then

  max_{D̄} |u1 − u2| = max_{∂D} |g1 − g2| .

Proof. Define v := u1 − u2; then v is harmonic in D and v = g1 − g2 on ∂D.
Therefore the maximum principle, Theorem 7.1.1, implies

  max_{D̄} v = max_{∂D} v = max_{∂D} (g1 − g2) ,

and by the minimum principle, Corollary 7.1.2, we have

  min_{D̄} v = min_{∂D} v = min_{∂D} (g1 − g2) .

Therefore max_{D̄} |u1 − u2| = max_{∂D} |g1 − g2| as desired.

7.5. Boundary conditions


We recall some different types of boundary conditions.

• Dirichlet: u = g on ∂D.
It is also referred to as a condition of the first type, or as a fixed boundary condition.
For example, the following would be considered Dirichlet conditions:

(a) In thermodynamics when a surface or an object is held at a fixed temperature.


(b) In electromagnetism when a node of a circuit is held at a fixed voltage.


(c) In fluid dynamics, the no-slip condition for viscous fluids states that at a solid
boundary the fluid has zero velocity relative to the boundary.
• Neumann: ∂ν u = g on ∂D, where ν is the outer normal vector to D.
This is also called a second type boundary condition; it prescribes the values of the
normal derivative of the solution on the boundary of the domain.
An application in thermodynamics is a prescribed heat flux from a surface, which
serves as boundary condition. For example, a perfect insulator has no flux, while
an electrical component may be dissipating at a known power.
• Robin or third type boundary condition: u + α ∂ν u = g on ∂D, where α ∈ R
and g is a given function.
Robin boundary conditions are also called impedance boundary conditions from
their application in electromagnetic problems.

7.6. Maximum principle for parabolic equations


The maximum principle holds also for parabolic equations. Consider the heat
equation for u = u(t, x), t > 0, x ∈ D, namely

  ut = k ∆u .

Define the domain QT := [0, T] × D, where D is the spatial domain and t ∈ [0, T]
is the time. Then we define the parabolic boundary as

  ∂P QT := ({0} × D) ∪ ([0, T] × ∂D) ,
that is the boundary of QT except for the top cover {T } × D.
Theorem 7.6.1 (Maximum principle for the heat equation). Let u solve the
homogeneous heat equation ut = k∆u in QT = [0, T] × D for some k > 0. Assume
that D ⊂ R² is bounded. Then u achieves its maximum (and minimum) on ∂P QT.
Proof. Take ε > 0 and consider the function uε(t, x) = u(t, x) − εt. Then ∂t uε =
∂t u − ε and ∆uε = ∆u, therefore ∂t uε = k ∆uε − ε. Assume by contradiction that
uε has a maximum at some point (t0, x0) ∈ QT \ ∂P QT. We distinguish two cases:

• In the case t0 < T, (t0, x0) is an interior maximum point, hence ∂t uε(t0, x0) =
0 and ∆uε(t0, x0) ≤ 0. This is in contradiction with the equation ∂t uε =
k ∆uε − ε.

• In the case t0 = T, x0 is an interior point of D at which uε(T, ·) attains a
maximum, thus ∆uε(T, x0) ≤ 0. On the other hand, since uε attains its
maximum at (T, x0), we have

  ∂t uε(T, x0) = lim_{s→0+} ( uε(T, x0) − uε(T − s, x0) ) / s ≥ 0 .

84
7.6. Maximum principle for parabolic equations

Again these two inequalities are in contradiction with the equation ∂t uε =
k ∆uε − ε.

In conclusion, uε attains its maximum on ∂P QT. Since u − εT ≤ uε ≤ u inside QT,
we get

  max_{∂P QT} u ≥ max_{∂P QT} uε = max_{QT} uε ≥ max_{QT} u − εT ,

and the result follows letting ε → 0.
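The parabolic maximum principle has a discrete analogue that can be observed with an explicit finite-difference scheme for ut = uxx (taking k = 1) with zero Dirichlet data. This is our own sketch; the grid parameters are arbitrary but satisfy the stability condition dt ≤ dx²/2, under which each update is a convex combination of neighboring values.

```python
import math

nx, nsteps = 21, 200
dx = 1.0 / (nx - 1)
dt = 0.4 * dx ** 2              # satisfies the stability condition dt <= dx^2 / 2

u = [math.sin(math.pi * i * dx) for i in range(nx)]   # initial datum sin(pi x)
history = [u[:]]
for _ in range(nsteps):
    new = u[:]
    for i in range(1, nx - 1):  # explicit update of the interior points
        new[i] = u[i] + dt / dx ** 2 * (u[i + 1] - 2 * u[i] + u[i - 1])
    new[0] = new[-1] = 0.0      # zero Dirichlet boundary data
    u = new
    history.append(u[:])

max_everywhere = max(max(row) for row in history)
# boundary values are 0, so the max on the parabolic boundary is at t = 0
max_parabolic_bnd = max(history[0])
```

The solution decays in time, so the space-time maximum is attained on the initial slice, as the theorem predicts.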

Corollary 7.6.2. The Dirichlet problem for the heat equation

  ut − k ∆u = f ,    in QT
  u(0, x) = g(x) ,   in D
  u(t, x) = h(x) ,   on [0, T] × ∂D

has at most one solution.

Proof. Given two solutions u1, u2, consider the function v := u1 − u2 and look at
the problem fulfilled by v, that is, the homogeneous heat equation with zero
boundary and initial conditions. By the maximum principle this leads to v = 0 and
thus u1 = u2. The details are left as an exercise.

CHAPTER 8

LAPLACE’S EQUATION IN RECTANGULAR


AND CIRCULAR DOMAINS

In this chapter we use the method of separation of variables to solve Laplace's
equation on rectangular domains. We consider the domain R = [a, b] × [c, d] and
we assign one boundary condition on each side of the boundary. We conclude by
studying Laplace's equation on circular domains.

8.1. Boundary condition on two opposite sides

Figure 8.1: Laplace equation in a rectangular domain.

As a first example we start with the assumption that u = 0 on two opposite
sides of the rectangle:

  ∆u = 0 ,   in R
  u = 0 ,    on [a, b] × {c, d}
  u = f ,    on {a} × [c, d]
  u = g ,    on {b} × [c, d] .
We look for a solution of the form u(x, y) = X(x)Y(y). By plugging it into the
equation we get X''(x)Y(y) + Y''(y)X(x) = 0. Dividing by X(x)Y(y) we obtain

  X''(x)/X(x) = − Y''(y)/Y(y) .

Since the function on the left only depends on x, while the one on the right only
depends on y, the only possibility is that they are both constant:

  X''(x)/X(x) = − Y''(y)/Y(y) = λ ∈ R .

Hence, for Y we have the ODE Y''(y) = −λ Y(y) and, by the analysis we did in
Section 5.1 (and in general in Chapter 5), we know that equations of this type have
three families of solutions depending on the sign of λ. By the condition Y(c) =
Y(d) = 0, we deduce that λ must be positive:

  λ = λn = ( nπ/(d − c) )² ,   n ∈ N, n ≥ 1 ,

with corresponding solution

  Yn(y) = an sin( nπ(y − c)/(d − c) ) ,   n ∈ N, n ≥ 1 .

Remark 8.1.1. If we had Neumann boundary conditions on Y, then we would have
used a cosine expansion instead of a sine one and we would have started from
n = 0 (see Section 5.2).

Concerning the ODE for X, we have Xn''(x) = λn Xn(x). Since λn > 0 for all
n ≥ 1, the solution of our sequence of ODEs is a combination of sinh(√λn x) and
cosh(√λn x). However, instead of expressing the family of all solutions to the ODE
as a linear combination of hyperbolic sines and cosines in x, because of our boundary
conditions it is convenient to express our solution in terms of sinh(√λn (x − a)) and
sinh(√λn (x − b)), for λn = (nπ/(d − c))². Therefore the general form of Xn is given by

  Xn(x) = αn sinh(√λn (x − a)) + βn sinh(√λn (x − b)) .


Altogether, the expressions for Xn and Yn give

  u(x, y) = Σ_{n=1}^∞ [ An sinh(√λn (x − a)) + Bn sinh(√λn (x − b)) ] sin(√λn (y − c)) ,

where we renamed the coefficients. Now the only task left is to determine the
coefficients An and Bn. To do so we exploit the boundary conditions. Taking
x = a, since sinh(0) = 0, we get

  u(a, y) = Σ_{n=1}^∞ Bn sinh(√λn (a − b)) sin(√λn (y − c)) = f(y) ,

from which we deduce that the Bn are the Fourier coefficients of f scaled by a factor
sinh(√λn (a − b)). The same reasoning applies to the other boundary condition in
order to determine An.

8.2. Laplace’s equation with Dirichlet boundary conditions


in rectangular domains
In the example above, we had two opposite boundaries where u was zero. This
simplified the computations and allowed us to have an expansion in sines for Y
and hyperbolic sines for X.
Let us now consider the general case where we have nonzero boundary conditions:

  ∆u = 0 ,   in R
  u = f ,    on {a} × [c, d]
  u = g ,    on {b} × [c, d]
  u = h ,    on [a, b] × {d}
  u = k ,    on [a, b] × {c} .        (8.2.1)

Figure 8.2: Splitting of the Laplace equation in a rectangular domain.

89
Chapter 8. Laplace’s equation in rectangular and circular domains

Then we can do the following splitting: we can write u as u1 + u2, where

  ∆u1 = 0 ,   in R                 ∆u2 = 0 ,   in R
  u1 = f ,    on {a} × [c, d]      u2 = 0 ,    on {a} × [c, d]
  u1 = g ,    on {b} × [c, d]      u2 = 0 ,    on {b} × [c, d]
  u1 = 0 ,    on [a, b] × {d}      u2 = h ,    on [a, b] × {d}
  u1 = 0 ,    on [a, b] × {c} ,    u2 = k ,    on [a, b] × {c} .
Hence, as we saw in the previous example, u1 is of the form

  u1(x, y) = Σ_{n=1}^∞ [ An sinh( nπ(x − a)/(d − c) ) + Bn sinh( nπ(x − b)/(d − c) ) ] sin( nπ(y − c)/(d − c) ) .

Analogously, reversing the roles of x and y, u2 is given by an expression of the form

  u2(x, y) = Σ_{n=1}^∞ [ Cn sinh( nπ(y − c)/(b − a) ) + Dn sinh( nπ(y − d)/(b − a) ) ] sin( nπ(x − a)/(b − a) ) .

Finally, note that the coefficients An, Bn, Cn, Dn are related to the Fourier
coefficients of the boundary data.
Observe that, when we split the problem for u into two problems for u1 and u2,
the boundary data may no longer be continuous even if they are continuous in
the original problem (consider for instance the case f = g = h = k = 1). This
is not an issue analytically, but it does matter when one wants to solve
the problem numerically, since the jumps in the boundary data create numerical
problems. We now describe a trick to avoid this issue.
If we want to solve (8.2.1), we can define ū := u − P, where P is a polynomial
P(x, y) := a0 + a1 x + a2 y + a3 xy for some a0, a1, a2, a3 ∈ R. Note that ū is still
harmonic since P is harmonic, and it solves

  ∆ū = 0 ,   in R
  ū = f̄ ,    on {a} × [c, d]
  ū = ḡ ,    on {b} × [c, d]
  ū = h̄ ,    on [a, b] × {d}
  ū = k̄ ,    on [a, b] × {c} ,

where f̄ = f − P, ḡ = g − P, h̄ = h − P, k̄ = k − P. Now, if the boundary data
for u are continuous, we can choose coefficients a0, a1, a2, a3 ∈ R to ensure that
f̄(a, c) = f̄(a, d) = ḡ(b, c) = ḡ(b, d) = 0 (we have four parameters to adjust four
corner values). In this way, if we split the problem as before into ū = ū1 + ū2,
the boundary data for ū1 and ū2 are not discontinuous anymore.


8.3. Laplace’s equation with Neumann boundary conditions


in rectangular domains
Consider now Laplace's equation in a rectangular domain with Neumann
boundary conditions:

  ∆u = 0 ,   in R
  ux = f ,   on {a} × [c, d]
  ux = g ,   on {b} × [c, d]
  uy = k ,   on [a, b] × {d}
  uy = h ,   on [a, b] × {c} .
Suppose that the problem satisfies the necessary condition for the existence of a
solution to the Neumann problem, namely

  ∫_c^d g − ∫_c^d f + ∫_a^b k − ∫_a^b h = 0 .

Figure 8.3: Neumann problem in a rectangular domain.

To solve the problem we need to split u = u1 + u2 into the sum of two problems,
as we did for the Dirichlet problem in Section 8.2. Hence u1, u2 satisfy

  ∆u1 = 0 ,      in R                 ∆u2 = 0 ,      in R
  (u1)x = f ,    on {a} × [c, d]      (u2)x = 0 ,    on {a} × [c, d]
  (u1)x = g ,    on {b} × [c, d]      (u2)x = 0 ,    on {b} × [c, d]
  (u1)y = 0 ,    on [a, b] × {d}      (u2)y = k ,    on [a, b] × {d}
  (u1)y = 0 ,    on [a, b] × {c} ,    (u2)y = h ,    on [a, b] × {c} .


Note that, by splitting the problem, the existence condition for the Neumann
problem might not be satisfied anymore for u1 and u2. To overcome this problem,
we use the trick of adding a harmonic polynomial. Consider for instance α(x² − y²)
for some α ∈ R and add it to u. This yields the new harmonic function v =
u + α(x² − y²). If we now split v = v1 + v2 as we did above for u, then the problems
for v1 and v2 are

  ∆v1 = 0 ,            in R                 ∆v2 = 0 ,            in R
  (v1)x = f + 2αa ,    on {a} × [c, d]      (v2)x = 0 ,          on {a} × [c, d]
  (v1)x = g + 2αb ,    on {b} × [c, d]      (v2)x = 0 ,          on {b} × [c, d]
  (v1)y = 0 ,          on [a, b] × {d}      (v2)y = k − 2αd ,    on [a, b] × {d}
  (v1)y = 0 ,          on [a, b] × {c} ,    (v2)y = h − 2αc ,    on [a, b] × {c} .

Note that the compatibility condition for v1 is given by

  ∫_c^d (g + 2αb) − ∫_c^d (f + 2αa) = 0   =⇒   α = (1/(2(b − a)(d − c))) ∫_c^d (f − g) .

Hence, with this choice of α, we can solve the problem for v1. Now recall that, by
assumption, the Neumann problem for u was solvable, that is

  ∫_c^d (g − f) + ∫_a^b (k − h) = 0 .

Thus α is also equal to

  α = (1/(2(b − a)(d − c))) ∫_c^d (f − g) = (1/(2(b − a)(d − c))) ∫_a^b (k − h) .

This implies that

  ∫_a^b (k − 2αd) − ∫_a^b (h − 2αc) = 0 ,

thus also v2 satisfies the compatibility condition and we can solve the problem
using the method of separation of variables.

8.4. Two explicit examples


Example 8.4.1. We want to solve the following Dirichlet problem on the square
R = [0, π] × [0, π] ⊂ R²:

  ∆u = 0 ,   in R
  u(x, 0) = 1
  u(x, π) = u(0, y) = u(π, y) = 0 .


Since there is only one nonzero boundary condition, there is no need to split
the problem as in Section 8.2. We look for a solution of the form u(x, y) =
Σ_{n∈N} Xn(x)Yn(y), where each term Xn(x)Yn(y) is harmonic. Hence

  0 = ∆(Xn(x)Yn(y)) = Xn''(x)Yn(y) + Xn(x)Yn''(y)
  ⇐⇒ Xn''(x)/Xn(x) = − Yn''(y)/Yn(y) = −λn ∈ R .

Therefore we get the system of ODEs

  Xn''(x) = −λn Xn(x) ,   Xn(0) = Xn(π) = 0
  Yn''(y) = λn Yn(y) ,

from which we deduce the solution for Xn:

  Xn(x) = An sin(√λn x) + Bn cos(√λn x) .

From Xn(0) = 0 we deduce Bn = 0 for all n ∈ N, and from Xn(π) = 0 we have
λn = n² for all n ∈ N. Hence we get Xn(x) = An sin(nx) for all n ∈ N.
On the other hand, the function Yn is given by

  Yn(y) = Cn sinh(ny) + Dn sinh(n(y − π)) .

As a result, the general solution for the problem we are considering is

  u(x, y) = Σ_{n=1}^∞ sin(nx) [ Cn sinh(ny) + Dn sinh(n(y − π)) ] .

Remark 8.4.2. Note that the general form would have the coefficient An in front
of the term sin(nx). However, we can absorb this constant inside Cn and Dn,
obtaining exactly the formula above.

From the condition u(x, π) = 0, we obtain Cn = 0 for all n ∈ N+. Then, from
u(x, 0) = 1, we have

  1 = u(x, 0) = Σ_{n=1}^∞ sin(nx) [ Dn sinh(−nπ) ] = Σ_{n=1}^∞ αn sin(nx) ,

where we defined αn := Dn sinh(−nπ). As usual, we multiply both sides by sin(mx)
and we integrate over [0, π], obtaining

  ∫_0^π sin(mx) dx = Σ_{n=1}^∞ αn ∫_0^π sin(mx) sin(nx) dx = αm π/2 ,

since the integral on the right equals π/2 if m = n and 0 if m ≠ n.

93
Chapter 8. Laplace’s equation in rectangular and circular domains

Thus

  αm = (2/π) ∫_0^π sin(mx) dx = (2/π) [ −cos(mx)/m ]_0^π = (2/π) (1 − cos(mπ))/m
     = 4/(πm) if m is odd ,   0 if m is even .

Therefore, since αm = Dm sinh(−mπ), we have

  Dm = 4/(πm sinh(−mπ)) if m is odd ,   0 if m is even .

In conclusion, using that sinh(−mπ) = −sinh(mπ), we get

  u(x, y) = − Σ_{j=0}^∞ [ 4/(π(2j + 1)) ] sin((2j + 1)x) sinh((2j + 1)(y − π)) / sinh((2j + 1)π) .
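A truncation of this series can be checked against the boundary conditions. This sketch is ours; the sample points are arbitrary, and the number of terms is kept moderate so that sinh((2j+1)π) stays within double-precision range. At y = 0 the partial sums approach 1, while the three homogeneous conditions hold exactly.

```python
import math

def u_series(x, y, terms=100):
    """Partial sum of the series solution of Example 8.4.1."""
    s = 0.0
    for j in range(terms):
        n = 2 * j + 1
        s -= (4.0 / (math.pi * n)) * math.sin(n * x) \
             * math.sinh(n * (y - math.pi)) / math.sinh(n * math.pi)
    return s

val_bottom = u_series(math.pi / 2, 0.0)   # should approach u(x, 0) = 1
val_top = u_series(math.pi / 2, math.pi)  # should be exactly 0
val_left = u_series(0.0, 1.0)             # should be exactly 0
```

Convergence on the bottom side is slow (the Fourier series of the constant 1 decays like 1/n), but a hundred odd terms already give about two correct digits.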

Example 8.4.3. Consider now Laplace's equation on R with Neumann boundary
conditions:

  ∆u = 0 ,   in R = [0, π] × [0, π]
  uy(x, π) = x − π/2
  ux(0, y) = ux(π, y) = uy(x, 0) = 0 .

Let us verify the necessary condition to solve elliptic Neumann problems, that is
∫_∂R ∂ν u = 0. In our case we have

  ∫_∂R ∂ν u = ∫_0^π ( x − π/2 ) dx = 0 = ∫_R ∆u ,

as desired. Hence we can proceed looking for a solution via the method of
separation of variables:

  u(x, y) = Σ_{n∈N} Xn(x)Yn(y) .

The harmonicity condition leads to

  Xn''(x) = −λn Xn(x) ,   Xn'(0) = Xn'(π) = 0
  Yn''(y) = λn Yn(y) .

Therefore we obtain Xn(x) = cos(nx) and Yn(y) = An cosh(ny) + Bn cosh(n(y − π))
for all n ∈ N. Then, the general solution is

  u(x, y) = Σ_{n=0}^∞ cos(nx) [ An cosh(ny) + Bn cosh(n(y − π)) ] .


Exploiting the boundary conditions to find An and Bn, we get

  0 = uy(x, 0) = Σ_{n=0}^∞ cos(nx) Bn n sinh(−nπ)   =⇒   Bn = 0

and

  x − π/2 = uy(x, π) = Σ_{n=0}^∞ An n sinh(nπ) cos(nx) = Σ_{n=0}^∞ βn cos(nx) ,

where βn := An n sinh(nπ). By a similar computation as the one in the previous
example we get

  βm = −4/(πm²) if m is odd ,   0 if m is even, m ≠ 0
  =⇒   Am = −4/(πm³ sinh(mπ)) if m is odd ,   0 if m is even, m ≠ 0 .

This yields the solution

  u(x, y) = A0 − Σ_{j=0}^∞ [ 4/(π(2j + 1)³) ] cos((2j + 1)x) cosh((2j + 1)y) / sinh((2j + 1)π) ,

where the constant A0 remains arbitrary, since solutions of the Neumann problem
are unique only up to an additive constant.

Remark 8.4.4. One could also have Dirichlet conditions on some parts of the
boundary and Neumann conditions on other parts. In this case one needs to
choose the right bases in terms of sin, cos and sinh, cosh.

8.5. Polar coordinates


It can be useful in applications, for example when the domain D has some radial
symmetry, to express the Laplace operator in polar coordinates. We define the
polar coordinates (r, θ) via the relation

  x = r cos θ ,   y = r sin θ .

Hence, any function u(x, y) can be expressed in polar coordinates via a function
w(r, θ) such that w(r, θ) = u(x(r, θ), y(r, θ)). Then the Laplacian in polar
coordinates reads

  ∆u = wrr + (1/r) wr + (1/r²) wθθ .


Now assume that u is a harmonic function and that w only depends on the variable r, that is w = w(r). Then

\[
0 = \Delta u = w''(r) + \frac{1}{r} w'(r).
\]

By defining v(r) := w'(r), we get v'(r) = -v(r)/r and thus

\[
\frac{v'(r)}{v(r)} = -\frac{1}{r} \iff \frac{d}{dr} \log|v(r)| = -\frac{d}{dr} \log r \iff \log|v(r)| = -\log(r) + c,
\]

for some constant c \in \mathbb{R}. Hence we obtain that w'(r) = v(r) = e^c/r if v(r) > 0 and w'(r) = v(r) = -e^c/r if v(r) < 0. Integrating with respect to r we get

\[
w(r) = \pm \int_1^r \frac{e^c}{s} \, ds + w(1) = c_1 \log(r) + c_2,
\]

with c_1 = e^c > 0 if v(r) > 0 and c_1 = -e^c < 0 if v(r) < 0.


Then w(r) = c_1 \log(r) + c_2 is a solution of Laplace's equation for r > 0. Since r = \sqrt{x^2 + y^2}, this proves that

\[
u(x, y) = w(r) = c_1 \log\left(\sqrt{x^2 + y^2}\right) + c_2 = \frac{c_1}{2} \log(x^2 + y^2) + c_2
\]

is harmonic on \mathbb{R}^2 \setminus \{(0, 0)\} for any c_1, c_2 \in \mathbb{R}.
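The harmonicity of c_1 \log(x^2 + y^2)/2 + c_2 away from the origin can be confirmed symbolically; a minimal sketch (assuming sympy is available):

```python
import sympy as sp

x, y, c1, c2 = sp.symbols('x y c1 c2', real=True)

u = c1 * sp.log(x**2 + y**2) / 2 + c2
lap = sp.diff(u, x, 2) + sp.diff(u, y, 2)
print(sp.simplify(lap))  # 0, so u is harmonic away from the origin
```
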

8.6. Laplace’s equation in circular domains


We now consider Laplace's equation on circular domains D = B_a = \{0 \le r \le a, \ \theta \in [0, 2\pi]\}. For this problem we use the expression of the Laplacian in polar coordinates

\[
0 = \Delta u = w_{rr} + \frac{1}{r} w_r + \frac{1}{r^2} w_{\theta\theta},
\]

where u(x(r, \theta), y(r, \theta)) = u(r\cos\theta, r\sin\theta) = w(r, \theta). We look for separated solutions of the form

\[
w(r, \theta) = R(r)\Theta(\theta)
\]

and obtain

\[
0 = R''(r)\Theta(\theta) + \frac{1}{r} R'(r)\Theta(\theta) + \frac{1}{r^2} \Theta''(\theta)R(r)
\implies \frac{r^2 R''(r) + r R'(r)}{R(r)} = -\frac{\Theta''(\theta)}{\Theta(\theta)} = \lambda.
\]

Hence we have the system of ODEs

\[
\begin{cases}
r^2 R''(r) + r R'(r) = \lambda R(r) \\
\Theta''(\theta) = -\lambda \Theta(\theta), \quad \Theta(0) = \Theta(2\pi), \ \Theta'(0) = \Theta'(2\pi).
\end{cases}
\]


Note that the conditions \Theta(0) = \Theta(2\pi), \Theta'(0) = \Theta'(2\pi) come from the fact that we want u to be a classical solution inside D, so it should be at least C^2. Hence we impose that \Theta and \Theta' are periodic in [0, 2\pi]. Observe that, since \Theta'' = -\lambda\Theta, automatically \Theta'' is periodic as well. The solution of the second ODE is

\[
\Theta_n(\theta) = A_n \cos(n\theta) + B_n \sin(n\theta), \quad n \in \mathbb{N}.
\]

For the first equation one can check that

\[
R_n(r) = \begin{cases} C_0 + D_0 \log r, & \text{for } n = 0 \\ C_n r^n + D_n r^{-n}, & \text{for } n \neq 0 \end{cases}
\]

gives the two-parameter family of solutions. However, the functions r^{-n} and \log r are singular at 0, which lies inside the domain D, so we discard them. Thus the general solution is given by

\[
w(r, \theta) = C_0 + \sum_{n=1}^{\infty} r^n \left[ A_n \cos(n\theta) + B_n \sin(n\theta) \right].
\]

Remark 8.6.1. The same method as above can be applied to domains that are discs, circles, rings or sectors of a circle/ring. However, in the cases where the domain is only a sector of a disc or a ring, \Theta is not necessarily periodic anymore. Moreover, in cases where the origin is not in the domain, we do not have to discard the terms with r^{-n} and \log r.
Example 8.6.2. Let B_1 = \{x^2 + y^2 \le 1\} be the unit disc in \mathbb{R}^2. We want to solve the following Dirichlet problem

\[
\begin{cases}
\Delta u = 0, & \text{in } B_1 \\
u = y^2, & \text{on } \partial B_1.
\end{cases}
\]

Using polar coordinates and defining w(r, \theta) = u(r\cos\theta, r\sin\theta), we can rewrite the problem as

\[
\begin{cases}
w_{rr} + \dfrac{1}{r} w_r + \dfrac{1}{r^2} w_{\theta\theta} = 0, & (r, \theta) \in (0, 1) \times (0, 2\pi) \\
w(1, \theta) = \sin^2\theta = \dfrac{1}{2} - \dfrac{1}{2}\cos(2\theta).
\end{cases}
\]

As seen before, we then get that the general solution has the form

\[
w(r, \theta) = C_0 + \sum_{n=1}^{\infty} r^n \left[ A_n \cos(n\theta) + B_n \sin(n\theta) \right].
\]


Imposing the boundary condition we have

\[
\frac{1}{2} - \frac{1}{2}\cos(2\theta) = w(1, \theta) = C_0 + \sum_{n=1}^{\infty} \left[ A_n \cos(n\theta) + B_n \sin(n\theta) \right],
\]

from which we deduce that C_0 = 1/2, A_2 = -1/2 and all the other coefficients are zero. Thus, since r^2 \cos(2\theta) = x^2 - y^2, the final solution is

\[
w(r, \theta) = \frac{1}{2} - \frac{1}{2} r^2 \cos(2\theta) \implies u(x, y) = \frac{1}{2}\left(1 - x^2 + y^2\right).
\]
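One can verify symbolically that the harmonic function matching the boundary data y^2 is u(x, y) = (1 - x^2 + y^2)/2 (note the sign: r^2\cos(2\theta) = x^2 - y^2, so the x^2 term enters with a minus and the y^2 term with a plus); a sketch assuming sympy is available:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
u = (1 - x**2 + y**2) / 2

# harmonic in the disc
assert sp.diff(u, x, 2) + sp.diff(u, y, 2) == 0
# on the boundary x^2 + y^2 = 1 we can substitute x^2 = 1 - y^2
assert sp.expand(u.subs(x**2, 1 - y**2)) == y**2
```
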
Example 8.6.3. Let us consider the problem

\[
\begin{cases}
\Delta u = 0, & \text{on } D = \{(x, y) \in \mathbb{R}^2 : 1 \le \sqrt{x^2 + y^2} \le 2\} \\
u(x, y) = 3x/2, & \text{on } \{x^2 + y^2 = 4\} \\
u(x, y) = y, & \text{on } \{x^2 + y^2 = 1\}.
\end{cases}
\]

In polar coordinates this problem reads as

\[
\begin{cases}
w_{rr} + \dfrac{1}{r} w_r + \dfrac{1}{r^2} w_{\theta\theta} = 0, & \text{in } D = \{r \in [1, 2], \ \theta \in [0, 2\pi)\} \\
w(2, \theta) = 3\cos\theta \\
w(1, \theta) = \sin\theta.
\end{cases}
\]

If we write w(r, \theta) = R(r)\Theta(\theta), the ODEs for R and \Theta are the same as before, but now the boundary conditions for R have changed. The general solution is

\[
w(r, \theta) = E + F \log r + \sum_{n=1}^{\infty} \left[ A_n r^n \cos(n\theta) + B_n r^n \sin(n\theta) + C_n r^{-n} \cos(n\theta) + D_n r^{-n} \sin(n\theta) \right].
\]

Using the boundary condition w(1, \theta) = \sin\theta, we have

\[
\sin\theta = w(1, \theta) = E + \sum_{n=1}^{\infty} \left[ (A_n + C_n)\cos(n\theta) + (B_n + D_n)\sin(n\theta) \right],
\]

which implies E = 0, B_1 + D_1 = 1, B_n + D_n = 0 for all n \ge 2 and A_n + C_n = 0 for all n \ge 1. By the condition w(2, \theta) = 3\cos\theta, we have

\[
3\cos\theta = w(2, \theta) = E + F \log(2) + \sum_{n=1}^{\infty} \left[ (2^n A_n + 2^{-n} C_n)\cos(n\theta) + (2^n B_n + 2^{-n} D_n)\sin(n\theta) \right].
\]


This implies that E + F\log(2) = 0, 2^n B_n + 2^{-n} D_n = 0 for all n \ge 1, 2A_1 + C_1/2 = 3, and 2^n A_n + 2^{-n} C_n = 0 for all n \ge 2. Combining all this information we get

\[
\begin{aligned}
&E = 0, \ E + F\log(2) = 0 &&\implies E = F = 0 \\
&A_1 + C_1 = 0, \ 2A_1 + \tfrac{1}{2} C_1 = 3 &&\implies A_1 = 2, \ C_1 = -2 \\
&B_1 + D_1 = 1, \ 2B_1 + \tfrac{1}{2} D_1 = 0 &&\implies B_1 = -\tfrac{1}{3}, \ D_1 = \tfrac{4}{3},
\end{aligned}
\]

and for n \ge 2

\[
\begin{aligned}
&A_n + C_n = 0, \ 2^n A_n + 2^{-n} C_n = 0 &&\implies A_n = C_n = 0 \\
&B_n + D_n = 0, \ 2^n B_n + 2^{-n} D_n = 0 &&\implies B_n = D_n = 0.
\end{aligned}
\]

This proves that

\[
w(r, \theta) = 2r\cos\theta - \frac{1}{3} r\sin\theta - 2r^{-1}\cos\theta + \frac{4}{3} r^{-1}\sin\theta.
\]
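The resulting w = 2r\cos\theta - (1/3)r\sin\theta - 2r^{-1}\cos\theta + (4/3)r^{-1}\sin\theta can be checked against both the equation and the boundary data on the two circles; a sketch assuming sympy is available:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
w = (2 * r * sp.cos(th) - r * sp.sin(th) / 3
     - 2 * sp.cos(th) / r + 4 * sp.sin(th) / (3 * r))

# Laplacian in polar coordinates
lap = sp.diff(w, r, 2) + sp.diff(w, r) / r + sp.diff(w, th, 2) / r**2
assert sp.simplify(lap) == 0

# boundary conditions on the inner and outer circles
assert sp.simplify(w.subs(r, 1) - sp.sin(th)) == 0
assert sp.simplify(w.subs(r, 2) - 3 * sp.cos(th)) == 0
```
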
Example 8.6.4. Let us now consider Laplace's equation on an annular sector of angle \gamma \in (0, 2\pi) and radii 1 and 2, i.e., D = \{(r, \theta) : r \in (1, 2), \ \theta \in (0, \gamma)\}. To solve such a problem we rely on the formula for the Laplacian in polar coordinates and on the method of separation of variables. Assume that w is prescribed on \partial D and that w(r, 0) = w(r, \gamma) = 0 for all r \in (1, 2). If we look for solutions of the form w(r, \theta) = R(r)\Theta(\theta), to enforce these boundary conditions we impose \Theta(0) = \Theta(\gamma) = 0. Hence we have

\[
\Theta_n(\theta) = A_n \sin\left(\frac{n\pi}{\gamma}\,\theta\right).
\]

Then, the ODE for R_n becomes

\[
r^2 R_n''(r) + r R_n'(r) - \lambda_n R_n(r) = 0,
\]

where \lambda_n = (n\pi/\gamma)^2. Looking for solutions of the form r^\alpha, we get

\[
0 = \alpha(\alpha - 1) + \alpha - \lambda_n = \alpha^2 - \lambda_n \implies \alpha = \pm\sqrt{\lambda_n}.
\]

Hence R_n(r) = C_n r^{n\pi/\gamma} + D_n r^{-n\pi/\gamma} and the general solution in this case is given by

\[
w(r, \theta) = \sum_{n=1}^{\infty} \left[ A_n \sin\left(\frac{n\pi}{\gamma}\,\theta\right) r^{n\pi/\gamma} + B_n \sin\left(\frac{n\pi}{\gamma}\,\theta\right) r^{-n\pi/\gamma} \right],
\]

and the coefficients A_n and B_n are found by expanding the boundary conditions w(1, \theta) and w(2, \theta) over the interval \theta \in [0, \gamma] using the Fourier basis \{\sin(n\pi\theta/\gamma)\}.
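That both powers r^{\pm n\pi/\gamma} solve the Euler-type equation r^2 R'' + r R' - \lambda_n R = 0 can be verified symbolically, with n and \gamma left as symbols; a sketch assuming sympy is available:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
n, gamma = sp.symbols('n gamma', positive=True)
lam = (n * sp.pi / gamma)**2  # lambda_n = (n pi / gamma)^2

# both powers r^{+n pi/gamma} and r^{-n pi/gamma} solve r^2 R'' + r R' - lambda_n R = 0
for R in (r**(n * sp.pi / gamma), r**(-n * sp.pi / gamma)):
    residual = r**2 * sp.diff(R, r, 2) + r * sp.diff(R, r) - lam * R
    assert sp.simplify(residual) == 0
```
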


Remark 8.6.5. If the sector is D = \{(r, \theta) : r \in [0, 2), \ \theta \in (0, \gamma)\} with boundary conditions w(r, 0) = w(r, \gamma) = 0 for all r \in (0, 2), then the general solution is of the form

\[
w(r, \theta) = \sum_{n=1}^{\infty} A_n \sin\left(\frac{n\pi}{\gamma}\,\theta\right) r^{n\pi/\gamma},
\]

since the negative powers of r are singular at the origin and should be discarded.

8.7. A “real life” example


Consider a pair of infinite, grounded conducting sheets separated by a distance d. Suppose that there is a conductor connecting the two metal sheets, held at potential U_0 \sin(2\pi x/d). We want to understand what the potential between the plates is.
Figure 8.4: Configuration of the conductor between two plates.

We know that the electric potential satisfies Laplace's equation in the region between the plates (since there is no charge in there). Therefore we want to solve the following Dirichlet problem

\[
\begin{cases}
\Delta u = 0, & \text{in } (0, d) \times \mathbb{R}_+ \\
u(x, 0) = U_0 \sin(2\pi x/d), \\
u(0, y) = u(d, y) = 0.
\end{cases}
\]

Note that there is an additional implicit boundary condition: we would like the potential to go to zero in the "open" spatial direction, which in formulas translates to

\[
\lim_{y \to \infty} u(x, y) = 0. \tag{8.7.1}
\]


Let us suppose that u(x, y) = \sum_{n \in \mathbb{N}} X_n(x) Y_n(y), with X_n(x)Y_n(y) harmonic for all n \in \mathbb{N}. This leads to the ODEs

\[
\begin{cases}
X_n''(x) = -\lambda_n X_n(x), & X_n(0) = X_n(d) = 0 \\
Y_n''(y) = \lambda_n Y_n(y).
\end{cases}
\]

The solution to the first ODE is

\[
X_n(x) = A_n \sin\left(\sqrt{\lambda_n}\, x\right), \quad \text{with } \lambda_n = \left(\frac{n\pi}{d}\right)^2 \text{ for all } n \in \mathbb{N}.
\]

On the other hand, the solution to the second ODE is

\[
Y_n(y) = C_n \sinh\left(\sqrt{\lambda_n}\, y\right) + D_n \cosh\left(\sqrt{\lambda_n}\, y\right).
\]

By condition (8.7.1), we deduce

\[
\lim_{y \to \infty} \left( C_n \frac{e^{\sqrt{\lambda_n}\, y} - e^{-\sqrt{\lambda_n}\, y}}{2} + D_n \frac{e^{\sqrt{\lambda_n}\, y} + e^{-\sqrt{\lambda_n}\, y}}{2} \right)
= \lim_{y \to \infty} \left( \frac{C_n + D_n}{2}\, e^{\sqrt{\lambda_n}\, y} + \frac{D_n - C_n}{2}\, e^{-\sqrt{\lambda_n}\, y} \right) = 0.
\]

Therefore we have that C_n + D_n = 0 for all n \in \mathbb{N}. Hence

\[
Y_n(y) = D_n \left( \cosh\left(\sqrt{\lambda_n}\, y\right) - \sinh\left(\sqrt{\lambda_n}\, y\right) \right) = D_n e^{-\sqrt{\lambda_n}\, y}
\]

and the general solution is given by

\[
u(x, y) = \sum_{n \in \mathbb{N}} A_n \sin\left(\frac{n\pi}{d}\, x\right) e^{-n\pi y/d}.
\]

By the condition u(x, 0) = U_0 \sin(2\pi x/d), we deduce that A_2 = U_0 and A_n = 0 for all n \neq 2, and the final solution is

\[
u(x, y) = U_0 \sin\left(\frac{2\pi}{d}\, x\right) e^{-2\pi y/d}.
\]
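As a final check (a sketch assuming sympy is available; the symbol names are ours), the candidate potential U_0 \sin(2\pi x/d)\, e^{-2\pi y/d} is harmonic, matches the data on all three boundary pieces, and decays as y \to \infty as required by (8.7.1):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
d, U0 = sp.symbols('d U_0', positive=True)

u = U0 * sp.sin(2 * sp.pi * x / d) * sp.exp(-2 * sp.pi * y / d)

assert sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2)) == 0  # harmonic
assert u.subs(y, 0) == U0 * sp.sin(2 * sp.pi * x / d)         # plate at y = 0
assert u.subs(x, 0) == 0 and u.subs(x, d) == 0                # grounded sheets
assert sp.limit(u, y, sp.oo) == 0                             # decay condition (8.7.1)
```
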

