Lectures On Finite Element Method
Lectures On Finite Element Method
Editors
3
4 Contents
7 Stokes problem 57
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
7.2 Finite Element formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
7.3 Examples of elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
7.4 Stabilization techniques to circumvent the Babuska-Brezzi condition . . . . . . . . . . . 65
7.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
References 103
Preface
5
Acknowledgement
−∇ · (κ ∇u) = f in Ω,
u = u0 on ΓD ⊂ ∂Ω, (1.1)
−κ ∇ u · n = g on ΓN ⊂ ∂Ω,
where u = u( x ) is some unknown field, κ : Ω → R(d×d) is some given coefficient matrix and f = f ( x )
is a given source function. The boundary ∂Ω of Ω is a union of two subboundaries, ∂Ω = ΓD ∪ ΓN .
where ΓD is the Dirichlet boundary and ΓN is the Neumann boundary. The Dirichlet boundary
condition, u = u0 , specifies a prescribed value for the unknown u on ΓD . The Neumann boundary
condition, −κ ∇u · n = g, specifies a prescribed value for the (negative) normal derivative of u on
ΓN . We often call the Dirichlet boundary condition an essential boundary condition, while we call
Neumann boundary condition a natural boundary condition.
Let us look at one of the many examples where the equations (4.55) arises. Let u = u( x ) be the
temperature in a body Ω ⊂ Rd at a point x in the body, let q = q( x ) be the heat flux at x, let f be the
heat source and let ω ⊂ Ω be a small test volume. Conservation of energy gives
dE
Z Z
= q · n ds − f dx = 0, (1.2)
dt ∂ω ω
that is, the outflow of the energy over the boundary ∂ω is equal to the energy emitted by the heat
source function f . Fourier’s law relates the heat flux to the temperature in the following way:
q = −κ ∇u. (1.3)
This gives us Z Z
−κ ∇u · n ds = f dx. (1.4)
∂ω ω
1
2 Chapter 1. The finite element method
Figure 1.1: Sketch of the domain Ω and the two subboundaries ΓD and
ΓN .
Equation (1.6) holds for all test volumes ω ⊂ Ω. Thus, if u, κ and f are regular enough, we obtain
Z
(−∇ · (κ ∇u) − f ) dx = 0 ∀ ω ⊂ Ω (1.7)
ω
⇒ −∇ · (κ ∇u) = f in Ω. (1.8)
u = u0 on ΓD
(1.9)
−κ ∇ u · n = g on ΓN
(recall that q = −κ ∇u). This is illustrated in Figure 1.2. If we choose the special case where κ = 1, we
obtain the more standard Poisson equation
−∆u = f in Ω. (1.10)
u = u0 on ΓD (1.11)
∂u
− =g on ΓN . (1.12)
∂n
Figure 1.2: Sketch of the domain Ω and the two subboundaries ΓD and
ΓN .
4. Solution algorithm.
−∇ · (κ ∇u) = f in Ω,
u = u0 on ΓD ⊂ ∂Ω, (1.13)
−κ ∇ u · n = g on ΓN ⊂ ∂Ω.
Recall that ∇u · n = ∂u
∂n .
To obtain the weak form we integrate (sometimes integration by parts is needed) the product of the
strong form of the equation multiplied by a test function v ∈ V̂, where V̂ is called a test space:
Z Z
−∇ · (κ ∇u)v dx = f v dx ∀ v ∈ V̂ (1.14)
Ω Ω
Z Z Z
∂u
κ ∇u · ∇v dx − κ v ds = f v dx ∀ v ∈ V̂. (1.15)
Ω ∂Ω ∂n Ω
Letting v = 0 on the Dirichlet boundary, ΓD , the integral over the boundary becomes
Z Z Z
∂u ∂u
−κ v ds = −κ v ds = gv ds. (1.18)
∂Ω ∂n ΓN ∂n ΓN
and the trial space V, containing the unknown function u, is defined similar to V̂ but with a shifted
Dirichlet condition:
V = Hu10 ,ΓD (Ω) = {v ∈ H 1 (Ω) : v = u0 on ΓD }. (1.21)
We discretize the variational problem (1.19) by looking for a solution in a discrete trial space and
using a discrete test function. The finite element problem is: find uh ∈ Vh ⊂ V such that
Z Z Z
κ ∇uh · ∇v dx = f v dx − gv ds ∀ v ∈ V̂h ⊂ V̂, (1.22)
Ω Ω ΓN
Our question is now: How do we solve the discrete variational problem (1.22)? We introduce a basis
for V and Vh , and make an Anzats:
N
uh ( x ) = ∑ U j φ j ( x ), (1.23)
j =1
where
φj : Ω → R, j = 1, . . . , N, (1.24)
is basis for Vh . Inserting this into equation (1.22) and letting v = φ̂i , i = 1, . . . , N, we obtain
!
Z N Z Z
Ω
κ∇ ∑ Uj φj · ∇φ̂i dx =
Ω
f φ̂i dx −
ΓN
gφ̂i ds, i = 1, 2, . . . , N,
j =1
(1.25)
N Z Z Z
∑ Uj Ω
κ ∇φj · ∇φ̂i dx =
Ω
f φ̂i dx −
ΓN
gφ̂i ds, i = 1, 2, . . . , N.
j =1
Chapter 1. The finite element method 5
N
∑ Aij Uj = bi , i = 1, 2, . . . , N,
j =1 (1.26)
AU = b,
where
Z
Aij = κ ∇φj · ∇φ̂i dx,
ZΩ Z (1.27)
bi = f φ̂i dx − gφ̂i ds.
Ω ΓN
1.3 Solving the Poisson equation with FEM using abstract formalism
The strong form of the Poisson equation written as a linear system reads
Au = f ,
(1.28)
(+ BCs ),
h Au, vi = h f , vi (1.29)
Define
where a is a bilinear form (not necessarily an inner product) and L is a linear form (a functional):
a : V × V̂ → R,
(1.31)
L : V̂ → R.
In the finite element problem, we look for a discrete solution: find uh ∈ Vh such that
N
uh ( x ) = ∑ U j φ j ( x ). (1.34)
j =1
N
∑ Aij Uj = bi , i = 1, 2, . . . , N,
j =1 (1.36)
AU = b,
where
a(u, v) = L(v) ∀ v ∈ V,
(1.38)
a(uh , v) = L(v) ∀ v ∈ Vh ⊂ V.
Using these results and the linearity of the bilinear form, we get
or written symbolically
u − uh ⊥ a Vh . (1.40)
This property is called Galerkin orthogonality. The error, e = u − uh , is orthogonal (in the sense of the
bilinear form a) to the test space Vh . Thus, uh is the best possible approximation of u in Vh . We will
continue this concept in the next chapter.
Chapter 1. The finite element method 7
The finite element method (FEM) is a general framework for numerical solution of PDEs. FEM
is written in the language of functional analysis, therefore we need to introduce basic concepts and
notations from functional analysis and Sobolev spaces.
The fundamental idea is that functions are vectors in a function space which is a vector space. The
properties of a vector space is briefly reviewed below. Then we may equip the spaces with norms
and inner-products which allow us to quantify, e.g., magnitudes and differences between functions.
A fundament mathematical difficulty is, however, that the function spaces typically will be infinite
dimensional in the continuous setting, but this difficulty will not be addressed here.
• addition + : V × V → V
• multiplication · : F × V → V
2. + is associative: u + (v + w) = (u + v) + w
5. · is distributive: c · (u + v) = c · u + c · v
6. · is distributive: (c + d) · v = c · v + d · v
7. · is associative: c · (d · v) = (c · d) · v
9
10 Chapter 2. A short look at functional analysis and Sobolev spaces
Examples:
1. V = R
2. V = R3
4. V = {v : [0, 1] → R | v is continuous}
h·, ·i : V × V → F,
Examples:
1. V = R N , hv, wi = ∑iN=1 vi wi
2. V = `2 , hv, wi = ∑i∞=1 vi wi
3. V = C ∞ (Ω),
R
hv, wi = Ω vw dx
`2 is the space of all sequences (or infinite vectors) that satisfy ∑i v2i < ∞.
hv, wi = 0.
Examples:
k · k : V → R,
Chapter 2. A short look at functional analysis and Sobolev spaces 11
Examples:
p 1/p
1. V = R N , kvk p = ∑iN=1 vi , 16p<∞
1. V = R, k v k = | v |, vn = 1
n
2. V = R, k v k = | v |, vn = sin n
n
xi
3. V = C ([0, 1]), kvk = kvk∞ , vn ( x ) = ∑in=0 i!
4.
x ∈ [−1, − n1 ]
−1,
V = C ([−1, 1]), vn ( x ) = nx, x ∈ (− n1 , n1 )
x ∈ [ n1 , 1]
1,
R1
This sequence is Cauchy in the L1 -norm, kvk1 = −1 |v( x )| dx, but not Cauchy in the max norm,
kvk∞ = maxx∈[−1,1] |v( x )|, because C ([−1, 1]) with k · k∞ is not complete.
Figure 2.1 and 2.2 show the Cauchy sequence for example 1 and 2.
1
Figure 2.1: Cauchy sequence: n for n = 1, . . . , 100.
sin n
Figure 2.2: Cauchy sequence: n for n = 1, . . . , 100.
Chapter 2. A short look at functional analysis and Sobolev spaces 13
Vector space
Inner
Banach
product
space
space
HILBERT SPACE
Figure 2.3: Venn diagram of the different spaces.
So far we have looked at a lot of definitions, let us now consider some important results.
Examples:
1. V = R, Tv = 2v , v̄ = 0
√
2. V = R+ , Tv = v+2/v
v̄ =
2 , 2
Examples:
1. v( x ) = √1 ,
x
Ω = (0, 1), v 6 ∈ L2 ( Ω )
2. v( x ) = 1
1 , Ω = (0, 1), v ∈ L2 ( Ω )
x4
Theorem 2.5. L2 with hv, wi =
R
Ω vw dx is a Hilbert space.
Definition 2.11. Weak derivative (first order)
Let v ∈ L2 (Ω). The weak derivative of v (if it exists), is a function ∂v
∂xi ∈ L2 (Ω) satisfying,
Z Z
∂v ∂φ
φ dx = − v dx, ∀ φ ∈ C0∞ (Ω).
Ω ∂xi Ω ∂xi
where
∂|α|
∂α φ = .
∂ α1 x 1 ∂ α2 x 2 . . . ∂ α n x n
Lemma 2.1. A weak derivative (if it exist), is unique.
Lemma 2.2. A (strong) derivative (if it exist), is a weak derivative.
Definition 2.13. The Sobolev space H m
The sobolev space H m is the subspace of functions v in L2 (Ω), which possess weak derivatives ∂α for |α| 6 m.
The corresponding norm is
s Z s
kvk H k = ∑ |∂α v|2 dx ≡ ∑ k∂α vk2L2 (Ω)
| α |6k Ω | α |6k
and seminorm s Z s
|v| H k = ∑ |∂α v|2 dx ≡ ∑ k∂α vk2L2 (Ω) .
|α|=k Ω |α|=k
Chapter 2. A short look at functional analysis and Sobolev spaces 15
3.1 Introduction
Sobolev spaces are fundamental tools in the analysis of partial differential equations and also for finite
element methods. Many books provide a detailed and comprehensive analysis of these spaces that in
themselves deserve significant attention if one wishes to understand the foundation that the analysis
of partial differential equations relies on. In this chapter we will however not provide a comprehensive
mathematical description of these spaces, but rather try to provide insight into their use.
We will here provide the definition of these spaces. Further we will show typical functions, useful
for finite element methods, that are in some but not all spaces. We also show how different norms
capture different characteristics.
Sobolev spaces are generalizations of L p spaces. L p spaces are function spaces defined as follows. Let
u be a scalar valued function on the domain Ω, which for the moment will be assumed to be the unit
interval (0, 1). Then
Z 1
kuk p = ( |u| p dx )1/p .
0
L p (Ω) consists of all functions for which kuk p < ∞. Sobolev spaces generalize L p spaces by also
including the derivatives. On the unit interval, let
Z
∂u
kuk p,k = ( ∑ |( ∂x )i | p dx)1/p .
Ω i ≤k
(3.1)
p p
Then the Sobolev space Wk (Ω) consists of all functions with kuk p,k < ∞. Wk is a so-called Banach
space - that is a complete normed vector space. The corresponding semi-norm, that only include the
highest order derivative is
Z
∂
|u| p,k = ( ∑ |( )i u| p dx )1/p . (3.2)
Ω i =k ∂x
The case p = 2 is special in the sense that (3.1) defines an inner product. The Banach space then forms
a Hilbert space and these named with H in Hilbert’s honor. That is H k (Ω) = W 2,k (Ω).
For the most part, we will employ the two spaces L2 (Ω) and H 1 (Ω), but also H 2 and H −1 will be
used. The difference between the norm in L2 (Ω) and H 1 (Ω) is illustrated in the following example.
17
18 Chapter 3. Crash course in Sobolev Spaces
Figure 3.1: Left picture shows sin(πx ) on the unit interval, while the right
picture shows sin(10πx ).
Example 3.1. Norms of sin(kπx ) Consider the functions uk = sin(kπx ) on the unit interval. Figure 3.1
shows the function for k = 1 and k = 10. Clearly, the L2 and L7 behave similarly in the sense that they remain
the same as k increases. On the other hand, the H 1 norm of uk increases dramatically as k increases. The
following code shows how the norms are computed using FEniCS.
Python code
1 from dolfin import *
2
3 N = 10000
4 mesh = UnitInterval(N)
5 V = FunctionSpace(mesh, "Lagrange", 1)
6
7 for k in [1, 100]:
8 u_ex = Expression("sin(k*pi*x[0])", k=k)
9 u = project(u_ex, V)
10
11 L2_norm = sqrt(assemble(u**2*dx))
12 print "L2 norm of sin(%d pi x) %e " % (k, L2_norm)
13
14 L7_norm = pow(assemble(abs(u)**7*dx), 1.0/7)
15 print "L7 norm of sin(%d pi x) %e " % (k, L7_norm)
16
17 H1_norm = sqrt(assemble(u*u*dx + inner(grad(u), grad(u))*dx ))
18 print "H1 norm of sin(%d pi x) %e" % (k, H1_norm)
k\norm L2 L7 H1
1 0.71 0.84 2.3
10 0.71 0.84 22
100 0.71 0.84 222
Table 3.1: The L2 , L7 , and H 1 norms of sin(kπx ) for k=1, 10, and 100.
The Sobolev space with k derivatives in L2 (Ω) was denoted by H k (Ω). The subspace of H k with
k − 1 derivatives equal to zero at the boundary is denoted H0k (Ω). For example, H01 (Ω) consists of
Chapter 3. Crash course in Sobolev Spaces 19
all functions in H 1 that are zero at the boundary. Similarly, we may also defined a subspace Hg1 (Ω)
which consists of all functions in H 1 (Ω) that are equal to the function g on the boundary.
Mathematically, it is somewhat tricky to defined that a function in H 1 is equal to another function
as it can not be done in a pointwise sense. This difficulty is resolved by the concept of a trace usually
denoted by T. The concept of a trace is tricky, for example if T takes a function u in H 1 (Ω) and restrict
it to ∂Ω then Tu 6∈ H 1 (∂Ω). In fact, in general we only have Tu ∈ H 1/2 (∂Ω).
The norm k · k p,k defined in 3.1 is a norm which means that kuk p,k > 0 for all u 6= 0. On the other
hand | · | p,k is a semi-norm, meaning that |u| p,k ≥ 0 for all u. The space H 1 (Ω) is defined by the norm
Z
k u k1 = ( u2 + (∇u)2 dx )1/2
Ω
and contains all functions for which kuk1 ≤ ∞. Often we consider subspaces of H 1 satisfying the
Dirichlet boundary conditions. The most common space is denoted H01 . This space contains all
functions in H 1 that are zero on the boundary. The semi-norm | · |1 defined as
Z
| u |1 = ( (∇u)2 dx )1/2
Ω
is a norm on the subspace H01 . In fact, as we will see later, Poincare’s lemma ensures that k · k1 and
| · |1 are equivalent norms on H01 (see Exercise 3.5.
The above functions sin(kπx ) are smooth functions that for any k are infinitely many times differen-
tiable. They are therefore members of any Soblev space.
On the other had, the step function in upper picture in Figure 3.2 is discontinuous in x = 0.2 and
x = 0.4. Obviously, the function is in L2 (0, 1), but the function is not in H 1 (0, 1) since the derivative of
the function consists of Dirac’s delta functions1 that are ∞ at x = 0.2 and −∞ in x = 0.4.
The hat function in the lower picture in Figure 3.2 is a typical first order finite element function.
The function is in both L2 (0, 1) and H 1 (0, 1) (see Exercise 3.3). In general, functions in H q are required
to be in C q−1 , where C k is the class where the k’th derivatives exist and are continuous.
From Taylor series we know that a f ( x + h) may be approximated by f ( x ) and a polynomial in h that
depends on the derivatives of f . To be precise,
| f ( x + h) − ( Pk f )( x )| ≤ O(hk+1 ).
Here, ( Pk f )( x ) is a polynomial of degree k in h, f (n) denotes the n’th derivative of f , and the error
will be of order k + 1 in h. To be precise,
k
f (n) ( x ) n
( Pk f )( x ) = f ( x ) + ∑ n!
h .
n =1
In general, approximation by Taylor series bears strong requirement on the smoothness of the solution
which needs to be differentiable in a point-wise sense. However, in Sobolev spaces we have the very
usefull approximation property
This property is used extensively in analysis of finite element methods. The above approximation
property is often called the Bramble-Hilbert lemma for k ≥ 2 and the case k = 1 was included by a
special interpolation operator by Clement, the so-called Clement interpolant. For proof, see e.g. Braess
[2007], Brenner and Scott [2008].
We remember that for −∆ on the unit interval (0, 1), the eigenvalues and eigenvectors are (πk)2 and
sin(πkx ), k = 1, . . . , ∞, respectively. It is natural to expect that the eigenvalues in the discrete setting
approximate the continuous eigenvalues such that the minimal eigenvalue is ≈ π 2 , while the maximal
eigenvalue is ≈ π 2 /h2 , where k = 1/h is the largest k that may be represented on a mesh with element
size h. Computing the eigenvalues of the finite element stiffness matrix in FEniCS as2 ,
Python code
1 A = assemble_system(inner(grad(u), grad(v))*dx, Constant(0)*v*dx, bc)
reveals that the eigenvalues are differently scaled. In fact, the minimal eigenvalue is ≈ π 2 h and
that the maximal eigenvalue is ≈ π 2 /h. The reason is that the finite element method introduces a
mesh-dependent scaling. To estimate the continuous eigenvalues we instead compute the eigenvalues
of the generalized eigenvalue problem,
Ax = λMx, (3.3)
where A is the above mentioned stiffness matrix and M is the mass matrix (or the finite element
identity matrix)
Python code
1 M = assemble_system(inner(u*v*dx, Constant(0)*v*dx, bc)
Figure 3.3 shows the eigenvalues of −∆, A, and (3.3) based on the following code:
Python code
1 from dolfin import *
2 import numpy
3 from scipy import linalg, matrix
4
5 def boundary(x, on_boundary): return on_boundary
6
7 for N in [100, 1000]:
8 mesh = UnitIntervalMesh(N)
9 V = FunctionSpace(mesh, "Lagrange", 1)
2 We use the assemble_system function to enforce the Dirichlet condition in symmetric fashion.
22 Chapter 3. Crash course in Sobolev Spaces
10 u = TrialFunction(V)
11 v = TestFunction(V)
12
13 bc = DirichletBC(V, Constant(0), boundary)
14 A, _ = assemble_system(inner(grad(u), grad(v))*dx, Constant(0)*v*dx, bc)
15 M, _ = assemble_system(u*v*dx, Constant(0)*v*dx, bc)
16
17 AA = matrix(A.array())
18 MM = matrix(M.array())
19
20 k = numpy.arange(1, N, 1)
21 eig = pi**2*k**2
22
23 l1, v = linalg.eigh(AA)
24 l2, v = linalg.eigh(AA, MM)
25
26 print "l1 min, max ", min(l1), max(l1)
27 print "l2 min, max ", min(l2), max(l2)
28 print "eig min, max ", min(eig), max(eig)
29
30 import pylab
31 pylab.loglog(l1[2:], linewidth=5) # exclude the two smallest (they correspond to Dirichlet cond))
32 pylab.loglog(l2[2:], linewidth=5) # exclude the two smallest again
33 pylab.loglog(eig, linewidth=5)
34 pylab.legend(["eig(A)", "eig(A,M)", "cont. eig"], loc="upper left")
35 pylab.show()
From Figure 3.3 we see that that the eigenvalues of (3.3) and −∆ are close, while the eigenvalues
of A is differently scaled. We remark that we excluded the two smallest eigenvalues in the discretized
problems as they correspond to the Dirichlet conditions.
As will be discussed more thoroughly later, −∆ is a symmetric positive operator and can be thought
of as a infinite dimensional matrix that is symmetric and positive. It is also know from Riesz
representation theorem that if u solves the problem
−∆u = f , in Ω,
u = 0, on ∂Ω
Chapter 3. Crash course in Sobolev Spaces 23
then
| u | 1 = k f k −1 . (3.4)
This implicitly define the H −1 norm, although the definition then requires the solution of a Poisson
problem. For example, in the previous example where uk = sin(kπx ), we have already estimated that
| u k |1 = √πk
and therefore kuk k−1 = |(−∆)−1 uk |1 = √ 1 .
2 2kπ
Let us now generalize these considerations and consider a matrix (or differential operator) A which
is symmetric and positive. A has positive and real eigenvalues and defines an inner product which
may be represented in terms of eigenvalues and eigenfunctions. Let λi and ui be the eigenvalues and
eigenfunctions such that
Aui = λi ui
Then, x may be expanded in terms of the eigenfunctions ui as x = ∑i ci ui , where ci = ( x, ui ), and we
obtain
( x, x ) A = ( Ax, x ) = ( A ∑ ci ui , ∑ c j u j ) = (∑ λi ci ui , ∑ c j u j )
i j i j
Because A is symmetric, the egenfunctions ui are orthogonal to each other and we may choose a
normalized basis such that (ui , u j ) = δij . With this normalization, we simply obtain
A generalization of the A−inner product (with corresponding norm) to a Aq −inner product that
allow for both negative and franctional q is then as follows
and let U be the matrix with the eigenvectors as columns. The eigenvalues are normalized in the
sense that
U T MU = I
where I is the identity matrix. We obtain
U T AU = Λ or A = MUΛ( MU ) T ,
where Λ is a matrix with the eigenvalues λi on the diagonal. Hence also in terms of the generalized
eigenvalue problem (3.6) we obtain the A−norm as
k x k2A = x T MUΛ( MU )T x
and we may define fractional and negative norms in the same manner as (3.5), namely that
k x k2A,M,q = x T MUΛq ( MU )T x.
24 Chapter 3. Crash course in Sobolev Spaces
Defining the negative and fractional norms in terms of eigenvalues and eigenvectors is conve-
nient for small scale problems, but it is an expensive procedure because eigenvalue problems are
computationally demanding. It may, however, be tractable on subdomains, surfaces, or interfaces
of larger problems. We also remark that there are other ways of defining fractional and negative
norms. For example, one often used technique is via the Fourier series, c.f. e.g. Rauch [1997]. These
different definitions do in general not coincide, in particular because they typically have different
requirement on the domain or boundary conditions. One should also be careful when employing
the above definition with integer q > 1, in particular because boundary conditions requirements will
deviate from standard conditions in the Sobolev spaces for q > 1.
k \norm H1, q = 1 L2 , q = 0 H −1 , q = − 1
1 2.2 0.71 0.22
10 22 0.71 0.022
100 222 0.71 0.0022
Table 3.2: The L2 , L7 , and H 1 norms of sin(kπx ) for k=1, 10, and 100.
Python code
1 from dolfin import *
2 from numpy import matrix, diagflat, sqrt
3 from scipy import linalg, random
4
5 def boundary(x, on_boundary): return on_boundary
6
7 mesh = UnitIntervalMesh(200)
8 V = FunctionSpace(mesh, "Lagrange", 1)
9 u = TrialFunction(V)
10 v = TestFunction(V)
11 bc = DirichletBC(V, Constant(0), boundary)
12
13 A, _ = assemble_system(inner(grad(u), grad(v))*dx, Constant(0)*v*dx, bc)
14 M, _ = assemble_system(u*v*dx, Constant(0)*v*dx, bc)
15 AA = matrix(A.array())
16 MM = matrix(M.array())
17
18 l, v = linalg.eigh(AA, MM)
19 v = matrix(v)
20 l = matrix(diagflat(l))
21
22 for k in [1, 10, 100]:
23 u_ex = Expression("sin(k*pi*x[0])", k=k)
24 u = interpolate(u_ex, V)
25 x = matrix(u.vector().array())
26
27 H1_norm = pi*k*sqrt(2)/2
28 print "H1 norm of sin(%d pi x) %e (exact) " % (k, H1_norm)
29 H1_norm = sqrt(assemble(inner(grad(u), grad(u))*dx))
Chapter 3. Crash course in Sobolev Spaces 25
( f , v)
k f k A∗ = sup .
v ∈V (v, v) A
( f , v)
k f k−1 = sup .
v∈ H 1
(v, v)1
3.9 Exercises
Exercise 3.1. Compute the H 1 and L2 norms of a random function with values in (0, 1) on meshes representing
the unit interval of with 10, 100, and 1000 cells.
Exercise 3.2. Compute the H 1 and L2 norms of sin(kπx ) on the unit interval analytically and compare with
the values presented in Table 3.2.
Exercise 3.3. Compute the H 1 and L2 norms of the hat function in Picture 3.2.
Exercise 3.4. Consider the following finite element function u defined as
1 1
h x − h (0.5 − h ), x = (0.5 − h, 0.5)
(
u= − h x + 1h (0.5 − h),
1
x = (0.5, 0.5 + h)
0, elsewhere
26 Chapter 3. Crash course in Sobolev Spaces
That is, it corresponds to the hat function in Figure 3.2, where u(0.5) = 1 and the hat function is zero every
where in (0, 0.5 − h) and (0.5 + h, 1). Compute the H 1 and L2 norms of this function analytically, and the L2 ,
H 1 and H −1 norms numerically for h = 10, 100 and 1000.
Exercise 3.5. Let Ω = (0, 1) then for all functions in H 1 (Ω) Poincaré’s inequality states that
∂u
| u | L2 ≤ C | | 2
∂x L
Use this inequality to show that the H 1 semi-norm defines a norm equivalent with the standard H 1 norm on
H01 (Ω).
4 Finite element error estimate
By Anders Logg, Kent–Andre Mardal
4.1 Ingredients
We have used the FEM to compute an approximate solution, uh , of a PDE. Fundamental question:
How large is the error e = u − uh ? To be able to estimate the error, we need some ingredients:
1. Galerkin orthogonality
2. Interpolation estimates
a(u − uh , v) = 0 ∀ v ∈ Vh ⊂ V. (4.3)
for some norm. We need to be able to estimate infv∈Vh ku − vk or at least get a sharp upper bound.
We will do this by estimating ku − vk for a particular (a good) choice of v!
27
28 Chapter 4. Finite element error estimate
Let πh u be a piecewise constant approximation of u( x ) (1D). Then for x ∈ ( xi−1 , xi ], from the
theory of Taylor expansion, we have
x̄i
z }| {
x i −1 + x i Z x 0
u( x ) = u + u (y) dy.
2 x̄i
| {z }
≡πh u
which leads to Z x
|u − πh u| = | u0 (y) dy | .
x̄i
Z x Z x
1/2 Z x 1/2 !2
i
6∑ 2
1 dy · 0 2
(u (y)) dy dx
i x i −1 x̄i x̄i
Z x Z x Z x
i
=∑ | 12 dy| · | (u0 (y))2 dy| dx
i x i −1 x̄i x̄i
Z x Z x
i x i −1 + x i
=∑ |x − |· (u0 (y))2 dy dx
i x i −1 2 x̄i
Z x Z x
hi i i
6∑ (u0 (y))2 dy dx
i
2 x i −1 x i −1
h2i
Z x
i
=∑ (u0 (y))2 dy
i
2 x i −1
Z b
1 1
6 (h u0 (y))2 dy = khu0 k2L2 ,
2 a 2
where hi = xi − xi−1 and h = maxi hi . Thus, we have found an interpolation estimate
1
ku − πh uk L2 6 √ khu0 k L2 . (4.5)
2
4.1.3 Coercivity
Definition 4.1. Coercive
A bilinear form a : H × H → R is called coercive if there exists a constant α > 0 such that
1. a priori: e = e(u)
2. a posteriori: e = e(uh )
We have used Cauchy–Schwartz inequality ones. Now we divide both sides by kek E and obtain
ku − uh k E 6 ku − vk E ∀ v ∈ Vh . (4.13)
Thus, the FEM solution is the optimal solution in the energy Norm! We combine this with the
interpolation estimate (4.5), by setting v = πh u:
ku − uh k E 6 ku − πh uk E (4.14)
6 C ( p, q)khq+1− p D q+1 uk. (4.15)
For example in the Poisson problem with piecewise linear functions (q = 1), we have
rZ
kvk E = |∇v|2 dx.
Ω
30 Chapter 4. Finite element error estimate
A priori error estimate in the V–norm that does not assume symmetry. From coersivity we get
1
kek2V 6 a(e, e) (4.17)
α
1
= a(e, u − v + v − uh ) (4.18)
α
1
= a(e, u − v) (from Galerkin Orthogonality) (4.19)
α
C
6 k e kV k u − v kV . (4.20)
α
Here we assumed boundedness of a. By dividing both sides by kekV , we get an inequality known as
Cea’s lemma.
C
kekV 6 ku − vkV ∀ v ∈ Vh (4.21)
α
As before, we can use an interpolation estimate to obtain
4.2.2 A posteriori error estimate for the Poisson problem in the energy norm
We will now derive an a posteriorierror estimate for the Poisson problem. To do this we need the
following interpolation estimates:
ke − πh ek T 6 C h T k∇ek T , (4.23)
p
ke − πh ek∂T 6 C h T k∇ekωT , (4.24)
where ωT is the patch of of elements surrounding T. Note that the constant C will change throughout
the derivation. We will also need Cauchy’s inequality,
b2
ab 6 δ a2 + , a, b, δ > 0. (4.25)
4δ
Recall that the energy–norm for the Poisson problem is
rZ
kvk E = |∇v|2 dx.
Ω
Chapter 4. Finite element error estimate 31
Let us explain a bit before we continue. In equation (4.27) we added the term a(e, πh e), which from
Galerkin orthogonality is zero (since πh e ∈ Vh ). We used integration by part to get equation (4.31). In
the first term one the right-hand side of equation (4.33), we insert the residual, R ≡ f + ∆uh . For the
second term, we look at surface integral over two neighboring facets (S), for all S, see figure 4.1. There
normal components, n, will be pointing in opposite direction of each other and we get,
[∂n uh ] is called a jump. Note that until now, we have only used equalities. Let us look at equation
(4.34) in two terms.
Z
A≡ R(e − πh e) dx (4.36)
T
6 k RkT ke − πh ekT (4.37)
6 k Rk T C h T k∇ek T (4.38)
C h2T 1
6 k Rk2T + k∇ek2T (4.39)
2 2
and
1
Z
B≡ [∂n uh ] (e − πh e)dS (4.40)
2 ∂T
1
6 k [∂n uh ] k∂T k(e − πh e)k∂T (4.41)
2 √
C hT
6 k [∂n uh ] k∂T k∇ekωT (4.42)
2
C hT
6 k [∂n uh ] k2∂T + ek∇ek2ωT (4.43)
e
In equation (4.37) and (4.41), we used Cauchy–Schwarz inequality. For equation (4.38) and (4.42), we
used the interpolation estimates (4.23) and (4.24) respectively. Finally we used Cauchy’s inequality
with δ = 12 for equation (4.39) and δ = 4e for equation (4.43). Let us sum up what we have so far:
6 ∑ A+B (4.45)
T ∈Th
1 C h2T C hT
6 ∑ 2
k∇ek2T + ek∇ek2ωT +
2
k Rk2T +
e
k [∂n uh ] k2∂T . (4.46)
T ∈Th
(4.47)
where N is the maximum number of surrounding elements. We use this and get
C h2T
1 C hT
kek2E 6 + eN kek2E + ∑ k Rk2T + k [∂n uh ] k2∂T (4.49)
2 T ∈T
2 e
h
C h2T
1 C hT
− eN kek2E 6 ∑ k Rk2T + k [∂n uh ] k2∂T . (4.50)
2 T ∈T
2 e
h
Chapter 4. Finite element error estimate 33
Finally we chose e such that ( 12 − eN ) > 0 and we get the a posteriorierror estimate:
!1
2
kek E 6 C ∑ h2T k Rk2T + h T k [∂n uh ] k2∂T ≡E (4.51)
T
4.3 Adaptivity
In many applications we need the error to be less then a given tolerance (TOL). The error will typically
be large at some parts of the domain and small at other parts of the domain. We do not want to refine1
all the elements in T , since this will require a lot more computational power and memory. Instead
we want to only refine the elements where the error is big. Let us first rewrite the a posteriorierror
estimate (4.51) in a more general form,
!1
2
kek E 6 C ∑ γ2T ≡ E. (4.52)
T
1. Given TOL > 0, choose T such that the computational norm is minimized and kekV 6 TOL.
• Choose T
• Compute uh on T
• Compute E for uh
Exercise 4.1.
Let {φi }im=0 be the standard nodal basis functions for continuous piecewise linear approximation on Ω = (0, 1)
with constant mesh size h = 1/m.
(a) Take m = 10. Draw a picture of the basis functions φ0 , φ5 and φ10 .
(b) Draw a similar picture of the derivatives φ00 , φ50 and φ10
0 .
(a) Write down a finite element method for this equation using standard continuous piecewise linear polyno-
mials. Show that the degrees of freedom U for the solution u = ∑im=−11 Ui φi may be obtained by solving
the linear system ( A + M )U = b. The matrix A is often called the stiffness matrix and M is called the
mass matrix.
Demonstrate that if f ∈ Vh , then the mass matrix M may be used to compute the right-hand side vector b (the
load vector) for the finite element discretization of (4.53).
Exercise 4.3.
Consider the following partial differential equation:
00
−u = f in (0, 1),
u0 (0) = 0, (4.54)
0
u (1) = 0.
(a) Explain why there is something wrong with this equation (why it is not well-posed). Consider both
uniqueness and existence of solutions.
(b) If you would implement a (standard) finite element method for this equation, what would happen? How
would you notice that something was wrong?
where a = a( x ) is a positive definite n × n matrix at each x ∈ Ω. Prove that the stiffness matrix A (for a
suitable finite element space on Ω) is also positive definite, and explain why A is only positive semidefinite for
homogeneous Neumann conditions.
Implement a simple Python program that computes the stiffness and mass matrices on Ω = (0, 1) for any
given m ≥ 2, where m is the number of intervals partitioning (0, 1). Use A and M to solve equation (4.53)
for f ( x ) = sin(5πx ). Plot the solution and compare with the analytical solution. Demonstrate that the
approximate solution converges to the exact solution when the mesh is refined. What is the convergence rate in
L2 ? What is the convergence rate in H 1 ?
Hint: Use numpy.array for storing matrices and vectors, numpy.linalg.solve R to solve the linear
R system
and pylab.plot to plot the solution. Also note that you may approximate bi = Ω φi f dx by f ( xi ) Ω φi dx.
Implement a simple Python program that computes the stiffness matrix A on a uniform triangular mesh of the
unit square Ω = (0, 1) × (0, 1). Use A to solve Poisson’s equation −∆u = f for f = 2π 2 sin(πx ) sin(πy)
and homogeneous Dirichlet conditions. Plot the solution and compare with the analytical solution. Demonstrate
that the approximate solution converges to the exact solution when mesh is refined. What is the convergence rate
in L2 ? What is the convergence rate in H 1 ?
Exercise 4.6. Solve the problem −∆u = f with homogenous boundary conditions on the unit interval for the
case where the analytical solution is u = sin(kπx ) and f is given as −∆u. As we learned in this chapter,
ku − uh k1 ≤ Ch p kuk p+1 .
Chapter 4. Finite element error estimate 35
Estimate C in numerical estimates for k = 1, 10, 100 on meshes with 100, 1000, and 10000 elements and
validate the error estimate.
Remark: Use errornorms in FEniCS and represent the analytical solution in a higher order space in order
to avoid super convergence.
Exercise 4.7. Consider the error of the problem in Exercise 4.6 in L2 , L∞ , and L1 norms. Is the order of the
approximation the same? Hint: use the least square method to estimate Cx and α x in
k u − u h k x ≤ Cx h α x ,
where x denotes the norm and Cx depends on u in contrast to Exercise 4.6. Hence, it is advisable to determine
Cx and α x for a given k and then change k.
Exercise 4.8. Consider the error of the problem in Exercise 4.6 and 4.7 in L2 and H 1 norms, but determine the
rate of convergence numerically with respect to the polynomial order of the finite element method. That is, use
the least square method to estimate C p and α p in
k u − u h k1 ≤ C p h α p .
Here, C p depends on u in contrast to Exercise 4.6. Hence, it is advisable to determine C p and α p for a given k
and then change k.
Exercise 4.9. Consider the same problem as in 4.6 in 3D (or 2D if your computer does not allow a proper
investigation in 3D). Assume that you tolerate a H 1 error of 1.0e − 2. What polynomial order of the Lagrange
finite element gives you the answer in minimal amount of computational time? Re-do the experiments with
tolerance 1.0e − 4 and 1.0e − 6
5 Finite element function spaces
By Anders Logg, Kent–Andre Mardal
Finite element function spaces (vh ) are constructed by patching together local function spaces,
V = V ( T ), defines on finite subsets of the domain Ω.
i The domain T is a bounded, closed subset of Rd (for d = 1, 2, 3, . . . ) with nonempty interior and piecewise
smooth boundary;
iii The set of degrees of freedom (nodes) L = {`1 , `2 , . . . , `n } is a basis for the dual space V 0 ; that is, the space
of bounded linear functionals on V .
Example: (T)
Figure 5.7 shows different kinds of domains, T, for different dimensions, d = 1, 2, 3. Example: (V )
• V = Pq ( T ) = {polynomials on T of degree 6 q}
• V = [Pq ( T )]d
37
38 Chapter 5. Finite element function spaces
• V = subspace of Pq ( T )
Example: (L)
• L(v) = v( x̄ )
R
• L(v) = T v( x ) dx
R
• L(v) = S v · n dS
Is the standard piecewise linear element, P1 , a finite element?
i T is a interval, triangle or tetrahedron: ok
ii V = {v : v( x ) = a + bx } ≡ P1 ( T )
dim V = n = d + 1 : ok
(Lv = 0 ⇒ v = 0) ⇔ `i ∑ β j φj = 0 for i = 1, . . . , n ⇒ β = 0
j
⇔ ∑ β j |{z}
`i φ j for i = 1, . . . , n ⇒ β = 0
j
= A ji
T
⇔ A β=0 ⇒β=0
⇔ A T is invertible
⇔ A is invertible
To sum up:
0
L is basis for V ⇔ A is invertible ⇔ (Lv = 0 ⇒ v = 0).
42 Chapter 5. Finite element function spaces
We can now check if P1 is a finite element. Take v on a triangle, set v = 0 at each corner. This leads
to v = 0 for linear functions. P1 is a finite element.
`i (φ J ) = δij .
A nodal basis has the desired property that if, uh = ∑nj=1 u j φj , then `i (uh ) = ui .
Example:
We look at P1 elements on triangle with corners at x1 , x2 and x3 ,
x1 = (0, 0), `1 v = v( x1 ) φ1 ( x ) = 1 − x1 − x2
x2 = (1, 0), `2 v = v( x2 ) φ2 ( x ) = x1
x3 = (0, 1), `3 v = v( x3 ) φ3 ( x ) = x2 .
Computing the nodal basis: Let {ψi }in=1 be any basis for P and let {ψi }in=1 be its nodal basis.
Then,
n
∑ α jk ψk = φi
i =1
n
`i ( ∑ α jk ψk ) = δij
i =1
`i (ψk )α jk = δij
| {z }
Aij
Aα T = I
A is the generalized Vandermonde matrix. Solving for α gives the nodal basis!
5.1.1 Conforming
Note:
If a finite element function space is a subspace of function space V, we call it V-conforming. Example,
the Lagrange elements are H 1 -conforming, CGq (T ) ⊂ H 1 (Ω).
Definition 5.3 (Lagrange element). The Lagrange element (CGq ) is defined for q = 1, 2, . . . by
n(q)
where { xi }i=1 is an enumeration of points in T defined by
i/q, 0 6 i 6 q, T interval,
x= (i/q, j/q), 0 6 i + j 6 q, T triangle, (5.7)
(i/q, j/q, k/q), 0 6 i + j + k 6 q, T tetrahedron.
The dimension of the Lagrange finite element thus corresponds to the dimension of the complete
polynomials of degree q on T and is
q + 1, T interval,
1
n(q) = (q + 1)(q + 2), T triangle, (5.8)
12
6 ( q + 1)( q + 2)( q + 3), T tetrahedron.
Figure 5.8 show the Lagrange elements for different dimensions and how the nodal points are placed.
Now we will look at some H (div)-conforming elements. First up is the Raviart–Thomas RTq el-
ements.
Definition 5.4 (Raviart–Thomas element). The Raviart–Thomas element (RTq ) is defined for q = 1, 2, . . . by
d=1
n=2 n=3 n=4
d=2
n=6
n=3 n=10
d=3
Figure 5.8: The Lagrange (CGq ) elementes. q is the order of the elements,
d is the dimention and n is the number of degrees of freedom.
Figure 5.9: Illustration of the degrees of freedom for the first, second
and third degree Raviart–Thomas elements on triangles and tetrahedra.
The degrees of freedom are moments of the normal component against
Pq−1 on facets (edges and faces, respectively) and, for the higher degree
elements, interior moments against [Pq−2 ]d .
Chapter 5. Finite element function spaces 45
Figure 5.10: Illustration of the first, second and third degree Brezzi–
Douglas–Marini elements on triangles and tetrahedra. The degrees of
freedom are moments of the normal component against Pq on facets
(edges and faces, respectively) and, for the higher degree elements, interior
moments against NED1q−1 .
Next element is the Brezzi–Douglas–Marini BDMq elements. These are also H (div)-conforming
elements.
where NED1 refers to the Nédélec H (curl) elements of the first kind.
The last elements we will look at, are the Nédélec NED2q elements of second kind. These are
H (curl)-conforming elements.
Definition 5.6 (Nédélec element of the second kind). The Nédélec element of the second kind (NED2q ) is
defined for q = 1, 2, . . . in two dimensions by
T = triangle, (5.17)
V = [Pq ( T )]2 , (5.18)
R
L= Re v · t p ds, for a set of basis functions p ∈ Pq (e) for each edge e,
(5.19)
T v · p dx, for a set of basis functions p ∈ RTq−1 ( T ), for q > 2.
46 Chapter 5. Finite element function spaces
Figure 5.11: Illustration of first and second degree Nédélec H (curl) ele-
ments of the second kind on triangles and first degree on tetrahedron.
Note that these elements may be viewed as rotated Brezzi–Douglas–Marini
elements.
T = tetrahedron, (5.20)
3
V = [Pq ( T )] , (5.21)
R
Re v · t p dl, for a set of basis functions p ∈ Pq (e) for each edge e,
L= v · p ds, for a set of basis functions p ∈ RTq−1 ( f ) for each face f , for q > 2 (5.22)
Rf
T v · p dx, for a set of basis functions p ∈ RTq−2 ( T ), for q > 3.
6.1 Introduction
This chapter concerns convection-diffusion equations of the form:
−µ∆u + v · ∇u = f in Ω
u = g on ∂Ω
Here v is typically a velocity, µ is the diffusivity, and u is the unknown variable of interest. We assume
the Dirichlet condition u = g on the boundary, while f is a source term.
The problem is a singular perturbation problem. That is, the problem is well-posed for µ > 0 but
becomes over–determined as µ tends to zero. For µ = 0 the Dirichlet conditions should only be set on
the inflow domain Γ; that is, where n · v < 0 for the outward unit normal n.
For many practical situations µ > 0, but small in the sense that µ |v|. For such problems,
the solution will often be similar to the solution of the reduced problem with µ = 0 except close
to the non-inflow boundary ∂Ω\Γ. Here, there will typically be a boundary layer exp (kvk∞ x/µ).
Furthermore, discretizations often shows unphysical oscillations starting at this boundary layer.
The next example shows a 1D convection diffusion problem resulting in non-physical oscillations
due to the use of a standard Galerkin approximation.
−u x − µu xx = 0, (6.1)
u(0) = 0, u(1) = 1. (6.2)
47
48 Chapter 6. Discretization of a convection-diffusion problem
Here, H(10,1) contains functions u ∈ H 1 with u = 0 at x = 0 and u = 1 and x = 1, while H(10,0) contains
functions that are zero both at x = 0 and x = 1. We consider a µ = 0.01, a relatively large µ, to enable us to
see the differences on a relatively coarse mesh.
Both the numerical and analytical solutions are shown in Figure 6.1. Clearly, the numerical solution is
polluted by non-physical oscillations on the coarse mesh with 10 elements, while a good approximation is
obtained for 100 elements.
Finally, we show the complete code for this example:
Python code
1 from dolfin import *
2 for N in [10, 100]:
3
4 mesh = UnitInterval(N)
5 V = FunctionSpace(mesh, "CG", 1)
6
7 u = TrialFunction(V)
8 v = TestFunction(V)
9
10 mu_value = 1.0e-2
11 mu = Constant(mu_value)
12 f = Constant(0)
13 h = mesh.hmin()
14
15 a = (-u.dx(0)*v + mu*u.dx(0)*v.dx(0))*dx
16 L = f*v*dx
Chapter 6. Discretization of a convection-diffusion problem 49
17
18 u_analytical = Expression("(exp(-x[0]/e) - 1)/ (exp(-1/%e) - 1)" % (mu_value, mu_value))
19 def boundary(x):
20 return x[0] < DOLFIN_EPS or x[0] > 1.0 - DOLFIN_EPS
21
22 bc = DirichletBC(V, u_analytical, boundary)
23
24 U = Function(V)
25 solve(a == L, U, bc)
26
27 U_analytical = project(u_analytical, V)
28
29 import pylab
30 pylab.plot(U.vector().array())
31 pylab.plot(U_analytical.vector().array())
32 pylab.legend(["Numerical Solution", "Analytical Solution"])
33 pylab.show()
To understand Example 6.1 we first remark that the discretization corresponds to the following
central finite difference scheme:
µ v
− [ui+1 − 2ui + ui−1 ] − [ u − u i −1 ] = 0, i = 1, . . . , N − 1
h 2 2h i+1
u0 = 0, u N = 1
Here, it is clear that ui+1 is coupled to ui−1 , but not to ui . Hence, this scheme allow for an alternating
sequence of ui+1 = ui−1 = . . ., while ui = ui−2 = . . . resulting in oscillations.
One cure for these oscillations is upwinding. That is, instead of using a central difference scheme,
we employ the following difference scheme:
du 1
( x ) = [ u i +1 − u i ] if v < 0,
dx i h
du 1
( x i ) = [ u i − u i −1 ] if v > 0.
dx h
Using this scheme, oscillations will disappear. The approximation will however only be first order.
There is a relationship between upwinding and artificial diffusion. If we discretize u x with a
central difference and add diffusion as e = h/2∆ we get
u i +1 − u i −1
central scheme, first order derivative
2h
h −ui+1 + 2ui − ui−1
+ central scheme, second order derivate
2 h2
u − u i −1
= i upwind scheme
h
Hence, upwinding is equivalent to adding artificial diffusion with e = h/2; that is, in both cases we
50 Chapter 6. Discretization of a convection-diffusion problem
−u x − µu xx = 0, (6.3)
u(0) = 0, u(1) = 1. (6.4)
We solve the problem with a standard Galerkin method using linear first order Lagrange elements as before,
but we add artificial diffusion. To be specific, the variational problem is:
Find u ∈ H(10,1) such that
Z 1
−u x v + (µ + βh)u x v x = 0, ∀ v ∈ H(10,0) ,
0
where β = 0.5 corresponds to the finite difference scheme with artificial diffusion mentioned above. Below is the
code for the changed variational form:
Python code
1 beta_value = 0.5
2 beta = Constant(beta_value)
3 f = Constant(0)
4 h = mesh.hmin()
5 a = (-u.dx(0)*v + mu*u.dx(0)*v.dx(0) + beta*h*u.dx(0)*v.dx(0))*dx
Figure 6.2 shows the solution for 10 and 100 elements when using artificial diffusion stabilization. Clearly,
the solution for the coarse grid has improved dramatically since the oscillations have vanished and the solution
appear smooth. It is, however, interesting to note that the solution for the fine mesh is actually less accurate than
the solution in Fig 6.2 for the corresponding fine mesh. The reason is that the scheme is now first order, while
the scheme in Example 6.1 is second order.
In the previous section we saw that artificial diffusion may be added to convection diffusion dominated
problems to avoid oscillations. The diffusion was, however, added in a rather ad-hoc manner. Here,
we will see how diffusion may be added in a consistent way; that is, without changing the solution as
h → 0. This leads us to streamline diffusion using the Petrov-Galerkin method. Our problem reads:
Find u such that
−µ∆u + v · ∇u = f in Ω,
u = g on ∂Ω.
where
Z Z
a(u, w) = µ∇u · ∇w dx + v · ∇uw dx,
ZΩ Ω
b(w) = f w dx.
Ω
Here, Hg1 is the subspace of H 1 where the trace equals g on the boundary ∂Ω.
Adding artificial diffusion to the standard Galerkin discretization, as was done in Example 6.2, can
be done as:
Find uh ∈ Vh,g such that
h
a(uh , vh ) + (∇uh , ∇vh ) = ( f , vh ) ∀ vh ∈ Vh,0 .
2
Let
τ (u, vh ) = a(uh , vh ) − ( f , vh ).
52 Chapter 6. Discretization of a convection-diffusion problem
τ (u, vh )
τ (u) = sup ∼ O(h).
v∈Vh ,v6=0 k v kV
lim τ (u) → 0.
h →0
However, it is not strongly consistent in the sense that τ (u) = 0 for every discretization, which is what
is obtained with the Galerkin method due to Galerkin-orthogonality:
while for the Petrov Galerkin method, we use the test functions L j :
Z Z
Aij = a( Ni , L j ) = µ∇ Ni · ∇ L j dx + v · ∇ Ni L j dx
Ω Ω
A clever choice of L j will enable us to add diffusion in a consistent way. To make sure that the matrix
is still quadratic, we should however make sure that the dimension of Vh and Wh are equal.
Let L j be defined as L j = Nj + βh v · ∇ Nj . Writing out the matrix Aij in (6.6) now gives
Aij = a( Ni , Nj + βh v · ∇ Nj )
Z Z
= µ∇ Ni · ∇( Nj + βh v · ∇ Nj ) dx + v · ∇ Ni · ( Nj + βh v · ∇ Nj ) dx
ZΩ Z Ω
= µ∇ Ni · ∇ Nj dx + v · ∇ Ni Nj dx
|Ω {z Ω
}
standard Galerkin
Z Z
+ βh µ∇ Ni · ∇(v · ∇ Nj ) dx + βh (v · ∇ Ni )(v · ∇ Nj ) dx
Ω Ω
| {z } | {z }
=0 third order term, for linear elements Artificial diffusion in v direction
Thus, both the matrix and the righthand side are changed such that artificial diffusion is added in a
consistent way.
We summarize this derivation by stating the SUPG problem. Find uh,sd ∈ Hg1 such that
where
Z Z
asd (u, w) = µ∇u · ∇w dx + v · ∇uw dx
Ω Ω
Z Z
+ βh (v · ∇u)(v · ∇w) dx + βh µ ∑ −∆u(v · ∇w) dx,
Ω e Ωe
Z Z
bsd (w) = f w dx + βh f v · ∇w dx.
Ω Ω
a(u, v) = L(v) ∀ v ∈ V.
is well-posed in the sense that there exists a unique solution with the following stability condition
C
k u kV ≤ k L kV ∗ .
α
Condition (1) is often refereed to as coersivity or positivity, while (2) is called continuity or
boundedness. Condition 3 simply states that the right-hand side should be in the dual space of V.
In the following we will use Lax-Milgram’s theorem to show that the convection-diffusion problem
is well-posed. The Lax-Milgram’s theorem is well-suited since it does not require symmetry of the
bilinear form.
We will only consider the homogeneous Dirichlet conditions in the current argument1 . From
Poincare’s lemma we know that
kuk0 ≤ CΩ |u|1 .
Using Poincare, it is straightforward to show that the semi-norm
Z
|u|1 = ( (∇u)2 dx )1/2
1 Has the argument for reducing non-homogeneous Dirichlet conditions to homogeneous Dirichlet conditions been demon-
strated elsewhere?
54 Chapter 6. Discretization of a convection-diffusion problem
are equivalent. Hence, on H01 the | · |1 is a norm equivalent the H 1 -norm. Furthermore, this norm will
be easier to use for our purposes.
For the convection-diffusion problem, we will consider two cases 1) incompressible flow, where
∇ · v = 0 and 2) compressible flow, where ∇ · v 6= 0. Let us for the begin with the incompressible case.
Further, let
Z
b(u, w) = µ∇u∇w dx
ZΩ
cv (u, w) = v · ∇u w dx
Ω
a(u, w) = a(u, w) + b(u, w)
and therefore cv (·, ·) is skew-symmetric. Letting w = u we obtain that cv (u, u) = −cv (u, u), which
means that cv (u, u) = 0. Therefore, the first condition in Lax-Milgram’s theorem (1) is satisfied:
The second condition, the boundedness of a (2), follows by applying Cauchy-Schwartz inequality
if we assume bounded flow velocities kvk∞ .
Z Z
a(u, v) = µ∇u∇w dx + v∇uw dx
Ω Ω
≤ µ | u |1 | w |1 + k v k ∞ | u |1 k w k0
≤ (µ + kvk∞ CΩ )|u|1 |v|1 .
The third condition simply means that the right-hand side needs to be in the dual space of Hg1 .
Hence, we obtain the following bounds by Lax-Milgram’s theorem:
µ + CΩ kvk∞
| u |1 ≤ k f k −1 .
µ
Notice that for convection-dominated problems CΩ kvk∞ µ and the stability constant will therefore
be large.
In the case where ∇ · v 6= 0, we generally obtain that cv (u, u) 6= 0. To ensure that a(u, u) is still
positive, we must then put some restrictions on the flow velocities. That is, we need
Further, the second condition of Lax-Milgram’s theorem still applies. However, that CΩ kvk∞ ≤ Dµ is
clearly very restrictive compared to the incompressible case.
We remark that the Lax-Milgram conditions in the presence of the SUPG clearly will not be
satisified in the continuous case because of the third order term −∆u(v · ∇w). With this term, the
second condition of Lax-Milgram is not satisified with C ≤ ∞.
R in2 order to make the term cv (u, u) skew-symmetric, it was required that the boundary
Finally,
integral Γ u w · n was zero. This was a consequence of the Dirichlet conditions. In general, this
is neither needed nor possible at Neumann boundaries. As long as Γ u2 w · n ≥ 0, the above
R
argumentation is valid. From a physical point of view this means that there is outflow at the Neumann
boundary, i.e., that w · n ≥ 0.
Proof. The bounds on the interpolation error is provided by the Bramble-Hilbert lemma for t ≥ 1 and
Clement’s result (the case t = 1), cf. e.g. Braess [2007], Brenner and Scott [2008].
For the Galerkin method the general and elegant result of Cea’s lemma provide us with error
estimates.
CB t
ku − uh kV = ku − uh k1 6 C1 h k u k t +1 .
α
CB
Here C1 = α , where B comes from the approximation property and α and C are the constants of Lax-Milgram’s
theorem.
Proof. The proof is straightforward and follows from the Galerkin orthogonality:
a(u − uh , v) = 0, ∀ v ∈ Vh
56 Chapter 6. Discretization of a convection-diffusion problem
Since Vh ⊂ V:
α k u − u h kV ≤ a(u − uh , u − uh )
= a(u − uh , u − v) − a(u − uh , v − uh )
≤ C k u − u h kV k u − v kV .
C
|||u − uh ||| 6 |||u − Ih u||| ≤ Cht−1 kukt ,
α
where t − 1 is the order of the polynomials of the finite elements.
We remark, as mentioned above, that Cα is large for convection dominated problems and that this
is what causes the poor approximation on the coarse grid, shown in Example 6.1.
To obtain improved error estimates for the SUPG method, we introduce an alternative norm:
1/2
kuksd = hkv · ∇uk2 + µ|∇u|2 (6.8)
Theorem 6.4. Suppose the conditions for Lax-Milgram’s theorem is satisfied in the Hilbert space defined by the
SUPG norm (6.8) and that we solve the SUPG problem (6.7) on a finite element space of order 1. Then,
Proof. The proof can be found in e.g. Elman et al. [2014], Quarteroni and Valli [2008].
6.5 Exercises
Exercise 6.1. Show that the matrix obtained from a central difference scheme applied to the operator Lu = u x
is skew-symmetric. Furthermore, show that the matrix obtained by linear continuous Lagrange elements are also
skew-symmetric. Remark: The matrix is only skew-symmetric in the interior of the domain, not at the boundary.
Exercise 6.2. Estimate numerically the constant in Cea’s lemma for various α and h for the Example 6.1.
Exercise 6.3. Implement the problem u = sin(πx ), and f = −αu xx − u x and estimate numerically the
constant in Cea’s lemma for various α. Compare with the corresponding constant estimated from Example 6.1.
Exercise 6.4. Implement the problem u = sin(πx ), and f = −αu xx − u x using SUPG and estimate the
constants in the error estimate obtained by both the ||| · ||| and the k · kv norms. Compare with the corresponding
constant estimated from Example 6.1.
Exercise 6.5. Investigate whether the coersivity condition holds when a homogeneous Neumann condition is
assumed on the outflow. You may assume that v · n > 0.
Exercise 6.6. Consider the eigenvalues of the operators, L1 , L2 , and L3 , where L1 u = u x , L2 u = −αu xx ,
α = 1.0e−5 , and L3 = L1 + L2 , with homogeneous Dirchlet conditions. For which of the operators are the
eigenvalues positive and real? Repeat the exercise with L1 = xu x .
Exercise 6.7. Compute the Soblev norms k · km of the function sin(kπx ) on the unit interval. Assume that
the Soblev norm is kukm = (−∆m u, u)1/2 . What happens with negative m? You may use either Fourier
transformation or compute (eigenvalues of) powers of the stiffness matrix.
Exercise 6.8. Perform numerical experiments to determine the order of approximation with respect to various
Soblev norms and polynomial orders for the function sin(kπx ) on the unit interval.
7 Stokes problem
By Anders Logg, Kent–Andre Mardal
7.1 Introduction
The Stokes problem describes the flow of a slowly moving viscous incompressible Newtonian fluid.
Let the fluid domain be denoted Ω. We assume that Ω is a bounded domain in Rn with a smooth
boundary. Furthermore, let u : Ω → Rn be the fluid velocity and p : Ω → R be the fluid pressure.
The strong form of the Stokes problem can then be written as
−∆u + ∇ p = f , in Ω, (7.1)
∇·u = 0, in Ω, (7.2)
u = g, on ∂Ω D , (7.3)
∂u
− pn = h, on ∂Ω N . (7.4)
∂n
Here, f is the body force, ∂Ω D is the Dirichlet boundary, while ∂Ω N is the Neumann boundary.
Furthermore, g is the prescribed fluid velocity on the Dirichlet boundary, and h is the surface force or
stress on the Neumann boundary. These boundary condition leads to a well-posed problem provided
that neither the Dirichlet nor Neumann boundaries are empty. In case of only Dirichlet conditions the
pressure is only determined up to a constant, while only Neumann conditions leads to the velocity
only being determined up to a constant.
These equations are simplifications of the Navier–Stokes equations for very slowly moving flow.
In contrast to elliptic equations, many discretizations of this problem will lead to instabilities. These
instabilities are particularly visible as non-physical oscillations in the pressure. The following example
illustrate such oscillations.
Python code
1 from dolfin import *
2
3 def u_boundary(x):
57
58 Chapter 7. Stokes problem
Figure 7.1: Poiseuille flow solution obtained with linear continuous el-
ements for both velocity and pressure. The left figure shows the (well-
represented) velocity while the right shows the pressure (with the wild
oscillations).
Chapter 7. Stokes problem 59
4 return x[0] < DOLFIN_EPS or x[1] > 1.0 - DOLFIN_EPS or x[1] < DOLFIN_EPS
5
6 def p_boundary(x):
7 return x[0] > 1.0 - DOLFIN_EPS
8
9 mesh = UnitSquare(40,40)
10 V = VectorFunctionSpace(mesh, "Lagrange", 1)
11 Q = FunctionSpace(mesh, "Lagrange", 1)
12 #Q = FunctionSpace(mesh, "DG", 0)
13 W = MixedFunctionSpace([V, Q])
14
15 u, p = TrialFunctions(W)
16 v, q = TestFunctions(W)
17
18 f = Constant([0,0])
19
20 u_analytical = Expression(["x[1]*(1-x[1])", "0.0"])
21 p_analytical = Expression("-2+2*x[0]")
22
23 bc_u = DirichletBC(W.sub(0), u_analytical, u_boundary)
24 bc = [bc_u]
25
26 a = inner(grad(u), grad(v))*dx + div(u)*q*dx + div(v)*p*dx
27 L = inner(f, v)*dx
28
29 UP = Function(W)
30 A, b = assemble_system(a, L, bc)
31 solve(A, UP.vector(), b, "lu")
32
33 U, P = UP.split()
34
35 plot(U, title="Numerical velocity")
36 plot(P, title="Numerical pressure")
37
38 U_analytical = project(u_analytical, V)
39 P_analytical = project(p_analytical, Q)
40
41 plot(U_analytical, title="Analytical velocity")
42 plot(P_analytical, title="Analytical pressure")
43
44 interactive()
However, when using the second order continuous elements for the velocity and first order continuous
elements for the pressure, we obtain the perfect solution shown in Figure 7.2.
The previous example demonstrates that discretizations of the Stokes problem may lead to, in
particular, strange instabilities in the pressure. In this chapter we will describe why this happens and
several strategies to circumvent this behaviour.
1
Let us first start with a weak formulation of Stokes problem: Find u ∈ HD,g and p ∈ L2 .
1
a(u, v) + b( p, v) = f ( v ), v ∈ HD,0
b(q, u) = 0, q ∈ L2 ,
60 Chapter 7. Stokes problem
where
Z
a(u, v) = ∇u : ∇v dx,
Z
b( p, v) = p ∇ · v dx,
Z Z
f (v) = f v dx + h v ds.
ΩN
1
Here HD,g contains functions in H 1 with trace g on ∂Ω D . To obtain symmetry we have substituted
p̂ = − p for the pressure and is referint to p̂ as p.
As before the standard finite element formulation follows directly from the weak formulation:
Find uh ∈ Vg,h and ph ∈ Qh such that
BT
A u f
= (7.7)
B 0 p 0
Here
Z
Aij = a( Ni , Nj ) = ∇ Ni ∇ Nj dx, (7.8)
Z
Bij = b( Li , Nj ) = ∇ Li Nj dx. (7.9)
Hence, A is n × n, while B is m × n, where n is the number of degrees of freedom for the velocity
field, while m is the number of degrees of freedom for the pressure.
1
Is the system (7.7) invertible? For the moment, we assume that the submatrix A is invertible. This
is typically the case for Stokes problem. We may then perform blockwise Gauss elimination: That is,
we multiply the first equation with A−1 to obtain
u = A −1 f − A −1 B T p
0 = Bu = BA−1 f − BA−1 B T p
BA−1 B T p = BA−1 f
This equation is often called the pressure Schur complement. The question is then reduced to whether
BA−1 B T is invertible. Consider the follwing two situations:
n n
A BT
A BT
m B 0
m B 0
v.s
Clearly, the right most figure is not invertible since n m and the 0 in the lower right corner
dominates. For the left figure on might expect that the matrix is non-singular since n m, but it will
depend on A and B. We have already assumed that A is invertible, and we therefore ignore A−1 in
BA−1 B T . The question is then whether BB T is invertible.
m×m
B =
BT
Alternatively,
(v, B T p)
max ≥ βk pk ∀ p. (7.11)
v kvk
Here, β > 0. We remark that (7.10) and (7.11) are equivalent for a finite-dimensional matrix. However, in the infinite-dimensional setting of PDEs, (7.10) and (7.11) are different: inequality (7.10) allows (v, B^T p) to approach zero, while (7.11) requires a uniform lower bound. For the Stokes problem, the corresponding condition is crucial:

sup_{v ∈ H^1_{D,g}} (p, ∇·v)/‖v‖_1 ≥ β‖p‖_0 > 0,   ∀ p ∈ L^2.        (7.12)
In the discrete setting, where k and ℓ are the polynomial degrees of the velocity and the pressure, respectively, the corresponding discrete condition, the celebrated Babuska–Brezzi condition, has to be satisfied:

sup_{v ∈ V_{h,g}} (p, ∇·v)/‖v‖_1 ≥ β‖p‖_0 > 0,   ∀ p ∈ Q_h.        (7.13)
We remark that the discrete condition (7.13) does not follow from (7.12). In fact, it has been a major challenge in numerical analysis to determine which finite element pairs V_h and Q_h meet this condition.
Remark 7.2.1. For saddle point problems of the form (7.5)-(7.6), four conditions have to be satisfied in order to have a well-posed problem:

Boundedness of a:

a(u_h, v_h) ≤ C_1 ‖u_h‖_{V_h} ‖v_h‖_{V_h},   ∀ u_h, v_h ∈ V_h,        (7.14)

and boundedness of b:

b(u_h, q_h) ≤ C_2 ‖u_h‖_{V_h} ‖q_h‖_{Q_h},   ∀ u_h ∈ V_h, q_h ∈ Q_h,        (7.15)

coercivity of a:

a(u_h, u_h) ≥ C_3 ‖u_h‖²_{V_h},   ∀ u_h ∈ Z_h,        (7.16)

where Z_h = {u_h ∈ V_h : b(u_h, q_h) = 0 ∀ q_h ∈ Q_h}, and "coercivity" of b:

sup_{u_h ∈ V_h} b(u_h, q_h)/‖u_h‖_{V_h} ≥ C_4 ‖q_h‖_{Q_h},   ∀ q_h ∈ Q_h.        (7.17)

For the Stokes problem, (7.14)-(7.16) are easily verified, while (7.17) is often remarkably difficult unless the elements are designed to meet this condition. We remark also that condition (7.16) strictly speaking only needs to be valid on the subspace Z_h of V_h, but this is not important for the Stokes problem.
The Taylor–Hood elements are quadratic for the velocity and linear for the pressure, see Figure 7.3.
Figure 7.3: Taylor–Hood: quadratic element for v and linear element for p
The generalization of the Taylor–Hood element to higher order, that is Pk − Pk−1 , satisfies the
Brezzi conditions.
The Crouzeix–Raviart velocity element is continuous only at the midpoint of each side (see ??), and the pressure element is discontinuous. This ensures that the pair satisfies the Babuska–Brezzi condition, and we have the error estimate

‖u − u_h‖_1 + ‖p − p_h‖_0 ≤ Ch(‖u‖_2 + ‖p‖_1).
If we on the other hand choose to locate the nodal points at the corners of the v element, as shown in ?? (the P1 − P0 element), the inf-sup condition is not satisfied and we get oscillations in the pressure.
The P2 − P0 element is a popular element that satisfies the Brezzi conditions. An advantage of this approach is that mass conservation is maintained elementwise. In fact, Pk − Pk−2 satisfies the Brezzi conditions for k ≥ 2. Here, the pressure element Pk−2 may consist of either continuous or discontinuous polynomials. The discontinuous polynomials have the advantage of providing elementwise mass conservation, albeit at the expense of more degrees of freedom compared with the continuous variant.
The Mini element is linear in both velocity and pressure, but the velocity element is enriched with a cubic bubble. Notice that elements that are linear in both v and p will not satisfy the inf-sup condition; we therefore add the extra bubble in v to provide an extra degree of freedom, as depicted in ??.

Figure 7.6: Mini: linear element with bubble for v and linear element for p

For the Mini element we have the error estimate

‖u − u_h‖_1 + ‖p − p_h‖_0 ≤ C_0 h‖u‖_2 + C_1 h²‖p‖_2.
Stabilization techniques replace the original system

Au + B^T p = f,
Bu = 0,

by the perturbed system

Au + B^T p = f,
Bu − εDp = εd,

which we may then factorize:

u = A^{-1} f − A^{-1} B^T p,
Bu = BA^{-1} f − BA^{-1} B^T p = εd + εDp,
(BA^{-1} B^T + εD) p = BA^{-1} f − εd.

If D is nonsingular then (BA^{-1} B^T + εD) is nonsingular, since both D and BA^{-1} B^T are positive (only D is positive definite, however).
Alternatively, factorizing for p, i.e., solving for p in the second equation and inserting the expression for p into the first equation, we end up with a velocity Schur complement. Typical choices of D are

1. D = A,
2. D = M,
3. D = (1/∆t) M,

where A is the stiffness matrix (the discrete Laplace operator) and M is the mass matrix.
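As an illustration of such stabilization in practice, here is a minimal sketch of a stabilized P1–P1 form for the Poiseuille flow example at the start of the chapter (cf. Exercise 7.5). The pressure-Laplacian term −βh²(∇p, ∇q), corresponding to a (scaled) stiffness matrix D, and the value β = 0.2 are our own illustrative choices:

Python code
# A sketch of a stabilized P1-P1 Stokes form; assumes mesh, u, p, v, q
# and f from the Poiseuille example above. The stabilization term
# -beta*h^2*inner(grad(p), grad(q)) and beta = 0.2 are illustrative.
beta = 0.2
h = CellSize(mesh)
a = (inner(grad(u), grad(v))*dx + div(u)*q*dx + div(v)*p*dx
     - beta*h*h*inner(grad(p), grad(q))*dx)
L = inner(f, v)*dx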
7.5 Exercises
Exercise 7.1. Show that the conditions (7.14)-(7.16) are satisfied for Vh = H01 and Qh = L2 .
Exercise 7.2. Show that the conditions (7.14)-(7.16) are satisfied for Taylor–Hood and Mini discretizations.
(Note that Crouzeix–Raviart is non-conforming so it is more difficult to prove these conditions for this case.)
Exercise 7.3. Condition (7.17) is difficult to prove. However, if we assume that Vh = L2 and Qh = H01 , you
should be able to prove it. (Hint: This is closely related to Poincare’s inequality.)
Exercise 7.4. Test other finite elements for the Poiseuille flow problem. Consider P1 − P0, P2 − P2, P2 − P0, as well as the Mini and Crouzeix–Raviart elements.
Exercise 7.5. Implement stabilization for the Poiseuille flow problem and use first order linear elements for
both velocity and pressure.
Exercise 7.6. In the previous problem the solution was a second order polynomial in the velocity and first order in the pressure. Higher order methods therefore reproduce the exact solution, and it is difficult to check the order of convergence with this solution. In this exercise you should therefore implement the problem u = (sin(πy), cos(πx)), p = sin(2πx), and f = −∆u − ∇p. Test whether the approximation is of the expected order for P4 − P3, P4 − P2, P3 − P2, and P3 − P1.
Exercise 7.7. Implement the Stokes problem with analytical solution u = (sin(πy), cos(πx)), p = sin(2πx), and f = −∆u − ∇p on the unit square. Consider the case where you have Dirichlet conditions on the sides 'x=0', 'x=1' and 'y=1', while a Neumann condition is used on the last side (this way we avoid the singular system associated with either pure Dirichlet or pure Neumann problems). Then determine the order of the approximation of the wall shear stress on the side 'x=0'. The wall shear stress on the side 'x=0' is ∇u · t, where t = (0, 1) is the tangent along 'x=0'.
8 Efficient Solution Algorithms: Iterative methods and Preconditioning
By Anders Logg, Kent–Andre Mardal
To compute the solution of a partial differential equation, we often need to solve a system of linear equations with a large number of unknowns. The accuracy of the solution increases with the number of unknowns used. Nowadays, unknowns in the order of millions to billions are routinely solved for without the use of (state-of-the-art) high-performance computing. Such computations are facilitated by the enormous improvements in numerical algorithms and scientific software over the last decades.

It should be quite clear that naive Gaussian elimination cannot be employed. For a naive Gaussian elimination implementation, the number of required floating point operations (FLOPS) scales as the cube of the number of unknowns. Hence, solving a problem with 10^6 unknowns would require 10^18 FLOPS, which on a modern computer running at e.g. 3 GHz would still take about 10 years. As we will see later, such problems may in common cases be solved in just a few seconds. There are two ingredients in such efficient algorithms: iterative methods and preconditioning.
Let us therefore consider the numerical solution of large linear systems,

Au = b,

where the linear system comes from the discretization of a PDE. That is, A is an N × N matrix, and N is between 10^6 and 10^9 in typical simulations. Furthermore, the matrix is normally extremely sparse and contains only O(N) nonzeros (see Exercise 8.1). It is important to notice that even though A is sparse, A^{-1} will in general be full. This is a main reason to consider iterative methods.
In Richardson's paper on the approximate arithmetical solution by finite differences of physical problems, with an application to the stresses in a masonry dam, he describes how he uses humans as computational resources. He writes: "So far I have paid piece rates for the operation [Laplacian] of about n/18 pence per coordinate point, n being the number of digits. As for the rate of working, one of the quickest boys averaged 2000 operations per week, for numbers of three digits, those done wrong being discounted."
The standard approach to analyze iterative methods is to look at what happens with the error. Let the error at the n'th iteration be e^n = u^n − u. As this is a linear system of equations, we may subtract u from both sides of (8.1) and obtain an equation for the iterative error:

e^n = e^{n−1} − τAe^{n−1} = (I − τA)e^{n−1}.
Assuming for the moment that A is symmetric and positive definite, the norm of A, in general defined as

‖A‖ = max_x ‖Ax‖/‖x‖,

equals the largest eigenvalue of A, λ_max. Furthermore, if we assume that the eigenvalues are ordered with respect to increasing value, such that λ_0 and λ_N are the smallest and largest eigenvalues, then the norm of I − τA,

‖I − τA‖ = max_x ‖(I − τA)x‖/‖x‖,

is attained at either the smallest or the largest eigenvalue, as either (1 − τλ_0) or −(1 − τλ_N). The optimal relaxation parameter τ_opt can be stated in terms of the eigenvalues λ_i of A: the minimum is attained when (1 − τ_opt λ_0) = −(1 − τ_opt λ_N), which makes τ_opt = 2/(λ_0 + λ_N).
Let the convergence factor ρ be defined as

ρ = ‖I − τA‖ = max_{λ_i} |1 − τλ_i| = 1 − τλ_0 = 1 − 2λ_0/(λ_0 + λ_N) = (λ_N − λ_0)/(λ_N + λ_0) = (κ − 1)/(κ + 1).

Here, κ = λ_N/λ_0 is the condition number.
We estimate the error reduction per iteration in terms of the convergence factor as ‖e^n‖ ≤ ρ‖e^{n−1}‖, which leads to

‖e^n‖ ≤ ((κ − 1)/(κ + 1))^n ‖e^0‖.
For iterative methods, we never iterate until we reach the true solution exactly. Instead, a convergence criterion needs to be chosen such that the error obtained by the iterative method is less than, or at least comparable to, the approximation error of the original system. Determining an appropriate convergence criterion is problem dependent and quite often challenging. Nevertheless, let us assume that we need to reduce the error by a factor ε, that is, we need ‖e^n‖/‖e^0‖ < ε. From the iteration, we have

‖e^n‖ ≤ ρ‖e^{n−1}‖ ≤ ρ^n ‖e^0‖.        (8.2)

An estimate of the number of iterations is then obtained by assuming equality in (8.2) and setting ‖e^n‖/‖e^0‖ = ε. The number of iterations needed to achieve the desired error is then

n = log ε / log ρ = log ε / log((κ − 1)/(κ + 1)).        (8.3)
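As a quick sanity check of (8.3), the following few lines (our own illustration) evaluate the estimate for ε = 10^{-6} and a few sample condition numbers, showing that the iteration count grows roughly linearly with κ:

Python code
from math import log, ceil

eps = 1.0e-6
for kappa in [10, 100, 1000, 10000]:
    rho = (kappa - 1.0)/(kappa + 1.0)   # convergence factor
    n = log(eps)/log(rho)               # estimate (8.3)
    print kappa, int(ceil(n))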
If n is independent of the resolution of the discretization, the computational cost of the algorithm is O(N) in both FLOPS and memory, and the algorithm is then called order-optimal.
The above analysis of the simplest iterative method there is, the Richardson iteration, shows that the efficiency of the method is determined by the condition number of the matrix. In the literature you will find a jungle of methods, of which the following are the most famous: the Conjugate Gradient method, the Minimal Residual method, the BiCGStab method, and the GMRES method. It is remarkable that, in general, the convergence of these methods is also determined by the condition number, with one exception: the convergence of the Conjugate Gradient method can often be estimated in terms of the square root of the condition number. A main advantage, however, is that these methods do not require the determination of a relaxation parameter τ to obtain convergence.
Example 8.1. Eigenvalues of an elliptic problem in 1D and 2D.
Let us consider an elliptic problem:

u − ∆u = f,   in Ω,        (8.4)
∂u/∂n = 0,   on ∂Ω.        (8.5)

Notice that the lower order term u in front of −∆u removes the singularity associated with Neumann conditions, and that in the continuous case the smallest eigenvalue is 1 (associated with the eigenfunction that is constant throughout Ω). The following code computes the eigenvalues using linear Lagrange elements and prints the extreme eigenvalues and the resulting condition numbers.
Python code
from dolfin import *
from numpy import linalg

for D in [1, 2]:
    for N in [4, 8, 16, 32]:
        if D == 1: mesh = UnitIntervalMesh(N)
        elif D == 2: mesh = UnitSquareMesh(N, N)

        V = FunctionSpace(mesh, "Lagrange", 1)
        u = TrialFunction(V)
        v = TestFunction(V)

        a = u*v*dx + inner(grad(u), grad(v))*dx
        A = assemble(a)
        e = linalg.eigvals(A.array())
        e.sort()
        c = e[-1] / e[0]

        print "D=%d, N=%3d, min eigenvalue=%5.3f, max eigenvalue=%5.3f, cond. number=%5.3f" % (D, N, e[0], e[-1], c)
Output
D=1, N=  4, min eigenvalue=0.199, max eigenvalue=14.562, cond. number=73.041
D=1, N=  8, min eigenvalue=0.111, max eigenvalue=31.078, cond. number=279.992
D=1, N= 16, min eigenvalue=0.059, max eigenvalue=63.476, cond. number=1079.408
D=1, N= 32, min eigenvalue=0.030, max eigenvalue=127.721, cond. number=4215.105
D=2, N=  4, min eigenvalue=0.040, max eigenvalue=7.090, cond. number=178.444
D=2, N=  8, min eigenvalue=0.012, max eigenvalue=7.735, cond. number=627.873
D=2, N= 16, min eigenvalue=0.003, max eigenvalue=7.929, cond. number=2292.822
D=2, N= 32, min eigenvalue=0.001, max eigenvalue=7.982, cond. number=8693.355
The output shows that the condition number grows as h^{-2} in both 1D and 2D, although the behaviour of the individual eigenvalues is clearly dimension dependent (see Exercise 8.2). The smallest eigenvalue decreases in both 1D and 2D as h → 0, but at different rates. To obtain eigenvalues corresponding to the true eigenvalues, we would need to solve a generalized eigenvalue problem, as discussed in Chapter 3.
The eigenvalues and eigenfunctions of Lu = −u'' on the unit interval with Dirichlet conditions are λ_k = (kπ)² and v_k = sin(kπx) for k ∈ N. When discretizing with FDM we get a system Au = b, where A = (1/h²) tridiagonal(−1, 2, −1) when the Dirichlet conditions have been eliminated. The discrete and continuous eigenvectors are the same, but the eigenvalues are a little bit different:

λ_k = (4/h²) sin²(kπh/2),

where h is the step length ∆x. We find the smallest and largest discrete eigenvalues

λ_min(A) ≈ π²,   λ_max(A) ≈ 4/h².
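The formula for the discrete eigenvalues is easily verified numerically; the following small check (our own illustration) compares numpy's eigenvalues of the scaled tridiagonal matrix with λ_k = (4/h²) sin²(kπh/2):

Python code
from numpy import zeros, pi, sin, sort, arange
from numpy.linalg import eigvalsh

N = 50               # number of interior nodes
h = 1.0/(N + 1)      # step length with eliminated Dirichlet conditions
A = zeros((N, N))
for i in range(N):
    A[i, i] = 2.0/h**2
    if i > 0: A[i, i-1] = -1.0/h**2
    if i < N - 1: A[i, i+1] = -1.0/h**2

computed = sort(eigvalsh(A))
exact = 4.0/h**2 * sin(arange(1, N + 1)*pi*h/2.0)**2
print abs(computed - exact).max()   # agreement to machine precision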
Let τ = 2/(λ_max + λ_min); then, from the analysis above,

‖e^n‖ ≤ ((κ − 1)/(κ + 1))^n ‖e^0‖.
The code below performs the Richardson iteration for various resolutions of the 1D Poisson problem and stops when the convergence criterion ‖r_k‖/‖r_0‖ ≤ 10^{-6} is reached.
Python code
from numpy import *

def create_stiffness_matrix(N):
    h = 1.0/(N-1)
    A = zeros([N, N])
    for i in range(N):
        A[i, i] = 2.0/(h**2)
        if i > 0:
            A[i, i-1] = -1.0/(h**2)
        if i < N-1:
            A[i, i+1] = -1.0/(h**2)
    A = matrix(A)
    return A

Ns = [10, 20, 40, 80, 160, 320]
for N in Ns:
    # The loop body was lost in the source; this sketch is consistent
    # with the surrounding text: Richardson iteration with the optimal
    # tau = 2/(lambda_min + lambda_max), a random start vector, and the
    # stopping criterion ||r_n||/||r_0|| <= 10^-6.
    A = create_stiffness_matrix(N)
    h = 1.0/(N-1)
    tau = 2.0/(pi**2 + 4.0/h**2)
    b = matrix(zeros(N)).transpose()
    u = matrix(random.random(N)).transpose()
    r = b - A*u
    r0 = linalg.norm(r)
    n = 0
    while linalg.norm(r)/r0 > 1.0e-6:
        u = u + tau*r
        r = b - A*u
        n += 1
    print N, n
Table 8.1: The number of iterations of the Richardson iteration for solving a 1D Poisson problem. The number of FLOPS is estimated as the number of iterations times four times the number of unknowns, N, since the matrix is tridiagonal and each iteration of (8.1) involves a matrix–vector product (3N) and a vector addition (N).
We remark that in this example we have initialized the iteration with a random vector, because such a vector contains errors at all frequencies. This is recommended practice when trying to establish a worst case scenario. Testing an iterative method against a known analytical solution with a zero start vector will often only require smooth error components to be removed during the iterations, and will therefore underestimate the complications of a real-world problem.
In Example 8.2 we considered the Richardson iteration applied to a Poisson problem in 1D. We saw that in order to stop the iteration we had to choose a stopping criterion. Ideally, we would like to stop when the error is small enough. The problem is that the error is unknown: in fact, since e^n = u^n − u, we would be able to compute the exact solution if the error was known at the n'th iteration. What is computable is the residual at the n'th iteration, defined by

r^n = Au^n − f.

Since Ae^n = Au^n − Au = r^n, computing e^n from this relation would require the inversion of A (which we try to avoid at all costs, since it in general is an O(N³) procedure). For this reason, the convergence criterion is typically expressed in terms of some norm of the residual. We may bound the n'th error in terms of the residual as ‖e^n‖ = ‖A^{-1}r^n‖ ≤ ‖A^{-1}‖ ‖r^n‖.
The idea of preconditioning is to replace the original system

Au = b

with the preconditioned system

BAu = Bb.

Both systems have the same solution (if B is nonsingular). However, B should be chosen as a cheap approximation of A^{-1}, or at least such that BA has a smaller condition number than A. Furthermore, applying B to a vector should cost O(N) operations. Obviously, the preconditioner B = A^{-1} would make the condition number of BA equal to one, and the Richardson iteration would converge in one iteration. However, B = A^{-1} is a very computationally demanding preconditioner. We rather seek preconditioners that are O(N) in both memory consumption and evaluation.
The generalized (preconditioned) Richardson iteration then becomes

u^n = u^{n−1} − τB(Au^{n−1} − b).
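To make this concrete, here is a small self-contained sketch (our own illustration; cf. Exercise 8.5) of the preconditioned Richardson iteration for the 1D Poisson matrix, with the Jacobi preconditioner B = diag(A)^{-1} and the illustrative choice τ = 2/3 (weighted Jacobi):

Python code
from numpy import zeros, diag, dot, random
from numpy.linalg import norm

N = 100
h = 1.0/(N - 1)
A = zeros((N, N))
for i in range(N):
    A[i, i] = 2.0/h**2
    if i > 0: A[i, i-1] = -1.0/h**2
    if i < N - 1: A[i, i+1] = -1.0/h**2

B = 1.0/diag(A)           # Jacobi preconditioner: the diagonal of A, inverted
tau = 2.0/3.0             # illustrative relaxation parameter
b = zeros(N)
u = random.random(N)      # random start vector
r = b - dot(A, u)
r0 = norm(r); n = 0
while norm(r)/r0 > 1.0e-6:
    u = u + tau*B*r       # u^n = u^{n-1} - tau*B*(A u^{n-1} - b)
    r = b - dot(A, u)
    n += 1
# note: Jacobi is not spectrally equivalent with A^{-1} for Poisson,
# so the iteration count still grows with N
print "iterations:", n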
Previously we stated that a good preconditioner is supposed to be similar to A^{-1}. The precise (and most practical) property that is required of a preconditioner is:

Definition 8.1. Two linear operators or matrices A^{-1} and B that are symmetric and positive definite are spectrally equivalent if there exist constants c_1, c_2 > 0, independent of the discretization, such that

c_1 (A^{-1}v, v) ≤ (Bv, v) ≤ c_2 (A^{-1}v, v)   ∀ v.
If the preconditioner B is spectrally equivalent with A^{-1}, then the preconditioned Richardson iteration yields an order-optimal algorithm. To see this, we note that e^n = (I − τBA)e^{n−1}. We can estimate the behaviour of e^n by using the A-norm, with ρ_A = ‖I − τBA‖_A, and get

‖e^n‖_A ≤ ρ_A ‖e^{n−1}‖_A.

Hence, if the condition number is independent of the discretization, then the number of iterations, as estimated earlier in (8.3), will be bounded independently of the discretization.
In general, if A is a discretization of −∆ on a quasi-uniform mesh, then both multigrid methods and domain decomposition methods yield preconditioners that are spectrally equivalent with the inverse and close to O(N) in evaluation and storage. Using a proper preconditioner may provide a speed-up of several orders of magnitude, see Example 8.3.
General advice for the usage of the different methods: we classify the methods according to the matrices they are used to solve.

• If a matrix is symmetric but indefinite, i.e. A = A^T with both positive and negative eigenvalues, then the Minimal Residual method (MR) is the best choice. MR requires an SPD preconditioner, see also Exercise 8.9.

• If the matrix is positive, i.e., x^T Ax ≥ 0 ∀ x, which is often the case for convection-diffusion problems or flow problems, then GMRES with either an ILU or AMG preconditioner is often good, but you might need to experiment, see also Exercise 8.7.

• For matrices that are both nonsymmetric and indefinite there is a jungle of general purpose methods, but they may be categorized in two different families. In our experience, the BiCGStab and GMRES methods are the two most prominent algorithms in these families. GMRES is relatively robust but may stagnate; BiCGStab may break down. GMRES has a parameter, the number of search vectors, that may be tuned.
Most linear algebra libraries for high-performance computing, for instance PETSc, Trilinos, and Hypre, have these algorithms implemented. They are also implemented in various libraries for Python and MATLAB. There is usually no need to implement these algorithms yourself.
Figure 8.1: CPU time in seconds (y-axis) for solving a linear system of equations with N degrees of freedom (x-axis) for the different solvers: lu, cg, cg/ilu, and cg/amg.
Example 8.3 (CPU times of different algorithms). In this example we solve the problem

u − ∆u = f,   in Ω,
∂u/∂n = 0,   on ∂Ω,

where Ω is the unit square, discretized with first order Lagrange elements. The problem is solved with four different methods:

• an LU solver,
• the Conjugate Gradient (CG) method,
• CG with an ILU preconditioner,
• CG with an AMG preconditioner,

for N = 32², 64², 128², 256², 512², 1024², where N is the number of degrees of freedom.
Figure 8.1 shows that there is a dramatic difference between the algorithms. In fact, the Conjugate Gradient method (CG) with an AMG preconditioner is over 20 times faster than the slowest method, which is the CG solver without a preconditioner. One might wonder why the LU solver is doing so well in this example, given that its cost is O(N²)–O(N³). However, if we increase the number of degrees of freedom further, the method will slow down compared to the other methods. The real problem is that it will require too much memory, and the program would probably crash.
Python code
from dolfin import *
import time
lu_time = []; cgamg_time = []
cg_time = []; cgilu_time = []
Ns = []

parameters["krylov_solver"]["relative_tolerance"] = 1.0e-8
parameters["krylov_solver"]["absolute_tolerance"] = 1.0e-8
parameters["krylov_solver"]["monitor_convergence"] = False
parameters["krylov_solver"]["report"] = False
parameters["krylov_solver"]["maximum_iterations"] = 50000

def solving_time(A, b, solver):
    U = Function(V)
    t0 = time.time()
    if len(solver) == 2:
        solve(A, U.vector(), b, solver[0], solver[1])
    else:
        solve(A, U.vector(), b, solver[0])
    t1 = time.time()
    return t1-t0

for N in [32, 64, 128, 256, 512, 1024]:

    Ns.append(N)

    mesh = UnitSquare(N, N)
    print " N ", N, " dofs ", mesh.num_vertices()
    V = FunctionSpace(mesh, "Lagrange", 1)
    u = TrialFunction(V)
    v = TestFunction(V)

    f = Expression("sin(x[0]*12) - x[1]")
    a = u*v*dx + inner(grad(u), grad(v))*dx
    L = f*v*dx

    A = assemble(a)
    b = assemble(L)

    t2 = solving_time(A, b, ["lu"])
    print "Time for lu ", t2
    lu_time.append(t2)

    t2 = solving_time(A, b, ["cg"])
    print "Time for cg ", t2
    cg_time.append(t2)

    t2 = solving_time(A, b, ["cg", "ilu"])
    print "Time for cg/ilu ", t2
    cgilu_time.append(t2)

    t2 = solving_time(A, b, ["cg", "amg"])
    print "Time for cg/amg ", t2
    cgamg_time.append(t2)


import pylab

pylab.plot(Ns, lu_time)
pylab.plot(Ns, cg_time)
pylab.plot(Ns, cgilu_time)
pylab.plot(Ns, cgamg_time)
pylab.xlabel('Unknowns')
pylab.ylabel('Time(sec)')
pylab.legend(["lu", "cg", "cg/ilu", "cg/amg"])
pylab.show()

pylab.loglog(Ns, lu_time)
pylab.loglog(Ns, cg_time)
pylab.loglog(Ns, cgilu_time)
pylab.loglog(Ns, cgamg_time)
pylab.legend(["lu", "cg", "cg/ilu", "cg/amg"])
pylab.savefig('tmp_cpu.pdf')
pylab.show()
When we employ iterative methods, we need to specify the convergence criterion. This is often not an easy task. We have the continuous solution u, the discrete solution u_h, and the approximate discrete solution u_h^n found by an iterative method at iteration n. Obviously, we may estimate the error by the triangle inequality,

‖u − u_h^n‖ ≤ ‖u − u_h‖ + ‖u_h − u_h^n‖,

and it does make sense that the values of ‖u − u_h‖ and ‖u_h − u_h^n‖ are balanced. Still, both terms may be hard to estimate in challenging applications. In practice, an appropriate convergence criterion is usually found by trial and error, by choosing a stopping criterion based on the residual. Let us therefore consider a concrete example and study ‖u − u_h^n‖ as a function of the mesh resolution and a varying convergence criterion.
Table 8.2 shows the error and the corresponding CPU timings when solving a Poisson problem at various resolutions and convergence criteria. Here, the convergence criterion is chosen as reducing the relative residual, i.e., ‖r_k‖/‖r_0‖, by the factor ε. This convergence criterion is very common, in particular for stationary problems. There are several things to note here. For the coarse resolution, N=64, the error stagnates somewhere between 1.0e−3 and 1.0e−4, and this stagnation marks where an appropriate stopping criterion is. It is, however, worth noticing that solving to the criterion 1.0e−6 is actually only about 30% more computationally demanding than 1.0e−3. This is due to the fact that we have a very efficient method that reduces the error by about a factor 10 per iteration. If we consider the fine resolution, N=1024, we see that the stagnation happens later, and that we may not even have reached the stagnation point at ε = 1.0e−6. We also notice that decreasing ε in this case only leads to a moderate growth in CPU time. If we look closer at the table, we find that the stagnation point follows a staircase pattern. The code used to generate the table is as follows:
Python code
from dolfin import *

def boundary(x, on_boundary):
    return on_boundary

parameters["krylov_solver"]["relative_tolerance"] = 1.0e-18
parameters["krylov_solver"]["absolute_tolerance"] = 1.0e-18
parameters["krylov_solver"]["monitor_convergence"] = True
parameters["krylov_solver"]["report"] = True
#parameters["krylov_solver"]["maximum_iterations"] = 50000
epss = [1.0e-1, 1.0e-2, 1.0e-3, 1.0e-4, 1.0e-5, 1.0e-6]
data = {}
Ns = [64, 128, 256, 512, 1024]
Example 8.4. Eigenvalues of the preconditioned system. It is often interesting to assess the condition number of the preconditioned system, BA. If the preconditioner is a matrix and the size of the system is moderate, we may be able to estimate the condition number of BA using NumPy, Matlab or Octave. However, when our preconditioner is an algorithm representing a linear operator, such as in the case of multigrid, this is not possible. However, as described in Saad [2003], the eigenvalues may be estimated as a by-product of the Conjugate Gradient method. Without going into the algorithmic details of the implementation, we mention that this is implemented in the FEniCS module cbc.block, see Mardal and Haga [2012]. The following code shows the usage.
Python code
from dolfin import *
from block.iterative import ConjGrad
from block.algebraic.petsc import ML
from numpy import random
from math import exp

def boundary(x, on_boundary):
    return on_boundary

class Source(Expression):
    def eval(self, values, x):
        dx = x[0] - 0.5; dy = x[1] - 0.5
        values[0] = exp(-(dx*dx + dy*dy)/0.02)

# The remainder of the listing was cut off in the source; the following
# is a sketch in the spirit of the cbc.block demos. The problem setup and
# the call eigenvalue_estimates() are our assumptions.
mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "CG", 1)
u, v = TrialFunction(V), TestFunction(V)
A = assemble(u*v*dx + inner(grad(u), grad(v))*dx)
b = assemble(Source()*v*dx)
Ainv = ConjGrad(A, precond=ML(A), tolerance=1e-10)
x = Ainv*b
e = Ainv.eigenvalue_estimates()
print "cond. number estimate:", e[-1]/e[0]
In this example we see that the condition number estimate increases logarithmically from 1.1 to 2.1 as N increases from 8 to 1024. The AMG preconditioner has better performance and does not show logarithmic growth. For indefinite symmetric systems, the CGN method provides the means for estimating the condition number, c.f. the cbc.block documentation.
In the previous Chapters 6 and 7 we discussed the well-posedness of the convection-diffusion equations and the Stokes problem. In both cases the problems were well-posed, meaning that the differential operators, as well as their inverses, were continuous. However, when we discretize the problems we get matrices whose condition numbers grow to infinity as the element size goes to zero. This seems to contradict the well-posedness of our discrete problems, and may potentially destroy both the accuracy and the efficiency of our numerical algorithms. Functional analysis explains this apparent contradiction, and explains how the problem is circumvented by preconditioning.
Let us now consider the seeming contradiction in more precise mathematical detail for the Poisson
problem with homogeneous Dirichlet conditions: Find u such that
−∆u = f , in Ω, (8.9)
u = 0, on ∂Ω. (8.10)
We know from Lax-Milgram's theorem that the weak formulation of this problem — find u ∈ H_0^1 such that

a(u, v) = b(v),   ∀ v ∈ H_0^1,

where

a(u, v) = ∫_Ω ∇u · ∇v dx,        (8.11)
b(v) = ∫_Ω f v dx,        (8.12)

— is well-posed because a is bounded and coercive:

a(u, v) ≤ C |u|_1 |v|_1,   ∀ u, v ∈ H_0^1,        (8.13)
a(u, u) ≥ α |u|_1²,   ∀ u ∈ H_0^1.        (8.14)

Here | · |_1 denotes the H^1 semi-norm, which is known to be a norm on H_0^1 due to Poincaré. The well-posedness is in this case stated as
|u|_{H_0^1} ≤ (1/α) ‖f‖_{H^{−1}}.        (8.15)
In other words, −∆ takes a function u in H_0^1 and returns a function f = −∆u which is in H^{−1}. We have that ‖f‖_{−1} = ‖−∆u‖_{−1} ≤ C‖u‖_1. Also, (−∆)^{−1} takes a function f in H^{−1} and returns a function u = (−∆)^{−1} f which is in H_0^1. We have that ‖u‖_1 = ‖(−∆)^{−1} f‖_1 ≤ (1/α)‖f‖_{−1}. In fact, in this case α = C = 1.
This play with words and symbols may be formalized by using operator norms that are analogous to matrix norms. Let B ∈ R^{n×m}; then

‖B‖_{L(R^m, R^n)} = max_{x ∈ R^m} ‖Bx‖_{R^n} / ‖x‖_{R^m}.
Analogously, we may summarize the mapping properties of −∆ and (−∆)^{−1} in terms of the conditions of Lax-Milgram's theorem as

‖−∆‖_{L(H_0^1, H^{−1})} ≤ C   and   ‖(−∆)^{−1}‖_{L(H^{−1}, H_0^1)} ≤ 1/α,        (8.16)

where L(X, Y) denotes the space of bounded linear operators mapping X to Y. In other words, −∆ is a bounded linear map from H_0^1 to H^{−1}, and (−∆)^{−1} is a bounded linear map from H^{−1} to H_0^1. The crucial observation from functional analysis is that, in contrast to a matrix, which is a bounded linear map from R^n to R^m, an operator may map from one function space to another.
From Chapter 3 we know that the eigenvalues and eigenvectors of −∆ with homogeneous Dirichlet conditions on the unit interval in 1D are λ_k = (πk)² and e_k = sin(πkx), respectively. Hence, the eigenvalues of −∆ obviously tend to ∞ as k grows to ∞, and similarly the eigenvalues of (−∆)^{−1} accumulate at zero as k → ∞. Hence, the spectrum of −∆ is unbounded and the spectrum of (−∆)^{−1} has an accumulation point at zero. Still, the operator −∆ and its inverse are bounded from a functional analysis point of view, in the sense of (8.16).
Let us for the moment assume that we have access to an operator B with mapping properties inverse to those of A = −∆, i.e., a bounded map

B : H^{−1} → H_0^1.        (8.17)

Then the composition BA : H_0^1 → H_0^1 would be bounded. In the discrete case, the mapping property (8.17) translates to the fact that B should be spectrally equivalent with the inverse of A when B and A are both positive.
While the above discussion is mostly just a re-iteration of the concept of spectral equivalence in
the discrete case when the PDEs are elliptic, the insight from functional analysis can be powerful for
systems of PDEs. Let us consider the Stokes problem from Chapter 7. The problem reads:
  [ u ]   [ −∆   −∇ ] [ u ]
A [ p ] = [ ∇·    0  ] [ p ]
As discussed in Chapter 7,

A : H_0^1 × L^2 → H^{−1} × L^2

is a bounded linear mapping with a bounded inverse. Therefore, a preconditioner can be constructed as

B = [ (−∆)^{−1}   0 ]
    [ 0           I ]
Clearly,

B : H^{−1} × L^2 → H_0^1 × L^2,

and B is therefore a suitable preconditioner. However, we also notice that A and B^{−1} are quite different: A is indefinite, with both positive and negative eigenvalues, while B is clearly positive. Hence, the operators are not spectrally equivalent. Exercise 8.9 looks deeper into this construction of preconditioners for the Stokes problem. A more comprehensive description of this technique can be found in Mardal and Winther [2011].
8.4 Exercises
Exercise 8.1. Estimate the ratio of non-zeros per unknown of the stiffness matrix on the unit square with Lagrange elements of order 1, 2, 3 and 4. Hint: the number of non-zeros can be obtained from the function 'nnz' of a matrix object.
Exercise 8.2. Compute the smallest and largest eigenvalues of the mass matrix and the stiffness matrix in 1D, 2D and 3D. Assume that the condition number is of the form κ ≈ Ch^α, where C and α may depend on the number of space dimensions. Finally, compute the corresponding condition numbers. Does the condition number have the same dependence on the number of space dimensions?
Exercise 8.3. Repeat Exercise 8.2, but with Lagrange elements of order 1, 2 and 3. How does the order of the polynomial affect the eigenvalues and condition numbers?
Exercise 8.4. Compute the eigenvalues of the discretized Stokes problem using Taylor-Hood elements. Note that the problem is indefinite and that there are both positive and negative eigenvalues. An appropriate condition number is

κ = max_i |λ_i| / min_i |λ_i|,
where λi are the eigenvalues of A. Compute corresponding condition numbers for the Mini and Crouzeix-Raviart
elements. Are the condition numbers similar?
Exercise 8.5. Implement the Jacobi iteration for a 1D Poisson problem with homogeneous Dirichlet conditions. Start the iteration with an initial random vector and estimate the number of iterations required to reduce the L2 norm of the residual by a factor 10^4. For relevant code, see Example 8.3.
Exercise 8.6. Test the CG method without a preconditioner, with an ILU preconditioner, and with an AMG preconditioner for the Poisson problem in 1D and 2D with homogeneous Dirichlet conditions, with respect to different mesh resolutions. Do some of the iteration counts suggest spectral equivalence?

Exercise 8.7. Test CG, BiCGStab, and GMRES with ILU, AMG, and Jacobi preconditioning for

−µ∆u + v·∇u = f   in Ω,
u = 0   on ∂Ω,

where Ω is the unit square, v = c sin(7x), and c varies as 1, 10, 100, 1000, 10000, and the mesh resolution h varies as 1/8, 1/16, 1/32, 1/64. You may assume homogeneous Dirichlet conditions.
Exercise 8.8. The following code snippet shows the assembly of the matrix and preconditioner for a Stokes
problem:
Python code
a = inner(grad(u), grad(v))*dx + div(v)*p*dx + q*div(u)*dx
L = inner(f, v)*dx

# Form for use in constructing preconditioner matrix
b = inner(grad(u), grad(v))*dx + p*q*dx

# Assemble system
A, bb = assemble_system(a, L, bcs)

# Assemble preconditioner system
P, btmp = assemble_system(b, L, bcs)

# Create Krylov solver and AMG preconditioner
solver = KrylovSolver("tfqmr", "amg")

# Associate operator (A) and preconditioner matrix (P)
solver.set_operators(A, P)

# Solve
U = Function(W)
solver.solve(U.vector(), bb)
Here, "tfqmr" is a variant of the Minimal residual method and "amg" is an algebraic multigrid implementation
in HYPRE. Test, by varying the mesh resolution, whether the code produces an order–optimal preconditioner.
HINT: You might want to change the "parameters" as done in Example 8.3:
Python code
# Create Krylov solver and AMG preconditioner
solver = KrylovSolver("tfqmr", "amg")
solver.parameters["relative_tolerance"] = 1.0e-8
solver.parameters["absolute_tolerance"] = 1.0e-8
solver.parameters["monitor_convergence"] = True
solver.parameters["report"] = True
solver.parameters["maximum_iterations"] = 50000
Exercise 8.9. Consider the mixed formulation of linear elasticity that is appropriate when λ is large compared
to µ. That is,
Python code
a = inner(grad(u), grad(v))*dx + div(v)*p*dx + q*div(u)*dx - 1/lam*p*q*dx
L = inner(f, v)*dx

and consider the following two forms for constructing preconditioners:

Python code
b1 = inner(grad(u), grad(v))*dx + p*q*dx
b2 = inner(grad(u), grad(v))*dx + 1/lam*p*q*dx
Compare the efficiency of the different preconditioners when increasing the resolution and when λ → ∞. Can
you explain why the first preconditioner is the best?
9 Linear elasticity and singular problems
By Anders Logg, Kent–Andre Mardal
9.1 Introduction
Let us consider an elastic body Ω_0 that is deformed under a load to become Ω, and let χ denote the deformation mapping the undeformed state Ω_0 to the deformed state Ω. A point X in the body has then moved

u = x − X,        (9.1)

where x = χ(X).
Here, the domain Ω_0 ⊂ R³. From continuum mechanics, the elastic deformation is modelled by the stress tensor σ, which is a symmetric 3 × 3 tensor. In equilibrium (i.e., with no acceleration terms), Newton's second law states the balance of forces as

div σ = f,   in Ω,
σ · n = g,   on ∂Ω,

where f and g are body and surface forces, respectively, and n is the outward normal vector.
For small deformations of an isotropic medium, Hooke's law is a good approximation. Hooke's law states that

σ = 2µ e(u) + λ tr(e(u)) δ.

Here, e(u) is the strain tensor, or the symmetric gradient,

e(u) = (1/2)(∇u + (∇u)^T),
µ and λ are the Lamé constants, tr is the trace operator (the sum of the diagonal matrix entries), u is the displacement, and

δ = [ 1 0 0 ]
    [ 0 1 0 ]
    [ 0 0 1 ]

is the identity matrix.
From Newton's second law and Hooke's law we arrive directly at the equation of linear elasticity:

∇·(2µ e(u) + λ tr(e(u)) δ) = f.        (9.2)
The equation of linear elasticity (9.2) is an elliptic equation, but there are crucial differences between this equation and a standard elliptic equation like −∆u = f. These differences often cause problems in a numerical setting. To explain the numerical issues we will here focus on the differences between the three operators:

1. −∆ = ∇·∇ = div grad,
2. ∇·e = ∇·((1/2)(∇ + ∇^T)),
3. ∇∇· = grad div.

In particular, the difference between the operators in 1 and 2 is that ∇·e has a larger kernel than −∆. The kernel consists of the rigid motions, and this leads to the usage of one of Korn's lemmas. This is the subject of Section 9.2. The kernels of the operators grad div and div grad are also different, but here the kernel of grad div is in fact infinite-dimensional, and this has different consequences for the numerical algorithms, which do not necessarily pick up this kernel at all. This is discussed in Section 9.3.
The challenge with the ∇·e operator is the handling of the singularity in the case of pure Neumann conditions. Let us therefore start with the simpler problem of the Poisson problem with Neumann conditions, i.e.,

−∆u = f,   in Ω,        (9.3)
∂u/∂n = g,   on ∂Ω.        (9.4)
It is easy to see that this problem is singular: let u be a solution of the above equation; then u + C with C ∈ R is also a solution, because −∆u = −∆(u + C) = f and ∂u/∂n = ∂(u + C)/∂n = g. Hence, the solution is only determined up to a constant. This means that the kernel is one-dimensional.
A proper formulation of the above problem can be obtained by using the method of Lagrange multipliers to fixate the element of the one-dimensional kernel. The following weak formulation is well-posed: Find u ∈ H^1 and λ ∈ R such that

a(u, v) + b(λ, v) = f(v),   ∀ v ∈ H^1,        (9.5)
b(µ, u) = 0,   ∀ µ ∈ R.        (9.6)
Here,

a(u, v) = (∇u, ∇v),        (9.7)
b(λ, v) = (λ, v),        (9.8)
f(v) = (f, v) + ∫_{∂Ω} g v ds.        (9.9)
Hence, the method of Lagrange multipliers turns the original problem into a saddle point problem similar to that in Chapter 7. However, in this case the Brezzi conditions are easily verified. We remark, however, that this formulation makes the problem indefinite rather than positive definite, and for this reason alternative techniques such as pin-pointing are often used instead. We will not advocate that approach, as it often causes numerical problems. Instead, we include a code example that demonstrates how this problem can be implemented with the method of Lagrange multipliers in FEniCS.
Python code
from dolfin import *

mesh = UnitSquareMesh(64, 64)

# Build function space with Lagrange multiplier
P1 = FiniteElement("Lagrange", mesh.ufl_cell(), 1)
R = FiniteElement("Real", mesh.ufl_cell(), 0)
W = FunctionSpace(mesh, P1 * R)

# Define variational problem
(u, c) = TrialFunctions(W)
(v, d) = TestFunctions(W)
f = Expression("10*exp(-(pow(x[0] - 0.5, 2) + pow(x[1] - 0.5, 2)) / 0.02)", degree=2)
g = Expression("-sin(5*x[0])", degree=2)
a = (inner(grad(u), grad(v)) + c*v + u*d)*dx
L = f*v*dx + g*v*ds

# Compute solution
w = Function(W)
solve(a == L, w)
(u, c) = w.split()

# Plot solution
plot(u, interactive=True)
The kernel of the e operator is the space of rigid motions, RM. The space consists of translations and rotations. Rigid motions are of the following form in 2D and 3D:

RM_2D = { (a_0, a_1) + a_2 (−y, x) },        (9.10)

RM_3D = { (a_0, a_1, a_2) + [  0    a_3  a_4 ] (x, y, z)^T }.        (9.11)
                            [ −a_3   0   a_5 ]
                            [ −a_4  −a_5  0  ]

Hence, the kernel in 2D is three-dimensional and may be expressed as above in terms of the degrees of freedom (a_0, a_1, a_2), whereas the kernel in 3D is six-dimensional, (a_0, ..., a_5).
Korn's lemmas state suitable conditions for solvability. Here, we include two of the three inequalities typically listed.

• The second lemma: For all u ∈ H_0^1 we have that ‖e(u)‖ ≥ C‖u‖_1.

These lemmas should be compared with Poincaré's lemma and the equivalence of the |·|_1 and ‖·‖_1 norms. The second lemma states that when we have homogeneous Dirichlet conditions, we obtain a well-posed problem in a similar manner as for a standard elliptic problem. This case is often called fully clamped conditions. For the Neumann problem, however, coercivity is not obtained unless we remove the complete set of rigid motions from the function space used for the trial and test functions. Removing the rigid motions is most easily done by using the method of Lagrange multipliers.
Let us now consider a weak formulation of the linear elasticity problem and describe how to implement it in FEniCS. For now, we consider the case where λ and µ are of comparable magnitude; in the next section we consider the case where λ ≫ µ. The weak formulation of the linear elasticity problem with pure Neumann conditions is: Find u ∈ H^1 and λ ∈ RM such that

a(u, v) + b(λ, v) = f(v),   ∀ v ∈ H^1,        (9.12)
b(µ, u) = 0,   ∀ µ ∈ RM.        (9.13)

Here, a(u, v) = (2µ e(u), e(v)) + (λ ∇·u, ∇·v) and b(λ, v) = (λ, v), cf. the code below (note the unfortunate double use of µ and λ as Lamé constants and as multiplier functions). As we know from Chapter 7, this is a saddle point problem and we need to comply with the Brezzi conditions. Verifying these conditions is left as Exercise 9.4.
Example 9.1. Our brain and spinal cord float in a water-like fluid called the cerebrospinal fluid. While the purpose of this fluid is not fully known, it is known that the pressure in the fluid oscillates by about 5-10 mmHg during a cardiac cycle, which lasts approximately one second, c.f., e.g., Eide [2016]. The Young's modulus of brain tissue has been estimated at around 16 kPa, and 1 mmHg ≈ 133 Pa, c.f., e.g., Støverud et al. [2016]. To compute the deformation of the brain during a cardiac cycle we solve the linear elasticity problem with Neumann conditions set as a pressure of 1 mmHg and ... The following code shows the implementation in FEniCS. The mesh of the brain was in this case obtained from a T1 magnetic resonance image, and the segmentation was performed using FreeSurfer.
Python code
from fenics import *

mesh = Mesh('mesh/res32.xdmf') # mm

plot(mesh, interactive=True)

# Since the mesh is in mm, pressure units in pascal must be scaled by alpha = (1e6)**(-1)
alpha = (1e6)**(-1)

# Mark boundaries
class Neumann_boundary(SubDomain):
    def inside(self, x, on_boundry):
        return on_boundry

mf = FacetFunction("size_t", mesh)
mf.set_all(0)

Neumann_boundary().mark(mf, 1)
ds = ds[mf]

# Continuum mechanics
E = 16*1e3 * alpha
nu = 0.25
mu, lambda_ = Constant(E/(2*(1 + nu))), Constant(E*nu/((1 + nu)*(1 - 2*nu)))
epsilon = lambda u: sym(grad(u))

p_outside = 133 * alpha
n = FacetNormal(mesh)
f = Constant((0, 0, 0))

V = VectorFunctionSpace(mesh, "Lagrange", 1)

# --------------- Handle Neumann-problem --------------- #
R = FunctionSpace(mesh, 'R', 0) # space for one Lagrange multiplier
M = MixedFunctionSpace([R]*6)   # space for all multipliers
W = MixedFunctionSpace([V, M])
u, mus = TrialFunctions(W)
v, nus = TestFunctions(W)

# Establish a basis for the nullspace of RM
e0 = Constant((1, 0, 0)) # translations
e1 = Constant((0, 1, 0))
e2 = Constant((0, 0, 1))

e3 = Expression(('-x[1]', 'x[0]', '0'))  # rotations
e4 = Expression(('-x[2]', '0', 'x[0]'))
e5 = Expression(('0', '-x[2]', 'x[1]'))
basis_vectors = [e0, e1, e2, e3, e4, e5]

a = 2*mu*inner(epsilon(u), epsilon(v))*dx + lambda_*inner(div(u), div(v))*dx
L = inner(f, v)*dx + p_outside*inner(n, v)*ds(1)

# Lagrange multipliers' contribution to a
for i, e in enumerate(basis_vectors):
    mu = mus[i]
    nu = nus[i]
    a += mu*inner(v, e)*dx + nu*inner(u, e)*dx

# -------------------------------------------------------- #

# Assemble the system
A = PETScMatrix()
b = PETScVector()
assemble_system(a, L, A_tensor=A, b_tensor=b)

# Solve
uh = Function(W)
solver = PETScLUSolver('mumps') # NOTE: we use a direct solver for simplicity
solver.set_operator(A)
solver.solve(uh.vector(), b)

# Split displacement and multipliers. Plot
u, ls = uh.split(deepcopy=True)
plot(u, mode='displacement', title='Neumann_displacement', interactive=True)

file = File('deformed_brain.pvd')
file << u
9.3 Locking
The locking phenomena has nothing to do with the problem related to the rigid motions studied in
the previous section. Therefore, we consider locking in the simplest case possible where we have
homogenous Dirichlet conditions. In this case the elasticity equation can be reduced to
The weak formulation of the problem then becomes: Find u ∈ H01 such that
where
The phenomen locking is a purely numerical artifact that arise when λ µ. Roughly speaking,
approximating ∇ and ∇· require different methods. While vertices based approximations work fine
for ∇, edge based methods are more natural for ∇· since this operator relates directly to the flux
through the element edges.
For smooth functions, it can be verified directly that

∆ = ∇·∇ = ∇∇· − ∇×∇×.

Furthermore, it is well known (the Helmholtz decomposition theorem) that any field in L² or H¹ can be decomposed into the gradient of a scalar potential (an irrotational, curl-free vector field) and the curl of a potential (a solenoidal, divergence-free vector field). That is,

u = ∇φ + ∇×ψ,

where

∇·(∇×ψ) = 0,        (9.19)
∇×(∇φ) = 0.        (9.20)
A mixed formulation can then be obtained by introducing the solid pressure p = (µ + λ)∇·u:

−µ∆u − ∇p = f,
∇·u − (1/(µ + λ)) p = 0.
This system of equations is similar to the Stokes problem. Hence, we may formulate a weak problem as follows: Find u ∈ H_0^1 and p ∈ L² such that

a(u, v) + b(p, v) = (f, v),   ∀ v ∈ H_0^1,
b(q, u) − c(p, q) = 0,   ∀ q ∈ L²,

where a(u, v) = µ(∇u, ∇v), b(p, v) = (p, ∇·v), and c(p, q) = (1/(µ + λ))(p, q). The case λ → ∞ then represents the Stokes problem, as 1/(µ + λ) → 0. Hence, for this problem we know that stable discretizations can be obtained as long as we have Stokes-stable elements, like for instance Taylor–Hood. We also remark that Stokes-stable elements handle any µ, λ, because −c(p, q) is a negative term that only stabilizes. In fact, this problem is identical to the penalty method that was proposed for the Stokes problem.
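A minimal FEniCS sketch of this mixed formulation with Taylor–Hood elements follows; the mesh, the values of µ and λ, and the right-hand side below are our own illustrative choices:

Python code
from dolfin import *

mesh = UnitSquareMesh(32, 32)
V = VectorFunctionSpace(mesh, "Lagrange", 2)  # displacement (Taylor-Hood)
Q = FunctionSpace(mesh, "Lagrange", 1)        # solid pressure
W = MixedFunctionSpace([V, Q])

u, p = TrialFunctions(W)
v, q = TestFunctions(W)

mu, lam = Constant(1.0), Constant(1.0e4)  # illustrative Lame constants
f = Constant((0.0, -1.0))                 # illustrative load

# the -c(p, q) term only stabilizes the saddle point problem
a = (mu*inner(grad(u), grad(v))*dx + p*div(v)*dx
     + q*div(u)*dx - 1.0/(mu + lam)*p*q*dx)
L = inner(f, v)*dx

bc = DirichletBC(W.sub(0), Constant((0.0, 0.0)), "on_boundary")
w = Function(W)
solve(a == L, w, bc)
u, p = w.split()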
Exercise 9.1. Show that the inner product of a symmetric matrix A and a matrix B equals the inner product of A and the symmetric part of B.

Exercise 9.2. Show that the term div e(u) in a weak setting may be written as (e(u), e(v)). Use the result of Exercise 9.1.
Exercise 9.3. Show that the Brezzi conditions (7.14)-(7.17) hold for the singular problem with homogeneous Neumann conditions for the Poisson problem, (9.6)-(9.9). Hint: use the following version of Poincaré's lemma:

‖u − ū‖_0 ≤ C‖∇u‖_0,   ∀ u ∈ H^1.

Here, ū = (1/|Ω|) ∫_Ω u dx. As always, the inf-sup condition is challenging, but notice that

sup_{u ∈ V_h} b(u, q)/‖u‖_{V_h} ≥ b(ū, q)/‖ū‖_{V_h}.
Exercise 9.4. Show that three of the Brezzi conditions, (7.14)-(7.16), are valid for the linear elasticity problem with pure Neumann conditions, (9.12)-(9.13). Hint: use Korn's lemma for the coercivity. As always, the inf-sup condition is challenging, and we refer to Kuchta et al. [2016].
Exercise 9.5. We will consider the topic of locking. Consider the linear elasticity problem on the domain Ω = (0, 1)², with Dirichlet conditions given by the analytical solution u_analytical = (∂φ/∂y, −∂φ/∂x), where φ = sin(πxy). Here, by construction, ∇·u_analytical = 0.
a) Derive an expression for f. Check that the expression is independent of λ.
b) Compute the numerical error for λ = 1, 100, 10000 at h = 1/8, 1/16, 1/32, 1/64, for polynomial order both 1 and 2.
c) Compute the order of convergence for the different λ. Is locking occurring?
10 Finite element assembly
By Anders Logg, Kent–Andre Mardal
The finite element discretization of a variational problem a(u, v) = L(v) leads to a linear system

AU = b,        (10.1)

where

A_{ij} = a(φ_j, φ_i)   and   b_i = L(φ_i).
A fundamental question is: how do we compute A? An obvious algorithm is:

for i = 1, ..., N do
  for j = 1, ..., N do
    A_{ij} = a(φ_j, φ_i)
  end for
end for

This algorithm is very inefficient! The main reason is:

1. A is sparse, so most of the N² entries computed are zero.
Instead, the matrix is assembled cell by cell, using a local-to-global map ι_T for each cell T. Note that the numbering is arbitrary as long as neighboring T and T' agree. However, some numbering schemes are more efficient than others, especially for parallel computing. Note that

φ_{ι_T(i)}|_T = φ_i^T   ⇔   φ_I|_T = φ_{ι_T^{-1}(I)}^T (if it exists).
Figure 10.1: Red numbers indicate the local numbering, black numbers the global numbering. Here P2 elements were used, dim P_K = 6.
We then define the element matrix

A_{ij}^T = a_T(φ_i^T, φ_j^T).        (10.3)

Then

A_{IJ} = a(φ_J, φ_I) = Σ_{T ∈ T} a_T(φ_J, φ_I)        (10.4)
       = Σ_{T ∈ T_{IJ}} a_T(φ_J, φ_I),   (the sum over all triangles where both φ_I and φ_J are nonzero)        (10.5)
       = Σ_{T ∈ T_{IJ}} a_T(φ_{ι_T^{-1}(J)}^T, φ_{ι_T^{-1}(I)}^T)        (10.6)
       = Σ_{T ∈ T_{IJ}} A_{ι_T^{-1}(I) ι_T^{-1}(J)}^T.        (10.7)
Figure 10.2: The affine mapping x = F_T(x̂) from the reference element T̂, with vertices x̂_1, x̂_2, x̂_3 = (0, 1), to the physical element T, with vertices x_1, x_2, x_3.
The assembly algorithm is then:

for T ∈ T do
  Compute A^T
  Compute ι_T
  Insert A^T into A according to ι_T
end for
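A minimal sketch of this algorithm in Python, for P1 elements in 1D where ι_T simply maps the local indices (0, 1) of element T to the global indices (T, T+1) (our own illustration):

Python code
from numpy import zeros

N = 5                  # number of elements on (0, 1)
h = 1.0/N
A = zeros((N+1, N+1))  # global matrix (dense here, for simplicity)

# local stiffness matrix A^T for -u'' on an element of length h
AT = [[ 1.0/h, -1.0/h],
      [-1.0/h,  1.0/h]]

for T in range(N):       # loop over elements
    iota = [T, T + 1]    # local-to-global map iota_T
    for i in range(2):
        for j in range(2):
            A[iota[i], iota[j]] += AT[i][j]  # insert A^T into A
print A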
To be able to compute A^T we will use an affine mapping between the reference element T̂ and T, see Figure 10.2:

x = F_T(x̂) = B_T x̂ + c_T,        (10.8)

where B_T is a matrix and c_T is a vector. Let us look at the reference basis functions for P1 elements,

Φ_0(x̂) = 1 − x̂_1 − x̂_2,        (10.9)
Φ_1(x̂) = x̂_1,        (10.10)
Φ_2(x̂) = x̂_2.        (10.11)

The mapping may then be written

F_T(x̂) = Φ_0(x̂) x_0 + Φ_1(x̂) x_1 + Φ_2(x̂) x_2.        (10.12)
11 The finite element method for time-dependent problems

In this chapter we consider time-dependent problems,

ODE:  u̇ = f(u, t),
PDE:  u̇ + A(u) = f(x, t).        (11.1)

Strong form

u : [0, T] → R^N,
f : R^N × R → R^N.        (11.3)
Weak form

Find u ∈ V such that

∫_0^T v · u̇ dt = ∫_0^T v · f dt   ∀ v ∈ V̂,        (11.4)
where Vk and V̂k are the discrete trial space and discrete test space, respectively.
Solution algorithm
There are two different methods: continuous Galerkin, using CGq elements, or discontinuous Galerkin,
using DGq elements. In this chapter we will go through continuous Galerkin.
Figure 11.1
Figure 11.2
I_n = (t_{n−1}, t_n),   k_n = t_n − t_{n−1} (the time step).        (11.6)
Here U^n is unknown and U^{n−1} is known. Note that this derivation holds for all q, but it is sufficient to determine U^n for q = 1 only! We approximate (11.17) by quadrature,

∫_{t_{n−1}}^{t_n} f dt ≈ k_n f((U^{n−1} + U^n)/2, (t_{n−1} + t_n)/2),        (11.18)

and obtain

U^n = U^{n−1} + k_n f((U^{n−1} + U^n)/2, (t_{n−1} + t_n)/2).        (11.19)
In general, (11.19) is a nonlinear system. We use one of the following two approaches to solve it:

i) Fixed-point iteration,
ii) Newton's method.

We will consider fixed-point iteration in this chapter. Take U^{n,0} = U^{n−1}; the fixed-point iteration for (11.19) then reads

U^{n,k} = U^{n−1} + k_n f((U^{n−1} + U^{n,k−1})/2, (t_{n−1} + t_n)/2).        (11.20)
An important question is: when does (11.20) converge? Remember the contraction mapping theorem: the iteration

x^k = T(x^{k−1})        (11.21)

converges if

‖T'‖ ≤ M < 1.        (11.22)
Here:

T(x) = U^{n−1} + k_n f((U^{n−1} + x)/2, (t_{n−1} + t_n)/2)        (11.23)

⇒  T'(x) = (k_n/2) J((U^{n−1} + x)/2, (t_{n−1} + t_n)/2),        (11.24)

where J is the Jacobian of f, defined by

J_{ij} = ∂f_i/∂U_j.        (11.25)
From equation (11.24) and the contraction mapping theorem we see that equation (11.20) converges when k_n is small enough.
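A small self-contained sketch of (11.19) solved by the fixed-point iteration (11.20), for the scalar model problem u̇ = −u with u(0) = 1 (our own illustration):

Python code
from math import exp

def f(u, t):
    return -u          # model problem: u' = -u

T, k = 1.0, 0.1        # end time and time step k_n
U, t = 1.0, 0.0        # U^0 = u(0) = 1
while t < T - 1e-12:
    Uprev = U
    x = Uprev          # U^{n,0} = U^{n-1}
    while True:        # fixed-point iteration (11.20)
        x_new = Uprev + k*f(0.5*(Uprev + x), t + 0.5*k)
        if abs(x_new - x) < 1e-12:
            break
        x = x_new
    U = x_new          # midpoint rule (11.19)
    t += k
print U, "exact:", exp(-T)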
Stiff problems

If k_n is small enough to give an accurate solution, but not small enough for (11.20) to converge, we say that the problem is stiff.
⇒ ∫_{t_{n−1}}^{t_n} (Σ_{j=0}^{q} U^{n,j} λ̇_j(t)) · λ_i^{q−1}(t) dt = ∫_{t_{n−1}}^{t_n} λ_i^{q−1}(t) f dt        (11.28)

This leads to a q × q linear system to be solved. It gives an implicit Runge–Kutta method for computing U^{n,j}, j = 1, 2, ..., q.
Strong form
u̇ + A(u) = f in Ω × (0, T ],
u(·, 0) = u0 in Ω, (11.29)
+ BC.
Weak form
Solution algorithm
V_hk = span{v = v_h v_k : v_h ∈ V_h, v_k ∈ V_k},
V̂_hk = span{v = v_h v_k : v_h ∈ V̂_h, v_k ∈ V̂_k}.        (11.32)
∫_0^T ∫_Ω v_h v_k u̇_hk dx dt + ∫_0^T ∫_Ω v_h v_k A(u_hk) dx dt = ∫_0^T ∫_Ω v f dx dt        (11.33)

∫_0^T v_k ( ∫_Ω v_h u̇_hk dx ) dt + ∫_0^T v_k ( ∫_Ω v_h A(u_hk) dx ) dt = ∫_0^T ∫_Ω v f dx dt        (11.34)
Take

u_hk(x, t) = Σ_{j=1}^{N} U_j(t) φ_j(x),   v_h = φ_i,   i = 1, 2, ..., N.        (11.35)
∫_0^T v_k Σ_{j=1}^{N} U̇_j ( ∫_Ω φ_i φ_j dx ) dt + ∫_0^T v_k Σ_{j=1}^{N} U_j ( ∫_Ω φ_i A(φ_j) dx ) dt = ∫_0^T ∫_Ω v f dx dt        (11.36)
Thus, we obtain

∫_0^T v_k · MU̇ dt + ∫_0^T v_k · A_k(U) dt = ∫_0^T v_k · b dt,        (11.39)

where

U = (U_1, U_2, ..., U_N)^T,        (11.40)
b = ∫_Ω v_h f dx.        (11.41)
Figure 11.3: The relations between the strong form u̇ + A(u) = f, the spatially discretized system MU̇ + A_k(U) = b (FEM in space), FEM in space-time, FEM in time, and a timestepping scheme.
D. Braess. Finite elements: Theory, fast solvers, and applications in solid mechanics. Cambridge University
Press, 2007.
S. C. Brenner and R. Scott. The mathematical theory of finite element methods, volume 15. Springer Science
& Business Media, 2008.
P. K. Eide. The correlation between pulsatile intracranial pressure and indices of intracranial pressure-
volume reserve capacity: results from ventricular infusion testing. Journal of neurosurgery, 125(6):
1493–1503, 2016.
H. C. Elman, D. Silvester, and A. Wathen. Finite elements and fast iterative solvers: with applications in
incompressible fluid dynamics. Oxford University Press, 2014.
M. Kuchta, K.-A. Mardal, and M. Mortensen. On the singular Neumann problem in linear elasticity. arXiv preprint arXiv:1609.09425, 2016.
K.-A. Mardal and J. B. Haga. Block preconditioning of systems of PDEs. Automated Solution of Differential Equations by the Finite Element Method, pages 643–655, 2012.
A. Quarteroni and A. Valli. Numerical approximation of partial differential equations, volume 23. Springer
Science & Business Media, 2008.