
COMPLEX VARIABLES WITH APPLICATIONS

Jeremy Orloff
Massachusetts Institute of Technology
This text is disseminated via the Open Education Resource (OER) LibreTexts Project (https://LibreTexts.org) and like the hundreds
of other texts available within this powerful platform, it is freely available for reading, printing and "consuming." Most, but not all,
pages in the library have licenses that may allow individuals to make changes, save, and print this book. Carefully
consult the applicable license(s) before pursuing such effects.
Instructors can adopt existing LibreTexts texts or Remix them to quickly build course-specific resources to meet the needs of their
students. Unlike traditional textbooks, LibreTexts’ web based origins allow powerful integration of advanced features and new
technologies to support learning.

The LibreTexts mission is to unite students, faculty and scholars in a cooperative effort to develop an easy-to-use online platform
for the construction, customization, and dissemination of OER content to reduce the burdens of unreasonable textbook costs to our
students and society. The LibreTexts project is a multi-institutional collaborative venture to develop the next generation of open-
access texts to improve postsecondary education at all levels of higher learning by developing an Open Access Resource
environment. The project currently consists of 14 independently operating and interconnected libraries that are constantly being
optimized by students, faculty, and outside experts to supplant conventional paper-based books. These free textbook alternatives are
organized within a central environment that is both vertically (from advanced to basic level) and horizontally (across different fields)
integrated.
The LibreTexts libraries are Powered by MindTouch® and are supported by the Department of Education Open Textbook Pilot
Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions
Program, and Merlot. This material is based upon work supported by the National Science Foundation under Grant No. 1246120,
1525057, and 1413739. Unless otherwise noted, LibreTexts content is licensed by CC BY-NC-SA 3.0.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not
necessarily reflect the views of the National Science Foundation nor the US Department of Education.
Have questions or comments? For information about adoptions or adaptations contact [email protected]. More information on our
activities can be found via Facebook (https://facebook.com/Libretexts), Twitter (https://twitter.com/libretexts), or our blog
(http://Blog.Libretexts.org).

This text was compiled on 02/01/2023


TABLE OF CONTENTS
Licensing
Preface

1: Complex Algebra and the Complex Plane


1.1: Motivation
1.2: Fundamental Theorem of Algebra
1.3: Terminology and Basic Arithmetic
1.4: The Complex Plane
1.5: Polar Coordinates
1.6: Euler's Formula
1.7: The Exponential Function
1.8: Complex Functions as Mappings
1.9: The function arg(z)
1.10: Concise summary of branches and branch cuts
1.11: The Function log(z)
1.12: Inverse Euler formula
1.13: de Moivre's formula
1.14: Representing Complex Multiplication as Matrix Multiplication

2: Analytic Functions
2.1: The Derivative - Preliminaries
2.2: Open Disks, Open Deleted Disks, and Open Regions
2.3: Limits and Continuous Functions
2.4: The Point at Infinity
2.5: Derivatives
2.6: Cauchy-Riemann Equations
2.7: Cauchy-Riemann all the way down
2.8: Gallery of Functions
2.9: Branch Cuts and Function Composition
2.10: Appendix - Limits

3: Multivariable Calculus (Review)


3.1: Terminology and Notation
3.2: Parametrized curves
3.3: Chain rule
3.4: Grad, curl and div
3.5: Level Curves
3.6: Line Integrals
3.7: Green's Theorem
3.8: Extensions and Applications of Green’s Theorem

4: Line Integrals and Cauchy’s Theorem


4.1: Introduction to Line Integrals and Cauchy’s Theorem
4.2: Complex Line Integrals
4.3: Fundamental Theorem for Complex Line Integrals

4.4: Path Independence
4.5: Examples
4.6: Cauchy's Theorem
4.7: Extensions of Cauchy's theorem

5: Cauchy Integral Formula


5.1: Cauchy's Integral for Functions
5.2: Cauchy’s Integral Formula for Derivatives
5.3: Proof of Cauchy's integral formula
5.4: Proof of Cauchy's integral formula for derivatives
5.5: Amazing consequence of Cauchy’s integral formula

6: Harmonic Functions
6.2: Harmonic Functions
6.3: Del notation
6.4: A second Proof that u and v are Harmonic
6.5: Maximum Principle and Mean Value Property
6.6: Orthogonality of Curves

7: Two Dimensional Hydrodynamics and Complex Potentials


7.1: Velocity Fields
7.2: Stationary Flows
7.3: Physical Assumptions and Mathematical Consequences
7.4: Complex Potentials
7.5: Stream Functions
7.6: More Examples with Pretty Pictures

8: Taylor and Laurent Series


8.1: Geometric Series
8.2: Convergence of Power Series
8.3: Taylor Series
8.4: Taylor Series Examples
8.5: Singularities
8.6: Appendix- Convergence
8.7: Laurent Series
8.8: Digression to Differential Equations
8.9: Poles

9: Residue Theorem
9.1: Poles and Zeros
9.2: Holomorphic and Meromorphic Functions
9.3: Behavior of functions near zeros and poles
9.4: Residues
9.5: Cauchy Residue Theorem
9.6: Residue at ∞

10: Definite Integrals Using the Residue Theorem


10.1: Integrals of functions that decay
10.2: Integrals

10.3: Trigonometric Integrals
10.4: Integrands with branch cuts
10.5: Cauchy principal value
10.6: Integrals over portions of circles
10.7: Fourier transform
10.8: Solving DEs using the Fourier transform

11: Conformal Transformations


11.1: Geometric Definition of Conformal Mappings
11.2: Tangent vectors as complex numbers
11.3: Analytic functions are Conformal
11.4: Digression to harmonic functions
11.5: Riemann Mapping Theorem
11.6: Examples of conformal maps and exercises
11.7: Fractional Linear Transformations
11.8: Reflection and symmetry
11.9: Flows around cylinders
11.10: Solving the Dirichlet problem for harmonic functions

12: Argument Principle


12.1: Principle of the Argument
12.2: Nyquist Criterion for Stability
12.3: A Bit on Negative Feedback

13: Laplace Transform


13.1: A brief introduction to linear time invariant systems
13.2: Laplace transform
13.3: Exponential Type
13.4: Properties of Laplace transform
13.5: Differential equations
13.6: Table of Laplace transforms
13.7: System Functions and the Laplace Transform
13.8: Laplace inverse
13.9: Delay and Feedback

14: Analytic Continuation and the Gamma Function


14.1: Analytic Continuation
14.2: Definition and properties of the Gamma function
14.3: Connection to Laplace
14.4: Proofs of (some) properties

Index
Detailed Licensing

Licensing
A detailed breakdown of this resource's licensing can be found in Back Matter/Detailed Licensing.

Preface
This text is an adaptation of a class originally taught by Andre Nachbin, who deserves most of the credit for the course design. The
topic notes were written by me with many corrections and improvements contributed by Jörn Dunkel. Of course, any responsibility
for typos and errors lies entirely with me.

Brief course description


Complex analysis is a beautiful, tightly integrated subject. It revolves around complex analytic functions. These are functions that
have a complex derivative. Unlike calculus using real variables, the mere existence of a complex derivative has strong implications
for the properties of the function.
Complex analysis is a basic tool in many mathematical theories. By itself and through some of these theories it also has a great
many practical applications.
There are a small number of far-reaching theorems that we’ll explore in the first part of the class. Along the way, we’ll touch on
some mathematical and engineering applications of these theorems. The last third of the class will be devoted to a deeper look at
applications.
The main theorems are Cauchy’s Theorem, Cauchy’s integral formula, and the existence of Taylor and Laurent series. Among the
applications will be harmonic functions, two dimensional fluid flow, easy methods for computing (seemingly) hard integrals,
Laplace transforms, and Fourier transforms with applications to engineering and physics.

Topics needed from prerequisite math classes


We will review these topics as we need them:
Limits
Power series
Vector fields
Line integrals
Green’s theorem
Level of mathematical rigor

We will make careful arguments to justify our results. In many places we will allow ourselves to skip some technical details if they get in the way of understanding the main point, but we will note what was left out.

Speed of the class


Do not be fooled by the fact that things start slow. This is the kind of course where things keep on building up continuously, with new things appearing rather often. Nothing is really very hard, but the total integration can be staggering - and it will sneak up on you if you do not watch it. Or, to express it in mathematical-sounding lingo, this course is "locally easy" but "globally hard". That means that if you keep up-to-date with the homework and lectures, and read the class notes regularly, you should not have any problems.
- Jeremy Orloff

CHAPTER OVERVIEW
1: Complex Algebra and the Complex Plane
1.1: Motivation
1.2: Fundamental Theorem of Algebra
1.3: Terminology and Basic Arithmetic
1.4: The Complex Plane
1.5: Polar Coordinates
1.6: Euler's Formula
1.7: The Exponential Function
1.8: Complex Functions as Mappings
1.9: The function arg(z)
1.10: Concise summary of branches and branch cuts
1.11: The Function log(z)
1.12: Inverse Euler formula
1.13: de Moivre's formula
1.14: Representing Complex Multiplication as Matrix Multiplication

This page titled 1: Complex Algebra and the Complex Plane is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or
curated by Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a
detailed edit history is available upon request.

1.1: Motivation
The equation x^2 = −1 has no real solutions, yet we know that this equation arises naturally and we want to use its roots. So we make up a new symbol for the roots and call it a complex number.

 Definition: complex numbers


The symbols ±i will stand for the solutions to the equation x^2 = −1. We will call these new numbers complex numbers. We will also write

√(−1) = ±i     (1.1.1)

Note: Engineers typically use j while mathematicians and physicists use i. We'll follow the mathematical custom in 18.04.

The number i is called an imaginary number. This is a historical term. These are perfectly valid numbers that don't happen to lie on the real number line. (Our motivation for using complex numbers is not the same as the historical motivation. Historically, mathematicians were willing to say x^2 = −1 had no solutions. The issue that pushed them to accept complex numbers had to do with the formula for the roots of cubics. Cubics always have at least one real root, and when square roots of negative numbers appeared in this formula, even for the real roots, mathematicians were forced to take a closer look at these (seemingly) exotic objects.) We're going to look at the algebra, geometry and, most important for us, the exponentiation of complex numbers.
Before starting a systematic exposition of complex numbers, we'll work a simple example.

 Example 1.1.1
Solve the equation z^2 + z + 1 = 0.
Solution
We can apply the quadratic formula to get

z = (−1 ± √(1 − 4))/2 = (−1 ± √(−3))/2 = (−1 ± √3 √(−1))/2 = (−1 ± √3 i)/2.
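As a quick numerical sanity check (our addition, not part of the original text), here is a minimal Python sketch using the standard cmath module to verify these roots; the variable names are our own.

import cmath

# The two roots of z^2 + z + 1 = 0 from the quadratic formula, using cmath.sqrt(-3)
roots = [(-1 + cmath.sqrt(-3)) / 2, (-1 - cmath.sqrt(-3)) / 2]
for z in roots:
    print(z, z**2 + z + 1)   # the second value should be essentially 0 (up to roundoff)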

Think: Do you know how to solve quadratic equations by completing the square? This is how the quadratic formula is derived and
is well worth knowing!

This page titled 1.1: Motivation is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff (MIT
OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon
request.

1.2: Fundamental Theorem of Algebra
One of the reasons for using complex numbers is because allowing complex roots means every polynomial has exactly the
expected number of roots. This is called the fundamental theorem of algebra.

 Theorem 1.2.1 Fundamental theorem of algebra


A polynomial of degree n has exactly n complex roots (repeated roots are counted with multiplicity).

In a few weeks, we will be able to prove this theorem as a remarkably simple consequence of one of our main theorems.

This page titled 1.2: Fundamental Theorem of Algebra is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by
Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit
history is available upon request.

1.3: Terminology and Basic Arithmetic
 Definition

Complex numbers are defined as the set of all numbers


z = x + yi (1.3.1)

where x and y are real numbers.


We denote the set of all complex numbers by C.
We call x the real part of z . This is denoted by x = Re(z) .
We call y the imaginary part of z . This is denoted by y = Im(z) .
Important: The imaginary part of z is a real number. It does not include the i.

The basic arithmetic operations follow the standard rules. All you have to remember is that i^2 = −1. We will go through these quickly using some simple examples. It almost goes without saying that it is essential that you become fluent with these manipulations.
Addition: (3 + 4i) + (7 + 11i) = 10 + 15i
Subtraction: (3 + 4i) − (7 + 11i) = −4 − 7i
Multiplication: (3 + 4i)(7 + 11i) = 21 + 28i + 33i + 44i^2 = −23 + 61i.
Here we have used the fact that 44i^2 = −44.
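For readers who like to experiment, here is a short sketch (ours) using Python's built-in complex type, where j plays the role of i, to check these three computations:

print((3 + 4j) + (7 + 11j))   # expect (10+15j)
print((3 + 4j) - (7 + 11j))   # expect (-4-7j)
print((3 + 4j) * (7 + 11j))   # expect (-23+61j)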
Before talking about division and absolute value we introduce a new operation called conjugation. It will prove useful to have a
name and symbol for this, since we will use it frequently.

 Definition: Complex Conjugation

Complex conjugation is denoted with a bar and defined by

\overline{x + iy} = x − iy     (1.3.2)

If z = x + iy then its conjugate is z̄ = x − iy and we read this as "z-bar = x − iy".

 Example 1.3.1
\overline{3 + 5i} = 3 − 5i.

The following is a very useful property of conjugation: If z = x + iy then

z z̄ = (x + iy)(x − iy) = x^2 + y^2

Note that z z̄ is real. We will use this property in the next example to help with division.

 Example 1.3.2 (Division).


Write (3 + 4i)/(1 + 2i) in the standard form x + iy.

Solution
We use the useful property of conjugation to clear the denominator:

(3 + 4i)/(1 + 2i) = (3 + 4i)/(1 + 2i) · (1 − 2i)/(1 − 2i) = (11 − 2i)/5 = 11/5 − (2/5) i.
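As an informal check (ours, not from the text), Python's complex type and its conjugate() method reproduce this division:

w = 1 + 2j
print((3 + 4j) / w)                                   # expect (2.2-0.4j), i.e. 11/5 - (2/5)i
print((3 + 4j) * w.conjugate(), w * w.conjugate())    # numerator 11-2j and real denominator 5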

In the next section we will discuss the geometry of complex numbers, which gives some insight into the meaning of the magnitude
of a complex number. For now we just give the definition.

 Definition: Magnitude
The magnitude of the complex number x + iy is defined as

|z| = √(x^2 + y^2)     (1.3.3)

The magnitude is also called the absolute value, norm or modulus.

 Example 1.3.3
The norm of 3 + 5i is √(9 + 25) = √34.

Important: The norm is √(x^2 + y^2). It does not include the i and is therefore always a non-negative real number.

This page titled 1.3: Terminology and Basic Arithmetic is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated
by Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit
history is available upon request.

1.4: The Complex Plane
Geometry of Complex Numbers
Because it takes two numbers x and y to describe the complex number z = x + iy we can visualize complex numbers as points in
the xy-plane. When we do this we call it the complex plane. Since x is the real part of z we call the x-axis the real axis. Likewise,
the y -axis is the imaginary axis.

Triangle Inequality
The triangle inequality says that for a triangle the sum of the lengths of any two legs is greater than the length of the third leg.

Triangle inequality: |AB| + |BC | > |AC |


For complex numbers the triangle inequality translates to a statement about complex magnitudes. Precisely: for complex numbers z_1, z_2

|z_1| + |z_2| ≥ |z_1 + z_2|     (1.4.1)

with equality only if one of them is 0 or if arg(z_1) = arg(z_2). This is illustrated in the following figure.

Triangle inequality: |z_1| + |z_2| ≥ |z_1 + z_2|     (1.4.2)

We get equality only if z_1 and z_2 are on the same ray from the origin, i.e. they have the same argument.
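As a quick numerical illustration (our own, not in the text), the inequality is easy to test with Python's abs on complex numbers:

z1, z2 = 3 + 4j, 1 - 2j
print(abs(z1) + abs(z2) >= abs(z1 + z2))                 # True
print(abs(2 * z1) + abs(3 * z1), abs(2 * z1 + 3 * z1))   # equal, since 2*z1 and 3*z1 lie on the same ray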

This page titled 1.4: The Complex Plane is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff
(MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

1.5: Polar Coordinates
In the figures above we have marked the length r and polar angle θ of the vector from the origin to the point z = x + iy . These are
the same polar coordinates you saw previously. There are a number of synonyms for both r and θ
r = |z| = magnitude = length = norm = absolute value = modulus
θ = arg(z) = argument of z = polar angle of z
You should be able to visualize polar coordinates by thinking about the distance r from the origin and the angle θ with the x-axis.

 Example 1.5.1

Let's make a table of z , r and θ for some complex numbers. Notice that θ is not uniquely defined since we can always add a
multiple of 2π to θ and still be at the same point in the plane.
z = a + bi    r     θ
1             1     0, 2π, 4π, ...        (Argument = 0 means z is along the x-axis)
i             1     π/2, π/2 + 2π, ...    (Argument = π/2 means z is along the y-axis)
1 + i         √2    π/4, π/4 + 2π, ...    (Argument = π/4 means z is along the ray at 45° to the x-axis)
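A quick way to experiment with polar coordinates (our own aside) is Python's cmath.polar, which returns the pair (r, θ) with θ taken in (−π, π]:

import cmath

for z in [1, 1j, 1 + 1j]:
    r, theta = cmath.polar(z)        # r = |z|, theta = arg(z) in (-pi, pi]
    print(z, r, theta)

print(cmath.rect(2, cmath.pi / 2))   # converting back: expect approximately 2i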

When we want to be clear which value of θ is meant, we will specify a branch of arg. For example, 0 ≤ θ < 2π or
−π < θ ≤ π . This will be discussed in much more detail in the coming weeks. Keeping careful track of the branches of arg

will turn out to be one of the key requirements of complex analysis.

This page titled 1.5: Polar Coordinates is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff
(MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

1.6: Euler's Formula
Euler’s (pronounced ‘oilers’) formula connects complex exponentials, polar coordinates, and sines and cosines. It turns messy trig
identities into tidy rules for exponentials. We will use it a lot. The formula is the following:

e^{iθ} = cos(θ) + i sin(θ).     (1.6.1)

There are many ways to approach Euler's formula. Our approach is to simply take Equation 1.6.1 as the definition of complex exponentials. This is legal, but does not show that it's a good definition. To do that we need to show that e^{iθ} obeys all the rules we expect of an exponential. To do that we go systematically through the properties of exponentials and check that they hold for complex exponentials.

e^{iθ} behaves like a true exponential

 P1
e^{it} differentiates as expected:

d(e^{it})/dt = i e^{it}.

Proof
This follows directly from the definition in Equation 1.6.1:

d(e^{it})/dt = d/dt (cos(t) + i sin(t)) = −sin(t) + i cos(t) = i(cos(t) + i sin(t)) = i e^{it}.

 P2
e^{i·0} = 1.

Proof
This follows directly from the definition in Equation 1.6.1:

e^{i·0} = cos(0) + i sin(0) = 1.

 P3
The usual rules of exponents hold:
e^{ia} e^{ib} = e^{i(a+b)}.

Proof
This relies on the cosine and sine addition formulas and the definition in Equation 1.6.1:

e^{ia} · e^{ib} = (cos(a) + i sin(a)) · (cos(b) + i sin(b))
              = cos(a) cos(b) − sin(a) sin(b) + i(cos(a) sin(b) + sin(a) cos(b))
              = cos(a + b) + i sin(a + b) = e^{i(a+b)}.

 P4

The definition of e^{iθ} is consistent with the power series for e^x.

Proof
To see this we have to recall the power series for e^x, cos(x) and sin(x). They are

e^x = 1 + x + x^2/2! + x^3/3! + x^4/4! + ...
cos(x) = 1 − x^2/2! + x^4/4! − x^6/6! + ...
sin(x) = x − x^3/3! + x^5/5! − ...

Now we can write the power series for e^{iθ} and then split it into the power series for sine and cosine:

e^{iθ} = Σ_{n=0}^∞ (iθ)^n / n!
       = Σ_{k=0}^∞ (−1)^k θ^{2k}/(2k)! + i Σ_{k=0}^∞ (−1)^k θ^{2k+1}/(2k+1)!
       = cos(θ) + i sin(θ).

So the Euler formula definition is consistent with the usual power series for e^x.

Properties P1-P4 should convince you that e^{iθ} behaves like an exponential.


Complex Exponentials and Polar Form


Now let’s turn to the relation between polar coordinates and complex exponentials.
Suppose z = x + iy has polar coordinates r and θ. That is, we have x = r cos(θ) and y = r sin(θ). Thus, we get the important relationship

z = x + iy = r cos(θ) + i r sin(θ) = r(cos(θ) + i sin(θ)) = r e^{iθ}.

This is so important you shouldn't proceed without understanding. We also record it without the intermediate equations.

z = x + iy = r e^{iθ}.     (1.6.2)

Because r and θ are the polar coordinates of (x, y) we call z = r e^{iθ} the polar form of z.
Let’s now verify that magnitude, argument, conjugate, multiplication and division are easy in polar form.

 Magnitude
|e^{iθ}| = 1.

Proof

|e^{iθ}| = |cos(θ) + i sin(θ)| = √(cos^2(θ) + sin^2(θ)) = 1.

In words, this says that e^{iθ} is always on the unit circle - this is useful to remember!

Likewise, if z = r e^{iθ} then |z| = r. You can calculate this, but it should be clear from the definitions: |z| is the distance from z to the origin, which is exactly the same definition as for r.

 Argument

If z = r e^{iθ} then arg(z) = θ.

Proof
This is again the definition: the argument is the polar angle θ .

 Conjugate
If z = r e^{iθ} then z̄ = r e^{−iθ}.

Proof

z̄ = \overline{r(cos(θ) + i sin(θ))} = r(cos(θ) − i sin(θ)) = r(cos(−θ) + i sin(−θ)) = r e^{−iθ}.

In words: complex conjugation changes the sign of the argument.

 Multiplication

If z_1 = r_1 e^{iθ_1} and z_2 = r_2 e^{iθ_2} then

z_1 z_2 = r_1 r_2 e^{i(θ_1 + θ_2)}.

This is what mathematicians call trivial to see: just write the multiplication down. In words, the formula says that for z_1 z_2 the magnitudes multiply and the arguments add.

 Division

Again it's trivial that

(r_1 e^{iθ_1}) / (r_2 e^{iθ_2}) = (r_1 / r_2) e^{i(θ_1 − θ_2)}.

 Example 1.6.1: Multiplication by 2i

Here's a simple but important example. By looking at the graph we see that the number 2i has magnitude 2 and argument π/2. So in polar coordinates it equals 2e^{iπ/2}. This means that multiplication by 2i multiplies lengths by 2 and adds π/2 to arguments, i.e. rotates by 90°. The effect is shown in the figures below.


 Example 1.6.2: Raising to a power

Let's compute (1 + i)^6 and ((1 + i√3)/2)^3.

Solution
1 + i has magnitude √2 and arg = π/4, so 1 + i = √2 e^{iπ/4}. Raising to a power is now easy:

(1 + i)^6 = (√2 e^{iπ/4})^6 = 8 e^{6iπ/4} = 8 e^{3iπ/2} = −8i.

Similarly, (1 + i√3)/2 = e^{iπ/3}, so ((1 + i√3)/2)^3 = (1 · e^{iπ/3})^3 = e^{iπ} = −1.
Complexification or Complex Replacement


In the next example we will illustrate the technique of complexification or complex replacement. This can be used to simplify a
trigonometric integral. It will come in handy when we need to compute certain integrals.

 Example 1.6.3

Use complex replacement to compute

I = ∫ e^x cos(2x) dx.     (1.6.3)

Solution
We have Euler's formula

e^{2ix} = cos(2x) + i sin(2x),     (1.6.4)

so cos(2x) = Re(e^{2ix}). The complex replacement trick is to replace cos(2x) by e^{2ix}. We get (justification below)

I_c = ∫ e^x cos(2x) + i e^x sin(2x) dx     (1.6.5)

with

I = Re(I_c)     (1.6.6)

Computing I_c is straightforward:

I_c = ∫ e^x e^{i2x} dx = ∫ e^{x(1+2i)} dx = e^{x(1+2i)} / (1 + 2i).     (1.6.7)

Here we will do the computation first in rectangular coordinates. In applications, for example throughout 18.03, polar form is often preferred because it is easier and gives the answer in a more usable form.

I_c = e^{x(1+2i)}/(1 + 2i) · (1 − 2i)/(1 − 2i)
    = e^x (cos(2x) + i sin(2x))(1 − 2i) / 5     (1.6.8)
    = (1/5) e^x (cos(2x) + 2 sin(2x) + i(−2 cos(2x) + sin(2x)))

So,

I = Re(I_c) = (1/5) e^x (cos(2x) + 2 sin(2x)).     (1.6.9)

Justification of complex replacement. The trick comes by cleverly adding a new integral to I as follows. Let J = ∫ e^x sin(2x) dx. Then we let

I_c = I + iJ = ∫ e^x (cos(2x) + i sin(2x)) dx = ∫ e^x e^{2ix} dx.     (1.6.10)

Clearly, by construction, Re(I_c) = I as claimed above.

Alternative using polar coordinates to simplify the expression for I_c:

In polar form, we have 1 + 2i = r e^{iϕ}, where r = √5 and ϕ = arg(1 + 2i) = tan^{−1}(2) in the first quadrant. Then:

I_c = e^{x(1+2i)} / (√5 e^{iϕ}) = (e^x/√5) e^{i(2x−ϕ)} = (e^x/√5) (cos(2x − ϕ) + i sin(2x − ϕ)).

Thus,

I = Re(I_c) = (e^x/√5) cos(2x − ϕ).     (1.6.11)
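As an optional check (ours, not in the text), one can differentiate the rectangular-form answer symbolically with the third-party sympy package and confirm it recovers the integrand:

import sympy as sp

x = sp.symbols('x')
F = sp.exp(x) * (sp.cos(2 * x) + 2 * sp.sin(2 * x)) / 5
print(sp.simplify(sp.diff(F, x) - sp.exp(x) * sp.cos(2 * x)))   # expect 0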

N th roots
We are going to need to be able to find the N th roots of complex numbers, i.e., solve equations of the form

z^N = c,     (1.6.12)

where c is a given complex number. This can be done most conveniently by expressing c and z in polar form, c = R e^{iϕ} and z = r e^{iθ}. Then, upon substituting, we have to solve

r^N e^{iNθ} = R e^{iϕ}     (1.6.13)

For the complex numbers on the left and right to be equal, their magnitudes must be the same and their arguments can only differ by an integer multiple of 2π. This gives

r = R^{1/N},   Nθ = ϕ + 2πn, where n = 0, ±1, ±2, ...     (1.6.14)

Solving for θ, we have

θ = ϕ/N + 2πn/N.     (1.6.15)

 Example 1.6.4

Find all 5 fifth roots of 2.


Solution
For c = 2, we have R = 2 and ϕ = 0, so the fifth roots of 2 are

z_n = 2^{1/5} e^{2nπi/5}, where n = 0, ±1, ±2, ...

Looking at the right hand side we see that for n = 5 we have 2^{1/5} e^{2πi}, which is exactly the same as the root when n = 0, i.e. 2^{1/5} e^{0i}. Likewise n = 6 gives exactly the same root as n = 1, and so on. This means we have 5 different roots corresponding to n = 0, 1, 2, 3, 4.

z_n = 2^{1/5}, 2^{1/5} e^{2πi/5}, 2^{1/5} e^{4πi/5}, 2^{1/5} e^{6πi/5}, 2^{1/5} e^{8πi/5}

Similarly we can say that in general c = R e^{iϕ} has N distinct N th roots:

z_n = R^{1/N} e^{iϕ/N + i2π(n/N)} for n = 0, 1, 2, ..., N − 1.
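A small Python sketch (our own; the helper name nth_roots is ours) that lists the N distinct N th roots of any complex number in exactly this way:

import cmath

def nth_roots(c, N):
    # Write c = R e^{i phi}; the roots are R^(1/N) e^{i(phi + 2 pi n)/N} for n = 0, ..., N-1
    R, phi = cmath.polar(c)
    return [R ** (1 / N) * cmath.exp(1j * (phi + 2 * cmath.pi * n) / N) for n in range(N)]

print(nth_roots(2, 5))        # the 5 fifth roots of 2
print(nth_roots(1 + 1j, 5))   # the 5 fifth roots of 1 + i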

 Example 1.6.5

Find the 4 fourth roots of 1.

Solution
We need to solve z^4 = 1, so ϕ = 0. So the 4 distinct fourth roots are in polar form

z_n = 1, e^{iπ/2}, e^{iπ}, e^{i3π/2}     (1.6.16)

and in Cartesian representation

z_n = 1, i, −1, −i.     (1.6.17)

 Example 1.6.6

Find the 3 cube roots of -1.


Solution
z
2
= −1 = e
iπ+i2πn
. So, zn = e
iπ+i2π(n/3)
and the 3 cube roots are e
iπ/3
, e

, e
i5π/3
. Since π/3 radians is 60

we can
simplify:
– –
1 √3 1 √3
iπ/3
e = cos(π/3) + i sin(π/3) = +i ⇒ zn = −1, ±i
2 2 2 2

 Example 1.6.7

Find the 5 fifth roots of 1 + i .


Solution
z^5 = 1 + i = √2 e^{i(π/4 + 2nπ)}     (1.6.18)

for n = 0, 1, 2, .... So, the 5 fifth roots are

2^{1/10} e^{iπ/20}, 2^{1/10} e^{i9π/20}, 2^{1/10} e^{i17π/20}, 2^{1/10} e^{i25π/20}, 2^{1/10} e^{i33π/20}.
Using a calculator we could write these numerically as a + bi , but there is no easy simplification.

 Example 1.6.8

We should check that our technique works as expected for a simple problem. Find the 2 square roots of 4.
Solution
z^2 = 4 e^{i2πn}. So, z_n = 2 e^{iπn}, with n = 0, 1. So the two roots are 2e^0 = 2 and 2e^{iπ} = −2, as expected!

geometry of N th roots
Looking at the examples above we see that roots are always spaced evenly around a circle centered at the origin. For example, the fifth roots of 1 + i are spaced at increments of 2π/5 radians around the circle of radius 2^{1/10}.

Note also that the roots of real numbers always come in conjugate pairs.

This page titled 1.6: Euler's Formula is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff
(MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

1.7: The Exponential Function
We have Euler's formula: e^{iθ} = cos(θ) + i sin(θ). We can extend this to the complex exponential function e^z.

 Definition: Complex Exponential Functions


For z = x + iy the complex exponential function is defined as
e^z = e^{x+iy} = e^x e^{iy} = e^x (cos(y) + i sin(y)).     (1.7.1)

In this definition e^x is the usual exponential function for a real variable x.

It is easy to see that all the usual rules of exponents hold:


1. e^0 = 1
2. e^{z_1 + z_2} = e^{z_1} e^{z_2}
3. (e^z)^n = e^{nz} for positive integers n.
4. (e^z)^{−1} = e^{−z}
5. e^z ≠ 0

It will turn out that the property d(e^z)/dz = e^z also holds, but we can't prove this yet because we haven't defined what we mean by the complex derivative d/dz.
Here are some more simple, but extremely important properties of e^z. You should become fluent in their use and know how to prove them.

6. |e^{iθ}| = 1

Proof.
|e^{iθ}| = |cos(θ) + i sin(θ)| = √(cos^2(θ) + sin^2(θ)) = 1

7. |e^{x+iy}| = e^x (as usual z = x + iy and x, y are real).

Proof. You should be able to supply this. If not: ask a teacher or TA.

8. The path e^{it} for 0 < t < ∞ wraps counterclockwise around the unit circle. It does so infinitely many times. This is illustrated in the following picture.

Figure 1.7.1: The map t → e^{it} wraps the real axis around the unit circle.
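A quick numerical illustration of properties 2 and 7 (our own aside, using cmath and math):

import cmath, math

z1, z2 = 1 + 2j, -0.5 + 3j
print(cmath.exp(z1 + z2), cmath.exp(z1) * cmath.exp(z2))   # property 2: both print the same number
z = 2 - 5j
print(abs(cmath.exp(z)), math.exp(z.real))                 # property 7: |e^z| equals e^x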

This page titled 1.7: The Exponential Function is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy
Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

1.8: Complex Functions as Mappings
A complex function w = f (z) is hard to graph because it takes 4 dimensions: 2 for z and 2 for w. So, to visualize them we will
think of complex functions as mappings. That is we will think of w = f (z) as taking a point in the complex z -plane and mapping
(sending) it to a point in the complex w-plane.
We will use the following terms and symbols to discuss mappings.
A function w = f (z) will also be called a mapping of z to w.
Alternatively we will write z ↦ w or z ↦ f (z) . This is read as "z maps to w".
We will say that "w is the image of z under the mapping" or more simply "w is the image of z ".
If we have a set of points in the z -plane we will talk of the image of that set under the mapping.
For example, under the mapping z ↦ iz the image of the imaginary z -axis is the real w-axis.

The image of the imaginary axis under z ↦ iz

 Example 1.8.1
The mapping w = z^2. We visualize this by putting the z-plane on the left and the w-plane on the right. We then draw various curves and regions in the z-plane and the corresponding image under z^2 in the w-plane.
In the first figure we show that rays from the origin are mapped by z^2 to rays from the origin. We see that
1. The ray L_2 at π/4 radians is mapped to the ray f(L_2) at π/2 radians.
2. The rays L_2 and L_6 are both mapped to the same ray. This is true for each pair of diametrically opposed rays.
3. A ray at angle θ is mapped to the ray at angle 2θ.

f(z) = z^2 maps rays from the origin to rays from the origin.
The next figure gives another view of the mapping. Here we see vertical stripes in the first quadrant are mapped to parabolic stripes that live in the first and second quadrants.

z^2 = (x^2 − y^2) + i2xy maps vertical lines to left facing parabolas.
The next figure is similar to the previous one, except in this figure we look at vertical stripes in both the first and second
quadrants. We see that they map to parabolic stripes that live in all four quadrants.

f(z) = z^2 maps the first two quadrants to the entire plane.
The next figure shows the mapping of stripes in the first and fourth quadrants. The image map is identical to the previous figure. This is because the fourth quadrant is minus the second quadrant, but z^2 = (−z)^2.
Vertical stripes in quadrant 4 are mapped identically to vertical stripes in quadrant 2.

Simplified view of the first quadrant being mapped to the first two quadrants.

Simplified view of the first two quadrants being mapped to the entire plane.

 Example 1.8.2

The mapping w = e^z. Here we present a series of plots showing how the exponential function maps z to w.

Notice that vertical lines are mapped to circles and horizontal lines to rays from the origin.
The next four figures all show essentially the same thing: the exponential function maps horizontal stripes to circular sectors.
Any horizontal stripe of width 2π gets mapped to the entire plane minus the origin.
Because the plane minus the origin comes up frequently we give it a name:

 Definition: punctured plane


The punctured plane is the complex plane minus the origin. In symbols we can write it as C - {0} or C /{0}.

The horizontal strip 0 ≤ y < 2π is mapped to the punctured plane

The horizontal strip −π < y ≤ π is mapped to the punctured plane

Simplified view showing e^z maps the horizontal stripe 0 ≤ y < 2π to the punctured plane.

Simplified view showing e^z maps the horizontal stripe −π < y ≤ π to the punctured plane.
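To get a numerical feel for these mappings (our own sketch, not from the text), one can push a few points of a vertical line through w = e^z and watch them land on a circle:

import cmath

# Points on the vertical line x = 1 all map under e^z to the circle of radius e^1
for y in [0.0, 1.0, 2.0, 3.0]:
    w = cmath.exp(1 + 1j * y)
    print(w, abs(w))   # the modulus is e ≈ 2.718 every time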

This page titled 1.8: Complex Functions as Mappings is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by
Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit
history is available upon request.

1.9: The function arg(z)
Many-to-one functions
The function f(z) = z^2 maps ±z to the same value, e.g. f(2) = f(−2) = 4. We say that f(z) is a 2-to-1 function. That is, it maps 2 different points to each value. (Technically, it only maps one point to 0, but we will gloss over that for now.) Here are some other examples of many-to-one functions.

 Example 1.9.1
w = z^3 is a 3-to-1 function. For example, 3 different z values get mapped to w = 1:

1^3 = ((−1 + √3 i)/2)^3 = ((−1 − √3 i)/2)^3 = 1

 Example 1.9.2
The function w = e^z maps infinitely many points to each value. For example

e^0 = e^{2πi} = e^{4πi} = ... = e^{2nπi} = ... = 1

e^{iπ/2} = e^{iπ/2 + 2πi} = e^{iπ/2 + 4πi} = ... = e^{iπ/2 + 2nπi} = ... = i

In general, e^{z + 2nπi} has the same value for every integer n.

Branches of arg(z)

 Important Note

You should master this section. Branches of arg(z) are the key that really underlies all our other examples. Fortunately it is
reasonably straightforward.
The key point is that the argument is only defined up to multiples of 2π, so every z produces infinitely many values for arg(z). Because of this we will say that arg(z) is a multiple-valued function.

 Note
In general a function should take just one value. What that means in practice is that whenever we use such a function we will have to be careful to specify which of the possible values we mean. This is known as specifying a branch of the function.

 Definition
By a branch of the argument function we mean a choice of range so that it becomes single-valued. By specifying a branch we
are saying that we will take the single value of arg(z) that lies in the branch.

Let’s look at several different branches to understand how they work:


(i). If we specify the branch as 0 ≤ arg(z) < 2π then we have the following arguments.
arg(1) = 0 ; arg(i) = π/2; arg(−1) = π; arg(−i) = 3π/2
This branch and these points are shown graphically in Figure (i) below.

Figure (i): The branch 0 ≤ arg(z) < 2π of arg(z).
Notice that if we start at z = 1 on the positive real axis we have arg(z) = 0 . Then arg(z) increases as we move counterclockwise
around the circle. The argument is continuous until we get back to the positive real axis. There it jumps from almost 2π back to 0.
There is no getting around (no pun intended) this discontinuity. If we need arg(z) to be continuous we will need to remove (cut)
the points of discontinuity out of the domain. The branch cut for this branch of arg(z) is shown as a thick orange line in the figure.
If we make the branch cut then the domain for arg(z) is the plane minus the cut, i.e. we will only consider arg(z) for z not on the
cut.
For future reference you should note that, on this branch, arg(z) is continuous near the negative real axis, i.e. the arguments of
nearby points are close to each other.
(ii). If we specify the branch as −π < arg(z) ≤ π then we have the following arguments:
arg(1) = 0 ; arg(i) = π/2; arg(−1) = π; arg(−i) = −π/2
This branch and these points are shown graphically in Figure (ii) below.

Figure (ii): The branch −π < arg(z) ≤ π of arg(z).
Compare Figure (ii) with Figure (i). The values of arg(z) are the same in the upper half plane, but in the lower half plane they
differ by 2π.
For this branch the branch cut is along the negative real axis. As we cross the branch cut the value of arg(z) jumps from π to
something close to −π.
(iii). Figure (iii) shows the branch of arg(z) with π/4 ≤ arg(z) < 9π/4.

Figure (iii): The branch π/4 ≤ arg(z) < 9π/4 of arg(z).


Notice that on this branch arg(z) is continuous at both the positive and negative real axes. The jump of 2π occurs along the ray at
angle π/4.
(iv). Obviously, there are many many possible branches. For example,

42 < arg(z) ≤ 42 + 2π .
(v). We won’t make use of this in 18.04, but, in fact, the branch cut doesn’t have to be a straight line. Any curve that goes from the
origin to infinity will do. The argument will be continuous except for a jump by 2π when z crosses the branch cut.

principal branch of arg(z)


Branch (ii) in the previous section is singled out and given a name:

 Definition
The branch −π < arg(z) ≤ π is called the principal branch of arg(z). We will use the notation Arg(z) (capital A) to indicate
that we are using the principal branch. (Of course, in cases where we don’t want there to be any doubt we will say explicitly
that we are using the principal branch.)
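For what it's worth (our own aside), Python's cmath.phase returns exactly this principal branch, with values in (−π, π]; other branches can be produced by shifting. The helper arg_branch below is our own name, not standard library code.

import cmath, math

for z in [1, 1j, -1, -1j]:
    print(z, cmath.phase(z))                 # principal branch Arg(z), in (-pi, pi]

def arg_branch(z, theta0):
    # A branch of arg(z) taking values in [theta0, theta0 + 2*pi)
    return theta0 + (cmath.phase(z) - theta0) % (2 * math.pi)

print(arg_branch(-1j, 0))                    # expect 3*pi/2 on the branch 0 <= arg(z) < 2*pi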

Continuity of arg(z)
The examples above show that there is no getting around the jump of 2π as we cross the branch cut. This means that when we need
arg(z) to be continuous we will have to restrict its domain to the plane minus a branch cut.

This page titled 1.9: The function arg(z) is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff
(MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

1.10: Concise summary of branches and branch cuts
We discussed branches and branch cuts for arg(z). Before talking about log(z) and its branches and branch cuts we will give a
short review of what these terms mean. You should probably scan this section now and then come back to it after reading about
log(z).

Consider the function w = f (z) . Suppose that z = x + iy and w = u + iv .


Domain. The domain of f is the set of z where we are allowed to compute f (z) .
Range. The range (image) of f is the set of all f (z) for z in the domain, i.e. the set of all w reached by f .
Branch. For a multiple-valued function, a branch is a choice of range for the function. We choose the range to exclude all but
one possible value for each element of the domain.
Branch cut. A branch cut removes (cuts) points out of the domain. This is done to remove points where the function is
discontinuous.

This page titled 1.10: Concise summary of branches and branch cuts is shared under a CC BY-NC-SA 4.0 license and was authored, remixed,
and/or curated by Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform;
a detailed edit history is available upon request.

1.11: The Function log(z)
Our goal in this section is to define the log function. We want log(z) to be the inverse of e^z. That is, we want e^{log(z)} = z. We will see that log(z) is multiple-valued, so when we use it we will have to specify a branch.
We start by looking at the simplest example which illustrates that log(z) is multiple-valued.

 Example 1.11.1
Find log(1).
Solution
We know that e^0 = 1, so log(1) = 0 is one answer.
We also know that e^{2πi} = 1, so log(1) = 2πi is another possible answer. In fact, we can choose any multiple of 2πi:

log(1) = 2nπi

where n is any integer.

This example leads us to consider the polar form for z as we try to define log(z). If z = r e^{iθ} then one possible value for log(z) is

log(z) = log(r e^{iθ}) = log(r) + iθ,

here log(r) is the usual logarithm of a real positive number. For completeness we show explicitly that with this definition e^{log(z)} = z:

e^{log(z)} = e^{log(r) + iθ} = e^{log(r)} e^{iθ} = r e^{iθ} = z

Since r = |z| and θ = arg(z) we have arrived at our definition.

 Definition: Complex Log Function


The function log(z) is defined as

log(z) = log(|z|) + iarg(z), (1.11.1)

where log(|z|) is the usual natural logarithm of a positive real number.

Remarks.
1. Since arg(z) has infinitely many possible values, so does log(z).
2. log(0) is not defined. (Both because arg(0) is not defined and log(|0|) is not defined.)
3. Choosing a branch for arg(z) makes log(z) single valued. The usual terminology is to say we have chosen a branch of the log
function.
4. The principal branch of log comes from the principal branch of arg. That is,
log(z) = log(|z|) + iarg(z) , where −π < arg(z) ≤ π (principal branch).

 Example 1.11.2
Compute all the values of log(i). Specify which one comes from the principal branch.
Solution

We have that |i| = 1 and arg(i) = π/2 + 2nπ, so

log(i) = log(1) + iπ/2 + i2nπ = iπ/2 + i2nπ,

where n is any integer.

The principal branch of arg(z) is between −π and π, so Arg(i) = π/2. Therefore, the value of log(i) from the principal branch is iπ/2.
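A numerical aside (ours): cmath.log returns the principal-branch value, and every other value of log(i) differs from it by a multiple of 2πi.

import cmath

w = cmath.log(1j)
print(w)                                   # expect approximately 1.5708j, i.e. i*pi/2
print(cmath.exp(w + 2j * cmath.pi))        # adding 2*pi*i gives another logarithm of i: exp of it is still i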

 Example 1.11.3

Compute all the values of log(−1 − √3i) . Specify which one comes from the principal branch.
Solution

Let z = −1 − √3i . Then |z| = 2 and in the principal branch Arg(z) = −2π/3. So all the values of log(z) are

log(z) = log(2) − i2π/3 + i2nπ.     (1.11.2)

The value from the principal branch is log(z) = log(2) − i2π/3.

1.11.1: Figures showing w = log(z) as a mapping


The figures below show different aspects of the mapping given by log(z).
In the first figure we see that a point z is mapped to (infinitely) many values of w. In this case we show log(1) (blue dots), log(4)
(red dots), log(i) (blue cross), and log(4i) (red cross). The values in the principal branch are inside the shaded region in the w-
plane. Note that the values of log(z) for a given z are placed at intervals of 2πi in the w-plane.

Mapping log(z) : log(1), log(4), log(i), log(4i)


The next figure illustrates that the principal branch of log maps the punctured plane to the horizontal strip −π < Im(w) ≤ π . We
again show the values of log(1), log(4), log(i), log(4i) . Since we’ve chosen a branch, there is only one value shown for each log.

Mapping log(z): the principal branch and the punctured plane
The third figure shows how circles centered on 0 are mapped to vertical lines, and rays from the origin are mapped to horizontal
lines. If we restrict ourselves to the principal branch the circles are mapped to vertical line segments and rays to a single horizontal
line in the principal (shaded) region of the w-plane.

Mapping log(z): mapping circles and rays

1.11.2: Complex Powers


We can use the log function to define complex powers.

 Definition: Complex Power

Let z and a be complex numbers. Then the power z^a is defined as

z^a = e^{a log(z)}.     (1.11.3)

This is generally multiple-valued, so to specify a single value requires choosing a branch of log(z).
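A short Python sketch of this definition (ours; the helper name power and its branch parameter are our own convention), computing z^a as exp(a·log z) on the principal branch and on a shifted branch:

import cmath

def power(z, a, branch=0):
    # z^a = exp(a * log(z)), where log is shifted by 2*pi*i*branch
    return cmath.exp(a * (cmath.log(z) + 2j * cmath.pi * branch))

print(power(2j, 0.5, branch=0))   # principal value of sqrt(2i): expect 1 + i (up to roundoff)
print(power(2j, 0.5, branch=1))   # the other square root: expect -1 - i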

 Example 1.11.4


Compute all the values of √2i . Give the value associated to the principal branch of log(z).
Solution
We have

log(2i) = log(2 e^{iπ/2}) = log(2) + iπ/2 + i2nπ.

So,

√(2i) = (2i)^{1/2} = e^{log(2i)/2} = e^{log(2)/2 + iπ/4 + inπ} = √2 e^{iπ/4 + inπ}.

(As usual n is an integer.) As we saw earlier, this only gives two distinct values. The principal branch has Arg(2i) = π/2, so

√(2i) = √2 e^{iπ/4} = √2 (1 + i)/√2 = 1 + i.

The other distinct value is when n = 1 and gives minus the value just above.

 Example 1.11.5

Cube roots: Compute all the cube roots of i. Give the value which comes from the principal branch of log(z).
Solution
We have log(i) = iπ/2 + i2nπ, where n is any integer. So,

i^{1/3} = e^{log(i)/3} = e^{iπ/6 + i2nπ/3}

This gives only three distinct values

e^{iπ/6}, e^{i5π/6}, e^{i9π/6}

On the principal branch log(i) = iπ/2, so the value of i^{1/3} which comes from this is

e^{iπ/6} = √3/2 + i/2.

 Example 1.11.6

Compute all the values of 1^i. What is the value from the principal branch?

Solution
This is similar to the problems above. log(1) = 2nπi, so
1^i = e^{i log(1)} = e^{i·2nπi} = e^{−2nπ},

where n is an integer.

The principal branch has log(1) = 0 so 1^i = 1.

This page titled 1.11: The Function log(z) is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy
Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

1.12: Inverse Euler formula
Euler's formula gives a complex exponential in terms of sines and cosines. We can turn this around to get the inverse Euler
formulas.
Euler's formula says:

e^{it} = cos(t) + i sin(t)     (1.12.1)

and

e^{−it} = cos(t) − i sin(t).     (1.12.2)

By adding and subtracting we get:

cos(t) = (e^{it} + e^{−it})/2     (1.12.3)

and

sin(t) = (e^{it} − e^{−it})/(2i).     (1.12.4)

Please take note of these formulas; we will use them frequently!
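A one-line numerical check of the inverse Euler formulas (our own):

import cmath

t = 0.7
print(cmath.cos(t), (cmath.exp(1j * t) + cmath.exp(-1j * t)) / 2)     # same value
print(cmath.sin(t), (cmath.exp(1j * t) - cmath.exp(-1j * t)) / (2j))  # same value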

This page titled 1.12: Inverse Euler formula is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy
Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

1.13: de Moivre's formula
 Moivre’s Formula

For positive integers n we have de Moivre’s formula:


(cos(θ) + i sin(θ))^n = cos(nθ) + i sin(nθ)     (1.13.1)

Proof
This is a simple consequence of Euler's formula:

(cos(θ) + i sin(θ))^n = (e^{iθ})^n = e^{inθ} = cos(nθ) + i sin(nθ)

The reason this simple fact has a name is that historically de Moivre stated it before Euler’s formula was known. Without Euler’s
formula there is not such a simple proof.
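A numerical spot-check of de Moivre's formula (ours, with arbitrary test values):

import cmath

theta, n = 0.3, 5
lhs = (cmath.cos(theta) + 1j * cmath.sin(theta)) ** n
rhs = cmath.cos(n * theta) + 1j * cmath.sin(n * theta)
print(lhs, rhs)   # the two agree up to roundoff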

This page titled 1.13: de Moivre's formula is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy
Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

1.14: Representing Complex Multiplication as Matrix Multiplication
Consider two complex numbers z_1 = a + bi and z_2 = c + di and their product

z_1 z_2 = (a + bi)(c + id) = (ac − bd) + i(bc + ad) =: w     (1.14.1)

Now let's define two matrices

Z_1 = [ a  −b ]
      [ b   a ]     (1.14.2)

Z_2 = [ c  −d ]
      [ d   c ]     (1.14.3)

Note that these matrices store the same information as z_1 and z_2, respectively. Let's compute their matrix product

Z_1 Z_2 = [ a  −b ] [ c  −d ]  =  [ ac − bd   −(bc + ad) ]  =: W.     (1.14.4)
          [ b   a ] [ d   c ]     [ bc + ad    ac − bd   ]

Comparing W just above with w in Equation 1.14.1, we see that W is indeed the matrix corresponding to the complex number w = z_1 z_2. Thus, we can represent any complex number z equivalently by the matrix

Z = [ Re z  −Im z ]     (1.14.5)
    [ Im z   Re z ]

and complex multiplication then simply becomes matrix multiplication. Further note that we can write

Z = Re z [ 1  0 ] + Im z [ 0  −1 ],     (1.14.6)
         [ 0  1 ]        [ 1   0 ]

i.e., the imaginary unit i corresponds to the matrix [ 0  −1 ; 1  0 ] and i^2 = −1 becomes

[ 0  −1 ] [ 0  −1 ]  =  −[ 1  0 ].     (1.14.7)
[ 1   0 ] [ 1   0 ]      [ 0  1 ]
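A small sketch (ours, using the third-party numpy package; the helper to_matrix is our own name) confirming that the 2×2 matrix product matches the complex product:

import numpy as np

def to_matrix(z):
    # The matrix [[Re z, -Im z], [Im z, Re z]] representing the complex number z
    return np.array([[z.real, -z.imag], [z.imag, z.real]])

z1, z2 = 3 + 4j, 1 - 2j
print(z1 * z2)                          # complex product: 11 - 2i
print(to_matrix(z1) @ to_matrix(z2))    # the same numbers arranged as the matrix W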

Polar Form (Decomposition)


Writing z = r e^{iθ} = r(cos θ + i sin θ), we find

Z = r [ cos θ  −sin θ ]  =  [ r  0 ] [ cos θ  −sin θ ]
      [ sin θ   cos θ ]     [ 0  r ] [ sin θ   cos θ ]

corresponding to a stretch factor r multiplied by a 2D rotation matrix. In particular, multiplication by i corresponds to the rotation
with angle θ = π/2 and r = 1 .
We will not make a lot of use of the matrix representation of complex numbers, but later it will help us remember certain formulas
and facts.

This page titled 1.14: Representing Complex Multiplication as Matrix Multiplication is shared under a CC BY-NC-SA 4.0 license and was
authored, remixed, and/or curated by Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the
LibreTexts platform; a detailed edit history is available upon request.

CHAPTER OVERVIEW
2: Analytic Functions
The main goal of this topic is to define and give some of the important properties of complex analytic functions. A function f(z) is analytic if it has a complex derivative f'(z). In general, the rules for computing derivatives will be familiar to you from single variable calculus. However, a much richer set of conclusions can be drawn about a complex analytic function than is generally true about real differentiable functions.
2.1: The Derivative - Preliminaries
2.2: Open Disks, Open Deleted Disks, and Open Regions
2.3: Limits and Continuous Functions
2.4: The Point at Infinity
2.5: Derivatives
2.6: Cauchy-Riemann Equations
2.7: Cauchy-Riemann all the way down
2.8: Gallery of Functions
2.9: Branch Cuts and Function Composition
2.10: Appendix - Limits

This page titled 2: Analytic Functions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff
(MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

2.1: The Derivative - Preliminaries
In calculus we defined the derivative as a limit. In complex analysis we will do the same.
f'(z) = lim_{Δz→0} Δf/Δz = lim_{Δz→0} (f(z + Δz) − f(z))/Δz.     (2.1.1)

Before giving the derivative our full attention we are going to have to spend some time exploring and understanding limits. To
motivate this we’ll first look at two simple examples – one positive and one negative.

 Example 2.1.1

Find the derivative of f(z) = z^2.

Solution
We compute using the definition of the derivative as a limit.
lim_{Δz→0} ((z + Δz)^2 − z^2)/Δz = lim_{Δz→0} (z^2 + 2zΔz + (Δz)^2 − z^2)/Δz = lim_{Δz→0} 2z + Δz = 2z.     (2.1.2)

That was a positive example. Here’s a negative one which shows that we need a careful understanding of limits.

 Example 2.1.2
Let f(z) = z̄. Show that the limit for f'(0) does not converge.
Solution
Let's try to compute f'(0) using a limit:

f'(0) = lim_{Δz→0} (f(Δz) − f(0))/Δz = lim_{Δz→0} \overline{Δz}/Δz = (Δx − iΔy)/(Δx + iΔy).     (2.1.3)

Here we used Δz = Δx + iΔy.

Now, Δz → 0 means both Δx and Δy have to go to 0. There are lots of ways to do this. For example, if we let Δz go to 0 along the x-axis then Δy = 0 and Δx goes to 0. In this case, we would have

f'(0) = lim_{Δx→0} Δx/Δx = 1.     (2.1.4)

On the other hand, if we let Δz go to 0 along the positive y-axis then

f'(0) = lim_{Δy→0} −iΔy/(iΔy) = −1.     (2.1.5)

The limits don't agree! The problem is that the limit depends on how Δz approaches 0. If we came from other directions we'd get other values. There's nothing to do, but agree that the limit does not exist.
Well, there is something we can do: explore and understand limits. Let’s do that now.
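A numerical illustration of this directional dependence (our own sketch; the helper quotient is ours): the difference quotient of f(z) = conj(z) at 0 gives different values along different directions.

import cmath

def quotient(alpha, h=1e-6):
    # Difference quotient of f(z) = conj(z) at z = 0 along the direction e^{i*alpha}
    dz = h * cmath.exp(1j * alpha)
    return dz.conjugate() / dz

print(quotient(0))             # along the x-axis: 1
print(quotient(cmath.pi / 2))  # along the y-axis: -1, so no single limit exists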

This page titled 2.1: The Derivative - Preliminaries is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by
Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit
history is available upon request.

2.2: Open Disks, Open Deleted Disks, and Open Regions
 Definition: Open Disk

The open disk of radius r around z_0 is the set of points z with |z − z_0| < r, i.e. all points within distance r of z_0.

The open deleted disk of radius r around z_0 is the set of points z with 0 < |z − z_0| < r. That is, we remove the center z_0 from the open disk. A deleted disk is also called a punctured disk.

Left: an open disk around z_0; right: a deleted open disk around z_0

 Definition: Open Region

An open region in the complex plane is a set A with the property that every point in A can be surrounded by an open disk that lies entirely in A. We will often drop the word open and simply call A a region.
In the figure below, the set A on the left is an open region because for every point in A we can draw a little circle around the
point that is completely in A . (The dashed boundary line indicates that the boundary of A is not part of A .) In contrast, the set
B is not an open region. Notice the point z shown is on the boundary, so every disk around z contains points outside B .

Figure 2.2.1 : Left: an open region A; right: B is not an open region. (CC BY-NC; Ümit Kaya)

This page titled 2.2: Open Disks, Open Deleted Disks, and Open Regions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed,
and/or curated by Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform;
a detailed edit history is available upon request.

2.3: Limits and Continuous Functions
 Definition: Limit

If f(z) is defined on a punctured disk around z_0 then we say

lim_{z→z_0} f(z) = w_0     (2.3.1)

if f(z) goes to w_0 no matter what direction z approaches z_0.

The figure below shows several sequences of points that approach z_0. If lim_{z→z_0} f(z) = w_0 then f(z) must go to w_0 along each of these sequences.

Figure 2.3.1 : Sequences going to z are mapped to sequences going to w . (CC BY-NC; Ümit Kaya)
0 0

 Example 2.3.1
Many functions have obvious limits. For example:
2
lim z =4
z→2

and
2
z +2
lim = 6/9.
z→2 3
z +1

Here is an example where the limit doesn’t exist because different sequences give different limits.

 Example 2.3.2: No limit


Show that
z x + iy
lim = lim
¯
¯¯
z→0 z z→0 x − iy

does not exist.


Solution
On the real axis we have
z x
= = 1,
¯
¯¯
z x

so the limit as z → 0 along the real axis is 1. By contrast, on the imaginary axis we have

2.3.1 https://math.libretexts.org/@go/page/6476
z iy
= = −1,
¯
¯¯
z −iy

so the limit as z → 0 along the imaginary axis is -1. Since the two limits do not agree the limit as z → 0 does not exist!

Properties of limits
We have the usual properties of limits. Suppose
lim f (z) = w1 and lim g(z) = w2 (2.3.2)
z→z0 z→z0

then
limz→z f (z) + g(z) = w1 + w2
0

limz→z f (z)g(z) = w ⋅ w
0
1 2.
If w ≠ 0 then lim
2 f (z)/g(z) = w / w
z→z0 1 2

If h(z) is continuous and defined on a neighborhood of w then lim 1 z→z0 h(f (z)) = h(w1 ) (Note: we will give the official
definition of continuity in the next section.)
We won’t give a proof of these properties. As a challenge, you can try to supply it using the formal definition of limits given in the
appendix.
We can restate the definition of limit in terms of functions of (x, y). To this end, let’s write
f (z) = f (x + iy) = u(x, y) + iv(x, y) (2.3.3)

and abbreviate
P = (x, y), P0 = (x0 , y0 ), w0 = u0 + i v0 . (2.3.4)

Then

limP →P0 u(x, y) = u0


lim f (z) = w0 iff { (2.3.5)
z→z0 limP →P v(x, y) = v0
0

Note. The term ‘iff’ stands for ‘if and only if’ which is another way of saying ‘is equivalent to’.

Continuous Functions
A function is continuous if it doesn’t have any sudden jumps. This is the gist of the following definition.

 Definition: Continuous Function

If the function f (z) is defined on an open disk around z and lim f (z) = f (z ) then we say f is continuous at z . If f is
0 z→z0 0 0

defined on an open region A then the phrase 'f is continuous on A ' means that f is continuous at every point in A .

As usual, we can rephrase this in terms of functions of (x, y):


Fact. f (z) = u(x, y) + iv(x, y) is continuous iff u(x, y) and v(x, y) are continuous as functions oftwo variables.

 Example 2.3.3: Some Continuous Functions

(i) A polynomial
2 n
P (z) = a0 + a1 z + a2 z +. . . +an z

is continuous on the entire plane. Reason: it is clear that each power (x + iy) is continuous as a function of (x, y).
k

(ii) The exponential function is continuous on the entire plane. Reason:


z x+iy x x
e =e =e cos(y) + i e sin(y).

So the both the real and imaginary parts are clearly continuous as a function of (x, y).

2.3.2 https://math.libretexts.org/@go/page/6476
(iii) The principal branch Arg(z) is continuous on the plane minus the non-positive real axis. Reason: this is clear and is the
reason we defined branch cuts for arg. We have to remove the negative real axis because Arg(z) jumps by 2π when you cross
it. We also have to remove z = 0 because Arg(z) is not even defined at 0.
(iv) The principal branch of the function log(z) is continuous on the plane minus the non-positive real axis. Reason: the
principal branch of log has

log(z) = log(r) + iArg(z).

So the continuity of log(z) follows from the continuity of Arg(z).

Properties of Continuous Functions


Since continuity is defined in terms of limits, we have the following properties of continuous functions.
Suppose f (z) and g(z) are continuous on a region A . Then
f (z) + g(z) is continuous on A .
f (z)g(z) is continuous on A .
f (z)/g(z) is continuous on A except (possibly) at points where g(z) = 0 .

If h is continuous on f (A) then h(f (z)) is continuous on A .


Using these properties we can claim continuity for each of the following functions:
2
z
e
iz −iz
cos(z) = (e +e )/2

If P (z) and Q(z) are polynomials then P (z)/Q(z) is continuous except at roots of Q(z).

This page titled 2.3: Limits and Continuous Functions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by
Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit
history is available upon request.

2.3.3 https://math.libretexts.org/@go/page/6476
2.4: The Point at Infinity
By definition the extended complex plane = C ∪ {∞} . That is, we have one point at infinity to be thought of in a limiting sense
described as follows.
A sequence of points {z } goes to infinity if |z | goes to infinity. This “point at infinity” is approached in any direction we go. All
n n

of the sequences shown in Figure 2.4.1 are growing, so they all go to the (same) “point at infinity”.
Im(z)

To ∞

Re(z)

To ∞
To ∞

Figure 2.4.1 : Various sequences all going to infinity. (CC BY-NC; Ümit Kaya)
If we draw a large circle around 0 in the plane, then we call the region outside this circle a neighborhood of infinity (Figure 2.4.2).

Figure 2.4.2 : The shaded region outside the circle of radius R is a neighborhood of infinity.

Limits involving infinity


The key idea is 1/∞ = 0 . By this we mean
1
lim =0 (2.4.1)
z→∞ z

We then have the following facts:


limz→z f (z) = ∞ ⇔ limz→z 1/f (z) = 0
0 0

limz→∞ = w0 ⇔ limz→0 f (1/z) = w0

1
limz→∞ = ∞ ⇔ limz→0 =0
f (1/z)

 Example 2.4.1
limz→∞ e
z
is not defined because it has different values if we go to infinity in different directions, e.g. we have e z x
=e e
iy
and
x iy
limx→−∞ e e =0
x iy
limx→+∞ e e =∞

limy→+∞ e e
x iy
is not defined, since x is constant, so e x
e
iy
loops in a circle indefinitely.

2.4.1 https://math.libretexts.org/@go/page/6477
 Example 2.4.2

Show lim z→∞ z


n
=∞ (for n a positive integer).
Solution
We need to show that |z n
| gets large as |z| gets large. Write z = Re , then iθ

n n inθ n n
|z | = |R e | =R = |z|

Stereographic Projection from the Riemann Sphere


One way to visualize the point at ∞ is by using a (unit) Riemann sphere and the associated stereo-graphic projection. Figure 2.4.4
shows a sphere whose equator is the unit circle in the complex plane.

Figure 2.4.3 : Stereographic projection from the sphere to the plane. (CC BY-NC; Ümit Kaya)
Stereographic projection from the sphere to the plane is accomplished by drawing the secant line from the north pole N through a
point on the sphere and seeing where it intersects the plane. This gives a 1-1 correspondence between a point on the sphere P and a
point in the complex plane z . It is easy to see show that the formula for stereographic projection is
a b
P = (a, b, c) ↦ z = +i . (2.4.2)
1 −c 1 −c

The point N = (0, 0, 1) is special, the secant lines from N through P become tangent lines to the sphere at N which never
intersect the plane. We consider N the point at infinity.
In the figure above, the region outside the large circle through the point z is a neighborhood of infinity. It corresponds to the small
circular cap around N on the sphere. That is, the small cap around N is a neighborhood of the point at infinity on the sphere!
Figure 2.4.4 shows another common version of stereographic projection. In this figure the sphere sits with its south pole at the
origin. We still project using secant lines from the north pole.

Figure 2.4.4 : Alternative stereographic projection from the sphere to the plane. (CC BY-NC; Ümit Kaya)

This page titled 2.4: The Point at Infinity is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy
Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

2.4.2 https://math.libretexts.org/@go/page/6477
2.5: Derivatives
The definition of the complex derivative of a complex function is similar to that of a real derivative of a real function: For a
function f (z) the derivative f at z is defined as
0

f (z) − f (z0 )

f (z0 ) = lim (2.5.1)
z→z0 z − z0

Provided, of course, that the limit exists. If the limit exists we say f is analytic at z or f is differentiable at z .
0 0

Remember: The limit has to exist and be the same no matter how you approach z ! 0

If f is analytic at all the points in an open region A then we say f is analytic on A .


As usual with derivatives there are several alternative notations. For example, if w = f (z) we can write
dw f (z) − f (z0 ) Δw

f (z0 ) = |z0 = lim = lim (2.5.2)
dz z→z0 z − z0 Δ→0 Δz

 Example 2.5.1
Find the derivative of f (z) = z .2

Solution
We did this above in Example 2.2.1. Take a look at that now. Of course, f ′
(z) = 2z .

 Example 2.5.2
Show f (z) = z̄ is not differentiable at any point z .
¯
¯

Solution
We did this above in Example 2.2.2. Take a look at that now.

Challenge. Use polar coordinates to show the limit in the previous example can be any value with modulus 1 depending on the
angle at which z approaches z . 0

Derivative rules
It wouldn’t be much fun to compute every derivative using limits. Fortunately, we have the same differentiation formulas as for
real-valued functions. That is, assuming f and g are differentiable we have:
Sum rule:
d
′ ′
(f (z) + g(z)) = f +g (2.5.3)
dz

Product rule:
d
′ ′
(f (z)g(z)) = f g + f g (2.5.4)
dz

Quotient rule:
′ ′
d f g −fg
(f (z)/g(z)) = (2.5.5)
2
dz g

Chain rule:
d ′ ′
g(f (z)) = g (f (z))f (z) (2.5.6)
dz

Inverse rule:

2.5.1 https://math.libretexts.org/@go/page/6478
−1
df (z) 1
= (2.5.7)
′ −1
dz f (f (z))

To give you the flavor of these arguments we’ll prove the product rule.
d f (z)g(z) − f (z0 )g(z0 )
(f (z)g(z)) = limz→z0
dz z − z0

(f (z) − f (z0 ))g(z) + f (z0 )(g(z) − g(z0 ))


= limz→z
0
z − z0 (2.5.8)

f (z) − f (z0 ) (g(z) − g(z0 ))


= limz→z0 g(z) + f (z0 )
z − z0 z − z0
′ ′
= f (z0 )g(z0 ) + f (z0 )g (z0 )

Here is an important fact that you would have guessed. We will prove it in the next section.

 Theorem 2.5.1

If f (z) is defined and differentiable on an open disk and f ′


(z) = 0 on the disk then f (z) is constant.

This page titled 2.5: Derivatives is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff (MIT
OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon
request.

2.5.2 https://math.libretexts.org/@go/page/6478
2.6: Cauchy-Riemann Equations
The Cauchy-Riemann equations are our first consequence of the fact that the limit defining f (z) must be the same no matter which
direction you approach z from. The Cauchy-Riemann equations will be one of the most important tools in our toolbox.

Partial Derivatives as Limits


Before getting to the Cauchy-Riemann equations we remind you about partial derivatives. If u(x, y) is a function of two variables
then the partial derivatives of u are defined as
∂u u(x + Δ, y) − u(x, y)
(x, y) = lim , (2.6.1)
∂x Δx→0 Δx

i.e., the derivative of u holding y constant.


∂u u(x, y + Δy) − u(x, y)
(x, y) = lim , (2.6.2)
∂y Δy→0 Δy

i.e., the derivative of u holding x constant.

Cauchy-Riemann Equations
The Cauchy-Riemann equations use the partial derivatives of u and v to allow us to do two things: first, to check if f has a
complex derivative and second, to compute that derivative. We start by stating the equations as a theorem.

 Theorem 2.6.1: Cauchy-Riemann Equations

If f (z) = u(x, y) + iv(x, y) is analytic (complex differentiable) then


∂u ∂v ∂v ∂u
f (z) = +i = −i (2.6.3)
∂x ∂x ∂y ∂y

In particular,
∂u ∂v ∂u ∂v
= and =− . (2.6.4)
∂x ∂y ∂y ∂x

This last set of partial differential equations is what is usually meant by the Cauchy-Riemann equations.
Here is the short form of the Cauchy-Riemann equations:
ux = vy (2.6.5)

uy = −vx (2.6.6)

Proof
Let's suppose that f (z) is differentiable in some region A and

f (z) = f (x + iy) = u(x, y) + iv(x, y). (2.6.7)

We'll compute f ′
(z) by approaching z first from the horizontal direction and then from the vertical direction. We’ll use the
formula
f (z + Δz) − f (z)

f (z) = lim , (2.6.8)
Δ→0 Δz

where Δz = Δx + iΔy .
Horizontal direction: Δy = 0, Δz = Δx

2.6.1 https://math.libretexts.org/@go/page/45403
f (z + Δz) − f (z)

f (z) = limΔz→0
Δz

f (x + Δx + iy) − f (x + iy)
= limΔx→0
Δx

(u(x + Δ, y) + iv(x + Δx, y)) − (u(x, y) + iv(x, y))


= limΔx→0 (2.6.9)
Δx

u(x + Δx, y) − u(x, y) v(x + Δx, y) − v(x, y)


= limΔx→0 +i
Δx Δx

∂u ∂v
= (x, y) + i (x, y)
∂x ∂x

Vertical direction: Δx = 0 , Δz = iΔy (We'll do this one a little faster.)


f (z + Δz) − f (z)

f (z) = limΔz→0
Δz

(u(x, y + Δy) + iv(x, y + Δy)) − (u(x, y) + iv(x, y))


= limΔy→0
iΔy

u(x, y + Δy) − u(x, y) v(x, y + Δy) − v(x, y)


= limΔy→0 +i (2.6.10)
iΔy iΔy

1 ∂u ∂v
= (x, y) + (x, y)
i ∂y ∂y

∂v ∂u
= (x, y) − i (x, y)
∂y ∂y

We have found two different representations of f ′


(z) in terms of the partials of u and v . If put them together we have the
Cauchy-Riemann equations:
∂u ∂v ∂v ∂u ∂u ∂v ∂u ∂v

f (z) = +i = −i ⇒ = , and − = . (2.6.11)
∂x ∂x ∂y ∂y ∂x ∂y ∂y ∂x

It turns out that the converse is true and will be very useful to us.

 Theorem 2.6.2

Consider the function f (z) = u(x, y) + iv(x, y) defined on a region A . If u and v satisfy the Cauchy-Riemann equations and
have continuous partials then f (z) is differentiable on A .

Proof
The proof of this is a tricky exercise in analysis. It is somewhat beyond the scope of this class, so we will skip it. If you’re
interested, with a little effort you should be able to grasp it.

Using the Cauchy-Riemann equations


The Cauchy-Riemann equations provide us with a direct way of checking that a function is differen- tiable and computing its
derivative.

 Example 2.6.1

Use the Cauchy-Riemann equations to show that e is differentiable and its derivative is e .
z z

Solution
We write
z x+iy x x
e =e =e cos(y) + i e sin(y).

So

2.6.2 https://math.libretexts.org/@go/page/45403
x
u(x, y) = e cos(y)

and
x
v(x, y) = e sin(y).

Computing partial derivatives we have


ux = e
x
cos(y) ,
uy = −e
x
sin(y) ,
vx = e
x
sin(y) ,
uy = e
x
cos(y) .
We see that u x = uy and u y = −vx , so the Cauchy-Riemann equations are satisfied. Thus, e is differentiable and z

d z x x z
e = ux + i vx = e cos(y) + i e sin(y) = e .
dz

 Example 2.6.2

Use the Cauchy-Riemann equations to show that f (z) = z̄ is not differentiable. ¯


¯

Solution
f (x + iy) = x − iy , so u(x, y) = x, v(x, y) = −y . Taking partial derivatives
ux = 1 ,u y =0 ,v x =0 ,v
y = −1

Since u x ≠ vy the Cauchy-Riemann equations are not satisfied and therefore f is not differentiable.

 Theorem 2.6.3

If f (z) is differentiable on a disk and f ′


(z) = 0 on the disk then f (z) is constant.

Proof
Since f is differentiable and f ′
(z) ≡ 0 , the Cauchy-Riemann equations show that

ux (x, y) = uy (x, y) = vx (x, y) = vy (x, y) = 0

We know from multivariable calculus that a function of (x, y) with both partials identically zero is constant. Thus u and v
are constant, and therefore so is f .


f (z) as a 2 × 2 matrix
Recall that we could represent a complex number a + ib as a 2 × 2 matrix
a −b
a + ib ↔ [ ]. (2.6.12)
b a

Now if we write f (z in terms of (x, y) we have

f (z) = f (x + iy) = u(x, y) + iv(x, y) ↔ f (x, y) = (u(x, y), v(x, y)). (2.6.13)

We have

f (z) = ux + i vx , (2.6.14)

so we can represent f ′
(z) as

ux −vx
[ ]. (2.6.15)
vx ux

Using the Cauchy-Riemann equations we can replace −v by u and u by v which gives us the representation
x y x y

2.6.3 https://math.libretexts.org/@go/page/45403
ux uy

f (z) ↔ [ ], (2.6.16)
vx vy

i.e, f ′
(z) is just the Jacobian of f (x, y).
For me, it is easier to remember the Jacobian than the Cauchy-Riemann equations. Since f ′
(z) is a complex number I can use the
matrix representation in Equation 1 to remember the Cauchy-Riemann equations!

This page titled 2.6: Cauchy-Riemann Equations is shared under a not declared license and was authored, remixed, and/or curated by Jeremy
Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

2.6.4 https://math.libretexts.org/@go/page/45403
2.7: Cauchy-Riemann all the way down
We’ve defined an analytic function as one having a complex derivative. The following theorem shows that if f is analytic then so is
f . Thus, there are derivatives all the way down!

 Theorem 2.7.1
Assume the second order partials of u and v exist and are continuous. If f (z) = u + iv is analytic, then so is f ′
(z) .

Proof
To show this we have to prove that f ′
(z) satisfies the Cauchy-Riemann equations. If f = u + iv we know
ux = vy ,u
y = −vx ,f ′
= ux + i vx .
Let's write

f = U + iV , (2.7.1)

so, by Cauchy-Riemann,
U = ux = uy , V = vx = −uy . (2.7.2)

We want to show that U x = Vy and U x = −Vx . We do them one at a time.


To prove U x = Vy , we use Equation 2.8.2 to see that

Ux = vyx and Vy = vxy . (2.7.3)

Since v xy = vyx , we have U x = Vy .


Similarly, to show U y = −Vx , we compute

Uy = uxy and Vx = −uyx . (2.7.4)

So, U y = −Vx . QED.

Technical point. We’ve assumed as many partials as we need. So far we can’t guarantee that all the partials exist. Soon we will
have a theorem which says that an analytic function has derivatives of all order. We’ll just assume that for now. In any case, in most
examples this will be obvious.

This page titled 2.7: Cauchy-Riemann all the way down is shared under a not declared license and was authored, remixed, and/or curated by
Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit
history is available upon request.

2.7.1 https://math.libretexts.org/@go/page/45404
2.8: Gallery of Functions
In this section we’ll look at many of the functions you know and love as functions of z . For each one we’ll have to do four things.
1. Define how to compute it.
2. Specify a branch (if necessary) giving its range.
3. Specify a domain (with branch cut if necessary) where it is analytic.
4. Compute its derivative.
Most often, we can compute the derivatives of a function using the algebraic rules like the quotient rule. If necessary we can use
the Cauchy-Riemann equations or, as a last resort, even the definition of the derivative as a limit.
Before we start on the gallery we define the term “entire function”.

 Definition
A function that is analytic at every point in the complex plane is called an entire function. We will see that e , z , z n
sin(z) are
all entire functions.

Gallery of functions, derivatives and properties


The following is a concise list of a number of functions and their complex derivatives. None of the derivatives will surprise you.
We also give important properties for some of the functions. The proofs for each follow below.
1. f (z) = e = e cos(y) + i e sin(y).
z x x

Domain = all of C (f is entire).


f (z) = e .
′ z

2. f (z) ≡ c (constant)
Domain = all of C (f is entire).
f (z) = 0 .

3. f (z) = z (n an integer ≥ 0 )
n

Domain = all of C (f is entire).



f (z) = nz .
n−1

4. P (z) (polynomial)
A polynomial has the form P (z) = a nz
n
+ an−1 z
n−1
+. . . +a0 .
Domian = all of C (P (z) is entire).
′ n−1 n−1
P (z) = nan z + (n − 1)an−1 z +. . . +2 a2 z + a1 .

5. f (z) = 1/z
Domain = C - {0} (the punctured plane).
′ 2
f (z) = −1/ z .

6. f (z) = P (z)/Q(z) (rational function)


When P and Q are polynomials P (z)/Q(z) is called a rational function.
If we assume that P and Q have no common roots, then:
Domain = C - {roots of Q}
′ ′
P Q −PQ

f (z) = .
2
Q

7. sin(z), cos(z)

 Definition
iz −iz iz −iz
e +e e −e
cos(z) = , sin(z) =
2 2i

(By Euler’s formula we know this is consistent with cos(x) and sin(x) when z = x is real.)

Domian: these functions are entire.

2.8.1 https://math.libretexts.org/@go/page/45405
d cos(z) d sin(z)
= − sin(z), = cos(z). (2.8.1)
dz dz

Other key properties of sin and cos:

- cos (z) + sin (z) = 1


2 2

- e = cos(z) + i sin(z)
z

- Periodic in x with period 2π, e.g. sin(x + 2π + iy) = sin(x + iy) .


- They are not bounded!
- In the form f (z) = u(x, y) + iv(x, y) we have
cos(z) = cos(x) cosh(y) − i sin(x) sinh(y) (2.8.2)

sin(z) = sin(x) cosh(y) + i cos(x) sinh(y)

(cosh and sinh are defined below.)


- The zeros of sin(z) are z = nπ for n any integer.
The zeros of cos(z) are z = π/2 + nπ for n any integer.
(That is, they have only real zeros that you learned about in your trig. class.)
8. Other trig functions cot(z), sec(z) etc.

 Definition

The same as for the real versions of these function, e.g. cot(z) = cos(z)/ sin(z) , sec(z) = 1/ cos(z) .

Domain: The entire plane minus the zeros of the denominator.


Derivative: Compute using the quotient rule, e.g.
d tan(z) d sin(z) cos(z) cos(z) − sin(z)(− sin(z)) 1 2
= ( ) = = = sec z (2.8.3)
2
dz dz cos(z) cos (z) cos2 (z)

(No surprises there!)


9. sinh(z), cosh(z) (hyperbolic sine and cosine)

 Definition
z −z z −z
e +e e −e
cosh(z) = , sinh(z) = (2.8.4)
2 2

Domain: these functions are entire.


d cosh(z) d sinh(z)
= sinh(z), = cosh(z) (2.8.5)
dz dz

Other key properties of cosh and sinh:


- cosh (z) − sinh (z) = 1
2 2

- For real x, cosh(x) is real and positive, sinh(x) is real.


- cosh(iz) = cos(z) , sinh(z) = −i sin(iz).
10. log(z) (See Topic 1.)

 Definition

log(z) = log(|z|) + iarg(z). (2.8.6)

2.8.2 https://math.libretexts.org/@go/page/45405
Branch: Any branch of arg(z).
Domain: C minus a branch cut where the chosen branch of arg(z) is discontinuous.
d 1
log(z) = (2.8.7)
dz z

11. z (any complex a ) (See Topic 1.)


a

 Definition
a a log(z)
z =e . (2.8.8)

Branch: Any branch of arg(z).


Domain: Generally the domain is C minus a branch cut of log. If a is an integer ≥ 0 then z is entire. If a is a negative integer
a

then z is defined and analytic on C - {0}.


a

a
dz
a−1
= az . (2.8.9)
dz

12. sin
−1
(z)

 Definition
−−−−−
−1 2
sin (z) = −i log(iz + √ 1 − z ). (2.8.10)

The definition is chosen so that sin(sin (z)) = z . The derivation of the formula is as follows.
−1

Let w = sin (z) , so z = sin(w) . Then,


−1

iw −iw
e −e
2iw iw
z = ⇒ e − 2ize −1 = 0 (2.8.11)
2i

Solving the quadratic in e iw


gives
−−−−−− −
2iz + √ −4 z 2 + 4 − −−− −
iw 2
e = = iz + √ 1 − z . (2.8.12)
2

Taking the log gives


−−−−− −−−−−
2 2
iw = log(iz + √ 1 − z ) ⇔ w = −i log(iz + √ 1 − z . (2.8.13)

From the definition we can compute the derivative:


d 1
−1
sin (z) = − − −−−. (2.8.14)
dz √ 1 − z2

Choosing a branch is tricky because both the square root and the log require choices. We will look at this more carefully in the
future.
For now, the following discussion and figure are for your amusement.
Sine (likewise cosine) is not a 1-1 function, so if we want sin (z) to be single-valued then we have to choose a region where
−1

sin(z) is 1-1. (This will be a branch of sin (z) , i.e. a range for the image,) The figure below shows a domain where sin(z) is
−1

1-1. The domain consists of the vertical strip z = x + iy with −π/2 < x < π/2 together with the two rays on boundary where
y ≥ 0 (shown as red lines). The figure indicates how the regions making up the domain in the z -plane are mapped to the

quadrants in the w-plane.

2.8.3 https://math.libretexts.org/@go/page/45405
Figure 2.8.1 : A domain where z ↦ w = sin(z) is one-to-one. (CC BY-NC; Ümit Kaya)

proofs
Here we prove at least some of the facts stated in the list just above.
1. f (z) = e . This was done in Example 2.7.1 using the Cauchy-Riemann equations.
z

2. f (z) ≡ c (constant). This case is trivial.


3. f (z) = z (n an integer ≥ 0 ): show f (z) = nz
n ′ n−1

It’s probably easiest to use the definition of derivative directly. Before doing that we note the factorization
n n n−1 n−2 n−3 2 2 n−3 n−2 n−1
z −z = (z − z0 )(z +z z0 + z z +. . . +z z + zz +z ) (2.8.15)
0 0 0 0 0

Now
n n
f (z) − f (z0 ) z −z
′ 0
f (z0 ) = limz→z = limz→z
0 0
z − z0 z − z0

n−1 n−2 n−3 2 2 n−3 n−2 n−1 (2.8.16)


= limz→z0 (z +z z0 + z z +. . . +z z + zz +z
0 0 0 0

n−1
= nz .
0

Since we showed directly that the derivative exists for all z , the function must be entire.
4. P (z) (polynomial). Since a polynomial is a sum of monomials, the formula for the derivative follows from the derivative rule
for sums and the case f (z) = z . Likewise the fact the P (z) is entire.
n

5. f (z) = 1/z . This follows from the quotient rule.


6. f (z) = P (z)/Q(z) . This also follows from the quotient rule.
7. sin(z) , cos(z). All the facts about sin(z) and cos(z) follow from their definition in terms of exponentials.
8. Other trig functions cot(z), sec(z) etc. Since these are all defined in terms of cos and sin, all the facts about these functions
follow from the derivative rules.
9. sinh(z) , cosh(z) . All the facts about sinh(z) and cosh(z) follow from their definition in terms of exponentials.
10. log(z). The derivative of log(z) can be found by differentiating the relation e = z using the chain rule. Let w = log(z) , so
log(z)

= z and
w
e

w
d w
dz de dw w
dw dw 1
e = =1 ⇒ =1 ⇒ e =1 ⇒ = (2.8.17)
w
dz dz dw dz dz dz e

Using w = log(z) we get


d log(z) 1
= . (2.8.18)
dz z

2.8.4 https://math.libretexts.org/@go/page/45405
11. z (any complex a ). The derivative for this follows from the formula
a

a a
a a log(z)
dz a log(z)
a az a−1
z =e ⇒ =e ⋅ = = az (2.8.19)
dz z z

This page titled 2.8: Gallery of Functions is shared under a not declared license and was authored, remixed, and/or curated by Jeremy Orloff (MIT
OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon
request.

2.8.5 https://math.libretexts.org/@go/page/45405
2.9: Branch Cuts and Function Composition
We often compose functions, i.e. f (g(z)) . In general in this case we have the chain rule to compute the derivative. However we
need to specify the domain for z where the function is analytic. And when branches and branch cuts are involved we need to take
care.

 Example 2.9.1

Let f (z) = e . Since e and z are both entire functions, so is f (z) = e . The chain rule gives us
2 2
z z 2 z

2
′ z
f (z) = e (2z). (2.9.1)

 Example 2.9.2

Let f (z) = e and g(z) = 1/z . f (z) is entire and g(z) is analytic everywhere but 0. So f (g(z)) is analytic except at 0 and
z

C - { 2πni, where n is any integer }


The quotient rule gives h (z) = −e /(e − 1) . A little more formally: h(z) = f (g(z)) . where f (w) = 1/w and
′ z z 2

w = g(z) = e − 1 . We know that g(z) is entire and f (w) is analytic everywhere except w = 0 . Therefore, f (g(z)) is
z

analytic everywhere except where g(z) = 0 .

 Example 2.9.3
It can happen that the derivative has a larger domain where it is analytic than the original function. The main example is
f (z) = log(z) . This is analytic on C minus a branch cut. However

d 1
log(z) = (2.9.2)
dz z

is analytic on C - {0}. The converse can’t happen.

 Example 2.9.4
−−−−
Define a region where √1 − z is analytic.
Solution
Choosing the principal branch of argument, we have √−

w is analytic on

C − {x ≤ 0, y = 0} , (see figure below.).


−−−−
So √1 − z is analytic except where w = 1 − z is on the branch cut, i.e. where w = 1 − z is real and ≤ 0 . It's easy to see that
w = 1 −z is real and ≤ 0 ⇔ z is real and ≥ 1 .
−−−−
So √1 − z is analytic on the region (see figure below)

C − {x ≥ 1, y = 0} (2.9.3)

 Note

− −−−−
A different branch choice for √w would lead to a different region where √1 − z is analytic.

The figure below shows the domains with branch cuts for this example.

2.9.1 https://math.libretexts.org/@go/page/45406
 Example 2.9.5
−−−−−
Define a region where f (z) = √1 + e is analytic.
z

Solution
Again, let's take √−

w to be analytic on the region

C − {x ≤ 0, y = 0} (2.9.4)

So, f (z) is analytic except where 1 + e is real and ≤ 0 . That is, except where e is real and ≤ −1 . Now, e = e e is real
z z z x iy

only when y is a multiple of π. It is negative only when y is an odd mutltiple of π. It has magnitude greater than 1 only when
x > 0 . Therefore f (z) is analytic on the region

C − {x ≥ 0, y = odd multiple of π} (2.9.5)

The figure below shows the domains with branch cuts for this example.

This page titled 2.9: Branch Cuts and Function Composition is shared under a not declared license and was authored, remixed, and/or curated by
Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit
history is available upon request.

2.9.2 https://math.libretexts.org/@go/page/45406
2.10: Appendix - Limits
The intuitive idea behind limits is relatively simple. Still, in the 19th century mathematicians were troubled by the lack of rigor, so
they set about putting limits and analysis on a firm footing with careful definitions and proofs. In this appendix we give you the
formal definition and connect it to the intuitive idea. In 18.04 we will not need this level of formality. Still, it’s nice to know the
foundations are solid, and some students may find this interesting.

Limits of Sequences
Intuitively, we say a sequence of complex numbers z , z , . . . converges to a if for large n , z is really close to a . To be a little
1 2 n

more precise, if we put a small circle of radius ϵ around a then eventually the sequence should stay inside the circle. Let’s refer to
this as the sequence being captured by the circle. This has to be true for any circle no matter how small, though it may take longer
for the sequence to be ‘captured’ by a smaller circle.
This is illustrated in Figure 2.10.1. The sequence is strung along the curve shown heading towards a . The bigger circle of radius ϵ 2

captures the sequence by the time n = 47 , the smaller circle doesn’t capture it till n = 59 . Note that z is inside the larger circle,
25

but since later points are outside the circle we don’t say the sequence is captured at n = 25 .

z15
z20

z10

z58 z25

z5
a

z48 z30
z1 ϵ1
ϵ2

z45
z35

z38
z40

Figure 2.10.1 : A sequence of points converging to a . (CC BY-NC; Ümit Kaya)

 Definition
The sequence z , z , z , . . . converges to the value
1 2 3 a if for every ϵ>0 there is a number Nϵ such that | zn − a| < ϵ for all
n > N . We write this as
ϵ

lim zn = a. (2.10.1)
n→∞

Again, the definition just says that eventually the sequence is within ϵ of a , no matter how small you choose ϵ.

 Example 2.10.1
Show that the sequence z n = (1/n + i )
2
has limit -1.
Solution
This is clear because 1/n → 0 . For practice, let’s phrase it in terms of epsilons: given ϵ > 0 we have to choose N such that ϵ

| zn − (−1)| < ϵ for all n > Nϵ

One strategy is to look at |z n + 1| and see that N should be. We have


ϵ

∣ 1 2 ∣ ∣ 1 2i ∣ 1 2
| zn − (−1)| = ∣( + i) + 1∣ = ∣ + ∣ < +
∣ n ∣ ∣ n2 n ∣ n
2
n

2.10.1 https://math.libretexts.org/@go/page/45407
So all we have to do is pick N large enough that
ϵ

1 2
+ <ϵ
2
Nϵ Nϵ

Since this can clearly be done we have proved that z n → i .

This was clearly more work than we want to do for every limit. Fortunately, most of the time we can apply general rules to
determine a limit without resorting to epsilons!
Remarks
1. In 18.04 we will be able to spot the limit of most concrete examples of sequences. The formal definition is needed when dealing
abstractly with sequences.
2. To mathematicians ϵ is one of the go-to symbols for a small number. The prominent and rather eccentric mathematician Paul
Erdos used to refer to children as epsilons, as in ‘How are the epsilons doing?’
3. The term ‘captured by the circle’ is not in common usage, but it does capture what is happening.

limz→z f (z)
0

Sometimes we need limits of the form lim z→z0 f (z) = a . Again, the intuitive meaning is clear: as z gets close to z we should see 0

f (z) get close to a . Here is the technical definition

 Definition

Suppose f (z) is defined on a punctured disk 0 < |z − z 0| <r around z . We say lim
0 z→z0 f (z) = a if for every ϵ > 0 there is
a δ such that
|f (z) − a| < ϵ whenever 0 < |z − z 0| <δ

This says exactly that as z gets closer (within δ ) to z we have f (z) is close (with ϵ) to a . Since ϵ can be made as small as we
0

want, f (z) must go to a .

Remarks
1. Using the punctured disk (also called a deleted neighborhood) means that f (z) does not have to be defined at z and, if it is 0

then f (z ) does not necessarily equal a . If f (z ) = a then we say the f is continuous at z .


0 0 0

2. Ask any mathematician to complete the phrase “For every ϵ" and the odds are that they will respond “there is a δ ..."

Connection between limits of sequences and limits of functions


Here’s an equivalent way to define limits of functions: the limit lim z→z0 f (z) = a if, for every sequence of points {z n} with limit
z the sequence {f (z )} has limit a .
0 n

This page titled 2.10: Appendix - Limits is shared under a not declared license and was authored, remixed, and/or curated by Jeremy Orloff (MIT
OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon
request.

2.10.2 https://math.libretexts.org/@go/page/45407
CHAPTER OVERVIEW
3: Multivariable Calculus (Review)
These notes are a terse summary of what we’ll need from multivariable calculus. If, after reading these, some parts are still unclear,
you should consult your notes or book from your multivariable calculus or ask about it at office hours. We’ve also posted a more
detailed review of line integrals and Green’s theorem. You should consult that if needed. We’ve seen that complex exponentials
make trigonometric functions easier to work with and give insight into many of the properties of trig functions. Similarly, we’ll
eventually reformulate some mathematics into complex form. We’ll see that it’s easier to present and the main properties are more
transparent in complex form.
3.1: Terminology and Notation
3.2: Parametrized curves
3.3: Chain rule
3.4: Grad, curl and div
3.5: Level Curves
3.6: Line Integrals
3.7: Green's Theorem
3.8: Extensions and Applications of Green’s Theorem

https://en.Wikipedia.org/wiki/Level_...au_contour.svg

This page titled 3: Multivariable Calculus (Review) is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by
Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit
history is available upon request.

1
3.1: Terminology and Notation
Vectors. We’ll denote vectors in the plane by (x, y)

 Note
In physics and in 18.02 we usually write vectors in the plane as xi + y j. This use of i and j would be confusing in 18.04, so we
will write this vector as (x, y).

In 18.02 you might have used angled brackets ⟨x, y⟩ for vectors and round brackets (x, y) for points. In 18.04 we will adopt the
more standard mathematical convention and use round brackets for both vectors and points. It shouldn’t lead to any confusion.
Orthogonal. Orthogonal is a synonym for perpendicular. Two vectors are orthogonal if their dot product is zero, i.e. v = (v1 , v2 )

and w = (w , w ) are orthogonal if


1 2

v ⋅ w = (v1, v2 ) ⋅ (w1 , w2 ) = v1 w1 + v2 w2 = 0.

Composition. Composition of functions will be denoted f (g(z)) or f ∘ g(z) , which is read as 'f composed with g '

This page titled 3.1: Terminology and Notation is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy
Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

3.1.1 https://math.libretexts.org/@go/page/6488
3.2: Parametrized curves
We often use the Greek letter gamma for a parameterized curve, i.e.

γ(t) = (x(t), y(t)).

We think of this as a moving point tracing out a curve in the plane. The tangent vector
′ ′ ′
γ (t) = (x (t), y (t))

is tangent to the curve at the point (x(t), y(t)). It's length |γ ′


(t)| is the instantaneous speed of the moving point.

y
γ' (t)
γ (t)

γ' (t)

Figure 3.2.1 : Parametrized curve γ(t) with some tangent vectors γ ′


(t) . (CC BY-NC; Ümit Kaya)

 Example 3.2.1
Parametrize the straight line from the point (x 0, y0 ) to (x 1, y1 ) .
Solution
There are always many parametrizations of a given curve. A standard one for straight lines is

γ(t) = (x, y) = (x0 , y0 ) + t(x1 − x0 , y1 − y0 ), with 0 ≤ t ≤ 1.

 Example 3.2.2
Parametrize the circle of radius r around the point (x 0, y0 ) .
Solution
Again there are many parametrizations. Here is the standard one with the circle traversed in the counterclockwise direction:

γ(t) = (x, y) = (x0 , y0 ) + r(cos(t), sin(t)), with 0 ≤ t ≤ 2π.

Figure 3.2.2 : Line from (x


0, y0 ) to (x 1, y1 ) and circle around (x 0, y0 ) . (CC BY-NC; Ümit Kaya)

3.2.1 https://math.libretexts.org/@go/page/6489
This page titled 3.2: Parametrized curves is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy
Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

3.2.2 https://math.libretexts.org/@go/page/6489
3.3: Chain rule
For a function f (x, y) and a curve γ(t) = (x(t), y(t)) the chain rule gives
df (γ(t)) ∂f ∣ ∂f ∣
′ ′ ′
= ∣ x (t) + ∣ y (t) = ∇f (γ(t)) ⋅ y (t) dot product of vectors. (3.3.1)
dt ∂x ∣γ(t) ∂y ∣γ(t)

Here ∇f is the gradient of f defined in the next section.

This page titled 3.3: Chain rule is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff (MIT
OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon
request.

3.3.1 https://math.libretexts.org/@go/page/6490
3.4: Grad, curl and div
Gradient. For a function f (x, y), the gradient is defined as gradf = ∇f = (fx , fy ) . A vector field F which is the gradient of some
function is called a gradient vector field.
Curl. For a vector in the plane F(x, y) = (M (x, y), N (x, y)) we define
curlF = N x − My .
Note. The curl is a scalar. In general, the curl of a vector field is another vector field. However, for vectors fields in the plane the
curl is always in the k
ˆ
direction, so we have simply dropped the k ˆ
and made curl a scalar.
Divergence. The divergence of the vector field F = (M , N ) is
divF = M x + Ny .

This page titled 3.4: Grad, curl and div is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff
(MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

3.4.1 https://math.libretexts.org/@go/page/6491
3.5: Level Curves
Recall that the level curves of a function f (x, y) are the curves given by f (x, y) = constant.
Recall also that the gradient ∇f is orthogonal to the level curves of f

This page titled 3.5: Level Curves is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff (MIT
OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon
request.

3.5.1 https://math.libretexts.org/@go/page/6492
3.6: Line Integrals
The ingredients for line (also called path or contour) integrals are the following:
A vector field F = (M , N )
A curve γ(t) = (x(t), y(t)) defined for a ≤ t ≤ b
Then the line integral of F along γ is defined by
b

∫ F ⋅ dr = ∫ F (γ(t)) ⋅ y (t)dt = ∫ M dx + N dy. (3.6.1)
γ a γ

 Example 3.6.1
Let F 2
= (−y/ r , x/ r )
2
and let γ be the unit circle. Compute line integral of F along γ.
Solution
You should be able to supply the answer to this example

Properties of line integrals


1. Independent of parametrization.
2. Reverse direction on curve ⇒ change sign. That is,

∫ F ⋅ dr = − ∫ F ⋅ dr. (3.6.2)
−C C

(Here, −C means the same curve traversed in the opposite direction.)


3. If C is closed then we sometimes indicate this with the notation ∮ C
F ⋅ dr = ∮
C
M dx + N dy .

Fundamental theorem for gradient fields

 Theorem 3.6.1 Fundamental theorem for gradient fields


If F = nablaf then ∫
γ
F ⋅ dr = f (P ) − f (Q) , where Q, P are the beginning and endpoints respectively of γ.

Proof
By the chain rule we have
df (γ(t))
′ ′
= ∇f (γ(t)) ⋅ γ (t) = F (γ(t)) ⋅ y (t). (3.6.3)
dt

The last equality follows from our assumption that F = ∇f . Now we can this when we compute the line integral:
b ′
∫ F ⋅ dr = ∫ F (γ(t)) ⋅ y (t) dt
γ a

b
df (γ(t))
= ∫ dt
a (3.6.4)
dt

= f (γ(b)) − f (γ(a))

= f (P ) − f (Q)

Notice that the third equality follows from the fundamental theorem of calculus.

 Definition: Potential Function


If a vector field F is a gradient field, with F = ∇f , then we call f a potential function for F .
Note: the usual physics terminology would be to call f the potential function for F .

3.6.1 https://math.libretexts.org/@go/page/50414
independence and conservative functions

 Definition: Path Independence

For a vector field F , the line integral ∫ F ⋅ dr is called path independent if, for any two points P and Q, the line integral has
the same value for every path between P and Q.

 Theorem 3.6.2


C
F ⋅ dr is path independent is equivalent to ∮ C
F ⋅ dr = 0 for any closed path.

Sketch of Proof
Draw two paths from Q to P . Following one from Q to P and the reverse of the other back to P is a closed path. The
equivalence follows easily. We refer you to the more detailed review of line integrals and Green’s theorem for more details.

 Definition: Conservative Vector Field

A vector field with path independent line integrals, equivalently a field whose line integrals around any closed loop is 0 is
called a conservative vector field.

 Theorem 3.6.3

We have the following equivalence: On a connected region, a gradient field is conservative and a conservative field is a
gradient field.

Proof
Again we refer you to the more detailed review for details. Essentially, if F is conservative then we can define a potential
function f (x, y) as the line integral of F from some base point to (x, y) .

This page titled 3.6: Line Integrals is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff (MIT
OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon
request.

3.6.2 https://math.libretexts.org/@go/page/50414
3.7: Green's Theorem
Ingredients: C a simple closed curve (i.e. no self-intersection), and R the interior of C .
C must be piecewise smooth (traversed so interior region R is on the left) and piecewise smooth (a few corners are okay).

Figure 3.7.1 : Examples of piecewise smooth and piecewise smooth regions. (CC BY-NC; Ümit Kaya)

 Theorem 3.7.1: Green's Theorem

If the vector field F = (M , N ) is defined and differentiable on R then

∮ M dx + N dy = ∫ ∫ Nx − My dA. (3.7.1)
C R

In vector form this is written

∮ F ⋅ dr = ∫ ∫ curlF dA. (3.7.2)


C R

where the curl is defined as curlF = (Nx − My )

This page titled 3.7: Green's Theorem is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff
(MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

3.7.1 https://math.libretexts.org/@go/page/50593
3.8: Extensions and Applications of Green’s Theorem
Simply Connected Regions

 Definition: Simply Connected Regions

A region D in the plane is simply connected if it has “no holes”. Said differently, it is simply connected for every simple
closed curve C in D, the interior of C is fully contained in D.

 Examples

Figure 3.8.1 : Examples


D1 - D are simply connected. For any simple closed curve
5 C inside any of these regions the interior of C is entirely inside
the region.
Note: Sometimes we say any curve can be shrunk to a point without leaving the region.

The regions below are not simply connected. For each, the interior of the curve C is not entirely in the region.

Figure 3.8.2 : Not simply connected regions. (CC BY-NC; Ümit Kaya)

Potential Theorem
Here is an application of Green’s theorem which tells us how to spot a conservative field on a simply connected region. The
theorem does not have a standard name, so we choose to call it the Potential Theorem.

 Theorem 3.8.1: Potential Theorem


Take F = (M , N ) defined and differentiable on a region D.
a. If F = ∇f then curlF = N − M x y =0 .
b. If D is simply connected and curlF = 0 on D, then F = ∇f for some f .
We know that on a connected region, being a gradient field is equivalent to being conservative. So we can restate the Potential
Theorem as: on a simply connected region, F is conservative is equivalent to curlF = 0 .

Proof
Proof of (a): F = (fx , fy ) , so curlF = fyx − fxy = 0 .

3.8.1 https://math.libretexts.org/@go/page/50594
Proof of (b): Suppose C is a simple closed curve in D . Since D is simply connected the interior of C is also in D .
Therefore, using Green’s theorem we have,

∮ F ⋅ dr = ∫ ∫ curlF dA = 0. (3.8.1)
C R

Figure 3.8.3: Potential Theorem


This shows that F is conservative in D. Therefore, this is a gradient field.

Summary: Suppose the vector field F = (M , N ) is defined on a simply connected region D. Then, the following statements are
equivalent.
Q
1. ∫ F ⋅ dr is path independent.
P

2. ∮ F ⋅ dr = 0 for any closed path C .


C

3. F = ∇f for some f in D.
4. F is conservative in D.
If F is continuously differentiably then 1, 2, 3, 4 all imply 5:
5. curlF = N − M = 0 in D
x y

need simply connected in the Potential Theorem


If there is a hole then F might not be defined on the interior of C . (Figure 3.8.4)

C D

Figure 3.8.4 : A hole in the region. (CC BY-NC; Ümit Kaya)

Extended Green’s Theorem


We can extend Green’s theorem to a region R which has multiple boundary curves.
Suppose R is the region between the two simple closed curves C and C . 1 2

Figure 3.8.5 : Multiple boundary curves as an example of the extended Green's theorem. (CC BY-NC; Ümit Kaya)
(Note R is always to the left as you traverse either curve in the direction indicated.)
Then we can extend Green’s theorem to this setting by

3.8.2 https://math.libretexts.org/@go/page/50594
∮ F ⋅ dr + ∮ F ⋅ dr = ∫ ∫ curlF dA. (3.8.2)
C1 C2 R

Likewise for more than two curves:

∮ F ⋅ dr + ∮ F ⋅ dr + ∮ F ⋅ dr + ∮ F ⋅ dr = ∫ ∫ curlF dA. (3.8.3)


C1 C2 C3 C4 R

P roof . The proof is based on the following figure. We ‘cut’ both C and C and connect them by two copies of C , one in each
1 2 3

direction. (In the figure we have drawn the two copies of C as separate curves, in reality they are the same curve traversed in
3

opposite directions.)
Now the curve C = C + C + C − C is a simple closed curve and Green’s theorem holds on it. But the region inside
1 3 2 3 C is
exactly R and the contributions of the two copies of C cancel. That is, we have shown that
3

∫ ∫ curlF dA = ∫ F ⋅ dr = ∫ F ⋅ dr. (3.8.4)


R C1 +C3 +C2 −C3 C1 +C2

This is exactly Green's theorem, which we wanted to prove.

Figure 3.8.6 : The punctured plane. (CC BY-NC; Ümit Kaya)

 Example 3.8.1
(−y, x)
Let F =
2
("tangential field")
r

F is defined on D = plane - (0, 0) = the punctured plane (Figure 3.8.7).

Figure 3.8.7 : A punctured plane. (CC BY-NC; Ümit Kaya)


It’s easy to compute (we’ve done it before) that curlF =0 in D.
Question: For the tangential field F what values can ∮ C
F ⋅ dr take for C a simple closed curve (positively oriented)?
Solution
We have two cases (i) C not around 0 (ii) C around 0
1 2

3.8.3 https://math.libretexts.org/@go/page/50594
Figure 3.8.8 : Two cases. (CC BY-NC; Ümit Kaya)
In case (i) Green’s theorem applies because the interior does not contain the problem point at the origin. Thus,

∮ F ⋅ dr = ∫ ∫ curlF dA = 0. (3.8.5)
C1 R

For case (ii) we will show that


let C be a small circle of radius a , entirely inside C . By the extended Green’s theorem we have
3 2

∮ F ⋅ dr − ∮ F ⋅ dr = ∫ ∫ curlF dA = 0. (3.8.6)
C2 C3 R

Thus, ∮ C2
F ⋅ dr = ∮
C3
F ⋅ dr .
Using the usual parametrization of a circle we can easily compute that the line integral is

∫ F ⋅ dr = ∫ 1 dt = 2π. QED. (3.8.7)


C3 0

Figure 3.8.1 : A punctured region. (CC BY-NC; Ümit Kaya)


Answer to the question: The only possible values are 0 and 2π.
We can extend this answer in the following way:
If C is not simple, then the possible values of ∮
C
F ⋅ dr are 2πn, where n is the number of times C goes (counterclockwise)
around (0,0).
Not for class: n is called the winding number of C around 0. n also equals the number of times C crosses the positive x-axis,
counting +1 from below and -1 from above.

3.8.4 https://math.libretexts.org/@go/page/50594
Figure 3.8.1 : An example of a non-simple region. (CC BY-NC; Ümit Kaya)

This page titled 3.8: Extensions and Applications of Green’s Theorem is shared under a CC BY-NC-SA 4.0 license and was authored, remixed,
and/or curated by Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform;
a detailed edit history is available upon request.

3.8.5 https://math.libretexts.org/@go/page/50594
CHAPTER OVERVIEW
4: Line Integrals and Cauchy’s Theorem
4.1: Introduction to Line Integrals and Cauchy’s Theorem
4.2: Complex Line Integrals
4.3: Fundamental Theorem for Complex Line Integrals
4.4: Path Independence
4.5: Examples
4.6: Cauchy's Theorem
4.7: Extensions of Cauchy's theorem

This page titled 4: Line Integrals and Cauchy’s Theorem is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated
by Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit
history is available upon request.

1
4.1: Introduction to Line Integrals and Cauchy’s Theorem
The basic theme here is that complex line integrals will mirror much of what we’ve seen for multi- variable calculus line integrals.
But, just like working with e is easier than working with sine and cosine, complex line integrals are easier to work with than their

multivariable analogs. At the same time they will give deep insight into the workings of these integrals.
To define complex line integrals, we will need the following ingredients:
The complex plane: z = x + iy
The complex differential dz = dx + idy
A curve in the complex plane: γ(t) = x(t) + iy(t) , defined for a ≤ t ≤ b .
A complex function: f (z) = u(x, y) + iv(x, y)

This page titled 4.1: Introduction to Line Integrals and Cauchy’s Theorem is shared under a CC BY-NC-SA 4.0 license and was authored,
remixed, and/or curated by Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts
platform; a detailed edit history is available upon request.

4.1.1 https://math.libretexts.org/@go/page/6480
4.2: Complex Line Integrals
Line integrals are also called path or contour integrals. Given the ingredients we define the complex lineintegral ∫ γ
f (z) dz by
b

∫ f (z) dz := ∫ f (γ(t))γ (t) dt. (4.2.1)
γ a

You should note that this notation looks just like integrals of a real variable. We don’t need the vectors and dot products of line
integrals in R . Also, make sure you understand that the product f (γ(t))γ (t) is just a product of complex numbers.
2 ′

An alternative notation uses dz = dx + idy to write

∫ f (z) dz = ∫ (u + iv)(dx + idy) (4.2.2)


γ γ

Let’s check that Equations 4.2.1 and 4.2.2 are the same. Equation 4.2.2 is really a multivariable calculus expression, so thinking of
γ(t) as (x(t), y(t)) it becomes

b
′ ′
∫ f (z) dz = ∫ [u(x(t), y(t)) + iv(x(t), y(t)](x (t) + i y (t))dt (4.2.3)
γ a

but
u(x(t), y(t)) + iv(x(t), y(t)) = f (γ(t)) (4.2.4)

and
′ ′ ′
x (t) + i y (t) = γ (t) (4.2.5)

so the right hand side of Equation 4.2.2 is


b

∫ f (γ(t))γ (t) dt. (4.2.6)
a

That is, it is exactly the same as the expression in Equation 4.2.1

 Example 4.2.1
Compute ∫ γ
z
2
dz along the straight line from 0 to 1 + i .

Solution
We parametrize the curve as γ(t) = t(1 + i) with 0 ≤ t ≤ 1 . So γ ′
(t) = 1 + i . The line integral is
1
2i(1 + i)
2 2 2
∫ z dz = ∫ t (1 + i ) (1 + i) dt = .
0 3

 Example 4.2.2

Compute ∫ γ
¯
¯¯
z dz along the straight line from 0 to 1 + i .
Solution
We can use the same parametrization as in the previous example. So,
1

¯¯
∫ z̄ dz = ∫ t(1 − i)(1 + i) dt = 1.
γ 0

4.2.1 https://math.libretexts.org/@go/page/6481
 Example 4.2.3

Compute ∫ γ
z
2
dz along the unit circle.

Solution
We parametrize the unit circle by γ(θ) = e , where 0 ≤ θ ≤ 2π . We have γ
iθ ′
(θ) = i e

. So, the integral becomes
2π 2π 3iθ
e
2 2iθ iθ 3iθ 2π
∫ z dz = ∫ e ie dθ = ∫ ie dθ = | = 0.
0
γ 0 0
3

 Example 4.2.4

Compute ∫ z̄ dz along the unit circle.


¯
¯

Solution
Parametrize C : γ(t) = e , with 0 ≤ t ≤ 2π . So, γ
it ′
(t) = i e
it
. Putting this into the integral gives
2π 2π
¯¯¯¯¯
¯¯ it it
∫ z̄ dz = ∫ e ie dt = ∫ i dt = 2πi.
C 0 0

This page titled 4.2: Complex Line Integrals is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy
Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

4.2.2 https://math.libretexts.org/@go/page/6481
4.3: Fundamental Theorem for Complex Line Integrals
This is exactly analogous to the fundamental theorem of calculus.

 Theorem 4.3.1: Fundamental Theorem of Complex Line Integrals


If f (z) is a complex analytic function on an open region A and γ is a curve in A from z to z then 0 1


∫ f (z) dz = f (z1 ) − f (z0 ).
γ

Proof
This is an application of the chain rule. We have
df (γ(t))
′ ′
= f (γ(t))γ (t).
dt

So
b b
df (γ(t))
′ ′ ′ b
∫ f (z) dz = ∫ f (γ(t))γ (t) dt = ∫ dt = f (γ(t))|a = f (z1 ) − f (z0 )
γ a a
dt

Another equivalent way to state the fundamental theorem is: if f has an antiderivative F , i.e. F ′
=f then

∫ f (z) dz = F (z1 ) − F (z0 ).


γ

 Example 4.3.1

Redo ∫ γ
z
2
dz , with γ the straight line from 0 to 1 + i .
Solution
We can check by inspection that z has an antiderivative F (z) = z
2 3
/3 . Therefore the fundamental theorem implies
3 1+i 3
z ∣ (1 + i) 2i(1 + i)
2
∫ z dz = ∣ = = .
γ
3 ∣ 3 3
0

 Example 4.3.2
Redo ∫ γ
z
2
dz , with γ the unit circle.

Solution
Again, since z had antiderivative z /3 we can evaluate the integral by plugging the end-points of γ into the
2 3 3
z /3 . Since the
endpoints are the same the resulting difference will be 0!

This page titled 4.3: Fundamental Theorem for Complex Line Integrals is shared under a CC BY-NC-SA 4.0 license and was authored, remixed,
and/or curated by Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform;
a detailed edit history is available upon request.

4.3.1 https://math.libretexts.org/@go/page/6482
4.4: Path Independence
We say the integral ∫ f (z) dz is path in dependent if it has the same value for any two paths with the same endpoints. More
γ

precisely, if f (z) is defined on a region A then ∫ f (z) dz is path independent in A , if it has the same value for any two paths in A
γ

with the same endpoints.


The following theorem follows directly from the fundamental theorem. The proof uses the same argument as Example 4.3.2.

 Theorem 4.4.1

If f (z) has an antiderivative in an open region A , then the path integral ∫ f (z) dz is path independent for all paths in A .
γ

Proof
Since f (z) has an antiderivative of f (z), the fundamental theorem tells us that the integral only depends on the endpoints
of γ , i.e.

∫ f (z) dz = F (z1 ) − F (z0 )


γ

where z and z are the beginning and end point of γ .


0 1

An alternative way to express path independence uses closed paths.

 Theorem 4.4.2
The following two things are equivalent.

1. The integral ∫ f (z) dz is path independent.


γ

2. The integral ∫ f (z) dz around any closed path is 0.


γ

Proof
This is essentially identical to the equivalent multivariable proof. We have to show two things:
i. Path independence implies the line integral around any closed path is 0.
ii. If the line integral around all closed paths is 0 then we have path independence.
To see (i), assume path independence and consider the closed path C shows in figure (i) below. Since the starting point z 0

in the same as the endpoint z the integral ∫ f (z) dz must have the same value as the line integral over the curve
1
C

consisting of the single point z . Since that is clearly 0 we must have the integral over C is 0.
0

To see (ii), assume ∫ f (z) dz = 0 for any closed curve. Consider the two curves C and
C
1 C2 shown in figure (ii). Both
start at z and end at z . By the assumption that integrals over closed paths are 0 we have ∫
0 1
C1 −C2
f (z) dz = 0 . So,

fC1 f (z) dz = ∫ f (z) dz.


C2

That is, any two paths from z to z have the same line integral. This shows that the line integrals are path independent.
0 1

4.4.1 https://math.libretexts.org/@go/page/6483
This page titled 4.4: Path Independence is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff
(MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

4.4.2 https://math.libretexts.org/@go/page/6483
4.5: Examples
 Example 4.5.1

Why can't we compute ∫ γ


¯
¯¯
z dz using the fundamental theorem.
Solution
Because z doesn’t have an antiderivative. We can also see this by noting that if z had an antiderivative, then its integral around
¯
¯¯ ¯
¯¯

the unit circle would have to be 0. But, we saw in Example 4.2.4 that this is not the case.

 Example 4.5.2
1
Compute ∫ γ
dz over each of the following contours
z

i. The line from 1 to 1 + i .


ii. The circle of radius 1 around z = 3 .
iii. The unit circle.
Solution
For parts (i) and (ii) there is no problem using the antiderivative log(z) because these curves are contained in a simply
connected region that doesn’t contain the origin.
(i)
1 – π
∫ dz = log(1 + i) − log(1) = log(√2) + i . (4.5.1)
γ z 4

(ii) Since the beginning and end points are the same, we get
1
∫ dz = 0 (4.5.2)
γ
z

(iii) We parametrize the unit circle by γ(θ) = e iθ


with 0 ≤ θ ≤ 2π . We compute γ ′
(θ) = i e

. So the integral becomes
2π 2π
1 1

∫ dz = ∫ ie dt = ∫ i dt = 2πi. (4.5.3)

γ
z 0 e 0

Notice that we could use log(z) if we were careful to let the argument increase by 2π as it went around the origin once.

 Example 4.5.3
1
Compute ∫ γ 2
dz , where γ is the unit circle in two ways.
z

i. Using the fundamental theorem.


ii. Directly from the definition.
Solution
(i) Let f (z) = −1/z . Since f ′
(z) = 1/ z
2
, the fundamental theorem says
1

∫ dz = ∫ f (z) dz = f (endpint) − f (start point) = 0. (4.5.4)
2
γ z γ

It equals 0 because the start and endpoints are the same.


(ii) As usual, we parametrize the unit circle as γ(θ = e iθ
with 0 ≤ θ ≤ 2π . So, γ ′
(θ) = i e

and the integral becomes
2π 2π
1 1 iθ −iθ −iθ 2π
∫ dz = ∫ ie dθ = ∫ ie dθ = −e | = 0. (4.5.5)
0
γ
z2 0 e
2iθ
0

4.5.1 https://math.libretexts.org/@go/page/6484
This page titled 4.5: Examples is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff (MIT
OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon
request.

4.5.2 https://math.libretexts.org/@go/page/6484
4.6: Cauchy's Theorem
Cauchy’s theorem is analogous to Green’s theorem for curl free vector fields.

 Theorem 4.6.1 Cauchy's theorem


Suppose A is a simply connected region, f (z) is analytic on A and C is a simple closed curve in A . Then the following three
things hold:
(i) ∫
C
f (z) dz = 0

(i') We can drop the requirement that C is simple in part (i).


(ii) Integrals of f on paths within A are path independent. That is, two paths with the same endpoints integrate to the same
value.
(iii) f has an antiderivative in A .

Proof
We will prove (i) using Green’s theorem – we could give a proof that didn’t rely on Green’s, but it would be quite similar in
flavor to the proof of Green’s theorem.
Let R be the region inside the curve. And write f = u + iv . Now we write out the integral as follows

∫ f (z) dz = ∫ (u + iv)(dx + idy) = ∫ (u dx − v dy) + i(v dx + u dy). (4.6.1)


C C C

Let’s apply Green’s theorem to the real and imaginary pieces separately. First the real piece:

∫ u dx − v dy = ∫ (−vx − uy ) dx dy = 0. (4.6.2)
C R

Likewise for the imaginary piece:

∫ v dx + u dy = ∫ (ux − vy ) dx dy = 0. (4.6.3)
C R

We get 0 because the Cauchy-Riemann equations say u x = vy , so u x − vy = 0 .


To see part (i′) you should draw a few curves that intersect themselves and convince yourself that they can be broken into a
sum of simple closed curves. Thus, (i′) follows from (i). (In order to truly prove part (i′) we would need a more technically
precise definition of simply connected so we could say that all closed curves within A can be continuously deformed to
each other.)
Part (ii) follows from (i) and Theorem 4.4.2.
To see (iii), pick a base point z 0 ∈ A and let
z

F (z) = ∫ f (w) dw. (4.6.4)


z0

Here the itnegral is over any path in A connecting z to z . By part (ii), F (z) is well defined. If we can show that
0

F (z) = f (z) then we’ll be done. Doing this amounts to managing the notation to apply the fundamental theorem of

calculus and the Cauchy-Riemann equations. So, let’s write

f (z) = u(x, y) + iv(x, y), F (z) = U (x, y) + iV (x, y). (4.6.5)

Then we can write


∂f
= ux + i vx , etc. (4.6.6)
∂x

We can formulate the Cauchy-Riemann equations for F (z) as

4.6.1 https://math.libretexts.org/@go/page/6485
∂F 1 ∂F

F (z) = = (4.6.7)
∂x i ∂y

i.e.


1
F (z) = Ux + i Vx = (Uy + i Vy ) = Vy − i Uy . (4.6.8)
i

For reference, we note that using the path γ(t) = x(t) + iy(t) , with γ(0) = z and γ(b) = z we have 0

z z
F (z) = ∫ f (w) dw = ∫ (u(x, y) + iv(x, y))(dx + idy)
z0 z0
(4.6.9)
b ′ ′
= ∫ (u(x(t), y(t)) + iv(x(t), y(t))(x (t) + i y (t)) dt.
0

Our goal now is to prove that the Cauchy-Riemann equations given in Equation 4.6.9 hold for F (z) . The figure below
shows an arbitrary path from z to z , which can be used to compute f (z). To compute the partials of F we’ll need the
0

straight lines that continue C to z + h or z + ih .

Paths for proof of Cauchy’s theorem


To prepare the rest of the argument we remind you that the fundamental theorem of calculus implies
h
∫ g(t) dt
0
lim = g(0). (4.6.10)
h→0 h

(That is, the derivative of the integral is the original function.)


∂F
First we'll look at . So, fix z = x + iy . Looking at the paths in the figure above we have
∂x

F (z + h) − F (z) = ∫ f (w) dw − ∫ f (w) dw = ∫ f (w) dw. (4.6.11)


C +Cx C Cx

The curve C is parametrized by γ(t) + x + t + iy , with 0 ≤ t ≤ h . So,


x

F (z + h) − F (z) ∫ f (w) dw
∂F Cx
= limh→0 = limh→0
∂x h h
h
∫ u(x + t, y) + iv(x + t, y) dt
0
(4.6.12)
= limh→0
h

= u(x, y) + iv(x, y)

= f (z).

The second to last equality follows from Equation 4.6.10.


Similarly, we get (remember: w = z + it , so dw = i dt )

4.6.2 https://math.libretexts.org/@go/page/6485
∫ f (w) dw
1 ∂F F (z + ih) − F (z) Cy

= limh→0 = limh→0
i ∂y ih ih

h
∫ u(x, y + t) + iv(x, y + t)i dt
0 (4.6.13)
= limh→0
ih

= u(x, y) + iv(x, y)

= f (z).

Together Equations 4.6.12 and 4.6.13 show


∂F 1 ∂F
f (z) = = (4.6.14)
∂x i ∂y

By Equation 4.6.7 we have shown that F is analytic and F ′


=f .

This page titled 4.6: Cauchy's Theorem is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff
(MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

4.6.3 https://math.libretexts.org/@go/page/6485
4.7: Extensions of Cauchy's theorem
Cauchy’s theorem requires that the function f (z) be analytic on a simply connected region. In cases where it is not, we can extend
it in a useful way.
Suppose R is the region between the two simple closed curves C1 and C2 . Note, both C1 and C2 are oriented in a
counterclockwise direction.

Figure 4.7.1: (CC BY-NC; Ümit Kaya)

 Theorem 4.7.1 Extended Cauchy's theorem

If f (z) is analytic on R then

∫ f (z) dz = 0. (4.7.1)
C1 −C2

Proof
The proof is based on the following figure. We ‘cut’ both C and C and connect them by two copies of C , one in each
1 2 3

direction. (In the figure we have drawn the two copies of C as separate curves, in reality they are the same curve traversed
3

in opposite directions.)

With C acting as a cut, the region enclosed by


3 C1 + C3 − C2 − C3 is simply connected, so Cauchy's Theorem 4.6.1
applies. We get

∫ f (z) dz = 0 (4.7.2)
C1 +C3 −C2 −C3

The contributions of C and −C cancel, which leaves ∫


3 3 C1 −C2
f (z) dz = 0. QED

 Note

This clearly implies ∫C1


f (z) dz = ∫
C2
f (z) dz .

4.7.1 https://math.libretexts.org/@go/page/50602
 Example 4.7.1

Let f (z) = 1/z . f (z) is defined and analytic on the punctured plane. What values can ∫ C
f (z) dz take for C a simple closed
curve (positively oriented) in the plane?

Punctured plane: C - {0} (CC BY-NC; Ümit Kaya)


Solution
We have two cases (i) C not around 0, and (ii) C around 0
1 2

Case (i): Cauchy’s theorem applies directly because the interior does not contain the problem point at the origin. Thus,

∫ f (z) dz = 0. (4.7.3)
C1

Case (ii): we will show that

∫ f (z) dz = 2πi. (4.7.4)


C2

Let C be a small circle of radius a centered at 0 and entirely inside C .


3 2

4.7.2 https://math.libretexts.org/@go/page/50602
Figure for part (ii) (CC BY-NC; Ümit Kaya)
By the extended Cauchy theorem we have

∫ f (z) dz = ∫ f (z) dz = ∫ i dt = 2πi. (4.7.5)


C2 C3 0

Here, the lline integral for C was computed directly using the usual parametrization of a circle.
3

Answer to the question


The only possible values are 0 and 2πi.
We can extend this answer in the following way:
If C is not simple, then the possible values of

∫ f (z) dz
C

are 2πni, where n is the number of times C goes (counterclockwise) around the origin 0.

 Definition: Winding Number

n is called the winding number of C around 0. n also equals the number of times C crosses the positive x-axis, counting ±1

for crossing from below and -1 for crossing from above.

A curve with winding number 2 around the origin. (CC BY-NC; Ümit Kaya)

4.7.3 https://math.libretexts.org/@go/page/50602
 Example 4.7.2
A further extension: using the same trick of cutting the region by curves to make it simply connected we can show that if f is
analytic in the region R shown below then

∫ f (z) dz = 0.
C1 −C2 −C3 −C4

That is, C 1 − C2 − C3 − C4 is the boundary of the region R .


Orientation
It is important to get the orientation of the curves correct. One way to do this is to make sure that the region R is always to the
left as you traverse the curve. In the above example. The region is to the right as you traverse C , C or C in the direction
2 3 4

indicated. This is why we put a minus sign on each when describing the boundary.

This page titled 4.7: Extensions of Cauchy's theorem is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by
Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit
history is available upon request.

4.7.4 https://math.libretexts.org/@go/page/50602
CHAPTER OVERVIEW
5: Cauchy Integral Formula
Cauchy’s theorem is a big theorem which we will use almost daily from here on out. Right away it will reveal a number of
interesting and useful properties of analytic functions. More will follow as the course progresses. We start with a statement of the
theorem for functions. After some examples, we’ll give a generalization to all derivatives of a function. After some more examples
we will prove the theorems. After that we will see some remarkable consequences that follow fairly directly from the Cauchy’s
formula.
5.1: Cauchy's Integral for Functions
5.2: Cauchy’s Integral Formula for Derivatives
5.3: Proof of Cauchy's integral formula
5.4: Proof of Cauchy's integral formula for derivatives
5.5: Amazing consequence of Cauchy’s integral formula

Thumbnail: https://wiki.seg.org/wiki/Cauchy%27s_theorem

This page titled 5: Cauchy Integral Formula is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy
Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

1
5.1: Cauchy's Integral for Functions
 Theorem 5.1.1: Cauchy's Integral Formula

Suppose C is a simple closed curve and the function f (z) is analytic on a region containing C and its interior (Figure 5.1.1).
We assume C is oriented counterclockwise.

Figure 5.1.1 : Cauchy's integral formula: simple closed curve C , f (z) analytic on and inside C . (CC BY-NC; Ümit Kaya)
Then for any z inside C :0

1 f (z)
f (z0 ) = ∫ dz (5.1.1)
2πi C
z − z0

This is remarkable: it says that knowing the values of f on the boundary curve C means we know everything about f inside
C !! This is probably unlike anything you’ve encountered with functions of real variables.

Aside 1. With a slight change of notation (z becomes w and z becomes z ) we often write the formula as
0

1 f (w)
f (z) = ∫ dw (5.1.2)
2πi C
w −z

Aside 2. We’re not being entirely fair to functions of real variables. We will see that for f = u + iv the real and imaginary
parts u and v have many similar remarkable properties. u and v are called conjugate harmonic functions.

 Example 5.1.1
2
z
e
Compute ∫ c
dz , where C is the curve shown in Figure 5.1.2.
z−2

Figure 5.1.2 : Curve in Example. (CC BY-NC; Ümit Kaya)


Solution
Let f (z) = e . f (z) is entire. Since C is a simple closed curve (counterclockwise) and is inside , Cauchy’s integral
2
z
z =2 C

formula says that the integral is 2πif (2) = 2πie . 4

5.1.1 https://math.libretexts.org/@go/page/6495
 Example 5.1.2

Do the same integral as the previous example with C the curve shown in Figure 5.1.3.

Figure 5.1.3 : Curve used in Example. (CC BY-NC; Ümit Kaya)


Solution
2

Since f (z) = e z
/(z − 2) is analytic on and inside C , Cauchy’s theorem says that the integral is 0.

 Example 5.1.3

Do the same integral as the previous examples with C the curve shown.

Im(z)

Re(z)
2

Figure 5.1.4 : Curve for Example. (CC BY-NC; Ümit Kaya)


Solution
This one is trickier. Let f (z) = e . The curve C goes around 2 twice in the clockwise direction, so we break C into C
2
z
1 + C2

as shown in the next figure.

Figure 5.1.5 : Solution to Example. (CC BY-NC; Ümit Kaya)


These are both simple closed curves, so we can apply the Cauchy integral formula to each separately. (The negative signs are
because they go clockwise around z = 2 .)

5.1.2 https://math.libretexts.org/@go/page/6495
f (z) f (z) f (z)
∫ dz = ∫ dz + ∫ dz = −2πif (2) − 2πif (2) = −4πif (2). (5.1.3)
C z−2 C1 z−2 C2 z−2

This page titled 5.1: Cauchy's Integral for Functions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by
Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit
history is available upon request.

5.1.3 https://math.libretexts.org/@go/page/6495
5.2: Cauchy’s Integral Formula for Derivatives
Cauchy’s integral formula is worth repeating several times. So, now we give it for all derivatives f (n)
(z) of f . This will include the
formula for functions as a special case.

 Theorem 5.2.1 Cauchy's integral formula for derivatives

If f (z) and C satisfy the same hypotheses as for Cauchy’s integral formula then, for all z inside C we have
n! f (w)
(n)
f (z) = ∫ dw, n = 0, 1, 2, . . . (5.2.1)
n+1
2πi C (w − z)

where, C is a simple closed curve, oriented counterclockwise, z is inside C and f (w) is analytic on and inside C .

 Example 5.2.1
2z
e
Evaluate I =∫
C 4
dz where C : |z| = 1 .
z

Solution
With Cauchy’s formula for derivatives this is easy. Let f (z) = e . Then, 2z

f (z) 2πi 8
′′′
I =∫ dz = f (0) = πi. (5.2.2)
4
C z 3! 3

 Example 5.2.2

Now let C be the contour shown below and evaluate the same integral as in the previous example.

Solution
8
Again this is easy: the integral is the same as the previous example, i.e. I = πi .
3

Another approach to some basic examples


Suppose C is a simple closed curve around 0. We have seen that
1
∫ dz = 2πi. (5.2.3)
C
z

The Cauchy integral formula gives the same result. That is, let f (z) = 1 , then the formula says
1 f (z)
∫ dz = f (0) = 1. (5.2.4)
2πi C
z−0

5.2.1 https://math.libretexts.org/@go/page/6496
Likewise Cauchy’s formula for derivatives shows
1 f (z)
(n)
∫ dz = ∫ dz = f (0) = 0, for integers n > 1. (5.2.5)
n n+1
C (z) C z

More examples

 Example 5.2.3
cos(z)
Compute ∫ C 2
dz over the contour shown.
z(z + 0)

Solution
Let f (z) = cos(z)/(z + 8) . f (z) is analytic on and inside the curve C . That is, the roots of z
2 2
+8 are outside the curve. So,
we rewrite the integral as
2
cos(z)/(z + 8) f (z) 1 πi
∫ dz = ∫ dz = 2πif (0) = 2πi = . (5.2.6)
C
z C
z 8 4

 Example 5.2.4
1
Compute ∫ C 2 2
dz over the contour shown.
(z + 4)

Solution

5.2.2 https://math.libretexts.org/@go/page/6496
We factor the denominator as
1 1
= . (5.2.7)
(z 2 + 4 )2 (z − 2i )2 (z + 2i )2

Let
1
f (z) = . (5.2.8)
2
(z + 2i)

Clear f (z) is analytic inside C . So, by Cauchy’s formula for derivatives:


1 f (z) −2 4πi π

∫ dz = ∫ = 2πi f (2i) = 2πi[ ]z=2i = = (5.2.9)
2 2 2 3
C (z + 4) C (z − 2i) (z + 2i) 64i 16

 Example 5.2.5
z
Compute ∫ C 2
dz over the curve C shown below.
z +4

Solution
The integrand has singularities at ±2i and the curve C encloses them both. The solution to the previous solution won’t work
because we can’t find an appropriate f (z) that is analytic on the whole interior of C . Our solution is to split the curve into two
pieces. Notice that C is traversed both forward and backward.
3

5.2.3 https://math.libretexts.org/@go/page/6496
Split the original curve C into 2 pieces that each surround just one singularity.
We have
z z
= . (5.2.10)
2
z +4 (z − 2i)(z + 2i)

We let
z z
f1 (z) = and f2 (z) = . (5.2.11)
z + 2i z − 2i

So,
z f1 (z) f2 (z)
= = . (5.2.12)
2
z +4 z − 2i z + 2i

The integral, can be written out as


z z f1 (z) f2 (z)
∫ dz = ∫ dz = ∫ dz + ∫ dz (5.2.13)
2 2
C z +4 C1 +C3 −C3 +C2 z +4 C1 +C3
z − 2i C2 −C3
z + 2i

Since f is analytic inside the simple closed curve C + C and


1 1 3 f2 is analytic inside the simple closed curve C2 − C3 ,
Cauchy’s formula applies to both integrals. The total integral equals
2πi(f1 (2i) + f2 (−2i)) = 2πi(1/2 + 1/2) = 2πi. (5.2.14)

Remarks. 1. We could also have done this problem using partial fractions:
z A B
= + . (5.2.15)
(z − 2i)(z + 2i) z − 2i z + 2i

It will turn out that A = f 1 (2i) and B = f 2 (−2i) . It is easy to apply the Cauchy integral formula to both terms.
2. Important note. In an upcoming topic we will formulate the Cauchy residue theorem. This will allow us to compute the
integrals in Examples 5.3.3-5.3.5 in an easier and less ad hoc manner.

triangle inequality for integrals


We discussed the triangle inequality in the Topic 1 notes. It says that

| z1 + z2 | ≤ | z1 | + | z2 |, (5.2.16)

5.2.4 https://math.libretexts.org/@go/page/6496
with equality if and only if z and z lie on the same ray from the origin.
1 2

A useful variant of this statement is

| z1 | − | z2 | ≤ | z1 − z2 |. (5.2.17)

This follows because Equation 5.3.17 implies

| z1 | = |(z1 − z2 ) + z2 | ≤ | z1 − z2 | + | z2 |. (5.2.18)

Now substracting z from both sides give Equation 5.3.18


2

Since an integral is basically a sum, this translates to the triangle inequality for integrals. We’ll state it in two ways that will be
useful to us.

 Theorem 5.2.2 Triangle inequality for integrals

Suppose g(t) is a complex valued function of a real variable, defined on a ≤ t ≤ b . Then


b b

|∫ g(t) dt| ≤ ∫ |g(t)| dt, (5.2.19)


a a

with equality if and only if the values of g(t) all lie on the same ray from the origin.

Proof
This follows by approximating the integral as a Riemann sum.
b b

|∫ g(t) dt| ≈ | ∑ g(tk )Δt| ≤ ∑ |g(tk )|Δt ≈ ∫ |g(t)| dt. (5.2.20)


a a

The middle inequality is just the standard triangle inequality for sums of complex numbers.

 Theorem 5.2.3 Triangle inequality for integrals II

For any function f (z) and any curve γ, we have

|∫ f (z) dz| ≤ ∫ |f (z)| |dz|. (5.2.21)


γ γ

Here dz = γ ′
(t) dt and |dz| = |γ ′
(t)| dt .

Proof
This follows immediately from the previous theorem:
b b
′ ′
|∫ f (z) dz| = | ∫ f (γ(t))γ (t) dt| ≤ ∫ |f (γ(t))| | γ (t)| dt = ∫ |f (z)| |dz|. (5.2.22)
γ a a γ

 Corollary

If |f (z)| < M on C then

|∫ f (z) dz| ≤ M ⋅ (length of C ). (5.2.23)


C

Proof
Let γ(t), with a ≤ t ≤ b , be a parametrization of C . Using the triangle inequality
b b
′ ′
|∫ f (z) dz| ≤ ∫ |f (z)| |dz| = ∫ |f (γ(t))| | γ (t)| dt ≤ ∫ M | γ (t)| dt = M ⋅ (length of C ). (5.2.24)
C C a a

5.2.5 https://math.libretexts.org/@go/page/6496
Here we have used that
−−−−−−−−−−
′ ′ 2 ′ 2
| γ (t)| dt = √ (x ) + (y ) dt = ds, (5.2.25)

the arclength element.

 Example 5.2.6

Compute the real integral



1
I =∫ dx (5.2.26)
−∞
(x2 + 1 )2

Solution
The trick is to integrate f (z) = 1/(z + 1) over the closed contour C
2 2
1 + CR shown, and then show that the contribution of
C Rto this integral vanishes as R goes to ∞.

The only singularity of


1
f (z) = (5.2.27)
2 2
(z + i ) (z − i )

inside the contour is at z = i . Let


1
g(z) = . (5.2.28)
2
(z + i)

Since g is analytic on and inside the contour, Cauchy’s formula gives


g(z) −2 π

∫ f (z) dz = ∫ dz = 2πi g (i) = 2πi = . (5.2.29)
C1 +CR C1 +CR
(z − i)2 (2i)3 2

We parametrize C by 1

γ(x) = x, with − R ≤ x ≤ R. (5.2.30)

So,
R
1
∫ f (z) dz = ∫ dx. (5.2.31)
2
C1 −R
(x + 1 )2

This goes to I (the value we want to compute) as R → ∞ .


Next, we parametrize C by R


γ(θ) = Re , with 0 ≤ θ ≤ π. (5.2.32)

So,

5.2.6 https://math.libretexts.org/@go/page/6496
π
1

∫ f (z) dz = ∫ iRe dθ (5.2.33)
CR 0 (R2 e2iθ + 1 )2

By the triangle inequality for integrals, if R > 1


π
1

|∫ f (z) dz| ≤ ∫ | iRe | dθ. (5.2.34)
2 2iθ 2
CR 0 (R e + 1)

From the triangle equality in the form Equation 5.3.18 we know that
2 2iθ 2 2iθ 2
|R e + 1| ≥ | R e | − |1| = R − 1. (5.2.35)

Thus,
1 1 1 1
≤ ⇒ ≤ . (5.2.36)
2 2iθ 2 2 2 2
|R e + 1| R −1 |R e
2 2iθ
+ 1| (R − 1)

Using Equation 5.3.34, we then have


π π
1 R 1

|∫ f (z) dz| ≤ ∫ | iRe | dθ ≤ ∫ dθ = . (5.2.37)
2 2iθ 2 2 2 2 2
CR 0 (R e + 1) 0 (R − 1) (R − 1)

Clearly this goes to 0 as R goes to infinity. Thus, the integral over the contour C 1 + CR goes to I as R gets large. But

∫ f (z) dz = π/2 (5.2.38)


C1 +CR

for all R > 1 . We can therefore conclude that I = π/2 .


As a sanity check, we note that our answer is real and positive as it needs to be.

This page titled 5.2: Cauchy’s Integral Formula for Derivatives is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or
curated by Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a
detailed edit history is available upon request.

5.2.7 https://math.libretexts.org/@go/page/6496
5.3: Proof of Cauchy's integral formula
Useful theorem
Before proving the theorem we’ll need a theorem that will be useful in its own right.

Theorem 5.3.1: A second extension of Cauchy's theorem

Suppose that A is a simply connected region containing the point z . Suppose g is a function which is
0

1. Analytic on A - {z }
0

2. Continuous on A . (In particular, g does not below up at z .) 0

Then

∫ g(z) dz = 0 (5.3.1)
C

for all closed curves C in A .

Proof
The extended version of Cauchy’s theorem in the Topic 3 notes tells us that

∫ g(z) dz = ∫ g(z) dz, (5.3.2)


C Cr

where C is a circle of radius r around z .


r 0

Since g(z) is continuous we know that |g(z)| is bounded inside Cr . Say, |g(z)| < M . The corollary to the triangle
inequality says that

|∫ g(z) dz| ≤ M (length of Cr ) = M 2πr. (5.3.3)


Cr

Since r can be as small as we want, this implies that

∫ g(z) dz = 0. (5.3.4)
Cr

Note
Using this, we can show that g(z) is, in fact, analytic at z0 . The proof will be the same as in our proof of Cauchy’s
theorem that g(z) has an antiderivative.

Proof of Cauchy’s integral formula


1 f (z)
We reiterate Cauchy’s integral formula from Equation 5.2.1: f (z 0) = ∫
C
dz .
2πi z − z0

5.3.1 https://math.libretexts.org/@go/page/6497
P roof . (of Cauchy’s integral formula) We use a trick that is useful enough to be worth remembering. Let
f (z) − f (z0 )
g(z) = . (5.3.5)
z − z0

Since f (z) is analytic on A , we know that g(z) is analytic on A − {z 0} . Since the derivative of f exists at z , we know that
0


lim g(z) = f (z0 ). (5.3.6)
z→z0

That is, if we define g(z 0)



= f (z0 ) then g is continuous at z . From the extension of Cauchy's theorem just above, we have
0

f (z) − f (z0 )
∫ g(z) dz = 0, i.e. ∫ dz = 0. (5.3.7)
C C
z − z0

Thus
f (z) f (z0 ) 1
∫ dz = ∫ dz = f (z0 ) ∫ dz = 2πif (z0 ). (5.3.8)
C
z − z0 C
z − z0 C
z − z0

The last equality follows from our, by now, well known integral of 1/(z − z 0) on a loop around z .
0

This page titled 5.3: Proof of Cauchy's integral formula is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated
by Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit
history is available upon request.

5.3.2 https://math.libretexts.org/@go/page/6497
5.4: Proof of Cauchy's integral formula for derivatives
Recall that Cauchy’s integral formula in Equation 5.3.1 says
n! f (w)
(n)
f (z) = ∫ dw, n = 0, 1, 2, . . . (5.4.1)
n+1
2πi C (w − z)

First we’ll offer a quick proof which captures the reason behind the formula, and then a formal proof.

Quick Proof
We have an integral representation for f (z) , z ∈ A , we use that to find an integral representation for f ′
(z) , z ∈ A.
d 1 f (w) 1 d f (w) 1 f (w)

f (z) = [ ∫ dw] = ∫ ( ) dw = ∫ dw (5.4.2)
2
dz 2πi C
w −z 2πi C
dz w −z 2πi C (w − z)

(Note, since z ∈ A and w ∈ C , we know that w − z ≠ 0 ) Thus,


1 f (w)

f (z) = ∫ dw (5.4.3)
2
2πi C (w − z)

Now, by iterating this process, i.e. by mathematical induction, we can show the formula for higher order derivatives.

Formal Proof
We do this by taking the limit of
f (z + Δz) − f (z)
lim (5.4.4)
Δ→0 Δz

using the integral representation of both terms:


1 f (w) 1 f (w)
f (z + Δz) = ∫ dw, f (z) = ∫ dw (5.4.5)
2πi C w − z − Δz 2πi C
w −z

Now, using a little algebraic manipulation we get


f (z + Δz) − f (z) 1 f (w) f (w)
= ∫ − dw
Δz 2πiΔz C
w − z − Δz w −z

1 f (w)Δz
= ∫ dw
2πiΔz C (w − z − Δz)(w − z)

1 f (w)
= ∫ dw
2
2πi C (w − z) − Δz(w − z)

Letting Δz go to 0, we get Cauchy's formula for f ′


(z) :
1 f (w)

f (z) = ∫ dw (5.4.6)
2
2πi C (w − z)

There is no problem taking the limit under the integral sign because everything is continuous and the denominator is never 0.

This page titled 5.4: Proof of Cauchy's integral formula for derivatives is shared under a CC BY-NC-SA 4.0 license and was authored, remixed,
and/or curated by Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform;
a detailed edit history is available upon request.

5.4.1 https://math.libretexts.org/@go/page/6498
5.5: Amazing consequence of Cauchy’s integral formula
Existence of derivatives

 Theorem 5.5.1

Suppose f (z) is analytic on a region A . Then, f has derivatives of all order.

Proof
This follows from Cauchy’s integral formula for derivatives. That is, we have a formula for all the derivatives, so in
particular the derivatives all exist.
A little more precisely: for any point z in A we can put a small disk around z that is entirely contained in A. Let C be the
0

boundary of the disk, then Cauchy’s formula gives a formula for all the derivatives f (z ) in terms of integrals over C . In (n)
0

particular, those derivatives exist.

Remark. If you look at the proof of Cauchy’s formula for derivatives you’ll see that f having derivatives of all orders boils down
to 1/(w − z) having derivatives of all orders for w on a curve not containing z .
Important remark. We have at times assumed that for f (u + iv) analytic, u and v have continuous higher order partial
derivatives. This theorem confirms that fact. In particular, u xy = uyx , etc.

Cauchy’s inequality

 Theorem 5.5.2 Cauchy's inequality


Let C be the circle |z − z | = R . Assume that f (z) is analytic on C and its interior, i.e. on the disk
R 0 R |z − z0 | ≤ R . Finally
let M = max |f (z)| over z on C . Then
R R

n!MR
(n)
|f (z0 )| ≤ , n = 1, 2, 3, . . . (5.5.1)
n
R

Proof
Using Cauchy’s integral formula for derivatives (Equation 5.3.1) we have

n! |f (w)| n! MR n! MR
(n)
|f (z0 )| ≤ ∫ |dw| ≤ ∫ |dw| = ∫ ⋅2πR (5.5.2)
2π n+1 2π Rn+1 2π Rn+1
CR |w − z0 | CR CR

Liouville’s theorem

 Theorem 5.5.3 Liouville's theorem

Assume f (z) is entire and suppose it is bounded in the complex plane, namely |f (z)| < M for all z ∈ C then f (z) is
constant.

Proof
M
For any circle of radius R around z the Cauchy inequality says
0

| f (z0 )| ≤ . But, R can be as large as we like so we
R
conclude that |f ′
(z0 )| = 0 for every z 0 ∈ C . Since the derivative is 0, the function itself is constant.
In short:
If f is entire and bounded then f is constant.

5.5.1 https://math.libretexts.org/@go/page/6499
 Note

P (z) = an z
n
+. . . +a0 , sin(z) , e are all entire but not bounded.
z

Now, practically for free, we get the fundamental theorem of algebra.

 Corollary (Fundamental theorem of algebra)

Any polynomial P of degree n ≥ 1 , i.e.


n
P (z) = a0 + a1 z+. . . +an z , an ≠ 0, (5.5.3)

has exactly n roots.

Proof
There are two parts to the proof.
Hard part: Show that P has at least one root.
This is done by contradiction, together with Liouville’s theorem. Suppose P (z) does not have a zero. Then
1. f (z) = 1/P (z) is entire. This is obvious because (by assumption) P (z) has no zeros.
2. f (z) is bounded. This follows because 1/P (z) goes to 0 as |z| goes to ∞ .

(It is clear that |1/P (z)| goes to 0 as z goes to infinity, i.e. |1/P (z)| is small outside a large circle. So |1/P (z)| is bounded
by M .)
So, by Liouville's theorem f (z) is constant, and therefore P (z) must be constant as well. But this is a contradiction, so the
hypothesis of “No zeros” must be wrong, i.e. P must have a zero.
Easy part: P has exactly n zeros. Let z be one zero. We can factor P (z) = (z − z )Q(z) . Q(z) has degree
0 0 n−1 . If
n − 1 > 0 , then we can apply the result to Q(z) . We can continue this process until the degree of Q is 0.

Maximum modulus principle


Briefly, the maximum modulus principle states that if f is analytic and not constant in a domain A then |f (z)| has no relative
maximum in A and the absolute maximum of |f | occurs on the boundary of A .
In order to prove the maximum modulus principle we will first prove the mean value property. This will give you a good feel for
the maximum modulus principle. It is also important and interesting in its own right.

5.5.2 https://math.libretexts.org/@go/page/6499
 Theorem 5.5.4 Mean value property

Suppose f (z) is analytic on the closed disk of radius r centered at z , i.e. the set |z − z 0 0| ≤r . Then,

1 iθ
f (z0 ) = ∫ f (z0 + re ) dθ (5.5.4)
2π 0

Proof
This is an application of Cauchy’s integral formula on the disk D r = |z − z0 | ≤ r .

We can parametrize C , the boundary of D , as


r r

iθ ′ iθ
γ(θ) = z0 + re , with 0 ≤ θ ≤ 2π, so γ (θ) = ire . (5.5.5)

By Cauchy's formula we have


2π iθ 2π
1 f (z) 1 f (z0 + re ) 1
iθ iθ
f (z0 ) = ∫ dz = ∫ ire dθ = ∫ f (z0 + re ) dθ (5.5.6)
2πi Cr
z − z0 2πi 0 reiθ 2π 0

This proves the property.

In words, the mean value property says f (z ) is the arithmetic mean of the values on the circle. Now we can state and prove the
0

maximum modulus principle. We state the assumptions carefully.


When applying this theorem, it is important to verify that the assumptions are satisfied.

 Theorem 5.5.5 Maximum dodulus principle

Suppose f (z) is analytic in a connected region A and z is a point in A . 0

1. If |f | has a relative maximum at z then f (z) is constant in a neighborhood of z .


0 0

2. If A is bounded and connected, and f is continuous on A and its boundary, then either f is constant or the absolute
maximum of |f | occurs only on the boundary of A .

Proof
Part (1): The argument for part (1) is a little fussy. We will use the mean value property and the triangle inequality from
Theorem 5.3.2.
Since z is a relative maximum of |f |, for every small enough circle C : |z − z
0 0| =r around z we have 0 |f (z)| ≤ |f (z0 )|

for z on C . Therefore, by the mean value property and the triangle inequality
1 2π iθ
|f (z0 )| = | ∫ f (z0 + re ) dθ| (mean value property)
0

1 2π

≤ ∫ |f (z0 + re )| dθ (triangle inequality)
0
2π (5.5.7)

1 2π iθ
≤ ∫ |f (z0 )| dθ (|f (z0 + re | ≤ |f (z0 )|)
0

= |f (z0 )|

5.5.3 https://math.libretexts.org/@go/page/6499
Since the beginning and end of the above are both |f (z 0 )| all the inequalities in the chain must be equalities.
The first inequality can only be an equality if for all θ , f (z 0 + re

) lie on the same ray from the origin, i.e. have the same
argument or are 0.
The second inequality can only be an equality if all |f (z + re )| = |f (z 0

0 )| . So we have all f (z 0 + re

) have the same
magnitude and the same argumeny. This implies they are all the same.
Finally, if f (z) is constant along the circle and f (z0 ) is the average of f (z) over the circle then f (z) = f (z0 ) \), i.e. f is
constant on a small disk around z . 0

Part (2): The assumptions that A is bounded and f is continuous on A and its boundary serve to guarantee that |f | has an
absolute maximum (on A combined with its boundary). Part (1) guarantees that the absolute maximum can not lie in the
interior of the region A unless f is constant. (This requires a bit more argument. Do you see why?) If the absolute
maximum is not in the interior it must be on the boundary.

 Example 5.5.1

Find the maximum modulus of e on the unit square with 0 ≤ x, y ≤ 1 .


z

Solution
x+iy x
|e | =e , (5.5.8)

so the maximum is when x = 1 , 0 ≤ y ≤ 1 is arbitrary. This is indeed on the boundary of the unit square

 Example 5.5.2
Find the maximum modulus for sin(z) on the square [0, 2π] × [0, 2π].
Solution
We use the formula
sin(z) = sin x cosh y + i cos x sinh y. (5.5.9)

So,
2 2 2 2 2
| sin(z)| = sin x cosh y + cos x sinh y

2 2 2 2
= sin x cosh y + (1 − sin x) sinh y (5.5.10)

2 2
= sin x + sinh y

We know the maximum over x of sin 2


(x) is at x = π/2 and x = 3π/2. The maximum of sinh 2
y is at y = 2π . So maximum
modulus is
−−−−−−−−−−− −−−−−−−−
2 2
√ 1 + sinh (2π) = √cosh (2π) = cosh(2π). (5.5.11)

This occurs at the points


π 3π
z = x + iy = + 2πi, and z = + 2πi. (5.5.12)
2 2

Both these points are on the boundary of the region.

 Example 5.5.3
Suppose f (z) is entire. Show that if lim z→∞ f (z) = 0 then f (z) ≡ 0 .
Solution
This is a standard use of the maximum modulus principle. The strategy is to show that the maximum of |f (z)| is not on the
boundary (of the appropriately chosen region), so f (z) must be constant.

5.5.4 https://math.libretexts.org/@go/page/6499
Fix z0 . For let M be the maximum of |f (z)| on the circle |z| = R . The maximum modulus theorem says that
R > | z0 | R

|f (z0 )| < MR . Since f (z) goes to 0, as R goes to infinity, we must have M also goes to 0. This means |f (z )| = 0 . Since
R 0

this is true for any z , we have f (z) ≡ 0 .


0

 Example 5.5.4

Here is an example of why you need A to be bounded in the maximum modulus theorem. Let A be the upper half-plane
Im(z) > 0. (5.5.13)

So the boundary of A is the real axis.


Let f (z) = e −iz
. We have
−ix
|f (x)| = | e | =1 (5.5.14)

for x along the real axis. Since |f (2i)| = |e 2


| >1 , we see |f | cannot take its maximum along the boundary of A .
Of course, it can’t take its maximum in the interior of A either. What happens here is that f (z) doesn’t have a maximum
modulus. Indeed |f (z)| goes to infinity along the positive imaginary axis.

This page titled 5.5: Amazing consequence of Cauchy’s integral formula is shared under a CC BY-NC-SA 4.0 license and was authored, remixed,
and/or curated by Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform;
a detailed edit history is available upon request.

5.5.5 https://math.libretexts.org/@go/page/6499
CHAPTER OVERVIEW
6: Harmonic Functions
Harmonic functions appear regularly and play a fundamental role in math, physics and engineering. In this topic we’ll learn the
definition, some key properties and their tight connection to complex analysis. The key connection to 18.04 is that both the real and
imaginary parts of analytic functions are harmonic. We will see that this is a simple consequence of the Cauchy-Riemann
equations. In the next topic we will look at some applications to hydrodynamics.
6.2: Harmonic Functions
6.3: Del notation
6.4: A second Proof that u and v are Harmonic
6.5: Maximum Principle and Mean Value Property
6.6: Orthogonality of Curves

This page titled 6: Harmonic Functions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff
(MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

1
6.2: Harmonic Functions
We start by defining harmonic functions and looking at some of their properties.

 Definition: Harmonic Functions


A function u(x, y) is called harmonic if it is twice continuously differentiable and satisfies the following partial differential
equation:
2
∇ u = uxx + uyy = 0. (6.2.1)

Equation 6.2.1 is called Laplace’s equation. So a function is harmonic if it satisfies Laplace’s equation. The operator ∇ is called 2

the Laplacian and ∇ u is called the Laplacian of u.


2

This page titled 6.2: Harmonic Functions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy
Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

6.2.1 https://math.libretexts.org/@go/page/6502
6.3: Del notation
Here’s a quick reminder on the use of the notation ∇. For a function u(x, y) and a vector field F (x, y) = (u, v), we have
∂ ∂
(i) ∇ =( , )
∂x ∂y

(ii) gradu = ∇u = (ux , uy )

(iii) curlF = ∇ × F = (vx − uy )

(iv) divF = ∇ ⋅ F = ux + vy

2
(v) div grad u = ∇ ⋅ ∇u = ∇ u = uxx + uyy

(vi) curl grad u = ∇ × ∇u = 0

(vii) div curl F = ∇ ⋅ ∇ × F = 0

Analytic Functions have Harmonic Pieces


The connection between analytic and harmonic functions is very strong. In many respects it mirrors the connection between e and z

sine and cosine.


Let z = x + iy and write f (z) = u(x, y) + iv(x, y).

 Theorem 6.3.1
If f (z) = u(x, y) + iv(x, y) is analytic on a region A then both u and v are harmonic functions on A .

Proof
This is a simple consequence of the Cauchy-Riemann equations. Since u x = vy we have

uxx = vyx . (6.3.1)

Likewise, u y = −vx implies

uyy = −vxy . (6.3.2)

Since vxy = vyx we have


uxx + uyy = vyx − vxy = 0. (6.3.3)

Therefore u is harmonic. We can handle v similarly.

 Note
Since we know an analytic function is infinitely differentiable we know u and v have the required two continuous partial
derivatives. This also ensures that the mixed partials agree, i.e. v = v .
xy yx

To complete the tight connection between analytic and harmonic functions we show that any har- monic function is the real part of
an analytic function.

 Theorem 6.3.2

If u(x, y) is harmonic on a simply connected region A , then u is the real part of an analytic function
f (z) = u(x, y) + iv(x, y) .

Proof
This is similar to our proof that an analytic function has an antiderivative. First we come up with a candidate for f (z) and
then show it has the properties we need. Here are the details broken down into steps 1-4.
1. Find a candidate, call it g(z) , for f (z) :

If we had an analytic f with f = u + iv , then Cauchy-Riemann says that f ′


= ux − i uy . So, let's define

6.3.1 https://math.libretexts.org/@go/page/6503
g = ux − i uy . (6.3.4)

This is our candidate for f . ′

2. Show that g(z) is analytic:


Write g = ϕ + iψ , where ϕ = u and ψ = −u . Checking the Cauchy-Riemann equations we have
x y

ϕx ϕy uxx uxy
[ ] =[ ] (6.3.5)
ψx ψy −uyx −uyy

Since u is harmonic we know u = −u , so ϕ = ψ . It is clear that ϕ = −ψ . Thus g satisfies the Cauchy-


xx yy x y y x

Riemann equations, so it is analytic.


3. Let f be an antiderivative of g :
Since A is simply connected our statement of Cauchy’s theorem guarantees that g(z) has an antiderivative in A. We’ll
need to fuss a little to get the constant of integration exactly right. So, pick a base point z in A. Define the 0

antiderivative of g(z) by
z

f (z) = ∫ g(z) dz + u(x0 , y0 ). (6.3.6)


z0

(Again, by Cauchy’s theorem this integral can be along any path in A from z to z .) 0

4. Show that the real part of f is u .


Let's write f = U + iV . So, f (z) = U − i U . By construction

x y


f (z) = g(z) = ux − i uy . (6.3.7)

This means the first partials of U and u are the same, so U and u differ by at most a constant. However, also by
construction,
f (z0 ) = u(x0 , y0 ) = U (x0 , y0 ) + iV (x0 , y0 ), (6.3.8)

So, U (x , y ) = u(x , y ) (and V (x


0 0 0 0 0, y0 ) = 0 ). Since they agree at one point we must have U =u , i.e. the real part
of f is u as we wanted to prove.

 Important Corollary

u is infinitely differentiable.

Proof
By definition we only require a harmonic function u to have continuous second partials. Since the analytic f is infinitely
differentiable, we have shown that so is u !

Harmonic conjugates

 Definition: Harmonic Conjugates

If u and v are the real and imaginary parts of an analytic function, then we say u and v are harmonic conjugates.

 Note

If f (z) = u + iv is analytic then so is if (z) = −v + iu . So, if u and v are harmonic conjugates and so are u and −v.

6.3.2 https://math.libretexts.org/@go/page/6503
This page titled 6.3: Del notation is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff (MIT
OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon
request.

6.3.3 https://math.libretexts.org/@go/page/6503
6.4: A second Proof that u and v are Harmonic
This fact that u and v are harmonic is important enough that we will give a second proof using Cauchy’s integral formula. One
benefit of this proof is that it reminds us that Cauchy’s integral formula can transfer a general question on analytic functions to a
question about the function 1/z. We start with an easy to derive fact.

 Fact
The real and imaginary parts of f (z) = 1/z are harmonic away from the origin. Likewise for
1
g(z) = f (z − a) = (6.4.1)
z−a

away from the point z = a .

Proof
We have
1 x y
= −i . (6.4.2)
2 2 2 2
z x +y x +y

It is a simple matter to apply the Laplacian and see that you get 0. We’ll leave the algebra to you! The statement about g(z)
follows in either exactly the same way, or by noting that the Laplacian is translation invariant.
Second proof that f analytic implies u and v are harmonic. We are proving that if f = u + iv is analytic then u and v
are harmonic. So, suppose f is analytic at the point z . This means there is a disk of some radius, say r , around z where f
0 0

is analytic. Cauchy's formula says

1 f (w)
f (z) = ∫ dw, (6.4.3)
2πi Cr
w −z

where C is the circle |w − z


r 0| =r and z is in the disk |z − z 0| <r .
Now, since the real and imaginary parts of 1/(w − z) are harmonic, the same must be true of the integral, which is limit of
linear combinations of such functions. Since the circle is finite and f is continuous, interchanging the order of integration
and differentiation is not a problem.

This page titled 6.4: A second Proof that u and v are Harmonic is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or
curated by Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a
detailed edit history is available upon request.

6.4.1 https://math.libretexts.org/@go/page/6504
6.5: Maximum Principle and Mean Value Property
These are similar to the corresponding properties of analytic functions. Indeed, we deduce them from those corresponding
properties.

 Theorem 6.5.1: Mean Value Property


If u is a harmonic function then u satisfies the mean value property. That is, suppose u is harmonic on and inside a circle of
radius r centered at z = x + i y then
0 0 0


1

u(x0 , y0 ) = ∫ u(z0 + re ) dθ (6.5.1)
2π 0

Proof
Let f = u + iv be an analytic function with u as its real part. The mean value property for f says
1 2π iθ
u(x0 , y0 ) + iv(x0 , y0 ) = f (z0 ) = ∫ f (z0 + re ) dθ
0

(6.5.2)
1 2π iθ iθ
= ∫ u(z0 + re + iv(z0 + re ) dθ
0

Looking at the real parts of this equation proves the theorem.

 Theorem 6.5.2: Maximum Principle


Suppose u(x, y) is harmonic on a open region A .
i. Suppose z is in A . If u has a relative maximum or minimum at z then u is constant on a disk centered at z .
0 0 0

ii. If A is bounded and connected and u is continuous on the boundary of A then the absolute maximum and absolute
minimum of u occur on the boundary.

Proof
The proof for maxima is identical to the one for the maximum modulus principle. The proof for minima comes by looking
at the maxima of −u .

 Note
For analytic functions we only talked about maxima because we had to use the modulus in order to have real values. Since
| − f | = |f | we couldn’t use the trick of turning minima into maxima by using a minus sign.

This page titled 6.5: Maximum Principle and Mean Value Property is shared under a CC BY-NC-SA 4.0 license and was authored, remixed,
and/or curated by Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform;
a detailed edit history is available upon request.

6.5.1 https://math.libretexts.org/@go/page/6505
6.6: Orthogonality of Curves
An important property of harmonic conjugates u and v is that their level curves are orthogonal. We start by showing their gradients
are orthogonal.

 Lemma 6.6.1
Let z = x + iy and suppose that f (z) = u(x, y) + iv(x, y) is analytic. Then the dot product of their gradients is 0, i.e.
Δu ⋅ Δv = 0. (6.6.1)

Proof
The proof is an easy application of the Cauchy-Riemann equations.

Δu ⋅ Δv = (ux , uy ) ⋅ (vx , vy ) = ux vx + uy vy = vy vx − vx vy = 0 (6.6.2)

In the last step we used the Cauchy-Riemann equations to substitute v for u and −v for u .y x x y

The lemma holds whether or not the gradients are 0. To guarantee that the level curves are smooth the next theorem requires that
f (z) ≠ 0 .

 Theorem 6.6.1
Let z = x + iy and suppose that

f (z) = u(x, y) + iv(x, y) (6.6.3)

is analytic. If f ′
(z) ≠ 0 then the level curve of u through (x, y) is orthogonal to the level curve v through (x, y).

Proof
The technical requirement that f (z) ≠ 0 is needed to be sure that the level curves are smooth. We need smoothness so that

it even makes sense to ask if the curves are orthogonal. We’ll discuss this below. Assuming the curves are smooth the proof
of the theorem is trivial: We know from 18.02 that the gradient ∇u is orthogonal to the level curves of u and the same is
true for ∇v and the level curves of v . Since, by Lemma 6.6.1, the gradients are orthogonal this implies the curves are
orthogonal.
Finally, we show that f ′
(z) ≠ 0 means the curves are smooth. First note that

f (z) = ux (x, y) − i uy (x, y) = vy (x, y) + i vx (x, y). (6.6.4)

Now, since f ′
(z) ≠ 0 we know that
∇u = (ux , uy ) ≠ 0. (6.6.5)

Likewise, ∇v ≠ 0 . Thus, the gradients are not zero and the level curves must be smooth.

 Example 6.6.1
The figures below show level curves of u and v for a number of functions. In all cases, the level curves of u are in orange and
those of v are in blue. For each case we show the level curves separately and then overlayed on each other.

6.6.1 https://math.libretexts.org/@go/page/6506
6.6.2 https://math.libretexts.org/@go/page/6506
6.6.3 https://math.libretexts.org/@go/page/6506
 Example 6.6.2
(i) Let
2 2 2
f (z) = z = (x − y ) + i2xy, (6.6.6)

So

∇u = (2x, −2y) and ∇v = (2y, 2x). (6.6.7)

It's trivial to check that ∇u ⋅ ∇v = 0 , so they are orthogonal.


(ii) Let
1 x y
f (z) = = −i . (6.6.8)
2 2
z r r

So, it's easy to compute


2 2 2 2
y −x −2xy 2xy y −x
∇u = ( , ) and ∇v = ( , ). (6.6.9)
4 4 4 4
r r r r

Again it’s trivial to check that ∇u ⋅ ∇v = 0 , so they are orthogonal.

 Example 6.6.3 Degenerate points: f ′


(z) = 0 .
Consider
2
f (z) = z (6.6.10)

From the previous example we have


2 2
u(x, y) = x −y , v(x, y) = 2xy, ∇u = (2x, −2y), ∇v = (2y, 2x). (6.6.11)

At z = 0 , the gradients are both 0 so the theorem on orthogonality doesn’t apply.

6.6.4 https://math.libretexts.org/@go/page/6506
Let’s look at the level curves through the origin. The level curve (really the ‘level set’) for
2 2
u =x −y =0 (6.6.12)

is the pair of lines y = ±x . At the origin this is not a smooth curve.


Look at the figures for z above. It does appear that away from the origin the level curves of u intersect the lines where v = 0
2

at right angles. The same is true for the level curves of v and the lines where u = 0 . You can see the degeneracy forming at the
origin: as the level curves head towards 0 they get pointier and more right angled. So the level curve u = 0 is more properly
thought of as four right angles. The level curve of u = 0 , not knowing which leg of v = 0 to intersect orthogonally takes the
average and comes into the origin at 45 . ∘

This page titled 6.6: Orthogonality of Curves is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy
Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

6.6.5 https://math.libretexts.org/@go/page/6506
CHAPTER OVERVIEW
7: Two Dimensional Hydrodynamics and Complex Potentials
Laplace’s equation and harmonic functions show up in many physical models. As we have just seen, harmonic functions in two
dimensions are closely linked with complex analytic functions. In this section we will exploit this connection to look at two
dimensional hydrodynamics, i.e. fluid flow. Since static electric fields and steady state temperature distributions are also harmonic,
the ideas and pictures we use can be repurposed to cover these topics as well.
7.1: Velocity Fields
7.2: Stationary Flows
7.3: Physical Assumptions and Mathematical Consequences
7.4: Complex Potentials
7.5: Stream Functions
7.6: More Examples with Pretty Pictures

This page titled 7: Two Dimensional Hydrodynamics and Complex Potentials is shared under a CC BY-NC-SA 4.0 license and was authored,
remixed, and/or curated by Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts
platform; a detailed edit history is available upon request.

1
7.1: Velocity Fields
Suppose we have water flowing in a region A of the plane. Then at every point (x, y) in A the water has a velocity. In general, this
velocity will change with time. We’ll let F stand for the velocity vector field and we can write
F (x, y, t) = (u(x, y, t), v(x, y, t)). (7.1.1)

The arguments (x, y, t) indicate that the velocity depends on these three variables. In general, we will shorten the name to velocity
field (Figure 7.1.1).

Figure 7.1.1 : Streamlines of wind direction over North America 2 February 2009. (CC BY-SA 3.0; Cloudruns via Wikipedia)
A dynamic beautiful and mesmerizing example of a velocity field is at http://hint.fm/wind/index.html. This shows the current
velocity of the wind at all points in the continental U.S.

This page titled 7.1: Velocity Fields is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff
(MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

7.1.1 https://math.libretexts.org/@go/page/6509
7.2: Stationary Flows
If the velocity field is unchanging in time we call the flow a stationary flow. In this case, we can drop t as an argument and write:
F (x, y) = (u(x, y), v(x, y)) (7.2.1)

Here are a few examples. These pictures show the streamlines from similar figures in topic 5. We’ve added arrows to indicate the
direction of flow.

 Example 7.2.1: Uniform Flow

F = (1, 0) .

Figure 7.2.1 : Velocity field of a uniform flow. (CC BY-NC; Ümit Kaya)

 Example 7.2.2: Eddies (Vortexes)


2 2
F = (−y/ r , x/ r )

Figure 7.2.2 : Velocity field of an eddy (vortex). (CC BY-NC; Ümit Kaya)

 Example 7.2.3: Source


2 2
F = (x/ r , y/ r )

7.2.1 https://math.libretexts.org/@go/page/6510
Figure 7.2.3 : Velocity field of a source. (CC BY-NC; Ümit Kaya)

This page titled 7.2: Stationary Flows is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff
(MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

7.2.2 https://math.libretexts.org/@go/page/6510
7.3: Physical Assumptions and Mathematical Consequences
This is a wordy section, so we’ll start by listing the mathematical properties that will follow from our assumptions about the
velocity field F = u + iv .
A. F = F (x, y) is a function of x, y, but not time t (stationary).
B. div F = 0 (divergence free).
C. curl F = 0 (curl free).

Physical Assumptions
We will make some standard physical assumptions. These don’t apply to all flows, but they do apply to a good number of them and
they are a good starting point for understanding fluid flow more generally. More important to 18.04, these are the flows that are
readily susceptible to complex analysis.
Here are the assumptions about the flow, we’ll discuss them further below:
A. The flow is stationary.
B. The flow is incompressible.
C. The flow is irrotational.
We have already discussed stationarity previously, so let’s now discuss the other two properties.

Incompressibility
We will assume throughout that the fluid is incompressible. This means that the density of the fluid is constant across the domain.
Mathematically this says that the velocity field F must be divergence free, i.e. for F = (u, v) :

div F ≡ ∇ ⋅ F = ux + vy = 0. (7.3.1)

To understand this, recall that the divergence measures the infinitesimal flux of the field. If the flux is not zero at a point (x_0, y_0) then near that point the field looks like one of the fields in Figure 7.3.1.

Figure 7.3.1 : Left: Divergent field: divF > 0 , right: Convergent field: divF < 0 . (CC BY-NC; Ümit Kaya)
If the field is diverging or converging then the density must be changing! That is, the flow is not incompressible.
As a fluid flow the left hand picture represents a source and the right represents a sink. In electrostatics where F expresses the
electric field, the left hand picture is the field of a positive charge density and the right is that of a negative charge density.
If you prefer a non-infinitesimal explanation, we can recall Green’s theorem in flux form. It says that for a simple closed curve C
and a field F = (u, v) , differentiable on and inside C , the flux of F through C satisfies

Flux of F across C = ∫_C F · n ds = ∫∫_R div F dx dy,   (7.3.2)

where R is the region inside C. Now, suppose that div F(x_0, y_0) > 0. Then div F(x, y) > 0 for all (x, y) close to (x_0, y_0). So, choose a small curve C around (x_0, y_0) such that div F > 0 on and inside C. By Green's theorem

Flux of F through C = ∫∫_R div F dx dy > 0.   (7.3.3)

Clearly, if there is a net flux out of the region the density is decreasing and the flow is not incompressible. The same argument would hold if div F(x_0, y_0) < 0. We conclude that incompressible is equivalent to divergence free.

Irrotational Flow
We will assume that the fluid is irrotational. This means that there are no infinitesimal vortices in A. Mathematically this says
that the velocity field F must be curl free, i.e. for F = (u, v) :

curl F ≡ ∇ × F = vx − uy = 0. (7.3.4)

To understand this, recall that the curl measures the infinitesimal rotation of the field. Physically this means that a small paddle
placed in the flow will not spin as it moves with the flow.

Examples

 Example 7.3.1: Eddies

The eddy from Example 7.2.2 is irrotational! The vortex at the origin is not in A = C − {0} and you can easily check that curl F = 0 everywhere in A. This is not physically impossible: if you placed a small paddle wheel in the flow it would travel around the origin without spinning!

 Example 7.3.2: Shearing Flows

Shearing flows are rotational. Here’s an example of a vector field that has rotation, though not necessarily swirling.

Figure 7.3.2: Shearing flow. The box turns because the current is faster at the top. (CC BY-NC; Ümit Kaya)
The field F = (ay, 0) is horizontal, but curl F = −a ≠ 0 . Because the top moves faster than the bottom it will rotate a square
parcel of fluid. The minus sign tells you the parcel will rotate clockwise! This is called a shearing flow. The water at one level
will be sheared away from the level above it.

Summary
(A) Stationary: F depends on x, y, but not t , i.e.,

F (x, y) = (u(x, y), v(x, y)).

(B) Incompressible: divergence free:

div F = ux + vy = 0, i.e. ux = −vy .

(C) Irrotational: curl free:

curl F = vx − uy = 0, i.e., uy = vx .

For future reference we put the last two equalities in a numbered equation:

ux = −vy and uy = vx

These look almost like the Cauchy-Riemann equations (with sign differences)!
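As a quick aside (not from the original text), here is a SymPy check that the eddy and source fields of Section 7.2 really satisfy these two equalities away from the origin. SymPy is assumed to be available.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
r2 = x**2 + y**2

for u, v in [(-y/r2, x/r2),    # eddy (vortex)
             (x/r2, y/r2)]:    # source
    div_F  = sp.simplify(sp.diff(u, x) + sp.diff(v, y))   # u_x + v_y
    curl_F = sp.simplify(sp.diff(v, x) - sp.diff(u, y))   # v_x - u_y
    print(div_F, curl_F)       # both print 0: u_x = -v_y and u_y = v_x
```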

This page titled 7.3: Physical Assumptions and Mathematical Consequences is shared under a CC BY-NC-SA 4.0 license and was authored,
remixed, and/or curated by Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts
platform; a detailed edit history is available upon request.

7.4: Complex Potentials
There are different ways to do this. We’ll start by seeing that every complex analytic function leads to an irrotational,
incompressible flow. Then we’ll go backwards and see that all such flows lead to an analytic function. We will learn to call the
analytic function the complex potential of the flow.
Annoyingly, we are going to have to switch notation. Because u and v are already taken by the vector field F , we will call our
complex potential
Φ = ϕ + iψ. (7.4.1)

Analytic functions give us incompressible, irrotational flows


Let Φ(z) be an analytic function on a region A . For z = x + iy we write

Φ(z) = ϕ(x, y) + iψ(x, y). (7.4.2)

From this we can define a vector field

F = ∇ϕ = (ϕx , ϕy ) =: (u, v), (7.4.3)

Here we mean that u and v are defined by ϕ_x and ϕ_y.

From our work on analytic and harmonic functions we can make a list of properties of these functions.
1. ϕ and ψ are both harmonic.
2. The level curves of ϕ and ψ are orthogonal.
3. Φ′ = ϕ_x − iϕ_y.

4. F is divergence and curl free (proof just below). That is, the analytic function Φ has given us an incompressible, irrotational
vector field F .
It is standard terminology to call ϕ a potential function for the vector field F. We will also call Φ a complex potential function for F. The function ψ will be called the stream function of F (the name will be explained soon). The function Φ′ will be called the complex velocity.
Proof. (F is curl and divergence free.) This is an easy consequence of the definition. We find
curl F = vx − uy = ϕyx − ϕxy = 0

div F = ux + vy = ϕxx + ϕyy = 0 (since ϕ is harmonic).


We’ll postpone examples until after deriving the complex potential from the flow.

Incompressible, irrotational flows always have complex potential functions


For technical reasons we need to add the assumption that A is simply connected. This is not usually a problem because we often
work locally in a disk around a point (x_0, y_0).

 Theorem 7.4.1

Assume F = (u, v) is an incompressible, irrotational field on a simply connected region A . Then there is an analytic function
Φ which is a complex potential function for F .

Proof
We have done all the heavy lifting for this in previous topics. The key is to use the property Φ′ = u − iv to guess Φ′.
Working carefully we define

g(z) = u − iv (7.4.4)

Step 1: Show that g is analytic. Keeping the signs straight, the Cauchy Riemann equations are
ux = (−v)y and uy = −(−v)x = vx . (7.4.5)

But these are exactly the conditions u_x = −v_y and u_y = v_x satisfied by an incompressible, irrotational field (the summary in Section 7.3). Thus g(z) is analytic.

Step 2: Since A is simply connected, Cauchy’s theorem says that g(z) has an antiderivative on A. We call the antiderivative
Φ(z) .

Step 3: Show that Φ(z) is a complex potential function for F . This means we have to show that if we write Φ = ϕ + iψ ,
then F = ∇ϕ . To do this we just unwind the definitions.
Φ′ = ϕ_x − iϕ_y   (standard formula for Φ′)
Φ′ = g = u − iv   (definition of Φ and g)   (7.4.6)

Comparing these equations we get

ϕx = u, ϕy = v. (7.4.7)

But this says precisely that ∇ϕ = F . QED

 Example 7.4.1: Source Fields

The vector field


F = a(x/r^2, y/r^2)

models a source pushing out water or the 2D electric field of a positive charge at the origin. (If you prefer a 3D model, it is the
field of an infinite wire with uniform charge density along the z -axis.)
Show that F is curl-free and divergence-free and find its complex potential.

Figure 7.4.1 : Velocity field of a source field. (CC BY-NC; Ümit Kaya)
We could compute directly that this is curl-free and divergence-free away from 0. An alternative method is to look for a
complex potential Φ. If we can find one then this will show F is curl and divergence free and find ϕ and ψ all at once. If there
is no such Φ then we’ll know that F is not both curl and divergence free.
One standard method is to use the formula for Φ′:

Φ′ = u − iv = a(x − iy)/r^2 = a z̄/(z z̄) = a/z.   (7.4.8)

This is analytic and we have

Φ(z) = a log(z). (7.4.9)
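The following is a small numerical sketch, not part of the original example, checking that Φ(z) = a log(z) really has Φ′ = u − iv for the source field. The values of a and z are arbitrary illustrative choices.

```python
a = 2.0
z = 1.3 + 0.7j
x, y = z.real, z.imag
r2 = x**2 + y**2

u, v = a * x / r2, a * y / r2        # the source field F = a (x/r^2, y/r^2)
phi_prime = a / z                    # derivative of Phi(z) = a*log(z)
print(abs(phi_prime - (u - 1j * v))) # ~0: Phi' = u - i v, as claimed
```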

This page titled 7.4: Complex Potentials is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff
(MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

7.5: Stream Functions
In everything we did above poor old ψ just tagged along as the harmonic conjugate of the potential function ϕ . Let’s turn our
attention to it and see why it’s called the stream function.

 Theorem 7.5.1
Suppose that

Φ = ϕ + iψ (7.5.1)

is the complex potential for a velocity field F. Then the fluid flows along the level curves of ψ. That is, F is everywhere tangent to the level curves of ψ. The level curves of ψ are called streamlines and ψ is called the stream function.

Proof
Again we have already done most of the heavy lifting to prove this. Since F is the velocity of the flow at each point, the
flow is always tangent to F . You also need to remember that ∇ϕ is perpendicular to the level curves of ϕ . So we have:
1. The flow is parallel to F .
2. F = ∇ϕ , so the flow is orthogonal to the level curves of ϕ .
3. Since ϕ and ψ are harmonic conjugates, the level curves of ψ are orthogonal to the level curves of ϕ .
Combining 2 and 3 we see that the flow must be along the level curves of ψ .

Examples
We’ll illustrate the streamlines in a series of examples that start by defining the complex potential for a vector field.

 Example 7.5.1: Uniform flow


Let

Φ(z) = z.

Find F and draw a plot of the streamlines. Indicate the direction of the flow.
Solution
Write

Φ = x + iy.

So

ϕ = x and F = ∇ϕ = (1, 0),

which says the flow has uniform velocity and points to the right. We also have

ψ = y,

so the streamlines are the horizontal lines y = constant (Figure 7.5.1).

Figure 7.5.1 : Uniform flow to the right. (CC BY-NC; Ümit Kaya)
Note that another way to see that the flow is to the right is to check the direction that the potential ϕ increases. The Topic 5
notes show pictures of this complex potential which show both the streamlines and the equipotential lines.

 Example 7.5.2: Linear Source

Let

Φ(z) = log(z).

Find F and draw a plot of the streamlines. Indicate the direction of the flow.
Solution
Write

Φ = log(r) + iθ.

So
ϕ = log(r) and F = ∇ϕ = (x/r^2, y/r^2),

which says the flow is radial and decreases in speed as it gets farther from the origin. The field is not defined at z = 0 . We also
have

ψ = θ,

so the streamlines are rays from the origin (Figure 7.5.2).

Figure 7.5.2 : Linear source: radial flow from the origin. (CC BY-NC; Ümit Kaya)

Stagnation points
A stagnation point is one where the velocity field is 0.

 Definition: Stagnation Points

If Φ is the complex potential for a field F then the stagnation points (where F = 0) are exactly the points z where Φ′(z) = 0.

Proof

This is clear since F = (ϕ_x, ϕ_y) and Φ′ = ϕ_x − iϕ_y.

 Example 7.5.3: Stagnation Points

Draw the streamlines and identify the stagnation points for the potential Φ(z) = z^2.

Solution
(We drew the level curves for this in Topic 5.) We have
Φ = (x^2 − y^2) + i 2xy.

So the streamlines are the hyperbolas 2xy = constant. Since ϕ = x^2 − y^2 increases as |x| increases and decreases as |y| increases, the arrows, which point in the direction of increasing ϕ, are as shown in Figure 7.5.3.

Figure 7.5.3 : Stagnation flow: stagnation point at z = 0 . (CC BY-NC; Ümit Kaya)
The stagnation points are the zeros of

Φ′(z) = 2z,

i.e. the only stagnation point is at z = 0.

The stagnation points are also called the critical points of a vector field.
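As a hedged illustration (not from the text), stagnation points can be found by solving Φ′(z) = 0 symbolically. The sketch below does this for the potential z^2 above and for a potential of the form z + log(z), which appears in the next section; SymPy is assumed.

```python
import sympy as sp

z = sp.symbols('z')
for Phi in [z**2, z + sp.log(z)]:
    print(sp.solve(sp.diff(Phi, z), z))
# z**2       -> [0]   (stagnation point at the origin, as above)
# z + log(z) -> [-1]  (a uniform flow plus a source, treated in the next section)
```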

This page titled 7.5: Stream Functions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff
(MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

7.6: More Examples with Pretty Pictures
 Example 7.6.1: Linear Vortex

Analyze the flow with complex potential function

Φ(z) = i log(z).

Solution
Multiplying by i switches the real and imaginary parts of log(z) (with a sign change). We have

Φ = −θ + i log(r).

The stream lines are the curves log(r) = constant, i.e. circles with center at z = 0 . The flow is clockwise because the potential
ϕ = −θ increases in the clockwise direction (Figure 7.6.1).

Figure 7.6.1 : Linear vortex. (CC BY-NC; Ümit Kaya)


This flow is called a linear vortex. We can find F using Φ′.

Φ′ = i/z = y/r^2 + i x/r^2 = ϕ_x − iϕ_y.

So

F = (ϕ_x, ϕ_y) = (y/r^2, −x/r^2).

(By now this should be a familiar vector field.) There are no stagnation points, but there is a singularity at the origin.

 Example 7.6.2: Double Source

Analyze the flow with complex potential function

Φ(z) = log(z − 1) + log(z + 1).

Solution
This is a flow with linear sources at ±1 with the level curves of ψ = Im(Φ) (Figure 7.6.2).

Figure 7.6.2 : Two sources. (CC BY-NC; Ümit Kaya)
We can analyze this flow further as follows.
Near each source the flow looks like a linear source.
On the y -axis the flow is along the axis. That is, the y -axis is a streamline. It’s worth seeing three different ways of arriving
at this conclusion.
Reason 1: By symmetry of vector fields associated with each linear source, the x components cancel and the combined field
points along the y -axis.
Reason 2: We can write

Φ(z) = log(z − 1) + log(z + 1) = log((z − 1)(z + 1)) = log(z^2 − 1).

So

Φ′(z) = 2z/(z^2 − 1).

On the imaginary axis

Φ′(iy) = 2iy/(−y^2 − 1).

Thus,

F = (0, 2y/(y^2 + 1)),

which is along the axis.


Reason 3: On the imaginary axis Φ(iy) = log(−y 2
− 1) . Since this has constant imaginary part, the axis is a streamline.
Because of the branch cut for log(z) we should probably be a little more careful here. First note that the vector field F comes
from Φ = 2z/(z − 1) , which doesn’t have a branch cut. So we shouldn’t really have a problem. Now, as z approaches the y -
′ 2

axis from one side or the other, the argument of log(z − 1) approaches either π or −π. That is, as such limits, the imaginary
2

part is constant. So the streamline on the y -axis is the limit case of streamlines near the axis.
Since Φ (z) = 0 when z = 0 , the origin is a stagnation point. This is where the fields from the two sources exactly cancel each

other.

 Example 7.6.3: A Source in Uniform Flow

Consider the flow with complex potential

Φ(z) = z + (Q/(2π)) log(z).
This is a combination of uniform flow to the right and a source at the origin (Figure 7.6.3). It shows that the flow looks like a
source near the origin. Farther away from the origin the flow stops being radial and is pushed to the right by the uniform flow.


Figure 7.6.3 : A source in uniform flow. (CC BY-NC; Ümit Kaya)


Since the components of Φ′ and F are the same except for signs, we can understand the flow by considering

Φ′(z) = 1 + Q/(2πz).   (7.6.1)

Near z = 0 the singularity of 1/z is most important and

Φ′ ≈ Q/(2πz).   (7.6.2)

So, the vector field looks like a linear source. Far away from the origin the 1/z term is small and Φ′(z) ≈ 1, so the field looks like uniform flow.

Setting Φ′(z) = 0 we find one stagnation point

z = −Q/(2π).   (7.6.3)

It is the point on the x-axis where the flow from the source exactly balances that from the uniform flow. For bigger values of Q the source pushes fluid farther out before being overwhelmed by the uniform flow. That is why Q is called the source strength.
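As a quick symbolic check (not in the original example), SymPy recovers the stagnation point z = −Q/(2π); the numerical value of Q below is an arbitrary illustrative choice.

```python
import sympy as sp

z = sp.symbols('z')
Q = sp.Rational(3, 1)                    # illustrative source strength
Phi = z + Q / (2 * sp.pi) * sp.log(z)
print(sp.solve(sp.diff(Phi, z), z))      # [-3/(2*pi)], i.e. z = -Q/(2*pi)
```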

 Example 7.6.4: Source + Sink

Consider the flow with complex potential

Φ(z) = log(z − 2) − log(z + 2).

This is a combination of a source (log(z − 2)) at z = 2 and a sink (− log(z + 2)) at z = −2 (Figure 7.6.4).

Figure 7.6.4 : A source plus a sink. (CC BY-NC; Ümit Kaya)

This page titled 7.6: More Examples with Pretty Pictures is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated
by Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit
history is available upon request.

CHAPTER OVERVIEW
8: Taylor and Laurent Series
We originally defined an analytic function as one where the derivative, defined as a limit of ratios, existed. We went on to prove
Cauchy’s theorem and Cauchy’s integral formula. These revealed some deep properties of analytic functions, e.g. the existence of
derivatives of all orders. Our goal in this topic is to express analytic functions as infinite power series. This will lead us to Taylor
series. When a complex function has an isolated singularity at a point we will replace Taylor series by Laurent series. Not
surprisingly we will derive these series from Cauchy’s integral formula. Although we come to power series representations after
exploring other properties of analytic functions, they will be one of our main tools in understanding and computing with analytic
functions.
8.1: Geometric Series
8.2: Convergence of Power Series
8.3: Taylor Series
8.4: Taylor Series Examples
8.5: Singularities
8.6: Appendix- Convergence
8.7: Laurent Series
8.8: Digression to Differential Equations
8.9: Poles

Thumbnail: A Laurent series is defined with respect to a particular point c and a path of integration γ . The path of integration must
lie in an annulus, indicated here by the red color, inside which f(z) is holomorphic (analytic). (Public Domain; Pko via Wikipedia)

This page titled 8: Taylor and Laurent Series is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy
Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

8.1: Geometric Series
Having a detailed understanding of geometric series will enable us to use Cauchy’s integral formula to understand power series
representations of analytic functions. We start with the definition:

 Definition: Finite Geometric Series


A finite geometric series has one of the following (all equivalent) forms.
S_n = a(1 + r + r^2 + r^3 + ... + r^n)   (8.1.1)
    = a + ar + ar^2 + ar^3 + ... + ar^n   (8.1.2)
    = ∑_{j=0}^{n} a r^j   (8.1.3)
    = a ∑_{j=0}^{n} r^j   (8.1.4)

The number r is called the ratio of the geometric series because it is the ratio of consecutive terms of the series.

 Theorem 8.1.1
The sum of a finite geometric series is given by

S_n = a(1 + r + r^2 + r^3 + ... + r^n) = a(1 − r^{n+1})/(1 − r).   (8.1.5)

Proof

This is a standard trick that you've probably seen before.

S_n = a + ar + ar^2 + ... + ar^n
rS_n =     ar + ar^2 + ... + ar^n + ar^{n+1}   (8.1.6)

When we subtract these two equations most terms cancel and we get

S_n − rS_n = a − a r^{n+1}   (8.1.7)

Some simple algebra now gives us the formula in Equation 8.1.5.

 Definition: Infinite Geometric Series

An infinite geometric series has the same form as the finite geometric series except there is no last term:

S = a + ar + ar^2 + ... = a ∑_{j=0}^{∞} r^j.   (8.1.8)

 Note
We will usually simply say ‘geometric series’ instead of ‘infinite geometric series’.

 Theorem 8.1.2
If |r| < 1 then the infinite geometric series converges to

S = a ∑_{j=0}^{∞} r^j = a/(1 − r)   (8.1.9)

If |r| ≥ 1 then the series does not converge.

Proof
This is an easy consequence of the formula for the sum of a finite geometric series. Simply let n → ∞ in Equation 8.1.5.

 Note

We have assumed a familiarity with convergence of infinite series. We will go over this in more detail in the appendix to this
topic.

Connection to Cauchy’s Integral Formula


Cauchy’s integral formula says

1 f (w)
f (z) = ∫ dw. (8.1.10)
2πi C
w −z

Inside the integral we have the expression


1
(8.1.11)
w −z

which looks a lot like the sum of a geometric series. We will make frequent use of the following manipulations of this expression.
1 1 1 1 2
= ⋅ = (1 + (z/w) + (z/w ) +. . . ) (8.1.12)
w −z w 1 − z/w w

The geometric series in this equation has ratio z/w . Therefore, the series converges, i.e. the formula is valid, whenever |z/w| < 1,
or equivalently when
|z| < |w|. (8.1.13)

Similarly,
1 1 1 1 2
=− ⋅ =− (1 + (w/z) + (w/z) +. . . ) (8.1.14)
w −z z 1 − w/z z

The series converges, i.e. the formula is valid, whenever |w/z| < 1, or equivalently when
|z| > |w|. (8.1.15)
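The following is a tiny numerical check, not part of the original notes, of the first expansion: for a sample pair with |z| < |w| the partial sums of (1/w)∑(z/w)^n agree with 1/(w − z) to machine precision. The sample values are arbitrary.

```python
w, z = 2.0 + 1.0j, 0.3 - 0.4j                     # here |z| < |w|
partial = sum((z / w)**n for n in range(50)) / w
print(abs(partial - 1.0 / (w - z)))               # ~1e-16: the expansion matches
```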

This page titled 8.1: Geometric Series is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff
(MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

8.2: Convergence of Power Series
When we include powers of the variable z in the series we will call it a power series. In this section we’ll state the main theorem
we need about the convergence of power series. Technical details will be pushed to the appendix for the interested reader.

 Theorem 8.2.1
Consider the power series

f(z) = ∑_{n=0}^{∞} a_n (z − z_0)^n.   (8.2.1)

There is a number R ≥ 0 such that:

1. If R > 0 then the series converges absolutely to an analytic function for |z − z_0| < R.
2. The series diverges for |z − z_0| > R. R is called the radius of convergence. The disk |z − z_0| < R is called the disk of convergence.
3. The derivative is given by term-by-term differentiation:

f′(z) = ∑_{n=1}^{∞} n a_n (z − z_0)^{n−1}   (8.2.2)

The series for f′ also has radius of convergence R.
4. If γ is a bounded curve inside the disk of convergence then the integral is given by term-by-term integration:

∫_γ f(z) dz = ∑_{n=0}^{∞} ∫_γ a_n (z − z_0)^n dz   (8.2.3)

 Note
The theorem doesn’t say what happens when |z − z_0| = R.
If R = ∞ the function f(z) is entire.
If R = 0 the series only converges at the point z = z_0. In this case, the series does not represent an analytic function on any disk around z_0.
Often (not always) we can find R using the ratio test.

Proof
The proof of this theorem is in the appendix.

Ratio Test and Root Test


Here are two standard tests from calculus on the convergence of infinite series.

 Ratio Test

Consider the series ∑_{n=0}^{∞} c_n. If L = lim_{n→∞} |c_{n+1}/c_n| exists, then:
1. If L < 1 then the series converges absolutely.
2. If L > 1 then the series diverges.
3. If L = 1 then the test gives no information.

 Note
In words, L is the limit of the absolute ratios of consecutive terms.

Again the proof will be in the appendix. (It boils down to comparison with a geometric series.)

 Example 8.2.1

Consider the geometric series 1 + z + z^2 + z^3 + .... The limit of the absolute ratios of consecutive terms is

L = lim_{n→∞} |z^{n+1}|/|z^n| = |z|   (8.2.4)

Thus, the ratio test agrees that the geometric series converges when |z| < 1. We know this converges to 1/(1 − z). Note, the disk of convergence ends exactly at the singularity z = 1.

 Example 8.2.2
Consider the series f(z) = ∑_{n=0}^{∞} z^n/n!. The limit from the ratio test is

L = lim_{n→∞} (|z^{n+1}|/(n + 1)!)/(|z^n|/n!) = lim_{n→∞} |z|/(n + 1) = 0.   (8.2.5)

Since L < 1 this series converges for every z. Thus, by Theorem 8.2.1, the radius of convergence for this series is ∞. That is, f(z) is entire. Of course we know that f(z) = e^z.
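The computation above can be illustrated numerically; the sketch below (not from the original text, with an arbitrary sample value of z) shows the ratios tending to 0 and the partial sums approaching e^z.

```python
import cmath, math

z = 3.7 + 2.0j
print([abs(z) / (n + 1) for n in (1, 10, 100)])           # ratios -> 0, so L = 0
approx = sum(z**n / math.factorial(n) for n in range(60))
print(abs(approx - cmath.exp(z)))                         # ~0: the series sums to e^z
```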

 Root Test

Consider the series ∑_{n=0}^{∞} c_n. If L = lim_{n→∞} |c_n|^{1/n} exists, then:
1. If L < 1 then the series converges absolutely.
2. If L > 1 then the series diverges.
3. If L = 1 then the test gives no information.

 Note

In words, L is the limit of the n th roots of the (absolute value) of the terms.

The geometric series is so fundamental that we should check the root test on it.

 Example 8.2.3

Consider the geometric series 1 + z + z^2 + z^3 + .... The limit of the nth roots of the terms is

L = lim_{n→∞} |z^n|^{1/n} = lim_{n→∞} |z| = |z|   (8.2.6)

Happily, the root test agrees that the geometric series converges when |z| < 1.

This page titled 8.2: Convergence of Power Series is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by
Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit
history is available upon request.

8.3: Taylor Series
The previous section showed that a power series converges to an analytic function inside its disk of convergence. Taylor’s theorem
completes the story by giving the converse: around each point of analyticity an analytic function equals a convergent power series.

 Theorem 8.3.1: Taylor's Theorem


Suppose f(z) is an analytic function in a region A. Let z_0 ∈ A. Then,

f(z) = ∑_{n=0}^{∞} a_n (z − z_0)^n,   (8.3.1)

where the series converges on any disk |z − z_0| < r contained in A. Furthermore, we have formulas for the coefficients

a_n = f^{(n)}(z_0)/n! = (1/(2πi)) ∫_γ f(z)/(z − z_0)^{n+1} dz.   (8.3.2)

(Where γ is any simple closed curve in A around z_0, with its interior entirely in A.)

We call the series the power series representing f around z_0.

Proof
The proof will be given below. First we look at some consequences of Taylor’s theorem.

 Corollary
The power series representing an analytic function around a point z0 is unique. That is, the coefficients are uniquely
determined by the function f (z) .

Proof
Taylor’s theorem gives a formula for the coefficients.

Order of a Zero

 Theorem 8.3.2

Suppose f(z) is analytic on the disk |z − z_0| < r and f is not identically 0. Then there is an integer k ≥ 0 such that a_k ≠ 0 and f has Taylor series around z_0 given by

f(z) = (z − z_0)^k (a_k + a_{k+1}(z − z_0) + ...)   (8.3.3)
     = (z − z_0)^k ∑_{n=k}^{∞} a_n (z − z_0)^{n−k}.   (8.3.4)

Proof
Since f (z) is not identically 0, not all the Taylor coefficients are zero. So, we take k to be the index of the first nonzero
coefficient.

 Theorem 8.3.3: Zeros are Isolated


If f (z) is analytic and not identically zero then the zeros of f are isolated. (By isolated we mean that we can draw a small disk
around any zeros that doesn’t contain any other zeros.)

Figure 8.3.1: Isolated zero at z_0: f(z_0) = 0, f(z) ≠ 0 elsewhere in the disk. (CC BY-NC; Ümit Kaya)

Proof
Suppose f(z_0) = 0. Write f as in Equation 8.3.3. There are two factors:

(z − z_0)^k

and

g(z) = a_k + a_{k+1}(z − z_0) + ...

Clearly (z − z_0)^k ≠ 0 if z ≠ z_0. We have g(z_0) = a_k ≠ 0, so g(z) is not 0 on some small neighborhood of z_0. We conclude that on this neighborhood the product is only zero when z = z_0, i.e. z_0 is an isolated zero.

 Definition: Order of the Zero

The integer k in Equation 8.3.4 is called the order of the zero of f at z_0.

Note, if f(z_0) ≠ 0 then z_0 is a zero of order 0.

This page titled 8.3: Taylor Series is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff (MIT
OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon
request.

8.4: Taylor Series Examples
The uniqueness of Taylor series along with the fact that they converge on any disk around z where the function is analytic allows 0

us to use lots of computational tricks to find the series and be sure that it converges.

 Example 8.4.1
Use the formula for the coefficients in terms of derivatives to give the Taylor series of f(z) = e^z around z = 0.

Solution
Since f′(z) = e^z, we have f^{(n)}(0) = e^0 = 1. So,

e^z = 1 + z + z^2/2! + z^3/3! + ... = ∑_{n=0}^{∞} z^n/n!

 Example 8.4.2
Expand f(z) = z^8 e^{3z} in a Taylor series around z = 0.

Solution

Let w = 3z. So,

e^{3z} = e^w = ∑_{n=0}^{∞} w^n/n! = ∑_{n=0}^{∞} 3^n z^n/n!

Thus,

f(z) = ∑_{n=0}^{∞} (3^n/n!) z^{n+8}.

 Example 8.4.3
Find the Taylor series of sin(z) around z = 0 (Sometimes the Taylor series around 0 is called the Maclaurin series.)
Solution
We give two methods for doing this.
Method 1.

f^{(n)}(0) = d^n sin(z)/dz^n |_{z=0} = (−1)^m for n = 2m + 1 odd (m = 0, 1, 2, ...), and 0 for n even.

Method 2. Using

sin(z) = (e^{iz} − e^{−iz})/(2i),

we have

sin(z) = (1/(2i)) [∑_{n=0}^{∞} (iz)^n/n! − ∑_{n=0}^{∞} (−iz)^n/n!]
       = (1/(2i)) ∑_{n=0}^{∞} (1 − (−1)^n) i^n z^n/n!

(We need absolute convergence to add series like this.)

Conclusion:

sin(z) = ∑_{n=0}^{∞} (−1)^n z^{2n+1}/(2n + 1)!,

which converges for |z| < ∞ .

 Example 8.4.4

Expand the rational function

f(z) = (1 + 2z^2)/(z^3 + z^5)

around z = 0.

Solution

Note that f has a singularity at 0, so we can't expect a convergent Taylor series expansion. We'll aim for the next best thing using the following shortcut.

f(z) = (1/z^3) · (2(1 + z^2) − 1)/(1 + z^2) = (1/z^3) [2 − 1/(1 + z^2)].

Using the geometric series we have

1/(1 + z^2) = 1/(1 − (−z^2)) = ∑_{n=0}^{∞} (−z^2)^n = 1 − z^2 + z^4 − z^6 + ...

Putting it all together

f(z) = (1/z^3)(2 − 1 + z^2 − z^4 + ...) = (1/z^3 + 1/z) − ∑_{n=0}^{∞} (−1)^n z^{2n+1}

Note: The first terms are called the singular part, i.e. those with negative powers of z. The summation is called the regular or analytic part. Since the geometric series for 1/(1 + z^2) converges for |z| < 1, the entire series is valid in 0 < |z| < 1.

 Example 8.4.5

Find the Taylor series for

f(z) = e^z/(1 − z)

around z = 0. Give the radius of convergence.

Solution

We start by writing the Taylor series for each of the factors and then multiply them out.

f(z) = (1 + z + z^2/2! + z^3/3! + ...)(1 + z + z^2 + z^3 + ...)
     = 1 + (1 + 1)z + (1 + 1 + 1/2!)z^2 + (1 + 1 + 1/2! + 1/3!)z^3 + ...

The biggest disk around z = 0 where f is analytic is |z| < 1. Therefore, by Taylor's theorem, the radius of convergence is R = 1.

f (z) is analytic on |z| < 1 and has a singularity at z = 1 . (CC BY-NC; Ümit Kaya)
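As a quick check (not part of the original example), SymPy reproduces the product series and its coefficients 1, 2, 5/2, 8/3, ....

```python
import sympy as sp

z = sp.symbols('z')
print(sp.series(sp.exp(z) / (1 - z), z, 0, 5))
# should print: 1 + 2*z + 5*z**2/2 + 8*z**3/3 + 65*z**4/24 + O(z**5)
```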

 Example 8.4.6

Find the Taylor series for

f(z) = 1/(1 − z)

around z = 5. Give the radius of convergence.

Solution

We have to manipulate this into standard geometric series form.

f(z) = 1/(−4(1 + (z − 5)/4)) = −(1/4)[1 − ((z − 5)/4) + ((z − 5)/4)^2 − ((z − 5)/4)^3 + ...]

Since f(z) has a singularity at z = 1 the radius of convergence is R = 4. We can also see this by considering the geometric series. The geometric series ratio is (z − 5)/4. So the series converges when |z − 5|/4 < 1, i.e. when |z − 5| < 4, i.e. R = 4.

Disk of convergence stops at the singularity at z = 1 . (CC BY-NC; Ümit Kaya)

 Example 8.4.7

Find the Taylor series for

f (z) = log(1 + z)

around z = 0 . Give the radius of convergence.


Solution

We know that f is analytic for |z| < 1 and not analytic at z = −1. So, the radius of convergence is R = 1. To find the series representation we take the derivative and use the geometric series.

f′(z) = 1/(1 + z) = 1 − z + z^2 − z^3 + z^4 − ...

Integrating term by term (allowed by Theorem 8.2.1) we have

f(z) = a_0 + z − z^2/2 + z^3/3 − z^4/4 + ... = a_0 + ∑_{n=1}^{∞} (−1)^{n−1} z^n/n

Here a_0 is the constant of integration. We find it by evaluating at z = 0.

f(0) = a_0 = log(1) = 0.

Disk of convergence for log(1 + z) around z = 0 . (CC BY-NC; Ümit Kaya)
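The series found above can be confirmed symbolically; this short check is not part of the original text.

```python
import sympy as sp

z = sp.symbols('z')
print(sp.series(sp.log(1 + z), z, 0, 5))
# should print: z - z**2/2 + z**3/3 - z**4/4 + O(z**5)
```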

 Example 8.4.8

Can the series

∑ a_n (z − 2)^n

converge at z = 0 and diverge at z = 3?

Solution

No! We have z_0 = 2. We know the series diverges everywhere outside its radius of convergence. So, if the series converges at z = 0, then the radius of convergence is at least 2. Since |3 − z_0| < 2 we would also have that z = 3 is inside the disk of convergence.

Proof of Taylor’s Theorem


For convenience we restate Taylor’s Theorem 8.4.1.

 Theorem 8.4.1: Taylor’s Theorem (Taylor Series)


Suppose f(z) is an analytic function in a region A. Let z_0 ∈ A. Then,

f(z) = ∑_{n=0}^{∞} a_n (z − z_0)^n,   (8.4.1)

where the series converges on any disk |z − z_0| < r contained in A. Furthermore, we have formulas for the coefficients

a_n = f^{(n)}(z_0)/n! = (1/(2πi)) ∫_γ f(z)/(z − z_0)^{n+1} dz   (8.4.2)

Proof
In order to handle convergence issues we fix 0 < r_1 < r_2 < r. We let γ be the circle |w − z_0| = r_2 (traversed counterclockwise).

The disk of convergence extends to the boundary of A with r_1 < r_2 < r, but r_1 and r_2 can be arbitrarily close to r. (CC BY-NC; Ümit Kaya)

Take z inside the disk |z − z_0| < r_1. We want to express f(z) as a power series around z_0. To do this we start with the Cauchy integral formula and then use the geometric series.

As preparation we note that for w on γ and |z − z_0| < r_1 we have

|z − z_0| < r_1 < r_2 = |w − z_0|,

so

|z − z_0|/|w − z_0| < 1.

Therefore,

1/(w − z) = (1/(w − z_0)) · 1/(1 − (z − z_0)/(w − z_0)) = ∑_{n=0}^{∞} (z − z_0)^n/(w − z_0)^{n+1}

Using this and the Cauchy formula gives

f(z) = (1/(2πi)) ∫_γ f(w)/(w − z) dw
     = (1/(2πi)) ∫_γ ∑_{n=0}^{∞} f(w)/(w − z_0)^{n+1} (z − z_0)^n dw
     = ∑_{n=0}^{∞} ((1/(2πi)) ∫_γ f(w)/(w − z_0)^{n+1} dw) (z − z_0)^n
     = ∑_{n=0}^{∞} (f^{(n)}(z_0)/n!) (z − z_0)^n

The last equality follows from Cauchy's formula for derivatives. Taken together the last two equalities give Taylor's formula. QED

This page titled 8.4: Taylor Series Examples is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy
Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is

available upon request.

8.5: Singularities
 Definition: Singular Function

A function f (z) is singular at a point z if it is not analytic at z


0 0

 Definition: Isolated Singularity


For a function f (z) , the singularity z is an isolated singularity if f is analytic on the deleted disk 0 < |z − z
0 0| <r for some
r > 0.

 Example 8.5.1
f(z) = (z + 1)/(z^3(z^2 + 1)) has isolated singularities at z = 0, ±i.

 Example 8.5.2

f(z) = e^{1/z} has an isolated singularity at z = 0.

 Example 8.5.3
f (z) = log(z) has a singularity at z =0 , but it is not isolated because a branch cut, starting at z =0 , is needed to have a
region where f is analytic.

 Example 8.5.4
f(z) = 1/sin(π/z) has singularities at z = 0 and z = 1/n for n = ±1, ±2, .... The singularities at ±1/n are isolated, but the one at z = 0 is not isolated.

Figure 8.5.1 : Every neighborhood of 0 contains zeros at 1/n for large n . (CC BY-NC; Ümit Kaya)

This page titled 8.5: Singularities is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff (MIT
OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon
request.

8.6: Appendix- Convergence
This section needs to be completed. It will give some of the careful technical definitions and arguments regarding convergence and
manipulation of series. In particular it will define the notion of uniform convergence. The short description is that all of our
manipulations of power series are justified on any closed bounded region. Almost everything we did can be restricted to a closed
disk or annulus, and so was valid.

This page titled 8.6: Appendix- Convergence is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy
Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

8.7: Laurent Series
 Theorem 8.7.1 Laurent series

Suppose that f(z) is analytic on the annulus

A: r_1 < |z − z_0| < r_2.   (8.7.1)

Then f(z) can be expressed as a series

f(z) = ∑_{n=1}^{∞} b_n/(z − z_0)^n + ∑_{n=0}^{∞} a_n (z − z_0)^n.   (8.7.2)

The coefficients have the formulas

a_n = (1/(2πi)) ∫_γ f(w)/(w − z_0)^{n+1} dw,
b_n = (1/(2πi)) ∫_γ f(w)(w − z_0)^{n−1} dw,   (8.7.3)

where γ is any circle |w − z_0| = r inside the annulus, i.e. r_1 < r < r_2.

Furthermore

The series ∑_{n=0}^{∞} a_n (z − z_0)^n converges to an analytic function for |z − z_0| < r_2.
The series ∑_{n=1}^{∞} b_n/(z − z_0)^n converges to an analytic function for |z − z_0| > r_1.

Together, the series both converge on the annulus A where f is analytic.

The proof is given below. First we define a few terms.

 Definition: Laurent Series


The entire series is called the Laurent series for f around z_0. The series

∑_{n=0}^{∞} a_n (z − z_0)^n   (8.7.4)

is called the analytic or regular part of the Laurent series. The series

∑_{n=1}^{∞} b_n/(z − z_0)^n   (8.7.5)

is called the singular or principal part of the Laurent series.

 Note
Since f(z) may not be analytic (or even defined) at z_0 we don't have any formulas for the coefficients using derivatives.

Proof
(Laurent series). Choose a point z in A. Now set circles C_1 and C_3 close enough to the boundary that z is inside C_1 + C_2 − C_3 − C_2 as shown. Since this curve and its interior are contained in A, Cauchy's integral formula says

f(z) = (1/(2πi)) ∫_{C_1 + C_2 − C_3 − C_2} f(w)/(w − z) dw.   (8.7.6)
w −z

Figure 8.7.1: The contour used for proving the formulas for Laurent series. (CC BY-NC; Ümit Kaya)
The integrals over C_2 cancel, so we have

f(z) = (1/(2πi)) ∫_{C_1 − C_3} f(w)/(w − z) dw.   (8.7.7)

Next, we divide this into two pieces and use our trick of converting to a geometric series. The calculations are just like the proof of Taylor's theorem. On C_1 we have

|z − z_0|/|w − z_0| < 1,   (8.7.8)

so

(1/(2πi)) ∫_{C_1} f(w)/(w − z) dw
    = (1/(2πi)) ∫_{C_1} (f(w)/(w − z_0)) · 1/(1 − (z − z_0)/(w − z_0)) dw
    = (1/(2πi)) ∫_{C_1} ∑_{n=0}^{∞} f(w)/(w − z_0)^{n+1} (z − z_0)^n dw
    = ∑_{n=0}^{∞} ((1/(2πi)) ∫_{C_1} f(w)/(w − z_0)^{n+1} dw) (z − z_0)^n
    = ∑_{n=0}^{∞} a_n (z − z_0)^n.   (8.7.9)

Here a_n is defined by the integral formula given in the statement of the theorem. Examining the above argument we see that the only requirement on z is that |z − z_0| < r_2. So, this series converges for all such z.

Similarly on C_3 we have

|w − z_0|/|z − z_0| < 1.   (8.7.10)

so
(1/(2πi)) ∫_{C_3} f(w)/(w − z) dw
    = −(1/(2πi)) ∫_{C_3} (f(w)/(z − z_0)) · 1/(1 − (w − z_0)/(z − z_0)) dw
    = −(1/(2πi)) ∫_{C_3} ∑_{n=0}^{∞} f(w) (w − z_0)^n/(z − z_0)^{n+1} dw
    = −∑_{n=0}^{∞} ((1/(2πi)) ∫_{C_3} f(w)(w − z_0)^n dw) (z − z_0)^{−n−1}
    = −∑_{n=1}^{∞} b_n/(z − z_0)^n.   (8.7.11)

In the last equality we changed the indexing to match the indexing in the statement of the theorem. Here b_n is defined by the integral formula given in the statement of the theorem. Examining the above argument we see that the only requirement on z is that |z − z_0| > r_1. So, this series converges for all such z.

Combining these two formulas we have

f(z) = (1/(2πi)) ∫_{C_1 − C_3} f(w)/(w − z) dw = ∑_{n=1}^{∞} b_n/(z − z_0)^n + ∑_{n=0}^{∞} a_n (z − z_0)^n   (8.7.12)

The last thing to note is that the integrals defining a_n and b_n do not depend on the exact radius of the circle of integration. Any circle inside A will produce the same values. We have proved all the statements in the theorem on Laurent series. QED

Examples of Laurent Series


In general, the integral formulas are not a practical way of computing the Laurent coefficients. Instead we use various algebraic
tricks. Even better, as we shall see, is the fact that often we don’t really need all the coefficients and we will develop more
techniques to compute those that we do need.

 Example 8.7.1

Find the Laurent series for


f(z) = (z + 1)/z

around z_0 = 0. Give the region where it is valid.

Solution

The answer is simply

f(z) = 1 + 1/z.

This is a Laurent series, valid on the infinite region 0 < |z| < ∞ .

 Example 8.7.2

Find the Laurent series for


f(z) = z/(z^2 + 1)

around z_0 = i. Give the region where your answer is valid. Identify the singular (principal) part.

Solution

Using partial fractions we have

f(z) = (1/2) · 1/(z − i) + (1/2) · 1/(z + i).

Since 1/(z + i) is analytic at z = i it has a Taylor series expansion. We find it using geometric series.

1/(z + i) = (1/(2i)) · 1/(1 + (z − i)/(2i)) = (1/(2i)) ∑_{n=0}^{∞} (−(z − i)/(2i))^n

So the Laurent series is

f(z) = (1/2) · 1/(z − i) + (1/(4i)) ∑_{n=0}^{∞} (−(z − i)/(2i))^n

The singular (principal) part is given by the first term. The region of convergence is 0 < |z − i| < 2 .

 Note
We could have looked at f (z) on the region 2 < |z − i| < ∞ . This would have produced a different Laurent series. We
discuss this further in an upcoming example.

 Example 8.7.3

Compute the Laurent series for


f(z) = (z + 1)/(z^3(z^2 + 1))

on the region A : 0 < |z| < 1 centered at z = 0 .


Solution
This function has isolated singularities at z = 0, ±i . Therefore it is analytic on the region A .

f (z) has singularities at z = 0, ±i . (CC BY-NC; Ümit Kaya)


At z = 0 we have
f(z) = (1/z^3)(1 + z)(1 − z^2 + z^4 − z^6 + ...).

Multiplying this out we get

f(z) = 1/z^3 + 1/z^2 − 1/z − 1 + z + z^2 − z^3 − ...

The following example shows that the Laurent series depends on the region under consideration.

 Example 8.7.4
Find the Laurent series around z = 0 for f(z) = 1/(z(z − 1)) in each of the following regions:

(i) the region A_1: 0 < |z| < 1
(ii) the region A_2: 1 < |z| < ∞.

Solution

For (i):

f(z) = −(1/z) · 1/(1 − z) = −(1/z)(1 + z + z^2 + ...) = −1/z − 1 − z − z^2 − ...

For (ii): Since the usual geometric series for 1/(1 − z) does not converge on A_2 we need a different form,

f(z) = (1/z) · 1/(z(1 − 1/z)) = (1/z^2)(1 + 1/z + 1/z^2 + ...)

Since |1/z| < 1 on A_2 our use of the geometric series is justified.

One lesson from this example is that the Laurent series depends on the region as well as the formula for the function.
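The dependence on the region can also be seen with SymPy; the sketch below is illustrative only (not from the original notes). For the outer region we substitute w = 1/z, an assumption-level trick chosen here because SymPy's series command expands about a point.

```python
import sympy as sp

z, w = sp.symbols('z w')
f = 1 / (z * (z - 1))

# Region 0 < |z| < 1: Laurent expansion about 0.
print(sp.series(f, z, 0, 4))     # should give -1/z - 1 - z - z**2 - ...

# Region 1 < |z| < infinity: expand in powers of 1/z via w = 1/z.
g = sp.simplify(f.subs(z, 1 / w))
expansion = sp.series(g, w, 0, 5).removeO()
print(expansion.subs(w, 1 / z))  # 1/z**2 + 1/z**3 + 1/z**4, matching (ii)
```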

This page titled 8.7: Laurent Series is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff
(MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

8.8: Digression to Differential Equations
Here is a standard use of series for solving differential equations.

 Example 8.8.1
Find a power series solution to the equation

f′(x) = f(x) + 2,  f(0) = 0.   (8.8.1)

Solution
We look for a solution of the form

f(x) = ∑_{n=0}^{∞} a_n x^n.   (8.8.2)

Using the initial condition we find f(0) = 0 = a_0. Substituting the series into the differential equation we get

f′(x) = a_1 + 2a_2 x + 3a_3 x^2 + ... = f(x) + 2 = a_0 + 2 + a_1 x + a_2 x^2 + ...   (8.8.3)

Equating coefficients and using a_0 = 0 we have

a_1 = a_0 + 2 ⇒ a_1 = 2
2a_2 = a_1 ⇒ a_2 = a_1/2 = 1
3a_3 = a_2 ⇒ a_3 = 1/3
4a_4 = a_3 ⇒ a_4 = 1/(3 · 4)   (8.8.4)

In general

(n + 1)a_{n+1} = a_n ⇒ a_{n+1} = a_n/(n + 1) = 1/(3 · 4 · 5 ··· (n + 1)).   (8.8.5)

You can check using the ratio test that this function is entire.
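As a small check (not in the original example), SymPy's closed-form solution of the same initial value problem has exactly the series coefficients found above.

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')
sol = sp.dsolve(sp.Eq(f(x).diff(x), f(x) + 2), f(x), ics={f(0): 0})
print(sol.rhs)                      # 2*exp(x) - 2
print(sp.series(sol.rhs, x, 0, 5))  # 2*x + x**2 + x**3/3 + x**4/12 + O(x**5)
```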

8.8: Digression to Differential Equations is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts.

8.9: Poles
Poles refer to isolated singularities. So, we suppose f(z) is analytic on 0 < |z − z_0| < r and has Laurent series

f(z) = ∑_{n=1}^{∞} b_n/(z − z_0)^n + ∑_{n=0}^{∞} a_n (z − z_0)^n.   (8.9.1)

 Definition: poles
If only a finite number of the coefficients b_n are nonzero we say z_0 is a finite pole of f. In this case, if b_k ≠ 0 and b_n = 0 for all n > k then we say z_0 is a pole of order k.

If z_0 is a pole of order 1 we say it is a simple pole of f.

If an infinite number of the b_n are nonzero we say that z_0 is an essential singularity or a pole of infinite order of f.

If all the b_n are 0, then z_0 is called a removable singularity. That is, if we define f(z_0) = a_0 then f is analytic on the disk |z − z_0| < r.

The terminology can be a bit confusing. So, imagine that I tell you that f is defined and analytic on the punctured disk 0 < |z − z_0| < r. Then, a priori, we assume f has a singularity at z_0. But, if after computing the Laurent series we see there is no singular part we can extend the definition of f to the full disk, thereby 'removing the singularity'.

We can explain the term essential singularity as follows. If f(z) has a pole of order k at z_0 then (z − z_0)^k f(z) is analytic (has a removable singularity) at z_0. So, f(z) itself is not much harder to work with than an analytic function. On the other hand, if z_0 is an essential singularity then no algebraic trick will change f(z) into an analytic function at z_0.

Examples of Poles
We’ll go back through many of the examples from the previous sections.

 Example 8.9.1

The rational function

f(z) = (1 + 2z^2)/(z^3 + z^5)

expanded to

f(z) = (1/z^3 + 1/z) − ∑_{n=0}^{∞} (−1)^n z^{2n+1}.

Thus, z = 0 is a pole of order 3.
Thus, z = 0 is a pole of order 3.

 Example 8.9.2
Consider
f(z) = (z + 1)/z = 1 + 1/z.

Thus, z = 0 is a pole of order 1, i.e. a simple pole.

 Example 8.9.3

Consider
f(z) = z/(z^2 + 1) = (1/2) · 1/(z − i) + g(z),

where g(z) is analytic at z = i . So, z = i is a simple pole.

 Example 8.9.4

The function
f(z) = 1/(z(z − 1))

has isolated singularities at z = 0 and z = 1. Show that both are simple poles.

Solution

In a neighborhood of z = 0 we can write

f(z) = g(z)/z, where g(z) = 1/(z − 1).

Since g(z) is analytic at 0, z = 0 is a finite pole. Since g(0) ≠ 0, the pole has order 1, i.e. it is simple. Likewise, in a neighborhood of z = 1,

f(z) = h(z)/(z − 1), where h(z) = 1/z.

Since h is analytic at z = 1 , f has a finite pole there. Since h(1) ≠ 0 it is simple.

 Example 8.9.5

Consider

e^{1/z} = 1 + 1/z + 1/(2! z^2) + 1/(3! z^3) + ...

So, z = 0 is an essential singularity.

 Example 8.9.6

log(z) has a singularity at z =0 . Since the singularity is not isolated, it can’t be classified as either a pole or an essential
singularity.

Residues
In preparation for discussing the residue theorem in the next topic we give the definition and an example here.
Note well, residues have to do with isolated singularities.

 Definition: Residue
Consider the function f(z) with an isolated singularity at z_0, i.e. defined on 0 < |z − z_0| < r and with Laurent series

f(z) = ∑_{n=1}^{∞} b_n/(z − z_0)^n + ∑_{n=0}^{∞} a_n (z − z_0)^n.   (8.9.2)

The residue of f at z_0 is b_1. This is denoted

Res(f, z_0) or Res_{z=z_0} f = b_1.   (8.9.3)

What is the importance of the residue? If γ is a small, simple closed curve that goes counterclockwise around z_0 then

∫_γ f(z) dz = 2πi b_1.   (8.9.4)

Figure 8.9.1: γ is small enough to be inside |z − z_0| < r and surround z_0. (CC BY-NC; Ümit Kaya)

This is easy to see by integrating the Laurent series term by term. The only nonzero integral comes from the term b_1/(z − z_0).

 Example 8.9.7

The function
f(z) = e^{1/(2z)} = 1 + 1/(2z) + 1/(2(2z)^2) + ...

has an isolated singularity at 0. From the Laurent series we see that

Res(f, 0) = 1/2.
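Since the residue is the coefficient of 1/(z − z_0), it can also be recovered from a small contour integral, as in Equation 8.9.4. The sketch below is not part of the original text: the helper name `residue_numeric` and the sample radius are my own choices, and the code simply discretizes the contour integral.

```python
import numpy as np

def residue_numeric(f, z0=0.0, radius=1.0, n=4000):
    # (1/(2*pi*i)) * integral of f over a small counterclockwise circle about z0
    t = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    z = z0 + radius * np.exp(1j * t)
    dz = 1j * radius * np.exp(1j * t) * (2 * np.pi / n)
    return np.sum(f(z) * dz) / (2j * np.pi)

print(residue_numeric(lambda z: np.exp(1 / (2 * z))))   # ~0.5, matching b_1 = 1/2
```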

This page titled 8.9: Poles is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff (MIT
OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon
request.

CHAPTER OVERVIEW
9: Residue Theorem
9.1: Poles and Zeros
9.2: Holomorphic and Meromorphic Functions
9.3: Behavior of functions near zeros and poles
9.4: Residues
9.5: Cauchy Residue Theorem
9.6: Residue at ∞

Thumbnail: Illustration of the setting. (Public Domain; Ben pcc via Wikipedia)

This page titled 9: Residue Theorem is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff
(MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

9.1: Poles and Zeros
We remind you of the following terminology: Suppose f(z) is analytic at z_0 and

f(z) = a_n (z − z_0)^n + a_{n+1} (z − z_0)^{n+1} + ...,   (9.1.1)

with a_n ≠ 0. Then we say f has a zero of order n at z_0. If n = 1 we say z_0 is a simple zero.

Suppose f has an isolated singularity at z_0 and Laurent series

f(z) = b_n/(z − z_0)^n + b_{n−1}/(z − z_0)^{n−1} + ... + b_1/(z − z_0) + a_0 + a_1(z − z_0) + ...   (9.1.2)

which converges on 0 < |z − z_0| < R and with b_n ≠ 0. Then we say f has a pole of order n at z_0. If n = 1 we say z_0 is a simple pole.
There are several examples in the Topic 8 notes. Here is one more

 Example 9.1.1

f(z) = (z + 1)/(z^3(z^2 + 1))

has isolated singularities at z = 0, ±i and a zero at z = −1. We will show that z = 0 is a pole of order 3, z = ±i are poles of order 1 and z = −1 is a zero of order 1. The style of argument is the same in each case.

At z = 0:

f(z) = (1/z^3) · (z + 1)/(z^2 + 1).

Call the second factor g(z). Since g(z) is analytic at z = 0 and g(0) = 1, it has a Taylor series

g(z) = (z + 1)/(z^2 + 1) = 1 + a_1 z + a_2 z^2 + ...

Therefore

f(z) = 1/z^3 + a_1/z^2 + a_2/z + ...

This shows z = 0 is a pole of order 3.

At z = i:

f(z) = (1/(z − i)) · (z + 1)/(z^3(z + i)).

Call the second factor g(z). Since g(z) is analytic at z = i, it has a Taylor series

g(z) = (z + 1)/(z^3(z + i)) = a_0 + a_1(z − i) + a_2(z − i)^2 + ...

where a_0 = g(i) ≠ 0. Therefore,

f(z) = a_0/(z − i) + a_1 + a_2(z − i) + ...

This shows z = i is a pole of order 1.


The arguments for z = −i and z = −1 are similar.

This page titled 9.1: Poles and Zeros is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff
(MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

9.2: Holomorphic and Meromorphic Functions
 Definitions: Holomorphic and Meromorphic
A function that is analytic on a region A is called holomorphic on A .
A function that is analytic on A except for a set of poles of finite order is called meromorphic on A .

 Example 9.2.1

Let
2 3
z+z +z
f (z) = .
(z − 2)(z − 3)(z − 4)(z − 5)

This is meromorphic on C with (simple) poles at z = 2, 3, 4, 5.

This page titled 9.2: Holomorphic and Meromorphic Functions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or
curated by Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a
detailed edit history is available upon request.

9.3: Behavior of functions near zeros and poles
The basic idea is that near a zero of order n, a function behaves like (z − z_0)^n and near a pole of order n, a function behaves like 1/(z − z_0)^n. The following makes this a little more precise.

Behavior near a zero. If f has a zero of order n at z_0 then near z_0,

f(z) ≈ a_n (z − z_0)^n,

for some constant a_n. (Write the Taylor series as f(z) = a_n(z − z_0)^n (1 + (a_{n+1}/a_n)(z − z_0) + ...); the second factor equals 1 at z_0.)

Behavior near a pole. If f has a pole of order n at z_0 then near z_0,

f(z) ≈ b_n/(z − z_0)^n,   (9.3.1)

for some constant b_n.

Proof. This is nearly identical to the previous argument. By definition f has a Laurent series around z_0 of the form

f(z) = b_n/(z − z_0)^n + b_{n−1}/(z − z_0)^{n−1} + ... + b_1/(z − z_0) + a_0 + ...
     = (b_n/(z − z_0)^n)(1 + (b_{n−1}/b_n)(z − z_0) + (b_{n−2}/b_n)(z − z_0)^2 + ...)   (9.3.2)

Since the second factor equals 1 at z_0, the claim follows.

Picard’s Theorem and Essential Singularities


Near an essential singularity we have Picard’s theorem. We won’t prove or make use of this theorem in 18.04. Still, we feel it is
pretty enough to warrant showing to you.

 Theorem 9.3.1: Picard's Theorem


If f(z) has an essential singularity at z_0 then in every neighborhood of z_0, f(z) takes on all possible values infinitely many times, with the possible exception of one value.

 Example 9.3.1

It is easy to see that in any neighborhood of z = 0 the function w = e^{1/z} takes every value except w = 0.

Quotients of functions
We have the following statement about quotients of functions. We could make similar statements if one or both functions has a pole
instead of a zero.

 Theorem 9.3.2
Suppose f has a zero of order m at z_0 and g has a zero of order n at z_0. Let

h(z) = f(z)/g(z).   (9.3.3)

Then

If n > m then h(z) has a pole of order n − m at z_0.
If n < m then h(z) has a zero of order m − n at z_0.
If n = m then h(z) is analytic and nonzero at z_0.

We can paraphrase this as: h(z) has a 'pole' of order n − m at z_0. If n − m is negative then the 'pole' is actually a zero.

Proof
You should be able to supply the proof. It is nearly identical to the proofs above: express f and g as Taylor series and take
the quotient.

 Example 9.3.2

Let
h(z) = sin(z)/z^2.

We know sin(z) has a zero of order 1 at z = 0 and z^2 has a zero of order 2. So, h(z) has a pole of order 1 at z = 0. Of course, we can see this easily using Taylor series

h(z) = (1/z^2)(z − z^3/3! + ...)

This page titled 9.3: Behavior of functions near zeros and poles is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or
curated by Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a
detailed edit history is available upon request.

9.4: Residues
In this section we’ll explore calculating residues. We’ve seen enough already to know that this will be useful. We will see that even
more clearly when we look at the residue theorem in the next section.
We introduced residues in the previous topic. We repeat the definition here for completeness.

 Definition: Residue

Consider the function f (z) with an isolated singularity at z0 , i.e. defined on the region 0 < |z − z0 | < r and with Laurent
series (on that region)
f(z) = ∑_{n=1}^{∞} b_n/(z − z_0)^n + ∑_{n=0}^{∞} a_n (z − z_0)^n.   (9.4.1)

The residue of f at z_0 is b_1. This is denoted

Res(f, z_0) = b_1 or Res_{z=z_0} f = b_1.   (9.4.2)

What is the importance of the residue? If γ is a small, simple closed curve that goes counterclockwise around z_0 then

∫_γ f(z) dz = 2πi b_1.   (9.4.3)

Figure 9.4.1 : γ small enough to be inside |z − z0 | < r , surround z0 and contain no other singularity of f . (CC BY-NC; Ümit
Kaya)
This is easy to see by integrating the Laurent series term by term. The only nonzero integral comes from the term b_1/(z − z_0).

 Example 9.4.1
f(z) = e^{1/(2z)} = 1 + 1/(2z) + 1/(2(2z)^2) + ...   (9.4.4)

has an isolated singularity at 0. From the Laurent series we see that Res(f , 0) = 1/2.

 Example 9.4.2

(i) Let
f(z) = 1/z^3 + 2/z^2 + 4/z + 5 + 6z.

f has a pole of order 3 at z = 0 and Res(f , 0) = 4.


(ii) Suppose

f(z) = 2/z + g(z),

where g is analytic at z = 0 . Then, f has a simple pole at 0 and Res(f , 0) = 2.


(iii) Let
f(z) = cos(z) = 1 − z^2/2! + ...

Then f is analytic at z = 0 and Res(f , 0) = 0.


(iv) Let
f(z) = sin(z)/z = (1/z)(z − z^3/3! + ...) = 1 − z^2/3! + ...

So, f has a removable singularity at z = 0 and Res(f , 0) = 0.

 Example 9.4.3 Using partial fractions

Let
f(z) = z/(z^2 + 1).

Find the poles and residues of f .


Solution
Using partial fractions we write
f(z) = z/((z − i)(z + i)) = (1/2) · 1/(z − i) + (1/2) · 1/(z + i).

The poles are at z = ±i. We compute the residues at each pole:

At z = i:

f(z) = (1/2) · 1/(z − i) + something analytic at i.

Therefore the pole is simple and Res(f, i) = 1/2.

At z = −i:

f(z) = (1/2) · 1/(z + i) + something analytic at −i.

Therefore the pole is simple and Res(f , −i) = 1/2.

 Example 9.4.4 Mild warning!


Let
f(z) = −1/(z(1 − z)),

then we have the following Laurent expansions for f around z = 0 .


On 0 < |z| < 1 :

f(z) = −(1/z) · 1/(1 − z) = −(1/z)(1 + z + z^2 + ...).

Therefore the pole at z = 0 is simple and Res(f , 0) = −1.


On 1 < |z| < ∞ :
1 1
f (z) = ⋅
2
z 1 − 1/z

1 1 1
= (1 + + + . . . ).
2
z z z

Even though this is a valid Laurent expansion you must not use it to compute the residue at 0. This is because the definition of
residue requires that we use the Laurent series on the region 0 < |z − z | < r . 0

 Example 9.4.5

Let

f (z) = log(1 + z).

This has a singularity at z = −1, but it is not isolated, so it is not a pole and therefore there is no residue at z = −1.

Residues at Simple Poles


Simple poles occur frequently enough that we’ll study computing their residues in some detail. Here are a number of ways to spot a
simple pole and compute its residue. The justification for all of them goes back to Laurent series.
Suppose f(z) has an isolated singularity at z = z_0. Then we have the following properties.

 Property 1

If the Laurent series for f(z) has the form
$$\frac{b_1}{z - z_0} + a_0 + a_1(z - z_0) + \dots \qquad (9.4.5)$$
then f has a simple pole at z_0 and Res(f, z_0) = b_1.

 Property 2

If
$$g(z) = (z - z_0)f(z) \qquad (9.4.6)$$
is analytic at z_0 then z_0 is either a simple pole or a removable singularity. In either case Res(f, z_0) = g(z_0). (In the removable singularity case the residue is 0.)

Proof
Directly from the Laurent series for f around z_0.

 Property 3

If f has a simple pole at z_0 then
$$\lim_{z\to z_0} (z - z_0)f(z) = \text{Res}(f, z_0) \qquad (9.4.7)$$
This says that the limit exists and equals the residue. Conversely, if the limit exists then either the pole is simple, or f is analytic at z_0. In both cases the limit equals the residue.

Proof
Directly from the Laurent series for f around z_0.

 Property 4

If f has a simple pole at z_0 and g(z) is analytic at z_0 then
$$\text{Res}(fg, z_0) = g(z_0)\,\text{Res}(f, z_0). \qquad (9.4.8)$$
If g(z_0) ≠ 0 then
$$\text{Res}(f/g, z_0) = \frac{1}{g(z_0)}\,\text{Res}(f, z_0). \qquad (9.4.9)$$

Proof
Since z_0 is a simple pole,
$$f(z) = \frac{b_1}{z - z_0} + a_0 + a_1(z - z_0) + \dots \qquad (9.4.10)$$
Since g is analytic,
$$g(z) = c_0 + c_1(z - z_0) + \dots, \qquad (9.4.11)$$
where c_0 = g(z_0). Multiplying these series together it is clear that
$$\text{Res}(fg, z_0) = c_0 b_1 = g(z_0)\,\text{Res}(f, z_0). \quad \text{QED} \qquad (9.4.12)$$
The statement about quotients f/g follows from the proof for products because 1/g is analytic at z_0.

 Property 5

If g(z) has a simple zero at z_0 then 1/g(z) has a simple pole at z_0 and
$$\text{Res}(1/g, z_0) = \frac{1}{g'(z_0)}. \qquad (9.4.13)$$

Proof
The algebra for this is similar to what we’ve done several times above. The Taylor expansion for g is
$$g(z) = a_1(z - z_0) + a_2(z - z_0)^2 + \dots, \qquad (9.4.14)$$
where a_1 = g'(z_0). So
$$\frac{1}{g(z)} = \frac{1}{a_1(z - z_0)}\left(\frac{1}{1 + \frac{a_2}{a_1}(z - z_0) + \dots}\right) \qquad (9.4.15)$$
The second factor on the right is analytic at z_0 and equals 1 at z_0. Therefore we know the Laurent expansion of 1/g is
$$\frac{1}{g(z)} = \frac{1}{a_1(z - z_0)}(1 + c_1(z - z_0) + \dots) \qquad (9.4.16)$$
Clearly the residue is 1/a_1 = 1/g'(z_0). QED

 Example 9.4.6

Let
$$f(z) = \frac{2 + z + z^2}{(z - 2)(z - 3)(z - 4)(z - 5)}.$$
Show all the poles are simple and compute their residues.

Solution
The poles are at z = 2, 3, 4, 5. They are all isolated. We'll look at z = 2; the others are similar. Multiplying by z − 2 we get
$$g(z) = (z - 2)f(z) = \frac{2 + z + z^2}{(z - 3)(z - 4)(z - 5)}.$$
This is analytic at z = 2 and
$$g(2) = \frac{8}{-6} = -\frac{4}{3}.$$
So the pole is simple and Res(f, 2) = −4/3.

 Example 9.4.7

Let
$$f(z) = \frac{1}{\sin(z)}.$$
Find all the poles and their residues.

Solution
The poles of f(z) are the zeros of sin(z), i.e. nπ for n an integer. Since the derivative
$$\sin'(n\pi) = \cos(n\pi) \ne 0,$$
the zeros are simple and by Property 5 above
$$\text{Res}(f, n\pi) = \frac{1}{\cos(n\pi)} = (-1)^n.$$
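As a quick sanity check (optional, and assuming SymPy is installed), `residue` reproduces the alternating values (−1)^n at the first few poles of 1/sin(z):

```python
# Optional check: Res(1/sin(z), n*pi) = (-1)^n for a few integers n.
from sympy import symbols, sin, pi, residue

z = symbols('z')
f = 1 / sin(z)

for n in range(-2, 3):
    print(n, residue(f, z, n * pi))  # prints 1, -1, 1, -1, 1 for n = -2, ..., 2
```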

 Example 9.4.8

Let
$$f(z) = \frac{1}{z(z^2 + 1)(z - 2)^2}.$$
Identify all the poles and say which ones are simple.

Solution
Clearly the poles are at z = 0, ±i, 2.
At z = 0:
$$g(z) = zf(z)$$
is analytic at 0 and g(0) = 1/4. So the pole is simple and the residue is g(0) = 1/4.
At z = i:
$$g(z) = (z - i)f(z) = \frac{1}{z(z + i)(z - 2)^2}$$
is analytic at i, the pole is simple and the residue is g(i).
At z = −i: This is similar to the case z = i. The pole is simple.
At z = 2:
$$g(z) = (z - 2)f(z) = \frac{1}{z(z^2 + 1)(z - 2)}$$
is not analytic at 2, so the pole is not simple. (It should be obvious that it’s a pole of order 2.)

 Example 9.4.9

Let p(z), q(z) be analytic at z = z_0. Assume p(z_0) ≠ 0, q(z_0) = 0, q'(z_0) ≠ 0. Find
$$\text{Res}_{z = z_0} \frac{p(z)}{q(z)}.$$

Solution
Since q'(z_0) ≠ 0, q has a simple zero at z_0. So 1/q has a simple pole at z_0 and
$$\text{Res}(1/q, z_0) = \frac{1}{q'(z_0)}.$$
Since p(z_0) ≠ 0 we know
$$\text{Res}(p/q, z_0) = p(z_0)\,\text{Res}(1/q, z_0) = \frac{p(z_0)}{q'(z_0)}.$$

Residues at finite poles


For higher-order poles we can make statements similar to those for simple poles, but the formulas and computations are more
involved. The general principle is the following

 Higher order poles

If f(z) has a pole of order k at z_0 then
$$g(z) = (z - z_0)^k f(z) \qquad (9.4.17)$$
is analytic at z_0 and if
$$g(z) = a_0 + a_1(z - z_0) + \dots \qquad (9.4.18)$$
then
$$\text{Res}(f, z_0) = a_{k-1} = \frac{g^{(k-1)}(z_0)}{(k - 1)!}. \qquad (9.4.19)$$

Proof
This is clear using Taylor and Laurent series for g and f.

 Example 9.4.10

Let
$$f(z) = \frac{\sinh(z)}{z^5} \qquad (9.4.20)$$
and find the residue at z = 0.

Solution
We know the Taylor series
$$\sinh(z) = z + z^3/3! + z^5/5! + \dots \qquad (9.4.21)$$
(You can find this using sinh(z) = (e^z − e^{−z})/2 and the Taylor series for e^z.) Therefore,
$$f(z) = \frac{1}{z^4} + \frac{1}{3!\,z^2} + \frac{1}{5!} + \dots \qquad (9.4.22)$$
We see Res(f, 0) = 0.
Note, we could have seen this by realizing that f (z) is an even function.

 Example 9.4.11

Let
$$f(z) = \frac{\sinh(z)e^z}{z^5}. \qquad (9.4.23)$$
Find the residue at z = 0.

Solution
It is clear that Res(f, 0) equals the coefficient of z^4 in the Taylor expansion of sinh(z)e^z. We compute this directly as
$$\sinh(z)e^z = \left(z + \frac{z^3}{3!} + \dots\right)\left(1 + z + \frac{z^2}{2} + \frac{z^3}{3!} + \dots\right) = \dots + \left(\frac{1}{3!} + \frac{1}{3!}\right)z^4 + \dots \qquad (9.4.24)$$
So
$$\text{Res}(f, 0) = \frac{1}{3!} + \frac{1}{3!} = \frac{1}{3}. \qquad (9.4.25)$$

 Example 9.4.12

Find the residue of
$$f(z) = \frac{1}{z(z^2 + 1)(z - 2)^2} \qquad (9.4.26)$$
at z = 2.

Solution
$$g(z) = (z - 2)^2 f(z) = \frac{1}{z(z^2 + 1)}$$
is analytic at z = 2. So, the residue we want is the a_1 term in its Taylor series, i.e. g'(2).
This is easy, if dull, to compute:
$$\text{Res}(f, 2) = g'(2) = -\frac{13}{100} \qquad (9.4.27)$$

cot(z)

The function cot(z) turns out to be very useful in applications. This stems largely from the fact that it has simple poles at all
multiples of π and the residue is 1 at each pole. We show that first.

 Fact

f(z) = cot(z) has simple poles at nπ for n an integer and Res(f, nπ) = 1.

Proof
$$f(z) = \frac{\cos(z)}{\sin(z)}. \qquad (9.4.28)$$
This has poles at the zeros of sin, i.e. at z = nπ. At the poles f is of the form p/q where q has a simple zero at z_0 and p(z_0) ≠ 0. Thus we can use the formula
$$\text{Res}(f, z_0) = \frac{p(z_0)}{q'(z_0)}. \qquad (9.4.29)$$
In our case, we have
$$\text{Res}(f, n\pi) = \frac{\cos(n\pi)}{\cos(n\pi)} = 1, \qquad (9.4.30)$$
as claimed.
Sometimes we need more terms in the Laurent expansion of cot(z). There is no known easy formula for the terms, but we can easily compute as many as we need using the following technique.

 Example 9.4.13

Compute the first several terms of the Laurent expansion of cot(z) around z = 0.

Solution
Since cot(z) has a simple pole at 0 we know
$$\cot(z) = \frac{b_1}{z} + a_0 + a_1 z + a_2 z^2 + \dots \qquad (9.4.31)$$
We also know
$$\cot(z) = \frac{\cos(z)}{\sin(z)} = \frac{1 - z^2/2 + z^4/4! - \dots}{z - z^3/3! + z^5/5! - \dots} \qquad (9.4.32)$$
Cross multiplying the two expressions we get
$$\left(\frac{b_1}{z} + a_0 + a_1 z + a_2 z^2 + \dots\right)\left(z - \frac{z^3}{3!} + \frac{z^5}{5!} - \dots\right) = 1 - \frac{z^2}{2} + \frac{z^4}{4!} - \dots \qquad (9.4.33)$$
We can do the multiplication and equate the coefficients of like powers of z:
$$b_1 + a_0 z + \left(-\frac{b_1}{3!} + a_1\right)z^2 + \left(-\frac{a_0}{3!} + a_2\right)z^3 + \left(\frac{b_1}{5!} - \frac{a_1}{3!} + a_3\right)z^4 = 1 - \frac{z^2}{2!} + \frac{z^4}{4!} \qquad (9.4.34)$$
So, starting from b_1 = 1 and a_0 = 0, we get
$$-b_1/3! + a_1 = -1/2! \ \Rightarrow\ a_1 = -1/3$$
$$-a_0/3! + a_2 = 0 \ \Rightarrow\ a_2 = 0 \qquad (9.4.35)$$
$$b_1/5! - a_1/3! + a_3 = 1/4! \ \Rightarrow\ a_3 = -1/45$$
As noted above, all the even terms are 0 as they should be. We have
$$\cot(z) = \frac{1}{z} - \frac{z}{3} - \frac{z^3}{45} + \dots \qquad (9.4.36)$$
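The bookkeeping above is exactly what a series routine automates. An optional check (assuming SymPy is available):

```python
# Optional check: the first terms of the Laurent expansion of cot(z) at z = 0.
from sympy import symbols, cot, series

z = symbols('z')
print(series(cot(z), z, 0, 5))  # 1/z - z/3 - z**3/45 + O(z**5)
```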

This page titled 9.4: Residues is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff (MIT
OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon
request.

9.5: Cauchy Residue Theorem
This is one of the major theorems in complex analysis and will allow us to make systematic our previous somewhat ad hoc
approach to computing integrals on contours that surround singularities.

 Theorem 9.5.1 Cauchy's Residue Theorem

Suppose f(z) is analytic in the region A except for a set of isolated singularities. Also suppose C is a simple closed curve in A that doesn’t go through any of the singularities of f and is oriented counterclockwise. Then
$$\int_C f(z)\, dz = 2\pi i \sum \text{residues of } f \text{ inside } C \qquad (9.5.1)$$

Proof
The proof is based on the following figures. They only show a curve with two singularities inside it, but the generalization to any number of singularities is straightforward. In what follows we are going to abuse language and say pole when we mean isolated singularity, i.e. a finite order pole or an essential singularity (‘infinite order pole’).

Figure 9.5.1: Applying the Cauchy residue theorem. (CC BY-NC; Ümit Kaya)
The left figure shows the curve C surrounding two poles z_1 and z_2 of f. The right figure shows the same curve with some cuts and small circles added. It is chosen so that there are no poles of f inside it and so that the little circles around each of the poles are so small that there are no other poles inside them. The right hand curve is
$$\tilde{C} = C_1 + C_2 - C_3 - C_2 + C_4 + C_5 - C_6 - C_5 \qquad (9.5.2)$$
The left hand curve is C = C_1 + C_4. Since there are no poles inside \(\tilde{C}\) we have, by Cauchy’s theorem,
$$\int_{\tilde{C}} f(z)\, dz = \int_{C_1 + C_2 - C_3 - C_2 + C_4 + C_5 - C_6 - C_5} f(z)\, dz = 0 \qquad (9.5.3)$$
Dropping C_2 and C_5, which are both added and subtracted, this becomes
$$\int_{C_1 + C_4} f(z)\, dz = \int_{C_3 + C_6} f(z)\, dz \qquad (9.5.4)$$
If
$$f(z) = \dots + \frac{b_2}{(z - z_1)^2} + \frac{b_1}{z - z_1} + a_0 + a_1(z - z_1) + \dots \qquad (9.5.5)$$
is the Laurent expansion of f around z_1 then
$$\int_{C_3} f(z)\, dz = \int_{C_3} \left(\dots + \frac{b_2}{(z - z_1)^2} + \frac{b_1}{z - z_1} + a_0 + a_1(z - z_1) + \dots\right) dz = 2\pi i b_1 = 2\pi i\,\text{Res}(f, z_1) \qquad (9.5.6)$$
Likewise
$$\int_{C_6} f(z)\, dz = 2\pi i\,\text{Res}(f, z_2). \qquad (9.5.7)$$
Using these residues and the fact that C = C_1 + C_4, Equation 9.5.4 becomes
$$\int_C f(z)\, dz = 2\pi i[\text{Res}(f, z_1) + \text{Res}(f, z_2)]. \qquad (9.5.8)$$
That proves the residue theorem for the case of two poles. As we said, generalizing to any number of poles is straightforward.

 Example 9.5.1

Let
$$f(z) = \frac{1}{z(z^2 + 1)}.$$
Compute ∫ f(z) dz over each of the contours C_1, C_2, C_3, C_4 shown.

Figure 9.5.1: Contours. (CC BY-NC; Ümit Kaya)

Solution
The poles of f(z) are at z = 0, ±i. Using the residue theorem we just need to compute the residues of each of these poles.
At z = 0:
$$g(z) = zf(z) = \frac{1}{z^2 + 1}$$
is analytic at 0 so the pole is simple and
$$\text{Res}(f, 0) = g(0) = 1.$$
At z = i:
$$g(z) = (z - i)f(z) = \frac{1}{z(z + i)}$$
is analytic at i so the pole is simple and
$$\text{Res}(f, i) = g(i) = -1/2.$$
At z = −i:
$$g(z) = (z + i)f(z) = \frac{1}{z(z - i)}$$
is analytic at −i so the pole is simple and
$$\text{Res}(f, -i) = g(-i) = -1/2.$$
Using the residue theorem we have
$$\int_{C_1} f(z)\, dz = 0 \quad (\text{since } f \text{ is analytic inside } C_1)$$
$$\int_{C_2} f(z)\, dz = 2\pi i\,\text{Res}(f, i) = -\pi i$$
$$\int_{C_3} f(z)\, dz = 2\pi i[\text{Res}(f, i) + \text{Res}(f, 0)] = \pi i$$
$$\int_{C_4} f(z)\, dz = 2\pi i[\text{Res}(f, i) + \text{Res}(f, 0) + \text{Res}(f, -i)] = 0.$$

 Example 9.5.2

Compute
$$\int_{|z|=2} \frac{5z - 2}{z(z - 1)}\, dz.$$

Solution
Let
$$f(z) = \frac{5z - 2}{z(z - 1)}.$$
The poles of f are at z = 0, 1 and the contour encloses them both.

Figure 9.5.2: Poles within a contour. (CC BY-NC; Ümit Kaya)

At z = 0:
$$g(z) = zf(z) = \frac{5z - 2}{z - 1}$$
is analytic at 0 so the pole is simple and
$$\text{Res}(f, 0) = g(0) = 2.$$
At z = 1:
$$g(z) = (z - 1)f(z) = \frac{5z - 2}{z}$$
is analytic at 1 so the pole is simple and
$$\text{Res}(f, 1) = g(1) = 3.$$
Finally
$$\int_C \frac{5z - 2}{z(z - 1)}\, dz = 2\pi i[\text{Res}(f, 0) + \text{Res}(f, 1)] = 10\pi i.$$
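It is reassuring to see the same value appear from direct numerical integration over the contour. The sketch below is optional and assumes NumPy is available; it parametrizes |z| = 2 and sums f(z) dz/dt over a fine grid.

```python
# Optional numerical check: integrate (5z - 2)/(z(z - 1)) over |z| = 2 counterclockwise.
import numpy as np

N = 100_000
t = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
z = 2.0 * np.exp(1j * t)     # points on the circle |z| = 2
dz_dt = 2j * np.exp(1j * t)  # derivative of the parametrization

f = (5 * z - 2) / (z * (z - 1))
integral = np.sum(f * dz_dt) * (2.0 * np.pi / N)

print(integral)  # approximately 31.4159j, i.e. 10*pi*i
```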

 Example 9.5.3

Compute
$$\int_{|z|=1} z^2 \sin(1/z)\, dz.$$

Solution
Let
$$f(z) = z^2 \sin(1/z).$$
f has an isolated singularity at z = 0. Using the Taylor series for sin(w) we get
$$z^2 \sin(1/z) = z^2\left(\frac{1}{z} - \frac{1}{3!\,z^3} + \frac{1}{5!\,z^5} - \dots\right) = z - \frac{1/6}{z} + \dots$$
So, Res(f, 0) = b_1 = −1/6. Thus the residue theorem gives
$$\int_{|z|=1} z^2 \sin(1/z)\, dz = 2\pi i\,\text{Res}(f, 0) = -\frac{\pi i}{3}.$$

 Example 9.5.4

Compute
$$\int_C \frac{dz}{z(z - 2)^4},$$
where C: |z − 2| = 1.

Figure 9.5.3: Poles and contour. (CC BY-NC; Ümit Kaya)

Solution
Let
$$f(z) = \frac{1}{z(z - 2)^4}.$$
The singularity at z = 0 is outside the contour of integration so it doesn’t contribute to the integral. To use the residue theorem we need to find the residue of f at z = 2. There are a number of ways to do this. Here’s one:
$$\frac{1}{z} = \frac{1}{2 + (z - 2)} = \frac{1}{2}\cdot\frac{1}{1 + (z - 2)/2} = \frac{1}{2}\left(1 - \frac{z - 2}{2} + \frac{(z - 2)^2}{4} - \frac{(z - 2)^3}{8} + \dots\right)$$
This is valid on 0 < |z − 2| < 2. So,
$$f(z) = \frac{1}{(z - 2)^4}\cdot\frac{1}{z} = \frac{1}{2(z - 2)^4} - \frac{1}{4(z - 2)^3} + \frac{1}{8(z - 2)^2} - \frac{1}{16(z - 2)} + \dots$$
Thus, Res(f, 2) = −1/16 and
$$\int_C f(z)\, dz = 2\pi i\,\text{Res}(f, 2) = -\frac{\pi i}{8}.$$

 Example 9.5.5

Compute
$$\int_C \frac{1}{\sin(z)}\, dz$$
over the contour C shown.

Figure 9.5.4: Poles within a square contour. (CC BY-NC; Ümit Kaya)

Solution
Let
$$f(z) = 1/\sin(z).$$
There are 3 poles of f inside C, at 0, π and 2π. We can find the residues by taking the limit of (z − z_0)f(z). Each of the limits is computed using L’Hospital’s rule. (This is valid, since the rule is just a statement about power series. We could also have used Property 5 from the section on residues of simple poles above.)
At z = 0:
$$\lim_{z\to 0} \frac{z}{\sin(z)} = \lim_{z\to 0} \frac{1}{\cos(z)} = 1.$$
Since the limit exists, z = 0 is a simple pole and
$$\text{Res}(f, 0) = 1.$$
At z = π:
$$\lim_{z\to \pi} \frac{z - \pi}{\sin(z)} = \lim_{z\to \pi} \frac{1}{\cos(z)} = -1.$$
Since the limit exists, z = π is a simple pole and
$$\text{Res}(f, \pi) = -1.$$
At z = 2π: The same argument shows
$$\text{Res}(f, 2\pi) = 1.$$
Now, by the residue theorem
$$\int_C f(z)\, dz = 2\pi i[\text{Res}(f, 0) + \text{Res}(f, \pi) + \text{Res}(f, 2\pi)] = 2\pi i.$$

This page titled 9.5: Cauchy Residue Theorem is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy
Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

9.6: Residue at ∞
The residue at ∞ is a clever device that can sometimes allow us to replace the computation of many residues with the computation
of a single residue.
Suppose that f is analytic in C except for a finite number of singularities. Let C be a positively oriented curve that is large enough
to contain all the singularities.

Figure 9.6.1 : All the poles of f are inside C . (CC BY-NC; Ümit Kaya)

 Definition: Residue at ∞

We define the residue of f at infinity by
$$\text{Res}(f, \infty) = -\frac{1}{2\pi i}\int_C f(z)\, dz. \qquad (9.6.1)$$

We should first explain the idea here. The interior of a simple closed curve is everything to the left as you traverse the curve. The curve
C is oriented counterclockwise, so its interior contains all the poles of f . The residue theorem says the integral over C is

determined by the residues of these poles.


On the other hand, the interior of the curve −C is everything outside of C . There are no poles of f in that region. If we want the
residue theorem to hold (which we do –it’s that important) then the only option is to have a residue at ∞ and define it as we did.
The definition of the residue at infinity assumes all the poles of f are inside C . Therefore the residue theorem implies

Res(f , ∞) = − ∑ the residues of f . (9.6.2)

To make this useful we need a way to compute the residue directly. This is given by the following theorem.

 Theorem 9.6.1

If f is analytic in C except for a finite number of singularities then
$$\text{Res}(f, \infty) = -\text{Res}\left(\frac{1}{w^2} f(1/w), 0\right). \qquad (9.6.3)$$

Proof
The proof is just a change of variables: w = 1/z.
Figure 9.6.1: Changing variables. (CC BY-NC; Ümit Kaya)
Change of variable: w = 1/z
First note that z = 1/w and
$$dz = -(1/w^2)\, dw. \qquad (9.6.4)$$
Next, note that the map w = 1/z carries the positively oriented z-circle of radius R to the negatively oriented w-circle of radius 1/R. (To see the orientation, follow the circled points 1, 2, 3, 4 on C in the z-plane as they are mapped to points on \(\tilde{C}\) in the w-plane.) Thus,
$$\text{Res}(f, \infty) = -\frac{1}{2\pi i}\int_C f(z)\, dz = \frac{1}{2\pi i}\int_{\tilde{C}} \frac{1}{w^2} f(1/w)\, dw \qquad (9.6.5)$$
Finally, note that z = 1/w maps all the poles inside the circle C to points outside the circle \(\tilde{C}\). So the only possible pole of (1/w^2)f(1/w) that is inside \(\tilde{C}\) is at w = 0. Now, since \(\tilde{C}\) is oriented clockwise, the residue theorem says
$$\frac{1}{2\pi i}\int_{\tilde{C}} \frac{1}{w^2} f(1/w)\, dw = -\text{Res}\left(\frac{1}{w^2} f(1/w), 0\right) \qquad (9.6.6)$$
Comparing this with the equation just above finishes the proof.

 Example 9.6.1

Let
$$f(z) = \frac{5z - 2}{z(z - 1)}.$$
Earlier we computed
$$\int_{|z|=2} f(z)\, dz = 10\pi i$$
by computing residues at z = 0 and z = 1. Recompute this integral by computing a single residue at infinity.

Solution
$$\frac{1}{w^2} f(1/w) = \frac{1}{w^2}\cdot\frac{5/w - 2}{(1/w)(1/w - 1)} = \frac{5 - 2w}{w(1 - w)}.$$
We easily compute that
$$\text{Res}(f, \infty) = -\text{Res}\left(\frac{1}{w^2} f(1/w), 0\right) = -5.$$
Since |z| = 2 contains all the singularities of f we have
$$\int_{|z|=2} f(z)\, dz = -2\pi i\,\text{Res}(f, \infty) = 10\pi i.$$
This is the same answer we got before!
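An optional SymPy check of this bookkeeping (assuming the library is available): compute −Res((1/w²)f(1/w), 0) and compare with the negative of the sum of the finite residues.

```python
# Optional check: Res(f, infinity) = -Res(f(1/w)/w^2, 0) = -(sum of finite residues).
from sympy import symbols, residue

z, w = symbols('z w')
f = (5 * z - 2) / (z * (z - 1))

res_inf = -residue(f.subs(z, 1 / w) / w**2, w, 0)
finite_sum = residue(f, z, 0) + residue(f, z, 1)

print(res_inf, -finite_sum)  # both -5
```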

This page titled 9.6: Residue at ∞ is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff (MIT
OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon
request.

CHAPTER OVERVIEW
10: Definite Integrals Using the Residue Theorem
In this topic we’ll use the residue theorem to compute some real definite integrals.
$$\int_a^b f(x)\, dx \qquad (10.1)$$
The general approach is always the same:
1. Find a complex analytic function g(z) which either equals f on the real axis or which is closely connected to f, e.g. f(x) = cos(x), g(z) = e^{iz}.
2. Pick a closed contour C that includes the part of the real axis in the integral.
3. The contour will be made up of pieces. It should be such that we can compute ∫ g(z) dz over each of the pieces except the part on the real axis.
4. Use the residue theorem to compute ∫_C g(z) dz.
5. Combine the previous steps to deduce the value of the integral we want.
10.1: Integrals of functions that decay
10.2: Integrals
10.3: Trigonometric Integrals
10.4: Integrands with branch cuts
10.5: Cauchy principal value
10.6: Integrals over portions of circles
10.7: Fourier transform
10.8: Solving DEs using the Fourier transform

This page titled 10: Definite Integrals Using the Residue Theorem is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or
curated by Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a
detailed edit history is available upon request.

10.1: Integrals of functions that decay
The theorems in this section will guide us in choosing the closed contour C described in the introduction.
The first theorem is for functions that decay faster than 1/z.

 Theorem 10.1.1

(a) Suppose f(z) is defined in the upper half-plane. If there is an a > 1 and M > 0 such that
$$|f(z)| < \frac{M}{|z|^a} \qquad (10.1.1)$$
for |z| large then
$$\lim_{R\to\infty} \int_{C_R} f(z)\, dz = 0, \qquad (10.1.2)$$
where C_R is the semicircle shown below on the left.

Semicircles: left: Re^{iθ}, 0 < θ < π; right: Re^{iθ}, π < θ < 2π. (CC BY-NC; Ümit Kaya)

(b) If f(z) is defined in the lower half-plane and
$$|f(z)| < \frac{M}{|z|^a}, \qquad (10.1.3)$$
where a > 1, then
$$\lim_{R\to\infty} \int_{C_R} f(z)\, dz = 0, \qquad (10.1.4)$$
where C_R is the semicircle shown above on the right.

Proof
We prove (a); (b) is essentially the same. We use the triangle inequality for integrals and the estimate given in the hypothesis. For R large
$$\left|\int_{C_R} f(z)\, dz\right| \le \int_{C_R} |f(z)|\, |dz| \le \int_{C_R} \frac{M}{|z|^a}\, |dz| = \int_0^{\pi} \frac{M}{R^a} R\, d\theta = \frac{M\pi}{R^{a-1}}. \qquad (10.1.5)$$
Since a > 1 this clearly goes to 0 as R → ∞. QED

The next theorem is for functions that decay like 1/z. It requires some more care to state and prove.

 Theorem 10.1.2

(a) Suppose f(z) is defined in the upper half-plane. If there is an M > 0 such that
$$|f(z)| < \frac{M}{|z|} \qquad (10.1.6)$$
for |z| large then for a > 0
$$\lim_{x_1\to\infty,\ x_2\to\infty} \int_{C_1 + C_2 + C_3} f(z)e^{iaz}\, dz = 0, \qquad (10.1.7)$$
where C_1 + C_2 + C_3 is the rectangular path shown below on the left.

Rectangular paths of height and width x_1 + x_2. (CC BY-NC; Ümit Kaya)

(b) Similarly, if a < 0 then
$$\lim_{x_1\to\infty,\ x_2\to\infty} \int_{C_1 + C_2 + C_3} f(z)e^{iaz}\, dz = 0, \qquad (10.1.8)$$
where C_1 + C_2 + C_3 is the rectangular path shown above on the right.

Note: In contrast to Theorem 10.2.1 this theorem needs to include the factor e^{iaz}.

Proof
(a) We start by parametrizing C_1, C_2, C_3.
C_1: γ_1(t) = x_1 + it, t from 0 to x_1 + x_2
C_2: γ_2(t) = t + i(x_1 + x_2), t from x_1 to −x_2
C_3: γ_3(t) = −x_2 + it, t from x_1 + x_2 to 0.

Next we look at each integral in turn. We assume x_1 and x_2 are large enough that
$$|f(z)| < \frac{M}{|z|} \qquad (10.1.9)$$
on each of the curves C_j.

On C_1:
$$\left|\int_{C_1} f(z)e^{iaz}\, dz\right| \le \int_{C_1} |f(z)e^{iaz}|\, |dz| \le \int_{C_1} \frac{M}{|z|}|e^{iaz}|\, |dz| = \int_0^{x_1+x_2} \frac{M}{\sqrt{x_1^2 + t^2}}\, |e^{iax_1 - at}|\, dt \le \frac{M}{x_1}\int_0^{x_1+x_2} e^{-at}\, dt = \frac{M}{x_1}\,\frac{1 - e^{-a(x_1+x_2)}}{a}. \qquad (10.1.10)$$
Since a > 0, it is clear that this last expression goes to 0 as x_1 and x_2 go to ∞.

On C_2:
$$\left|\int_{C_2} f(z)e^{iaz}\, dz\right| \le \int_{C_2} |f(z)e^{iaz}|\, |dz| \le \int_{C_2} \frac{M}{|z|}|e^{iaz}|\, |dz| = \int_{-x_2}^{x_1} \frac{M}{\sqrt{t^2 + (x_1+x_2)^2}}\, |e^{iat - a(x_1+x_2)}|\, dt \le \frac{Me^{-a(x_1+x_2)}}{x_1 + x_2}\int_0^{x_1+x_2} dt \le Me^{-a(x_1+x_2)} \qquad (10.1.11)$$
Again, clearly this last expression goes to 0 as x_1 and x_2 go to ∞.

The argument for C_3 is essentially the same as for C_1, so we leave it to the reader.

The proof for part (b) is the same. You need to keep track of the sign in the exponentials and make sure it is negative.

 Example 10.1.1

See Example 10.8.1 below for an example using Theorem 10.2.2.

This page titled 10.1: Integrals of functions that decay is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by
Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit
history is available upon request.

10.2: Integrals
Integrals $\int_{-\infty}^{\infty}$ and $\int_0^{\infty}$

 Example 10.2.1

Compute
$$I = \int_{-\infty}^{\infty} \frac{1}{(1 + x^2)^2}\, dx. \qquad (10.2.1)$$

Solution
Let
$$f(z) = 1/(1 + z^2)^2. \qquad (10.2.2)$$
It is clear that for z large
$$f(z) \approx 1/z^4. \qquad (10.2.3)$$
In particular, the hypothesis of Theorem 10.2.1 is satisfied. Using the contour shown below we have, by the residue theorem,
$$\int_{C_1 + C_R} f(z)\, dz = 2\pi i \sum \text{residues of } f \text{ inside the contour}. \qquad (10.2.4)$$

Figure 10.2.1: The contour C_1 + C_R. (CC BY-NC; Ümit Kaya)
We examine each of the pieces in the above equation.

On C_R: By Theorem 10.2.1(a),
$$\lim_{R\to\infty} \int_{C_R} f(z)\, dz = 0. \qquad (10.2.5)$$
On C_1: Directly, we see that
$$\lim_{R\to\infty} \int_{C_1} f(z)\, dz = \lim_{R\to\infty} \int_{-R}^{R} f(x)\, dx = \int_{-\infty}^{\infty} f(x)\, dx = I. \qquad (10.2.6)$$
So letting R → ∞, Equation 10.3.4 becomes
$$I = \int_{-\infty}^{\infty} f(x)\, dx = 2\pi i \sum \text{residues of } f \text{ inside the contour}. \qquad (10.2.7)$$
Finally, we compute the needed residues: f(z) has poles of order 2 at ±i. Only z = i is inside the contour, so we compute the residue there. Let
$$g(z) = (z - i)^2 f(z) = \frac{1}{(z + i)^2}. \qquad (10.2.8)$$
Then
$$\text{Res}(f, i) = g'(i) = -\frac{2}{(2i)^3} = \frac{1}{4i} \qquad (10.2.9)$$
So,
$$I = 2\pi i\,\text{Res}(f, i) = \frac{\pi}{2}. \qquad (10.2.10)$$
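A direct numerical quadrature agrees, which is a useful habit when checking contour computations. This is optional and assumes SciPy is available.

```python
# Optional check: integral of 1/(1 + x^2)^2 over the real line equals pi/2.
import numpy as np
from scipy.integrate import quad

value, _ = quad(lambda x: 1.0 / (1.0 + x**2)**2, -np.inf, np.inf)
print(value, np.pi / 2)  # both approximately 1.570796
```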

 Example 10.2.2

Compute
$$I = \int_{-\infty}^{\infty} \frac{1}{x^4 + 1}\, dx. \qquad (10.2.11)$$

Solution
Let f(z) = 1/(1 + z^4). We use the same contour as in the previous example (Figure 10.2.2).

Figure 10.2.2: (CC BY-NC; Ümit Kaya)

As in the previous example,
$$\lim_{R\to\infty} \int_{C_R} f(z)\, dz = 0 \qquad (10.2.12)$$
and
$$\lim_{R\to\infty} \int_{C_1} f(z)\, dz = \int_{-\infty}^{\infty} f(x)\, dx = I. \qquad (10.2.13)$$
So, by the residue theorem
$$I = \lim_{R\to\infty} \int_{C_1 + C_R} f(z)\, dz = 2\pi i \sum \text{residues of } f \text{ inside the contour}. \qquad (10.2.14)$$
The poles of f are all simple and at
$$e^{i\pi/4},\ e^{i3\pi/4},\ e^{i5\pi/4},\ e^{i7\pi/4}. \qquad (10.2.15)$$
Only e^{iπ/4} and e^{i3π/4} are inside the contour. We compute their residues as limits using L’Hospital’s rule. For z_1 = e^{iπ/4}:
$$\text{Res}(f, z_1) = \lim_{z\to z_1}(z - z_1)f(z) = \lim_{z\to z_1}\frac{z - z_1}{1 + z^4} = \lim_{z\to z_1}\frac{1}{4z^3} = \frac{1}{4e^{i3\pi/4}} = \frac{e^{-i3\pi/4}}{4} \qquad (10.2.16)$$
and for z_2 = e^{i3π/4}:
$$\text{Res}(f, z_2) = \lim_{z\to z_2}(z - z_2)f(z) = \lim_{z\to z_2}\frac{z - z_2}{1 + z^4} = \lim_{z\to z_2}\frac{1}{4z^3} = \frac{1}{4e^{i9\pi/4}} = \frac{e^{-i\pi/4}}{4} \qquad (10.2.17)$$
So,
$$I = 2\pi i(\text{Res}(f, z_1) + \text{Res}(f, z_2)) = 2\pi i\left(\frac{-1 - i}{4\sqrt{2}} + \frac{1 - i}{4\sqrt{2}}\right) = 2\pi i\left(-\frac{2i}{4\sqrt{2}}\right) = \pi\frac{\sqrt{2}}{2} \qquad (10.2.18)$$

 Example 10.2.3

Suppose b > 0. Show
$$\int_0^{\infty} \frac{\cos(x)}{x^2 + b^2}\, dx = \frac{\pi e^{-b}}{2b}. \qquad (10.2.19)$$

Solution
The first thing to note is that the integrand is even, so
$$I = \frac{1}{2}\int_{-\infty}^{\infty} \frac{\cos(x)}{x^2 + b^2}\, dx. \qquad (10.2.20)$$
Also note that the square in the denominator tells us the integral is absolutely convergent.
We have to be careful because cos(z) goes to infinity in either half-plane, so the hypotheses of Theorem 10.2.1 are not satisfied. The trick is to replace cos(x) by e^{ix}, so
$$\tilde{I} = \int_{-\infty}^{\infty} \frac{e^{ix}}{x^2 + b^2}\, dx, \quad \text{with } I = \frac{1}{2}\text{Re}(\tilde{I}). \qquad (10.2.21)$$
Now let
$$f(z) = \frac{e^{iz}}{z^2 + b^2}. \qquad (10.2.22)$$
For z = x + iy with y > 0 we have
$$|f(z)| = \frac{|e^{i(x+iy)}|}{|z^2 + b^2|} = \frac{e^{-y}}{|z^2 + b^2|}. \qquad (10.2.23)$$
Since e^{−y} < 1, f(z) satisfies the hypotheses of Theorem 10.2.1 in the upper half-plane. Now we can use the same contour as in the previous examples (Figure 10.2.3).

Figure 10.2.3: (CC BY-NC; Ümit Kaya)

We have
$$\lim_{R\to\infty} \int_{C_R} f(z)\, dz = 0 \qquad (10.2.24)$$
and
$$\lim_{R\to\infty} \int_{C_1} f(z)\, dz = \int_{-\infty}^{\infty} f(x)\, dx = \tilde{I}. \qquad (10.2.25)$$
So, by the residue theorem
$$\tilde{I} = \lim_{R\to\infty} \int_{C_1 + C_R} f(z)\, dz = 2\pi i \sum \text{residues of } f \text{ inside the contour}. \qquad (10.2.26)$$
The poles of f are at ±bi and both are simple. Only bi is inside the contour. We compute the residue as a limit using L’Hospital’s rule
$$\text{Res}(f, bi) = \lim_{z\to bi}(z - bi)\frac{e^{iz}}{z^2 + b^2} = \frac{e^{-b}}{2bi}. \qquad (10.2.27)$$
So,
$$\tilde{I} = 2\pi i\,\text{Res}(f, bi) = \frac{\pi e^{-b}}{b}. \qquad (10.2.28)$$
Finally,
$$I = \frac{1}{2}\text{Re}(\tilde{I}) = \frac{\pi e^{-b}}{2b}, \qquad (10.2.29)$$
as claimed.

 Warning

Be careful when replacing cos(z) by e^{iz} that it is appropriate. A key point in the above example was that $I = \frac{1}{2}\text{Re}(\tilde{I})$. This is needed to make the replacement useful.

This page titled 10.2: Integrals is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff (MIT
OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon
request.

10.3: Trigonometric Integrals
The trick here is to put together some elementary properties of z = e^{iθ} on the unit circle.
1. e^{−iθ} = 1/z.
2. cos(θ) = (e^{iθ} + e^{−iθ})/2 = (z + 1/z)/2.
3. sin(θ) = (e^{iθ} − e^{−iθ})/(2i) = (z − 1/z)/(2i).

We start with an example. After that we’ll state a more general theorem.

 Example 10.3.1

Compute
$$I = \int_0^{2\pi} \frac{d\theta}{1 + a^2 - 2a\cos(\theta)}. \qquad (10.3.1)$$
Assume that |a| ≠ 1.

Solution
Notice that [0, 2π] is the interval used to parametrize the unit circle as z = e^{iθ}. We need to make two substitutions:
$$\cos(\theta) = \frac{z + 1/z}{2}, \qquad dz = ie^{i\theta}\, d\theta \ \Leftrightarrow\ d\theta = \frac{dz}{iz} \qquad (10.3.2)$$
Making these substitutions we get
$$I = \int_0^{2\pi} \frac{d\theta}{1 + a^2 - 2a\cos(\theta)} = \int_{|z|=1} \frac{1}{1 + a^2 - 2a(z + 1/z)/2}\cdot\frac{dz}{iz} = \int_{|z|=1} \frac{1}{i((1 + a^2)z - a(z^2 + 1))}\, dz. \qquad (10.3.3)$$
So, let
$$f(z) = \frac{1}{i((1 + a^2)z - a(z^2 + 1))}. \qquad (10.3.4)$$
The residue theorem implies
$$I = 2\pi i \sum \text{residues of } f \text{ inside the unit circle}. \qquad (10.3.5)$$
We can factor the denominator:
$$f(z) = \frac{-1}{ia(z - a)(z - 1/a)}. \qquad (10.3.6)$$
The poles are at a, 1/a. One is inside the unit circle and one is outside.
If |a| > 1 then 1/a is inside the unit circle and Res(f, 1/a) = \(\frac{1}{i(a^2 - 1)}\).
If |a| < 1 then a is inside the unit circle and Res(f, a) = \(\frac{1}{i(1 - a^2)}\).
We have
$$I = \begin{cases} \dfrac{2\pi}{a^2 - 1} & \text{if } |a| > 1 \\[2mm] \dfrac{2\pi}{1 - a^2} & \text{if } |a| < 1 \end{cases} \qquad (10.3.7)$$
The example illustrates a general technique which we state now.

 Theorem 10.3.1

Suppose R(x, y) is a rational function with no poles on the circle
$$x^2 + y^2 = 1 \qquad (10.3.8)$$
then for
$$f(z) = \frac{1}{iz}\, R\left(\frac{z + 1/z}{2}, \frac{z - 1/z}{2i}\right) \qquad (10.3.9)$$
we have
$$\int_0^{2\pi} R(\cos(\theta), \sin(\theta))\, d\theta = 2\pi i \sum \text{residues of } f \text{ inside } |z| = 1. \qquad (10.3.10)$$

Proof
We make the same substitutions as in Example 10.4.1. So,
$$\int_0^{2\pi} R(\cos(\theta), \sin(\theta))\, d\theta = \int_{|z|=1} R\left(\frac{z + 1/z}{2}, \frac{z - 1/z}{2i}\right) \frac{dz}{iz} \qquad (10.3.11)$$
The assumption about poles means that f has no poles on the contour |z| = 1. The residue theorem now implies the theorem.

This page titled 10.3: Trigonometric Integrals is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy
Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

10.4: Integrands with branch cuts
 Example 10.4.1

Compute
$$I = \int_0^{\infty} \frac{x^{1/3}}{1 + x^2}\, dx. \qquad (10.4.1)$$

Solution
Let
$$f(x) = \frac{x^{1/3}}{1 + x^2}. \qquad (10.4.2)$$
Since this is asymptotically comparable to x^{−5/3}, the integral is absolutely convergent. As a complex function
$$f(z) = \frac{z^{1/3}}{1 + z^2} \qquad (10.4.3)$$
needs a branch cut to be analytic (or even continuous), so we will need to take that into account with our choice of contour.
First, choose the following branch cut along the positive real axis. That is, for z = re^{iθ} not on the axis, we have 0 < θ < 2π.
Next, we use the contour C_1 + C_R − C_2 − C_r shown in Figure 10.4.1.

Figure 10.4.1: Contour around branch cut: inner circle of radius r, outer of radius R. (CC BY-NC; Ümit Kaya)
We put convenient signs on the pieces so that the integrals are parametrized in a natural way. You should read this contour as having r so small that C_1 and C_2 are essentially on the x-axis. Note well that, since C_1 and C_2 are on opposite sides of the branch cut, the integral
$$\int_{C_1 - C_2} f(z)\, dz \ne 0. \qquad (10.4.4)$$
First we analyze the integral over each piece of the curve.

On C_R: Theorem 10.2.1 says that
$$\lim_{R\to\infty} \int_{C_R} f(z)\, dz = 0. \qquad (10.4.5)$$
On C_r: For concreteness, assume r < 1/2. We have |z| = r, so
$$|f(z)| = \frac{|z^{1/3}|}{|1 + z^2|} \le \frac{r^{1/3}}{1 - r^2} \le \frac{(1/2)^{1/3}}{3/4}. \qquad (10.4.6)$$
Call the last number in the above equation M. We have shown that, for small r, |f(z)| < M. So,
$$\left|\int_{C_r} f(z)\, dz\right| \le \int_0^{2\pi} |f(re^{i\theta})||ire^{i\theta}|\, d\theta \le \int_0^{2\pi} Mr\, d\theta = 2\pi Mr. \qquad (10.4.7)$$
Clearly this goes to zero as r → 0.

On C_1:
$$\lim_{r\to 0,\ R\to\infty} \int_{C_1} f(z)\, dz = \int_0^{\infty} f(x)\, dx = I. \qquad (10.4.8)$$
On C_2: We have (essentially) θ = 2π, so z^{1/3} = e^{i2\pi/3}|z|^{1/3}. Thus,
$$\lim_{r\to 0,\ R\to\infty} \int_{C_2} f(z)\, dz = e^{i2\pi/3}\int_0^{\infty} f(x)\, dx = e^{i2\pi/3}I. \qquad (10.4.9)$$
The poles of f(z) are at ±i. Since f is meromorphic inside our contour the residue theorem says
$$\int_{C_1 + C_R - C_2 - C_r} f(z)\, dz = 2\pi i(\text{Res}(f, i) + \text{Res}(f, -i)). \qquad (10.4.10)$$
Letting r → 0 and R → ∞ the analysis above shows
$$(1 - e^{i2\pi/3})I = 2\pi i(\text{Res}(f, i) + \text{Res}(f, -i)) \qquad (10.4.11)$$
All that’s left is to compute the residues using the chosen branch of z^{1/3}:
$$\text{Res}(f, -i) = \frac{(-i)^{1/3}}{-2i} = \frac{(e^{i3\pi/2})^{1/3}}{2e^{i3\pi/2}} = \frac{e^{-i\pi}}{2} = -\frac{1}{2} \qquad (10.4.12)$$
$$\text{Res}(f, i) = \frac{i^{1/3}}{2i} = \frac{e^{i\pi/6}}{2e^{i\pi/2}} = \frac{e^{-i\pi/3}}{2} \qquad (10.4.13)$$
A little more algebra gives
$$(1 - e^{i2\pi/3})I = 2\pi i\cdot\frac{-1 + e^{-i\pi/3}}{2} = \pi i(-1 + 1/2 - i\sqrt{3}/2) = -\pi i\, e^{i\pi/3}. \qquad (10.4.14)$$
Continuing
$$I = \frac{-\pi i\, e^{i\pi/3}}{1 - e^{i2\pi/3}} = \frac{\pi i}{e^{i\pi/3} - e^{-i\pi/3}} = \frac{\pi/2}{(e^{i\pi/3} - e^{-i\pi/3})/2i} = \frac{\pi/2}{\sin(\pi/3)} = \frac{\pi}{\sqrt{3}}. \qquad (10.4.15)$$
Whew! (Note: a sanity check is that the result is real, which it had to be.)
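Branch-cut arguments are error prone, so a numerical sanity check is worthwhile (optional; assumes SciPy is available). The value π/√3 is approximately 1.8138.

```python
# Optional check: integral of x^(1/3)/(1 + x^2) from 0 to infinity equals pi/sqrt(3).
import numpy as np
from scipy.integrate import quad

value, _ = quad(lambda x: x**(1.0 / 3.0) / (1.0 + x**2), 0.0, np.inf)
print(value, np.pi / np.sqrt(3))  # both approximately 1.8138
```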

 Example 10.4.2

Compute
$$I = \int_1^{\infty} \frac{dx}{x\sqrt{x^2 - 1}}. \qquad (10.4.16)$$

Solution
Let
$$f(z) = \frac{1}{z\sqrt{z^2 - 1}}. \qquad (10.4.17)$$
The first thing we’ll show is that the integral
$$\int_1^{\infty} f(x)\, dx \qquad (10.4.18)$$
is absolutely convergent. To do this we split it into two integrals
$$\int_1^{\infty} \frac{dx}{x\sqrt{x^2 - 1}} = \int_1^2 \frac{dx}{x\sqrt{x^2 - 1}} + \int_2^{\infty} \frac{dx}{x\sqrt{x^2 - 1}}. \qquad (10.4.19)$$
The first integral on the right can be rewritten as
$$\int_1^2 \frac{1}{x\sqrt{x + 1}}\cdot\frac{1}{\sqrt{x - 1}}\, dx \le \int_1^2 \frac{1}{\sqrt{2}}\cdot\frac{1}{\sqrt{x - 1}}\, dx = \frac{2}{\sqrt{2}}\sqrt{x - 1}\Big|_1^2. \qquad (10.4.20)$$
This shows the first integral is absolutely convergent.

The function f(x) is asymptotically comparable to 1/x^2, so the integral from 2 to ∞ is also absolutely convergent.

We can conclude that the original integral is absolutely convergent.

Next, we use the following contour. Here we assume the big circles have radius R and the small ones have radius r (Figure 10.4.2).

Figure 10.4.2: The contour used for the branch-cut integral. (CC BY-NC; Ümit Kaya)
We use the branch cut for square root that removes the positive real axis. In this branch
$$0 < \arg(z) < 2\pi \quad \text{and} \quad 0 < \arg(\sqrt{w}) < \pi. \qquad (10.4.21)$$
For f(z), this necessitates the branch cut that removes the rays [1, ∞) and (−∞, −1] from the complex plane.
The pole at z = 0 is the only singularity of f(z) inside the contour. It is easy to compute that
$$\text{Res}(f, 0) = \frac{1}{\sqrt{-1}} = \frac{1}{i} = -i. \qquad (10.4.22)$$
So, the residue theorem gives us
$$\int_{C_1 + C_2 - C_3 - C_4 + C_5 - C_6 - C_7 + C_8} f(z)\, dz = 2\pi i\,\text{Res}(f, 0) = 2\pi. \qquad (10.4.23)$$
In a moment we will show the following limits
$$\lim_{R\to\infty} \int_{C_1} f(z)\, dz = \lim_{R\to\infty} \int_{C_5} f(z)\, dz = 0 \qquad (10.4.24)$$
$$\lim_{r\to 0} \int_{C_3} f(z)\, dz = \lim_{r\to 0} \int_{C_7} f(z)\, dz = 0. \qquad (10.4.25)$$
We will also show
$$\lim_{R\to\infty,\ r\to 0} \int_{C_2} f(z)\, dz = \lim_{R\to\infty,\ r\to 0} \int_{-C_4} f(z)\, dz = \lim_{R\to\infty,\ r\to 0} \int_{-C_6} f(z)\, dz = \lim_{R\to\infty,\ r\to 0} \int_{C_8} f(z)\, dz = I. \qquad (10.4.26)$$
Using these limits, Equation 10.5.23 implies 4I = 2π, i.e.
$$I = \pi/2. \qquad (10.4.27)$$
All that’s left is to prove the limits asserted above.

The limits for C_1 and C_5 follow from Theorem 10.2.1 because
$$|f(z)| \approx 1/|z|^{3/2} \qquad (10.4.28)$$
for large z.
We get the limit for C_3 as follows. Suppose r is small, say much less than 1. If
$$z = -1 + re^{i\theta} \qquad (10.4.29)$$
is on C_3 then,
$$|f(z)| = \frac{1}{|z\sqrt{z - 1}\sqrt{z + 1}|} = \frac{1}{|-1 + re^{i\theta}|\sqrt{|-2 + re^{i\theta}|}\,\sqrt{r}} \le \frac{M}{\sqrt{r}}, \qquad (10.4.30)$$
where M is chosen to be bigger than
$$\frac{1}{|-1 + re^{i\theta}|\sqrt{|-2 + re^{i\theta}|}} \qquad (10.4.31)$$
for all small r.

Thus,
$$\left|\int_{C_3} f(z)\, dz\right| \le \int_{C_3} \frac{M}{\sqrt{r}}\, |dz| \le \frac{M}{\sqrt{r}}\cdot 2\pi r = 2\pi M\sqrt{r}. \qquad (10.4.32)$$
This last expression clearly goes to 0 as r → 0.

The limit for the integral over C_7 is similar.

We can parameterize the straight line C_8 by
$$z = x + i\epsilon, \qquad (10.4.33)$$
where ϵ is a small positive number and x goes from (approximately) 1 to ∞. Thus, on C_8, we have
$$\arg(z^2 - 1) \approx 0 \quad \text{and} \quad f(z) \approx f(x). \qquad (10.4.34)$$
All these approximations become exact as r → 0. Thus,
$$\lim_{R\to\infty,\ r\to 0} \int_{C_8} f(z)\, dz = \int_1^{\infty} f(x)\, dx = I. \qquad (10.4.35)$$
We can parameterize −C_6 by
$$z = x - i\epsilon \qquad (10.4.36)$$
where x goes from ∞ to 1. Thus, on C_6, we have
$$\arg(z^2 - 1) \approx 2\pi, \qquad (10.4.37)$$
so
$$\sqrt{z^2 - 1} \approx -\sqrt{x^2 - 1}. \qquad (10.4.38)$$
This implies
$$f(z) \approx \frac{1}{(-x)\left(-\sqrt{x^2 - 1}\right)} = f(x). \qquad (10.4.39)$$
Thus,
$$\lim_{R\to\infty,\ r\to 0} \int_{-C_6} f(z)\, dz = \int_{\infty}^1 f(x)\,(-dx) = \int_1^{\infty} f(x)\, dx = I. \qquad (10.4.40)$$
The last curve −C_4 is handled similarly.

This page titled 10.4: Integrands with branch cuts is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by
Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit
history is available upon request.

10.5: Cauchy principal value
First an example to motivate defining the principal value of an integral. We’ll actually compute the integral in the next section.

 Example 10.5.1

Let
$$I = \int_0^{\infty} \frac{\sin(x)}{x}\, dx. \qquad (10.5.1)$$
This integral is not absolutely convergent, but it is conditionally convergent. Formally, of course, we mean
$$I = \lim_{R\to\infty} \int_0^R \frac{\sin(x)}{x}\, dx. \qquad (10.5.2)$$
We can proceed as in Example 10.3.3. First note that sin(x)/x is even, so
$$I = \frac{1}{2}\int_{-\infty}^{\infty} \frac{\sin(x)}{x}\, dx. \qquad (10.5.3)$$
Next, to avoid the problem that sin(z) goes to infinity in both the upper and lower half-planes we replace the integrand by e^{ix}/x.
We’ve changed the problem to computing
$$\tilde{I} = \int_{-\infty}^{\infty} \frac{e^{ix}}{x}\, dx. \qquad (10.5.4)$$
The problems with this integral are caused by the pole at 0. The biggest problem is that the integral doesn’t converge! The other problem is that when we try to use our usual strategy of choosing a closed contour we can’t use one that includes z = 0 on the real axis. This is our motivation for defining principal value. We will come back to this example below.

 Definition

Suppose we have a function f(x) that is continuous on the real line except at the point x_1. Then we define the Cauchy principal value as
$$\text{p.v.}\int_{-\infty}^{\infty} f(x)\, dx = \lim_{R\to\infty,\ r_1\to 0}\left[\int_{-R}^{x_1 - r_1} f(x)\, dx + \int_{x_1 + r_1}^{R} f(x)\, dx\right], \qquad (10.5.5)$$
provided the limit converges. You should notice that the intervals around x_1 and around ∞ are symmetric. Of course, if the integral
$$\int_{-\infty}^{\infty} f(x)\, dx \qquad (10.5.6)$$
converges, then so does the principal value and they give the same value. We can make the definition more flexible by including the following cases.
1. If f(x) is continuous on the entire real line then we define the principal value as
$$\text{p.v.}\int_{-\infty}^{\infty} f(x)\, dx = \lim_{R\to\infty} \int_{-R}^{R} f(x)\, dx \qquad (10.5.7)$$
2. If we have multiple points of discontinuity, x_1 < x_2 < x_3 < \dots < x_n, then
$$\text{p.v.}\int_{-\infty}^{\infty} f(x)\, dx = \lim\left[\int_{-R}^{x_1 - r_1} f(x)\, dx + \int_{x_1 + r_1}^{x_2 - r_2} + \int_{x_2 + r_2}^{x_3 - r_3} + \dots + \int_{x_n + r_n}^{R} f(x)\, dx\right]. \qquad (10.5.8)$$
Here the limit is taken as R → ∞ and each of the r_k → 0 (Figure 10.5.1).

Figure 10.5.1: Intervals of integration for principal value are symmetric around x_k and ∞. (CC BY-NC; Ümit Kaya)

The next example shows that sometimes the principal value converges when the integral itself does not. The opposite is never true.
That is, we have the following theorem.

 Example 10.5.2

If f(x) has discontinuities at x_1 < x_2 < \dots < x_n and \(\int_{-\infty}^{\infty} f(x)\, dx\) converges then so does \(\text{p.v.}\int_{-\infty}^{\infty} f(x)\, dx\).

Solution
The proof amounts to understanding the definition of convergence of integrals as limits. The integral converges means that each of the limits
$$\lim_{R_1\to\infty,\ a_1\to 0} \int_{-R_1}^{x_1 - a_1} f(x)\, dx, \quad \lim_{b_1\to 0,\ a_2\to 0} \int_{x_1 + b_1}^{x_2 - a_2} f(x)\, dx, \quad \dots, \quad \lim_{R_2\to\infty,\ b_n\to 0} \int_{x_n + b_n}^{R_2} f(x)\, dx \qquad (10.5.9)$$
converges. There is no symmetry requirement, i.e. R_1 and R_2 are completely independent, as are a_1 and b_1 etc.
The principal value converges means
$$\lim\left[\int_{-R}^{x_1 - r_1} + \int_{x_1 + r_1}^{x_2 - r_2} + \int_{x_2 + r_2}^{x_3 - r_3} + \dots + \int_{x_n + r_n}^{R} f(x)\, dx\right] \qquad (10.5.10)$$
converges. Here the limit is taken over all the parameters R → ∞, r_k → 0. This limit has symmetry, e.g. we replaced both a_1 and b_1 in Equation 10.6.9 by r_1 etc. Certainly if the limits in Equation 10.6.9 converge then so do the limits in Equation 10.6.10. QED

 Example 10.5.3

Consider both
$$\int_{-\infty}^{\infty} \frac{1}{x}\, dx \quad \text{and} \quad \text{p.v.}\int_{-\infty}^{\infty} \frac{1}{x}\, dx. \qquad (10.5.11)$$
The first integral diverges since
$$\int_{-R_1}^{-r_1} \frac{1}{x}\, dx + \int_{r_2}^{R_2} \frac{1}{x}\, dx = \ln(r_1) - \ln(R_1) + \ln(R_2) - \ln(r_2). \qquad (10.5.12)$$
This clearly diverges as R_1, R_2 → ∞ and r_1, r_2 → 0.
On the other hand the symmetric integral
$$\int_{-R}^{-r} \frac{1}{x}\, dx + \int_r^{R} \frac{1}{x}\, dx = \ln(r) - \ln(R) + \ln(R) - \ln(r) = 0. \qquad (10.5.13)$$
This clearly converges to 0.
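The role of symmetry is easy to see numerically. The sketch below (optional; standard library only) compares a symmetric truncation of the integral of 1/x with an unsymmetric one: the first is exactly 0, the second depends on the cutoffs.

```python
# Symmetric vs. unsymmetric truncations of the integral of 1/x.
import math

def integral_of_one_over_x(lo, hi):
    # exact value of the integral of 1/x from lo to hi (lo and hi on the same side of 0)
    return math.log(abs(hi)) - math.log(abs(lo))

R, r = 1000.0, 1e-6
symmetric = integral_of_one_over_x(-R, -r) + integral_of_one_over_x(r, R)
unsymmetric = integral_of_one_over_x(-2 * R, -r) + integral_of_one_over_x(10 * r, R)

print(symmetric)    # 0.0
print(unsymmetric)  # -log(20), about -3.0; it depends on the choice of cutoffs
```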


We will see that the principal value occurs naturally when we integrate on semicircles around points. We prepare for this in the
next section.

This page titled 10.5: Cauchy principal value is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy
Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

10.6: Integrals over portions of circles
We will need the following theorem in order to combine principal value and the residue theorem.

 Theorem 10.6.1

Suppose f(z) has a simple pole at z_0. Let C_r be the semicircle γ(θ) = z_0 + re^{iθ}, with 0 ≤ θ ≤ π. Then
$$\lim_{r\to 0} \int_{C_r} f(z)\, dz = \pi i\,\text{Res}(f, z_0) \qquad (10.6.1)$$

Figure 10.6.1: Small semicircle of radius r around z_0. (CC BY-NC; Ümit Kaya)

Proof
Since we take the limit as r goes to 0, we can assume r is small enough that f(z) has a Laurent expansion on the punctured disk of radius r centered at z_0. That is, since the pole is simple,
$$f(z) = \frac{b_1}{z - z_0} + a_0 + a_1(z - z_0) + \dots \quad \text{for } 0 < |z - z_0| \le r. \qquad (10.6.2)$$
Thus,
$$\int_{C_r} f(z)\, dz = \int_0^{\pi} f(z_0 + re^{i\theta})\, rie^{i\theta}\, d\theta = \int_0^{\pi} (b_1 i + a_0 ire^{i\theta} + a_1 i r^2 e^{i2\theta} + \dots)\, d\theta \qquad (10.6.3)$$
The b_1 term gives πib_1. Clearly all the other terms go to 0 as r → 0. QED

If the pole is not simple the theorem doesn’t hold and, in fact, the limit does not exist.
The same proof gives a slightly more general theorem.

 Theorem 10.6.2

Suppose f(z) has a simple pole at z_0. Let C_r be the circular arc γ(θ) = z_0 + re^{iθ}, with θ_0 ≤ θ ≤ θ_0 + α. Then
$$\lim_{r\to 0} \int_{C_r} f(z)\, dz = \alpha i\,\text{Res}(f, z_0) \qquad (10.6.4)$$

Figure 10.6.1: Small circular arc of radius r around z_0. (CC BY-NC; Ümit Kaya)

 Example 10.6.1 Return to Example 10.5.1

A long time ago we left off Example 10.5.1 to define principal value. Let’s now use the principal value to compute
$$\tilde{I} = \text{p.v.}\int_{-\infty}^{\infty} \frac{e^{ix}}{x}\, dx. \qquad (10.6.5)$$

Solution
We use the indented contour shown below. The indentation is the little semicircle that goes around z = 0. There are no poles inside the contour so the residue theorem implies
$$\int_{C_1 - C_r + C_2 + C_R} \frac{e^{iz}}{z}\, dz = 0. \qquad (10.6.6)$$

Figure 10.6.3: (CC BY-NC; Ümit Kaya)

Next we break the contour into pieces.
$$\lim_{R\to\infty,\ r\to 0} \int_{C_1 + C_2} \frac{e^{iz}}{z}\, dz = \tilde{I}. \qquad (10.6.7)$$
Theorem 10.2.2(a) implies
$$\lim_{R\to\infty} \int_{C_R} \frac{e^{iz}}{z}\, dz = 0. \qquad (10.6.8)$$
Equation 10.7.1 in Theorem 10.7.1 tells us that
$$\lim_{r\to 0} \int_{C_r} \frac{e^{iz}}{z}\, dz = \pi i\,\text{Res}\left(\frac{e^{iz}}{z}, 0\right) = \pi i \qquad (10.6.9)$$
Combining all this together we have
$$\lim_{R\to\infty,\ r\to 0} \int_{C_1 - C_r + C_2 + C_R} \frac{e^{iz}}{z}\, dz = \tilde{I} - \pi i = 0, \qquad (10.6.10)$$
so \(\tilde{I} = \pi i\). Thus, looking back at Example 10.3.3, where \(I = \int_0^{\infty} \frac{\sin(x)}{x}\, dx\), we have
$$I = \frac{1}{2}\text{Im}(\tilde{I}) = \frac{\pi}{2}. \qquad (10.6.11)$$
There is a subtlety about convergence we alluded to above. That is, I is a genuine (conditionally) convergent integral, but \(\tilde{I}\) only exists as a principal value. However, since I is a convergent integral, we know that computing the principal value as we just did is sufficient to give the value of the convergent integral.
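The final value is easy to corroborate (optional; assuming SymPy is available, which knows this classical integral).

```python
# Optional check: integral of sin(x)/x from 0 to infinity equals pi/2.
from sympy import symbols, sin, integrate, oo

x = symbols('x')
print(integrate(sin(x) / x, (x, 0, oo)))  # pi/2
```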

This page titled 10.6: Integrals over portions of circles is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by
Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit
history is available upon request.

10.7: Fourier transform
 Definition: Fourier Transform

The Fourier transform of a function f(x) is defined by
$$\hat{f}(\omega) = \int_{-\infty}^{\infty} f(x)e^{-ix\omega}\, dx \qquad (10.7.1)$$
This is often read as 'f-hat'.

 Theorem 10.7.1: Fourier Inversion Formula

We can recover the original function f(x) with the Fourier inversion formula
$$f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \hat{f}(\omega)e^{ix\omega}\, d\omega. \qquad (10.7.2)$$
So, the Fourier transform converts a function of x to a function of ω and the Fourier inversion converts it back. Of course, everything above is dependent on the convergence of the various integrals.

Proof
We will not give the proof here. (We may get to it later in the course.)

 Example 10.7.1

Let
$$f(t) = \begin{cases} e^{-at} & \text{for } t > 0 \\ 0 & \text{for } t < 0 \end{cases} \qquad (10.7.3)$$
where a > 0. Compute \(\hat{f}(\omega)\) and verify the Fourier inversion formula in this case.

Solution
Computing \(\hat{f}\) is easy: for a > 0
$$\hat{f}(\omega) = \int_{-\infty}^{\infty} f(t)e^{-i\omega t}\, dt = \int_0^{\infty} e^{-at}e^{-i\omega t}\, dt = \frac{1}{a + i\omega} \quad (\text{recall } a > 0). \qquad (10.7.4)$$
We should first note that the inversion integral converges. To avoid distraction we show this at the end of this example.
Now, let
$$g(z) = \frac{1}{a + iz} \qquad (10.7.5)$$
Note that \(\hat{f}(\omega) = g(\omega)\) and \(|g(z)| < \frac{M}{|z|}\) for large |z|.
To verify the inversion formula we consider the cases t > 0 and t < 0 separately. For t > 0 we use the standard contour (Figure 10.7.1).

Figure 10.7.1: Standard contour. (CC BY-NC; Ümit Kaya)
Theorem 10.2.2(a) implies that
$$\lim_{x_1\to\infty,\ x_2\to\infty} \int_{C_1 + C_2 + C_3} g(z)e^{izt}\, dz = 0 \qquad (10.7.6)$$
Clearly
$$\lim_{x_1\to\infty,\ x_2\to\infty} \int_{C_4} g(z)e^{izt}\, dz = \int_{-\infty}^{\infty} \hat{f}(\omega)e^{i\omega t}\, d\omega \qquad (10.7.7)$$
The only pole of g(z)e^{izt} is at z = ia, which is in the upper half-plane. So, applying the residue theorem to the entire closed contour, we get for large x_1, x_2:
$$\int_{C_1 + C_2 + C_3 + C_4} g(z)e^{izt}\, dz = 2\pi i\,\text{Res}\left(\frac{e^{izt}}{a + iz}, ia\right) = 2\pi i\,\frac{e^{-at}}{i} = 2\pi e^{-at}. \qquad (10.7.8)$$
Combining the three equations 10.8.6, 10.8.7 and 10.8.8, we have
$$\int_{-\infty}^{\infty} \hat{f}(\omega)e^{i\omega t}\, d\omega = 2\pi e^{-at} \quad \text{for } t > 0 \qquad (10.7.9)$$
This shows the inversion formula holds for t > 0.

For t < 0 we use the contour in Figure 10.7.2.

Figure 10.7.2: (CC BY-NC; Ümit Kaya)

Theorem 10.2.2(b) implies that
$$\lim_{x_1\to\infty,\ x_2\to\infty} \int_{C_1 + C_2 + C_3} g(z)e^{izt}\, dz = 0 \qquad (10.7.10)$$
Clearly
$$\lim_{x_1\to\infty,\ x_2\to\infty} \frac{1}{2\pi}\int_{C_4} g(z)e^{izt}\, dz = \frac{1}{2\pi}\int_{-\infty}^{\infty} \hat{f}(\omega)e^{i\omega t}\, d\omega \qquad (10.7.11)$$
Since there are no poles of g(z)e^{izt} in the lower half-plane, applying the residue theorem to the entire closed contour, we get for large x_1, x_2:
$$\int_{C_1 + C_2 + C_3 + C_4} g(z)e^{izt}\, dz = 0. \qquad (10.7.12)$$
Thus,
$$\frac{1}{2\pi}\int_{-\infty}^{\infty} \hat{f}(\omega)e^{i\omega t}\, d\omega = 0 \quad \text{for } t < 0 \qquad (10.7.13)$$
This shows the inversion formula holds for t < 0.

Finally, we give the promised argument that the inversion integral converges. By definition
$$\int_{-\infty}^{\infty} \hat{f}(\omega)e^{i\omega t}\, d\omega = \int_{-\infty}^{\infty} \frac{e^{i\omega t}}{a + i\omega}\, d\omega = \int_{-\infty}^{\infty} \frac{a\cos(\omega t) + \omega\sin(\omega t) - i\omega\cos(\omega t) + ia\sin(\omega t)}{a^2 + \omega^2}\, d\omega \qquad (10.7.14)$$
The terms without a factor of ω in the numerator converge absolutely because of the ω² in the denominator. The terms with a factor of ω in the numerator do not converge absolutely. For example, since
$$\frac{\omega\sin(\omega t)}{a^2 + \omega^2} \qquad (10.7.15)$$
decays like 1/ω, its integral is not absolutely convergent. However, we claim that the integral does converge conditionally. That is, both limits exist and are finite.
$$\lim_{R_2\to\infty} \int_0^{R_2} \frac{\omega\sin(\omega t)}{a^2 + \omega^2}\, d\omega \quad \text{and} \quad \lim_{R_1\to\infty} \int_{-R_1}^0 \frac{\omega\sin(\omega t)}{a^2 + \omega^2}\, d\omega \qquad (10.7.16)$$
The key is that, as sin(ωt) alternates between positive and negative arches, the function \(\frac{\omega}{a^2 + \omega^2}\) is decaying monotonically. So, in the integral, the area under each arch adds or subtracts less than the arch before. This means that as R_1 (or R_2) grows the total area under the curve oscillates with a decaying amplitude around some limiting value.

Figure 10.7.3: Total area oscillates with a decaying amplitude. (CC BY-NC; Ümit Kaya)
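The transform pair can also be spot-checked numerically (optional; assumes NumPy and SciPy are available). The values a = 2 and ω = 1.5 below are arbitrary sample choices.

```python
# Optional check: f_hat(omega) = 1/(a + i*omega) for f(t) = e^(-a t) (t > 0), f(t) = 0 (t < 0).
import numpy as np
from scipy.integrate import quad

a, omega = 2.0, 1.5

real_part, _ = quad(lambda t: np.exp(-a * t) * np.cos(omega * t), 0.0, np.inf)
imag_part, _ = quad(lambda t: -np.exp(-a * t) * np.sin(omega * t), 0.0, np.inf)

print(real_part + 1j * imag_part)  # approximately 0.32 - 0.24j
print(1.0 / (a + 1j * omega))      # the same value
```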

This page titled 10.7: Fourier transform is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff
(MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

10.8: Solving DEs using the Fourier transform
Let
$$D = \frac{d}{dt}. \qquad (10.8.1)$$
Our goal is to see how to use the Fourier transform to solve differential equations like
$$P(D)y = f(t). \qquad (10.8.2)$$
Here P(D) is a polynomial operator, e.g.
$$D^2 + 8D + 7I. \qquad (10.8.3)$$
We first note the following formula:
$$\widehat{Df}(\omega) = i\omega\hat{f}. \qquad (10.8.4)$$
Proof. This is just integration by parts:
$$\begin{aligned} \widehat{Df}(\omega) &= \int_{-\infty}^{\infty} f'(t)e^{-i\omega t}\, dt \\ &= f(t)e^{-i\omega t}\Big|_{-\infty}^{\infty} - \int_{-\infty}^{\infty} f(t)(-i\omega e^{-i\omega t})\, dt \\ &= i\omega \int_{-\infty}^{\infty} f(t)e^{-i\omega t}\, dt \\ &= i\omega\hat{f}(\omega) \quad \text{QED} \end{aligned} \qquad (10.8.5)$$
In the third line we assumed that f decays so that f(∞) = f(−∞) = 0.

It is a simple extension of Equation 10.9.4 to see
$$\widehat{(P(D)f)}(\omega) = P(i\omega)\hat{f}. \qquad (10.8.6)$$
We can now use this to solve some differential equations.

 Example 10.8.1

Solve the equation
$$y''(t) + 8y'(t) + 7y(t) = f(t) = \begin{cases} e^{-at} & \text{if } t > 0 \\ 0 & \text{if } t < 0 \end{cases} \qquad (10.8.7)$$

Solution
In this case, we have
$$P(D) = D^2 + 8D + 7I, \qquad (10.8.8)$$
so
$$P(s) = s^2 + 8s + 7 = (s + 7)(s + 1). \qquad (10.8.9)$$
The DE
$$P(D)y = f(t) \qquad (10.8.10)$$
transforms to
$$P(i\omega)\hat{y} = \hat{f}. \qquad (10.8.11)$$
Using the Fourier transform of f found in Example 10.7.1 we have
$$\hat{y}(\omega) = \frac{\hat{f}}{P(i\omega)} = \frac{1}{(a + i\omega)(7 + i\omega)(1 + i\omega)}. \qquad (10.8.12)$$
Fourier inversion says that
$$y(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \hat{y}(\omega)e^{i\omega t}\, d\omega \qquad (10.8.13)$$
As always, we want to extend \(\hat{y}\) to be a function of a complex variable z. Let's call it g(z):
$$g(z) = \frac{1}{(a + iz)(7 + iz)(1 + iz)}. \qquad (10.8.14)$$
Now we can proceed exactly as in Example 10.7.1. We know |g(z)| < M/|z|^3 for some constant M. Thus, the conditions of Theorem 10.2.2 are easily met. So, just as in Example 10.7.1, we have:
For t > 0, e^{izt} is bounded in the upper half-plane, so we use the contour in Figure 10.8.1 on the left.
$$y(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \hat{y}(\omega)e^{i\omega t}\, d\omega = \lim_{x_1\to\infty,\ x_2\to\infty} \frac{1}{2\pi}\int_{C_4} g(z)e^{izt}\, dz = \lim_{x_1\to\infty,\ x_2\to\infty} \frac{1}{2\pi}\int_{C_1 + C_2 + C_3 + C_4} g(z)e^{izt}\, dz = i \sum \text{residues of } e^{izt}g(z) \text{ in the upper half-plane} \qquad (10.8.15)$$
The poles of e^{izt}g(z) are at ia, 7i, i. These are all in the upper half-plane. The residues are, respectively,
$$\frac{e^{-at}}{i(7 - a)(1 - a)}, \quad \frac{e^{-7t}}{i(a - 7)(1 - 7)}, \quad \frac{e^{-t}}{i(a - 1)(7 - 1)} \qquad (10.8.16)$$
Thus, for t > 0 we have
$$y(t) = \frac{e^{-at}}{(7 - a)(1 - a)} - \frac{e^{-7t}}{(a - 7)(6)} + \frac{e^{-t}}{(a - 1)(6)}. \qquad (10.8.17)$$

Figure 10.8.1: (CC BY-NC; Ümit Kaya)

More briefly, when t < 0 we use the contour above on the right. We get the exact same string of equalities except the sum is over the residues of e^{izt}g(z) in the lower half-plane. Since there are no poles in the lower half-plane, we find that
$$y(t) = 0 \qquad (10.8.18)$$
when t < 0.
Conclusion (reorganizing the signs and order of the terms):
$$y(t) = \begin{cases} 0 & \text{for } t < 0 \\[2mm] \dfrac{e^{-at}}{(7 - a)(1 - a)} + \dfrac{e^{-7t}}{(7 - a)(6)} - \dfrac{e^{-t}}{(1 - a)(6)} & \text{for } t > 0. \end{cases} \qquad (10.8.19)$$
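It is worth confirming that the t > 0 formula really satisfies the differential equation. The short symbolic check below is optional, assumes SymPy is available, and uses a = 3 as an arbitrary sample value (any a other than 1 and 7 avoids vanishing denominators).

```python
# Optional check: y'' + 8y' + 7y = e^(-a t) for the t > 0 formula, with sample value a = 3.
from sympy import symbols, exp, diff, simplify

t = symbols('t')
a = 3  # sample value; a = 1 or a = 7 would make the denominators vanish

y = (exp(-a * t) / ((7 - a) * (1 - a))
     + exp(-7 * t) / ((7 - a) * 6)
     - exp(-t) / ((1 - a) * 6))

residual = diff(y, t, 2) + 8 * diff(y, t) + 7 * y - exp(-a * t)
print(simplify(residual))  # 0
```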

 Note

Because |g(z)| < M/|z|^3, we could replace the rectangular contours by semicircles to compute the Fourier inversion integral.

 Example 10.8.2

Consider
$$y'' + y = f(t) = \begin{cases} e^{-at} & \text{if } t > 0 \\ 0 & \text{if } t < 0. \end{cases} \qquad (10.8.20)$$
Find a solution for t > 0.

Solution
We work a little more quickly than in the previous example.
Taking the Fourier transform we get
$$\hat{y}(\omega) = \frac{\hat{f}(\omega)}{P(i\omega)} = \frac{\hat{f}(\omega)}{1 - \omega^2} = \frac{1}{(a + i\omega)(1 - \omega^2)}. \qquad (10.8.21)$$
(In the last expression, we used the known Fourier transform of f.)

As usual, we extend \(\hat{y}(\omega)\) to a function of z:
$$g(z) = \frac{1}{(a + iz)(1 - z^2)}. \qquad (10.8.22)$$
This has simple poles at −1, 1, ai.
Since some of the poles are on the real axis, we will need to use an indented contour along the real axis and use the principal value to compute the integral.
The contour is shown in Figure 10.8.2. We assume each of the small indents is a semicircle with radius r. The big rectangular path from (R, 0) to (−R, 0) is called C_R.

Figure 10.8.2: (CC BY-NC; Ümit Kaya)

For t > 0, the function e^{izt}g(z) satisfies |e^{izt}g(z)| < M/|z|^3 in the upper half-plane. Thus, we get the following limits:
$$\lim_{R\to\infty} \int_{C_R} e^{izt}g(z)\, dz = 0 \quad (\text{Theorem 10.2.2(b)})$$
$$\lim_{R\to\infty,\ r\to 0} \int_{C_2} e^{izt}g(z)\, dz = \pi i\,\text{Res}(e^{izt}g(z), -1) \quad (\text{Theorem 10.7.2})$$
$$\lim_{R\to\infty,\ r\to 0} \int_{C_4} e^{izt}g(z)\, dz = \pi i\,\text{Res}(e^{izt}g(z), 1) \quad (\text{Theorem 10.7.2})$$
$$\lim_{R\to\infty,\ r\to 0} \int_{C_1 + C_3 + C_5} e^{izt}g(z)\, dz = \text{p.v.}\int_{-\infty}^{\infty} \hat{y}(\omega)e^{i\omega t}\, d\omega$$
Putting this together with the residue theorem we have
$$\lim_{R\to\infty,\ r\to 0} \int_{C_1 - C_2 + C_3 - C_4 + C_5 + C_R} e^{izt}g(z)\, dz = \text{p.v.}\int_{-\infty}^{\infty} \hat{y}(\omega)e^{i\omega t}\, d\omega - \pi i\,\text{Res}(e^{izt}g(z), -1) - \pi i\,\text{Res}(e^{izt}g(z), 1) = 2\pi i\,\text{Res}(e^{izt}g(z), ai). \qquad (10.8.23)$$
All that’s left is to compute the residues and do some arithmetic. We don’t show the calculations, but give the results
$$\text{Res}(e^{izt}g(z), -1) = \frac{e^{-it}}{2(a - i)}, \quad \text{Res}(e^{izt}g(z), 1) = -\frac{e^{it}}{2(a + i)}, \quad \text{Res}(e^{izt}g(z), ai) = \frac{e^{-at}}{i(1 + a^2)} \qquad (10.8.24)$$
We get, for t > 0,
$$y(t) = \frac{1}{2\pi}\,\text{p.v.}\int_{-\infty}^{\infty} \hat{y}(\omega)e^{i\omega t}\, d\omega = \frac{i}{2}\text{Res}(e^{izt}g(z), -1) + \frac{i}{2}\text{Res}(e^{izt}g(z), 1) + i\,\text{Res}(e^{izt}g(z), ai) = \frac{e^{-at}}{1 + a^2} + \frac{a\sin(t) - \cos(t)}{2(1 + a^2)}. \qquad (10.8.25)$$

This page titled 10.8: Solving DEs using the Fourier transform is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy
Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon
request.

CHAPTER OVERVIEW
11: Conformal Transformations
In this topic we will look at the geometric notion of conformal maps. It will turn out that analytic functions are automatically
conformal. Once we have understood the general notion, we will look at a specific family of conformal maps called fractional
linear transformations and, in particular at their geometric properties. As an application we will use fractional linear
transformations to solve the Dirichlet problem for harmonic functions on the unit disk with specified values on the unit circle. At
the end we will return to some questions of fluid flow.
11.1: Geometric Definition of Conformal Mappings
11.2: Tangent vectors as complex numbers
11.3: Analytic functions are Conformal
11.4: Digression to harmonic functions
11.5: Riemann Mapping Theorem
11.6: Examples of conformal maps and excercises
11.7: Fractional Linear Transformations
11.8: Reflection and symmetry
11.9: Flows around cylinders
11.10: Solving the Dirichlet problem for harmonic functions

Thumbnail: A rectangular grid under a conformal map. It is seen that the map takes pairs of lines intersecting at 90° to pairs of curves still intersecting at 90°. (Public Domain; Oleg Alexandrov via Wikipedia)

This page titled 11: Conformal Transformations is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by
Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit
history is available upon request.

11.1: Geometric Definition of Conformal Mappings
We start with a somewhat hand-wavy definition:

 Informal Definition: Conformal Maps

Conformal maps are functions on C that preserve the angles between curves.
More precisely: Suppose f(z) is differentiable at z_0 and γ(t) is a smooth curve through z_0. To be concrete, let's suppose γ(t_0) = z_0. The function maps the point z_0 to w_0 = f(z_0) and the curve γ to
$$\tilde{\gamma}(t) = f(\gamma(t)). \qquad (11.1.1)$$
Under this map, the tangent vector γ'(t_0) at z_0 is mapped to the tangent vector
$$\tilde{\gamma}'(t_0) = (f \circ \gamma)'(t_0) \qquad (11.1.2)$$
at w_0. With these notations we have the following definition.

 Definition: Conformal Functions

The function f(z) is conformal at z_0 if there is an angle ϕ and a scale a > 0 such that for any smooth curve γ(t) through z_0 the map f rotates the tangent vector at z_0 by ϕ and scales it by a. That is, for any γ, the tangent vector (f ∘ γ)'(t_0) is found by rotating γ'(t_0) by ϕ and scaling it by a.

If f(z) is defined on a region A, we say it is a conformal map on A if it is conformal at each point z in A.

 Note
The scale factor a and rotation angle ϕ depend on the point z_0, but not on any of the curves through z_0.

 Example 11.1.1

Figure 11.1.1 below shows a conformal map f(z) mapping two curves through z_0 to two curves through w_0 = f(z_0). The tangent vectors to each of the original curves are both rotated and scaled by the same amount.

Figure 11.1.1: A conformal map rotates and scales all tangent vectors at z_0 by the same amount. (CC BY-NC; Ümit Kaya)
Remark 1. Conformality is a local phenomenon. At a different point z_1 the rotation angle and scale factor might be different.
Remark 2. Since rotations preserve the angles between vectors, a key property of conformal maps is that they preserve the angles between curves.

11.1.1 https://math.libretexts.org/@go/page/6537
 Example 11.1.2

Recall that way back in Topic 1 we saw that f (z) = z maps horizontal and vertical grid lines to mutually orthogonal
2

parabolas. We will see that f (z) is conformal. So, the orthogonality of the parabolas is no accident. The conformal map
preserves the right angles between the grid lines.

Figure 11.1.1 : A conformal map of a rectangular grid. (CC BY-NC; Ümit Kaya)

This page titled 11.1: Geometric Definition of Conformal Mappings is shared under a CC BY-NC-SA 4.0 license and was authored, remixed,
and/or curated by Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform;
a detailed edit history is available upon request.

11.1.2 https://math.libretexts.org/@go/page/6537
11.2: Tangent vectors as complex numbers
In previous classes, you used parametrized curves γ(t) = (x(t), y(t)) in the -plane. Considered this way, the tangent vector is
xy

just the derivative:


′ ′ ′
γ (t) = (x (t), y (t)). (11.2.1)

Note, as a vector, (x , y ) represents a displacement. If the vector starts at the origin, then the endpoint is at (x , y ). More typically
′ ′ ′ ′

we draw the vector starting at the point γ(t).


You may also previously used parametrized curves γ(t) = x(t) + iy(t) in the complex plane. Considered this way, the tangent
vector is just the derivative:
′ ′ ′
γ (t) = x (t) + i y (t). (11.2.2)

It should be clear that these representations are equivalent. The vector (x , y ) and the complex number x + i y both represent the
′ ′ ′ ′

same displacement. Also, the length of a vector and the angle between two vectors is the same in both representations.
Thinking of tangent vectors to curves as complex numbers allows us to recast conformality in terms of complex numbers.

 Theorem 11.2.1

If f (z) is conformal at z then there is a complex number c = ae such that the map f multiplies tangent vectors at z by c .
0

0

Conversely, if the map f multiplies all tangent vectors at z by c = ae then f is conformal at z .


0

0

Proof
By definition f is conformal at z means that there is an angle ϕ and a scalar a > 0 such that the map
0 f rotates tangent
vectors at z by ϕ and scales them by a . This is exactly the effect of multiplication by c = ae .
0

This page titled 11.2: Tangent vectors as complex numbers is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or
curated by Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a
detailed edit history is available upon request.

11.2.1 https://math.libretexts.org/@go/page/6538
11.3: Analytic functions are Conformal
 Theorem 11.3.1 Operational definition of conformal

If f is analytic on the region A and f ′


(z0 ) ≠ 0 , then f is conformal at z . Furthermore, the map f multiplies tangent vectors at
0

z by f (z ) .

0 0

Proof
The proof is a quick computation. Suppose z = γ(t) is curve through z0 with γ(t0 ) = z0 . The curve γ(t) is transformed
by f to the curve w = f (γ(t)) . By the chain rule we have
df (γ(t))
′ ′ ′ ′
|t0 = f (γ(t0 ))γ (t0 ) = f (z0 )γ (t0 ). (11.3.1)
dt

The theorem now follows from Theorem 11.3.1.

 Example 11.3.1 Basic example

Suppose c = ae and consider the map f (z) = cz . Geometrically, this map rotates every point by ϕ and scales it by

a .
Therefore, it must have the same effect on all tangent vectors to curves. Indeed, f is analytic and f (z) = c is constant. ′

 Example 11.3.2

Let f (z) = z . So f
2 ′
(z) = 2z . Thus the map f has a different affect on tangent vectors at different points z and z . 1 2

 Example 11.3.3 Linear approximation

Suppose f (z) is analytic at z = 0 . The linear approximation (first two terms of the Taylor series) is

f (z) ≈ f (0) + f (0)z. (11.3.2)

If γ(t) is a curve with γ(t 0) =0 then, near t ,0


f (γ(t)) ≈ f (0) + f (0)γ(t). (11.3.3)

That is, near 0, f looks like our basic example plus a shift by f (0).

 Example 11.3.4
The map f (z) = z̄ has lots of nice geometric properties, but it is not conformal. It preserves the length of tangent vectors and
¯
¯

the angle between tangent vectors. The reason it isn’t conformal is that is does not rotate tangent vectors. Instead, it reflects
them across the x-axis.
In other words, it reverses the orientation of a pair of vectors. Our definition of conformal maps requires that it preserves
orientation.

This page titled 11.3: Analytic functions are Conformal is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated
by Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit
history is available upon request.

11.3.1 https://math.libretexts.org/@go/page/6539
11.4: Digression to harmonic functions
 Theorem 11.4.1

If u and v are harmonic conjugates and g = u + iv has g ′


(z0 ) ≠ 0 , then the level curves of u and v through z are orthogonal.
0

 Note
We proved this in an earlier topic using the Cauchy-Riemann equations. Here will make an argument involving conformal
maps.

Proof
First we’ll examine how g maps the level curve u(x, y) = a . Since g = u + iv , the image of the level curve is
w = a + iv , i.e it’s (contained in) a vertical line in the w -plane. Likewise, the level curve v(x, y) = b is mapped to the

horizontal line w = u + ib .
Thus, the images of the two level curves are orthogonal. Since g is conformal it preserves the angle between the level
curves, so they must be orthogonal.

g = u + iv maps level curves of u and v to grid lines.

This page titled 11.4: Digression to harmonic functions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by
Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit
history is available upon request.

11.4.1 https://math.libretexts.org/@go/page/6540
11.5: Riemann Mapping Theorem
The Riemann mapping theorem is a major theorem on conformal maps. The proof is fairly technical and we will skip it. In practice,
we will write down explicit conformal maps between regions.

 Theorem 11.5.1 Riemann mapping theorem


If A is simply connected and not the whole plane, then there is a bijective conformal map from A to the unit

 Corollary

For any two such regions there is a bijective conformal map from one to the other. We say they are conformally equivalent.

This page titled 11.5: Riemann Mapping Theorem is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by
Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit
history is available upon request.

11.5.1 https://math.libretexts.org/@go/page/6541
11.6: Examples of conformal maps and excercises
As we’ve seen, once we have flows or harmonic functions on one region, we can use conformal maps to map them to other regions.
In this section we will offer a number of conformal maps between various regions. By chaining these together along with scaling,
rotating and shifting we can build a large library of conformal maps. Of course there are many many others that we will not touch
on.
For convenience, in this section we will let
z−i
T0 (z) = . (11.6.1)
z+i

This is our standard map of taking the upper half-plane to the unit disk.

 Example 11.6.1
Let H be the half-plane above the line
α

y = tan(α)x,

i.e., {(x, y) : y > tan(α)x}. Find an FLT from H to the unit disk.
α

Solution
We do this in two steps. First use the rotation
−iα
T−α (a) = e z

to map H to the upper half-plane. Follow this with the map T . So our map is T
α 0 0 ∘ T−α (z) .
(You supply the picture)

 Example 11.6.2
Let A be the channel 0 ≤ y ≤ π in the xy-plane. Find a conformal map from A to the upper half-plane.
Solution
The map f (z) = e does the trick. (See the Topic 1 notes!)
z

(You supply the picture: horizontal lines get mapped to rays from the origin and vertical segments in the channel get mapped to
semicircles.)

 Example 11.6.3

Let B be the upper half of the unit disk. Show that T −1

0
maps B to the second quadrant.
Solution
You supply the argument and figure.

 Example 11.6.4
Let B be the upper half of the unit disk. Find a conformal map from B to the upper half-plane.
Solution
The map T (z) maps B to the second quadrant. Then multiplying by −i maps this to the first quadrant. Then squaring maps
−1

this to the upper half-plane. In the end we have


iz + i 2
f (z) = (−i( )) .
−z + 1

You supply the sequence of pictures.

11.6.1 https://math.libretexts.org/@go/page/51146
 Example 11.6.5

Let A be the infinite well {(x, y) : x ≤ 0, 0 ≤ y ≤ π} . Find a confomal map from A to the upper half-plane.

Figure 11.6.1 : Infinite well function. (CC BY-NC; Ümit Kaya)


Solution
The map f (z) = e maps A to the upper half of the unit disk. Then we can use the map from Example 11.6.4 to map the half-
z

disk to the upper half-plane.


You supply the sequence of pictures.

 Example 11.6.6

Show that the function

f (z) = z + 1/z

maps the region shown in Figure 11.6.2 to the upper half-plane.

Figure 11.6.2 : (CC BY-NC; Ümit Kaya)

This page titled 11.6: Examples of conformal maps and excercises is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or
curated by Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a
detailed edit history is available upon request.

11.6.2 https://math.libretexts.org/@go/page/51146
11.7: Fractional Linear Transformations
 Definition: Fractional Linear Transformations

A fractional linear transformation is a function of the form


az + b
T (z) = (11.7.1)
cz + d

where a , b , c , and d are complex constants and with ad − bc ≠ 0 .


These are also called Möbius transforms or bilinear transforms. We will abbreviate fractional linear transformation as FLT.

 Simple Point
If ad − bc = 0 then T (z) is a constant function.

Proof
The full proof requires that we deal with all the cases where some of the coefficients are 0. We’ll give the proof assuming
c ≠ 0 and leave the case c = 0 to you. Assuming c ≠ 0 , the condition ad − bc = 0 implies

a
(c, d) = (a, b). (11.7.2)
c

So,
(a/c)(cz + d) a
T (z) = = . (11.7.3)
cz + d c

That is, T (z) is constant.

Extension to ∞. It will be convenient to consider linear transformations to be defined on the extended complex plane C ∪ {∞}

by defining

a/c if c ≠ 0
T (∞) = {
∞ if c = 0 (11.7.4)

T (−d/c) = ∞ if c ≠ 0.

 Example 11.7.1: Scale and Rotate

Let T (z) = az . If a = r is real this scales the plane. If a = e iθ


it rotates the plane. If a = re iθ
it does both at once.

Figure 11.7.1 : Multiplication by a = re iθ


scales by r and rotates by θ . (CC BY-NC; Ümit Kaya)
Note that T is the fractional linear transformation with coefficients
a b a 0
[ ] =[ ].
c d 0 1

11.7.1 https://math.libretexts.org/@go/page/51142
(We’ll see below the benefit of presenting the coefficients in matrix form!)

 Example 11.7.2: Scale and Rotate and Translate


Let T (z) = az + b . Adding the b term introduces a translation to the previous example.

Figure 11.7.2 : The map w = az + b scales, rotates and shifts the square. (CC BY-NC; Ümit Kaya)
Note that T is the fractional linear transformation with coefficients

a b a b
[ ] =[ ].
c d 0 1

 Example 11.7.3: Inversion

Let T (z) = 1/z . This is called an inversion. It turns the unit circle inside out. Note that T (0) = ∞ and T (∞) = 0 . In the
figure below the circle that is outside the unit circle in the z plane is inside the unit circle in the w plane and vice-versa. Note
that the arrows on the curves are reversed.

Figure 11.7.3 : The map w = 1/z inverts the plane. (CC BY-NC; Ümit Kaya)
Note that T is the fractional linear transformation with coefficients
a b 0 1
[ ] =[ ].
c d 1 0

 Example 11.7.4

Let
z−i
T (z) = .
z+i

We claim that this maps the x-axis to the unit circle and the upper half-plane to the unit disk.
Solution

11.7.2 https://math.libretexts.org/@go/page/51142
First take x real, then
− −−−−
|x − i| √ x2 + 1
|T (x)| = = − −−−− = 1.
|x + i| √ x2 + 1

So, T maps the x-axis to the unit circle.


Next take z = x + iy with y > 0 , i.e. z in the upper half-plane. Clearly

|y + 1| > |y − 1|,

so

|z + i| = |x + i(y + 1)| > |x + i(y − 1)| = |z − i|,

implying that
|z − i|
|T (z)| = < 1.
|z + i|

So, T maps the upper half-plane to the unit disk.


We will use this map frequently, so for the record we note that
T (i) = 0 , T (∞) = 1 , T (−1) = i , T (0) = −1 , T (1) = −i .
These computations show that the real axis is mapped counterclockwise around the unit circle starting at 1 and coming back to
1.

z −i
Figure 11.7.4 : The map w = maps the upper-half plane to the unit disk. (CC BY-NC; Ümit Kaya)
z +i

Lines and circles

 Theorem 11.7.1

A linear fractional transformation maps lines and circles to lines and circles.
Before proving this, note that it does not say lines are mapped to lines and circles to circles. For example, in Example 11.7.4
the real axis is mapped the unit circle. You can also check that inversion w = 1/z maps the line z = 1 + iy to the circle
|z − 1/2| = 1/2 .

Proof
We start by showing that inversion maps lines and circles to lines and circles. Given z and w = 1/z we define x, y, u and v
by
1 x − iy
z = x + iy and w = = = u + iv
z x2 + y 2

So,

11.7.3 https://math.libretexts.org/@go/page/51142
x y
u = and v=− .
2 2 2
x +y x + y2

Now, every circle or line can be described by the equation


2 2
Ax + By + C (x +y ) = D

(If C =0 it descibes a line, otherwise a circle.) We convert this to an equation in u, v as follows.


2 2
Ax + By + C (x +y ) = D

Ax By D
⇔ + +C =
2 2 2 2 2 2
x +y x +y x +y

2 2
⇔ Au − Bv + C = D(u + v ).

In the last step we used the fact that


2 2 2 2 2 2
u +v = |w | = 1/|z| = 1/(x + y ).

We have shown that a line or circle in x, y is transformed to a line or circle in u, v . This shows that inversion maps lines
and circles to lines and circles.
We note that for the inversion w = 1/z.
1. Any line not through the origin is mapped to a circle through the origin.
2. Any line through the origin is mapped to a line through the origin.
3. Any circle not through the origin is mapped to a circle not through the origin.
4. Any circle through the origin is mapped to a line not through the origin.
Now, to prove that an arbitrary fractional linear transformation maps lines and circles to lines and circles, we factor it into a
sequence of simpler transformations.
First suppose that c = 0 . So,

T (z) = (az + b)/d. (11.7.5)

Since this is just translation, scaling and rotating, it is clear it maps circles to circles and lines to lines.
Now suppose that c ≠ 0 . Then,
a ad
(cz + d) + b −
az + b c c a b − ad/c
T (z) = = = +
cz + d cz + d c cz + d

So, w = T (z) can be computed as a composition of transforms


a
z ↦ w1 = cz + d ↦ w2 = 1/ w1 ↦ w = + (b − ad/c)w2
c

We know that each of the transforms in this sequence maps lines and circles to lines and circles. Therefore the entire
sequence does also.

Mapping z to w j j

It turns out that for two sets of three points z 1, z2 , z3 and w 1, w2 , w3 there is a fractional linear transformation that takes z to w . j j

We can construct this map as follows.


Let
(z − z1 )(z2 − z3 )
T1 (z) = . (11.7.6)
(z − z3 )(z2 − z1 )

Notice that
T1 (z1 ) = 0 ,T 1 (z1 ) =1 ,T 1 (z3 ) =∞ .

11.7.4 https://math.libretexts.org/@go/page/51142
Likewise let
(w − w1 )(w2 − w3 )
T2 (w) = . (11.7.7)
(w − w3 )(w2 − w1 )

Notice that
T2 (w1 ) = 0 ,T 2 (w2 ) =1 ,T 2 (w3 ) =∞ .
Now T (z) = T 2
−1
∘ T1 (z) is the required map.

Correspondence with Matrices


We can identify the transformation
az + b
T (z) = (11.7.8)
cz + d

with the matrix

a b
[ ]. (11.7.9)
c d

This identification is useful because of the following algebraic facts.


a b a b
1. If r ≠ 0 then [ ] and r [ ] correspond to the same FLT.
c d c d

P roof . This follows from the obvious equality


az + b raz + rb
= . (11.7.10)
cz + d rcz + rd

a b e f
2. If T (z) corresponds to A = [ ] and S(z) corresponds to B = [ ] then composition T ∘ S(z) corresponds to
c d g h

matrix multiplication AB.


P roof . The proof is just a bit of algebra.

ez + f a((ez + f )/(gz + h)) + b (ae + bg)z + af + bh


T ∘ S(z) = T( ) = =
gz + h c((ez + f )/(gz + h)) + d (ce + dg)z + cf + dh
(11.7.11)
a b e f ae + bg af + bh
AB = [ ][ ] =[ ]
c d g h ce + dg cf + dh

The claimed correspondence is clear from the last entries in the two lines above.
a b d −b
3. If T (z) corresponds to A = [ ] then T has an inverse and T −1
(w) corresponds to A −1
and also to [ ] , i.e. to
c d −c a

A
−1
without the factor of 1/det(A).
P roof . Since AA = I it is clear from the previous fact that T
−1 −1
corresponds to A −1
. Since

−1
1 d −b
A = [ ] (11.7.12)
ad − bc −c a

d −b
Fact 1 implies A −1
and [ ] both correspond to the same FLT, i.e. to T −1
.
−c a

 Example 11.7.5
a b
1. The matrix [ ] corresponds to T (z) = az + b .
0 1

e 0
2. The matrix [ −iα
] corresponds to rotation by 2α.
0 e

11.7.5 https://math.libretexts.org/@go/page/51142
0 1
3. The matrix [ ] corresponds to the inversion w = 1/z .
1 0

This page titled 11.7: Fractional Linear Transformations is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated
by Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit
history is available upon request.

11.7.6 https://math.libretexts.org/@go/page/51142
11.8: Reflection and symmetry
Reflection and symmetry in a line

 Example 11.8.1

Suppose we have a line S and a point z not on S . The reflection of z in S is the point z so that S is the perpendicular
1 1 2

bisector to the line segment z z . Since there is exactly one such point z , the reflection of a point in a line is unique.
¯
¯¯¯¯¯¯¯
1
¯
2 2

 Definition

If z is the reflection of z in S , we say that z and z are symmetric with respect to the line S .
2 1 1 2

In the figure below the points z and z are symmetric in the x-axis. The points z and z are symmetric in the line S .
1 2 3 4

In order to define the reflection of a point in a circle we need to work a little harder. Looking back at the previous example we can
show the following.

 Fact
If z and z are symmetric in the line S , then any circle through z and z intersects S orthogonally.
1 2 1 2

Proof
Call the circle C . Since S is the perpendicular bisector of a chord of C , the center of C lies on S . Therefore S is a radial
line, i.e. it intersects C orthogonally.

Circles through symmetric points intersect the line at right angles.

11.8.1 https://math.libretexts.org/@go/page/51143
Reflection and symmetry in a circle
We will adapt this for our definition of reflection in a circle. So that the logic flows correctly we need to start with the definition of
symmetric pairs of points.

 Definition

Suppose S is a line or circle. A pair of points z1, z2 is called symmetric with respect to S if every line or circle through the two
points intersects S orthogonally.

First we state an almost trivial fact.

 Fact

Fractional linear transformations preserve symmetry. That is, if z 1, z2 are symmetric in a line or circle S , then, for an FLT T ,
T (z ) and T (z ) are symmetric in T (S) .
1 2

Proof
The definition of symmetry is in terms of lines and circles, and angles. Fractional linear transformations map lines and
circles to lines and circles and, being conformal, preserve angles.

 Theorem 11.8.1

Suppose S is a line or circle and z a point not on S . There is a unique point z such that the pair z
1 2 1, z2 is symmetric in S .

Proof
Let T be a fractional linear transformation that maps S to a line. We know that w = T (z ) has a unique reflection w in
1 1 2

this line. Since T −1


preserves symmetry, z and z = T (w ) are symmetric in S . Since w is the unique point
1 2
−1
2 2

symmetric to w the same is true for z vis-a-vis z . This is all shown in the figure below.
1 2 1

We can now define reflection in a circle.

 Definition
The point z in the Theorem 11.8.1 is called the reflection of z in S .
2 1

Reflection in the unit circle


Using the symmetry preserving feature of fractional linear transformations, we start with a line and transform to the circle. Let R

be the real axis and C the unit circle. We know the FLT
z−i
T (z) = (11.8.1)
z+i

maps R to C . We also know that the points z and z are symmetric in R . Therefore
¯
¯¯

11.8.2 https://math.libretexts.org/@go/page/51143
¯¯
z−i z̄ −i
¯¯
w1 = T (z) = and w2 = T (z̄ ) = (11.8.2)
z+i ¯¯
z̄ +i

are symmetric in D . Looking at the formulas, it is clear that w2 = 1/ w1


¯
¯¯¯¯
¯
. This is important enough that we highlight it as a
theorem.

 Theorem 11.8.2 Reflection in the unit circle

The reflection of z = x + iy = re iθ
in the unit circle is

1 z x + iy e
= = = . (11.8.3)
¯
¯¯ 2 2 2
z |z| x +y r

The calculations from 1/z are all trivial.


¯
¯¯

 Note
1. It is possible, but more tedious and less insightful, to arrive at this theorem by direct calcula- tion.
2. If z is on the unit circle then 1/z̄ = z . That is, z is its own reflection in the unit circle –as it should be.
¯
¯

3. The center of the circle 0 is symmetric to the point at ∞.

The figure below shows three pairs of points symmetric in the unit circle:
1 1 +i −2 + i
z1 = 2; w1 = , z2 = 1 + i; w2 = , z3 = −2 + i; w3 = .
2 2 5

Pairs of points z : w symmetric in the unit circle.


j j

 Example 11.8.2 Reflection in the circle of radius R

Suppose S is the circle |z| = R and z is a pint not on S . Find the reflection of z in S.
1 1

Solution
Our strategy is to map S to the unit circle, find the reflection and then map the unit circle back to S .
Start with the map T (z) = w = z/R . Clearly T maps S to the unit circle and
w1 = T (z1 ) = z1 /R. (11.8.4)

11.8.3 https://math.libretexts.org/@go/page/51143
The reflection of w is
1

w2 = 1/¯
¯¯¯¯
w ¯ ¯¯
1 = R/ z̄ 1 . (11.8.5)

Mapping back from the unit circle by T −1


we have
−1 ¯¯ 2
z2 = T (w2 ) = Rw2 = R / z̄ 1. (11.8.6)

Therefore the reflection of z is R


1
2 ¯
¯¯
/ z1 .

Here are three pairs of points symmetric in the circle of radius 2. Note, that this is the same figure as the one above with
everything doubled.
−4 + 2i
z1 = 4; w1 = 1, z2 = 2 + 2i; w2 = 1 + i, z3 = −4 + 2i; w3 = .
5

Pairs of points z ; w symmetric in the circle of radius 2.


j j

 Example 11.8.3
Find the reflection of z in the circle of radius R centered at c .
1

Solution
Let T (z) = (z − c)/R . T maps the circle centered at c to the unit circle. The inverse map is
−1
T (w) = Rw + c. (11.8.7)

So, the reflection of z is given by mapping z to


1 T (z) , reflecting this in the unit circle, and mapping back to the original
geometry with T . That is, the reflection z is
−1
2

2
z1 − c R R
z1 → → → z2 = + c. (11.8.8)
¯
¯¯¯¯¯¯¯¯¯¯¯
¯ ¯
¯¯¯¯¯¯¯¯¯¯¯
¯
R z1 − c z1 − c

We can now record the following important fact.

 (Reflection of the center)

For a circle S with center c the pair c , ∞ is symmetric with respect to the circle.

Proof
This is an immediate consequence of the formula for the reflection of a point in a circle. For example, the reflection of z in
the unit circle is 1/z. So, the reflection of 0 is infinity.
¯
¯¯

11.8.4 https://math.libretexts.org/@go/page/51143
 Example 11.8.4

Show that if a circle and a line don’t intersect then there is a pair of points z 1, z2 that is symmetric with respect to both the line
and circle.
Solution
By shifting, scaling and rotating we can find a fractional linear transformation T that maps the circle and line to the following
configuration: The circle is mapped to the unit circle and the line to the vertical line x = a > 1 .

For any real r, w = r and w = 1/r are symmetric in the unit circle. We can choose a specific r so that r and 1/r are
1 2

equidistant from a , i.e. also symmetric in the line x = a . It is clear geometrically that this can be done. Algebraically we solve
the equation
r + 1/r −−−−− 1 −−−−−
2 2 2
=a ⇒ r − 2ar + 1 = 0 ⇒ r = a + √a −1 ⇒ = a − √a −1. (11.8.9)
2 r

−−−−− −−−−−
Thus z 1 =T
−1 2
(a + √a − 1 ) and z 2 =T
−1 2
(a − √a − 1 ) are the required points.

 Example 11.8.5

Show that if two circles don’t intersect then there is a pair of points z 1, z2 that is symmetric with respect to both circles.
Solution
Using a fractional linear transformation that maps one of the circles to a line (and the other to a circle) we can reduce the
problem to that in the previous example.

 Example 11.8.6

Show that any two circles that don’t intersect can be mapped conformally to con- centric circles.
Solution
Call the circles S and S . Using the previous example start with a pair of points z , z which are symmetric in both circles.
1 2 1 2

Next, pick a fractional linear transformation T that maps z to 0 and z to infinity. For example,
1 2

z − z1
T (z) = . (11.8.10)
z − z2

Since T preserves symmetry 0 and ∞ are symmetric in the circle T (S ). This implies that 0 is the center of T (S ). Likewise 0
1 1

is the center of T (S ). Thus, T (S ) and T (S ) are concentric.


2 1 2

11.8: Reflection and symmetry is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts.

11.8.5 https://math.libretexts.org/@go/page/51143
11.9: Flows around cylinders
Milne-Thomson circle theorem
The Milne-Thomson theorem allows us to insert a circle into a two-dimensional flow and see how the flow adjusts. First we’ll state
and prove the theorem.

 Theorem 11.9.1 Milne-Thomson circle theorem


If f (z) is a complex potential with all its singularities outside |z| = R then
¯¯¯¯¯¯¯¯¯¯¯¯¯
¯
2
R
Φ(z) = f (z) + f ( ) (11.9.1)
¯¯

is a complex potential with streamline on |z| = R and the same singularities as f in the region |z| > R .

Proof
First note that R2 ¯¯
/ z̄ is the reflection of z in the circle |z| = R .
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
Next we need to see that f (R 2 ¯
¯
/ z̄ ) is analytic for |z| > R . By assumption f (z) is analytic for |z| < R , so it can be
expressed as a Taylor series
2
f (z) = a0 + a1 z + a2 z + ... (11.9.2)

Therefore,
¯¯¯¯¯¯¯¯¯¯¯¯¯
¯
2 2 2
R R R 2
¯
¯¯¯
¯ ¯
¯¯¯
¯ ¯
¯¯¯
¯
f( ) = a0 + a1 + a2 ( ) + ... (11.9.3)
¯¯ z z

All the singularities of f are outside |z| = R , so the Taylor series in Equation 11.10.2 converges for |z| ≤ R . This means
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
the Laurent series in Equation 11.10.3 converges for |z| ≥ R . That is, 2
f (R / z )
¯¯
¯
is analytic |z| ≥ R , i.e. it introduces no
singularies to Φ(z) outside |z| = R .
The last thing to show is that |z| = R is a streamline for Φ(z). This follows because for z = Re iθ

¯
¯¯¯¯¯¯¯¯¯¯¯¯¯¯
¯
iθ iθ iθ
Φ(Re ) = f (Re + f (Re ) (11.9.4)

is real. Therefore
iθ iθ
ψ(Re = Im(Φ(Re ) = 0. (11.9.5)

Examples
Think of f (z) as representing flow, possibly with sources or vortices outside |z| = R . Then Φ(z) represents the new flow when a
circular obstacle is placed in the flow. Here are a few examples.

 Example 11.9.1 Uniform flow around a circle


We know from Topic 6 that f (z) = z is the complex potential for uniform flow to the right. So,
2
Φ(z) = z + R /z (11.9.6)

is the potential for uniform flow around a circle of radius R centered at the origin.

11.9.1 https://math.libretexts.org/@go/page/51145
Uniform flow around a circle
Just because they look nice, the figure includes streamlines inside the circle. These don’t interact with the flow outside the
circle.
Note, that as z gets large flow looks uniform. We can see this analytically because
′ 2 2
Φ (z) = 1 − R / z (11.9.7)

goes to 1 as z gets large. (Recall that the velocity field is (ϕ x, ϕy ), where Φ = ϕ + iψ . . . )

 Example 11.9.2 Source flow around a circle

Here the source is at z = −2 (outside the unit circle) with complex potential
f (z) = log(z + 2). (11.9.8)

With the appropriate branch cut the singularities of f are also outside |z| = 1 . So we can apply Milne-Thomson and obtain
¯
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
¯
1
Φ(z) = log(z + 2) + log( + 2) (11.9.9)
¯
¯¯
z

Source flow around a circle


We know that far from the origin the flow should look the same as a flow with just a source at z = −2 .

Let’s see this analytically. First we state a useful fact:

 Useful fact
¯
¯¯¯¯¯¯¯¯ ¯¯¯¯¯¯¯¯¯¯
If g(z) is analytic then so is h(z) = g(z̄) and h (z) = g
¯¯ ′ ′ ¯¯
(z̄ ) .

Proof
¯¯¯¯¯¯¯¯¯¯
Use the Taylor series for g to get the Taylor series for h and then compare h (z) and g ′ ′ ¯¯
(z )
¯
.

11.9.2 https://math.libretexts.org/@go/page/51145
Using this we have


1 1
Φ (z) = − (11.9.10)
z+2 z(1 + 2z)

For large z the second term decays much faster than the first, so
1

Φ (z) ≈ . (11.9.11)
z+2

That is, far from z = 0 , the velocity field looks just like the velocity field for f (z) , i.e. the velocity field of a source at z = −2 .

 Example 11.9.3 Transforming flows

If we use
2
g(z) = z (11.9.12)

we can transform a flow from the upper half-plane to the first quadrant

Source flow around a quarter circular corner

This page titled 11.9: Flows around cylinders is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy
Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

11.9.3 https://math.libretexts.org/@go/page/51145
11.10: Solving the Dirichlet problem for harmonic functions
In general, a Dirichlet problem in a region A asks you to solve a partial differential equation in A where the values of the solution
on the boundary of A are specificed.

 Example 11.10.1
Find a function u harmonic on the unit disk such that

iθ 1 for 0 < θ < π


u(e ) ={ (11.10.1)
0 for − π < θ < 0

This is a Dirichlet problem because the values of u on the boundary are specified. The partial differential equation is implied
by requiring that u be harmonic, i.e. we require ∇ u = 0 . We will solve this problem in due course.
2

Harmonic functions on the upper half-plane


Our strategy will be to solve the Dirichlet problem for harmonic functions on the upper half-plane and then transfer these solutions
to other domains.

 Example 11.10.2

Find a harmonic function u(x, y) on the upper half-plane that satisfies the boundary condition

1 for x < 0
u(x, 0) = { (11.10.2)
0 for x > 0

Solution
We can write down a solution explicitly as
1
u(x, y) = θ, (11.10.3)
π

where θ is the argument of z = x + iy . Since we are only working on the upper half-plane we can take any convenient branch
with branch cut in the lower half-plane, say −π/2 < θ < 3π/2 .

To show u is truly a solution, we have to verify two things:


1. u satisfies the boundary conditions
2. u is harmonic.
Both of these are straightforward. First, look at the point r on the positive x-axis. This has argument θ = 0 , so u(r
2 2, 0) = 0 .
Likewise arg(r ) = π , so u(r , 0) = 1 . Thus, we have shown point (1).
1 1

To see point (2) remember that


log(z) = log(r) + iθ. (11.10.4)

So,
1
u = Re( log(z)). (11.10.5)
πi

Since it is the real part of an analytic function, u is harmonic.

11.10.1 https://math.libretexts.org/@go/page/51144
 Example 11.10.3

Suppose x 1 < x2 < x3 . Find a harmonic function u on the upper half-plane that satisfies the boundary condition

⎧ c0 for x < x1


c1 for x1 < x < x2
u(x, 0) = ⎨ (11.10.6)
⎪ c2 for x2 < x < x3


c3 for x3 < x

Solution
We mimic the previous example and write down the solution
θ3 θ2 θ1
u(x, y) = c3 + (c2 − c3 ) + (c1 − c2 ) + (c0 − c1 ) . (11.10.7)
π π π

Here, the θ are the angles shown in the figure. One again, we chose a branch of θ that has
j 0 <θ <π for points in the upper
half-plane. (For example the branch −π/2 < θ < 3π/2 .)

To convince yourself that u satisfies the boundary condition test a few points:
At r : all the θ = 0 . So, u(r , 0) = c as required.
3 j 3 3

At r : θ = θ = 0 , θ = π . So, u(r , 0) = c + c − c
2 1 2 3 2 3 2 3 = c2 as required.
Likewise, at r and r , u have the correct values.
1 0

As before, u is harmonic because it is the real part of the analytic function


(c2 − c3 ) (c1 − c2 ) (c1 − c0 )
Φ(z) = c3 + log(z − x3 ) + log(z − x2 ) + log(z − x1 ). (11.10.8)
πi πi πi

Harmonic functions on the unit disk


Let’s try to solve a problem similar to the one in Example 11.9.1..

 Example 11.10.4

Find a function u harmonic on the unit disk such that

iθ 1 for − π/2 < θ < π/2


u(e ) ={ (11.10.9)
0 for π/2 < θ < 3π/2

Solution

11.10.2 https://math.libretexts.org/@go/page/51144
Our strategy is to start with a conformal map T from the upper half-plane to the unit disk. We can use this map to pull the
problem back to the upper half-plane. We solve it there and then push the solution back to the disk.
Let’s call the disk D, the upper half-plane H . Let z be the variable on D and w the variable on H . Back in Example 11.7.4 we
found a map from H to D. The map and its inverse are
w −i −1
iz + i
z = T (w) = , w =T (z) = . (11.10.10)
w +i −z + 1

The function u on D is transformed by T to a function ϕ on H . The relationships are


−1
u(z) = ϕ ∘ T (z) or ϕ(w) = u ∘ T (w) (11.10.11)

These relationships determine the boundary values of ϕ from those we were given for u. We compute:
T
−1
(i) = −1 ,T −1
(−i) = 1 ,T −1
(1) = ∞ ,T −1
(−1) = 0 .
This shows the left hand semicircle bounding D is mapped to the segment [-1, 1] on the real axis. Likewise, the right hand
semicircle maps to the two half-lines shown. (Literally, to the ‘segment’ 1 to ∞ to -1.)
We know how to solve the problem for a harmonic function ϕ on H :
1 1 1 1
ϕ(w) = 1 − θ2 + θ1 = Re(1 − log(w − 1) + log(w + 1)). (11.10.12)
π π πi πi

Transforming this back to the disk we have


1 1
−1 −1 −1
u(z) = ϕ ∘ T (z) = Re(1 − log(T (z) − 1) + log(T (z) + 1)). (11.10.13)
πi πi

If we wanted to, we could simplify this somewhat using the formula for T −1
.

This page titled 11.10: Solving the Dirichlet problem for harmonic functions is shared under a CC BY-NC-SA 4.0 license and was authored,
remixed, and/or curated by Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts
platform; a detailed edit history is available upon request.

11.10.3 https://math.libretexts.org/@go/page/51144
CHAPTER OVERVIEW
12: Argument Principle
The argument principle (or principle of the argument) is a consequence of the residue theorem. It connects the winding number of a
curve with the number of zeros and poles inside the curve. This is useful for applications (mathematical and otherwise) where we
want to know the location of zeros and poles.
12.1: Principle of the Argument
12.2: Nyquist Criterion for Stability
12.3: A Bit on Negative Feedback

This page titled 12: Argument Principle is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff
(MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

1
12.1: Principle of the Argument
Setup
γ a simple closed curve, oriented in a counterclockwise direction. f (z) analytic on and inside γ, except for (possibly) some finite
poles inside (not on) γ and some zeros inside (not on) γ.
Let p , . . . , p be the poles of f inside γ.
1 m

Let z , . . . , z be the zeros of f inside γ.


1 n

Write mult(z ) = the multiplicity of the zero at z . Likewise write mult(p ) = the order of the pole at p .
k k k k

We start with a theorem that will lead to the argument principle.

 Theorem 12.1.1

With the above setup



f (z)
∫ dz = 2πi(∑ mult(zk ) − ∑ mult(pk )). (12.1.1)
γ f (z)

Proof
To prove this theorem we need to understand the poles and residues of ′
f (z)/f (z) . With this in mind, suppose f (z) has a
zero of order m at z . The Taylor series for f (z) near z is
0 0

m
f (z) = (z − z0 ) g(z) (12.1.2)

where g(z) is analytic and never 0 on a small neighborhood of z . This implies 0

′ m−1 m ′
f (z) m(z − z0 ) g(z) + (z − z0 ) g (z)
=
m
f (z) (z − z0 ) g(z)
(12.1.3)

m g (z)
= +
z − z0 g(z)

Since g(z) is never 0, g ′


(z)/g(z) is analytic near z . This implies that z is a simple pole of f
0 0

(z)/f (z) and

f (z)
Res( , z0 ) = mmult(z0 ). (12.1.4)
f (z)

Likewise, if z is a pole of order m then the Laurent series for f (z) near z is
0 0

−m
f (z) = (z − z0 ) g(z) (12.1.5)

where g(z) is analytic and never 0 on a small neighborhood of z . Thus, 0

′ −m−1 −m ′
f (z) m(z − z0 ) g(z) + (z − z0 ) g (z)
= −
−m
f (z) (z − z0 ) g(z)
(12.1.6)

m g (z)
= − +
z − z0 g(z)

Again we have that z is a simple pole of f


0

(z)/f (z) and

f (z)
Res( , z0 ) = −m = −mult(z0 ). (12.1.7)
f (z)

The theorem now follows immediately from the Residue Theorem:



f (z)
∫ dz = 2πi sum of the residues
γ
f (z) (12.1.8)

= 2πi(∑ mult(zk ) − ∑ mult(pk )).

12.1.1 https://math.libretexts.org/@go/page/6544
 Definition

We write Z f ,γ for the sum of multiplicities of the zeros of f inside γ. Likewise for P f ,γ . So the Theorem 12.2.1 says,

f
∫ dz = 2πi(Zf ,γ − Pf ,γ ). (12.1.9)
γ
f

 Definition: Winding Number

We have an intuition for what this means. We define it formally via Cauchy’s formula. If γ is a closed curve then its winding
number (or index) about z is defined as
0

1 1
Ind(γ, z0 ) = ∫ dz. (12.1.10)
2πi γ
z − z0

Mapping Curves: f ∘ γ
One of the key notions in this topic is mapping one curve to another. That is, if z = γ(t) is a curve and w = f (z) is a function, then
w = f ∘ γ(t) = f (γ(t)) is another curve. We say f maps γ to f ∘ γ . We have done this frequently in the past, but it is important

enough to us now, so that we will stop here and give a few examples. This is a key concept in the argument principle and you should
make sure you are very comfortable with it.

 Example 12.1.1

Let γ(t) = e with 0 ≤ t ≤ 2π (the unit circle). Let f (z) = z . Describe the curve f ∘ γ .
it 2

Solution
Clearly f ∘ γ(t) = e 2it
traverses the unit circle twice as t goes from 0 to 2π.

 Example 12.1.2

Let γ(t) = it with −∞ < t < ∞ (the y -axis). Let f (z) = 1/(z + 1) . Describe the curve f ∘ γ(t) .
Solution
f (z)is a fractional linear transformation and maps the line given by γ to the circle through the origin centered at 1⁄2. By
checking at a few points:
1 1 +i 1 1 −i
f (−i) = = , f (0) = 1 , f (i) = = , f (∞) = 0 .
−i + 1 2 i +1 2

We see that the circle is traversed in a clockwise manner as t goes from −∞ to ∞.

The curve z = γ(t) = it is mapped to w = f ∘ γ(t)) = 1/(it + 1).

12.1.2 https://math.libretexts.org/@go/page/6544
Argument Principle
You will also see this called the principle of the argument.

 Theorem 12.1.2 Argument principle

For f and γ with the same setup as above



f (z)
∫ dz = 2πiInd(f ∘ γ, 0) = 2πi(Zf ,γ − Pf ,γ ) (12.1.11)
γ f (z)

Proof
Theorem 12.2.1 showed that

f (z)
∫ dz = 2πi(Zf ,γ − Pf ,γ ) (12.1.12)
γ f (z)

So we need to show is that the integral also equals the winding number given. This is simply the change of variables
w = f (z) . With this change of variables the countour z = γ(t) becomes w = f ∘ γ(t) and dw = f (z) dz so


f (z) dw
∫ dz = ∫ = 2πiInd(f ∘ γ, 0) (12.1.13)
γ f (z) f ∘γ
w

The last equality in the above equation comes from the definition of winding number.
Note that by assumption γ does not go through any zeros of f , so w = f (γ(t)) is never zero and 1/w in the integral is not a
problem.
Here is an easy corollary to the argument principle that will be useful to us later.

 Corollary
Assume that f ∘ γ does not go through −1, i.e. there are no zeros of 1 + f (z) on γ then

f
∫ = 2πiInd(f ∘ γ, −1) = 2πi(Z1+f ,γ − Pf ,γ ). (12.1.14)
γ
f +1

Proof
Applying the argument principle in Equation 12.2.11 to the function 1 + f (z) , we get

(1 + f ) f (z)
∫ dz = 2πiInd(1 + f ∘ γ, 0) = 2πi(Z1+f ,γ − P1+f ,γ )
γ 1 + f (z)

Now, we can compare each of the terms in this equation to those in Equation 12.2.14:
′ ′
(1 + f ) f (z) f f (z)
′ ′
∫ dz = ∫ dz (because (1 + f ) = f )
γ γ
1 + f (z) 1 + f (z)

Ind(1 + f ∘ γ, 0) = Ind(f ∘ γ, −1) (1 + f winds around 0 ⇔ winds around -1) (12.1.15)

Z1+f ,γ = Z1+f ,γ (same in both equation))

P1+f ,γ = Pf ,γ (poles of f = poles of 1 + f )

 Example 12.1.3

Let f (z) = z 2
+z Find the winding number of f ∘ γ around 0 for each of the following curves.
1. γ = circle of radius 2.
1

2. γ = circle of radius 1/2.


2

3. γ = circle of radius 1.
3

12.1.3 https://math.libretexts.org/@go/page/6544
Solution
f (z) has zeros at 0, −1. It has no poles.
So, f has no poles and two zeros inside γ . The argument principle says Ind(f ∘ γ
1 1, 0) = Zf ,γ − Pf ,γ = 2
1

Likewise f has no poles and one zero inside γ , so Ind(f ∘ γ


2 2 , 0) = 1 − 0 = 1

For γ a zero of f is on the curve, i.e.


3 f (−1) = 0 , so the argument principle doesn’t apply. The image of γ3 is shown in the
figure below – it goes through 0.

The image of 3 different circles under f (z) = z 2


+z .

Rouché’s Theorem

 Theorem 12.1.3: Rouché’s theorem

Make the following assumptions:


γ is a simple closed curve
f , h are analytic functions on and inside γ, except for some finite poles.

There are no poles of f and h on γ.


|h| < |f | everywhere on γ.

Then
Ind(f ∘ γ, 0) = Ind((f + h) ∘ γ, 0). (12.1.16)

That is,
Zf ,γ − Pf ,γ = Zf +h,γ − Pf +h,γ (12.1.17)

Proof
In class we gave a heuristic proof involving a person walking a dog around f ∘γ on a leash of length h∘γ . Here is the
analytic proof.
The argument principle requires the function to have no zeros or poles on γ . So we first show that this is true of
f , f + h, (f + h)/f . The argument is goes as follows.

Zeros: The fact that 0 ≤ |h| < |f | on γ implies f has no zeros on γ . It also implies f + h has no zeros on γ , since the value
of h is never big enough to cancel that of f . Since f and f + h have no zeros, neither does (f + h)/f .
Poles: By assumption f and h have no poles on γ , so f + h has no poles there. Since f has no zeors on γ , (f + h)/f has no
poles there.
Now we can apply the argument principle to f and f + h

12.1.4 https://math.libretexts.org/@go/page/6544

1 f
∫ dz = Ind(f ∘ γ, 0) = Zf ,γ − Pf ,γ . (12.1.18)
2πi γ f


1 (f + h)
∫ dz = Ind((f + h) ∘ γ, 0) = Zf +h,γ − Pf +h,γ . (12.1.19)
2πi γ
f +h

h h h f +h
Next, by assumption | | <1 , so ( )∘γ is inside the unit circle. This means that 1 + = maps γ to the inside of
f f f f

the unit disk centered at 1. (You should draw a figure for this.) This implies that
f +h
Ind(( ) ∘ γ, 0) = 0. (12.1.20)
f


f +h g
Let g = . The above says Ind(g ∘ γ, 0) = 0 . So, ∫ γ
dz = 0 . (We showed above that g has no zeros or poles on γ .)
f g

′ ′ ′
g (f + h) f
Now, it's easy to compute that = − . So, using
g f +h f

′ ′ ′
g (f + h) f
Ind(g ∘ γ, 0) = ∫ dz = ∫ dz − ∫ dz = 0 ⇒ Ind((f + h) ∘ γ, 0) = Ind(f ∘ γ, 0). (12.1.21)
γ
g γ
f +h γ
f

Now equations 12.2.19 and 12.2.20 tell us Z f ,γ − Pf ,γ = Zf +h,γ − Pf +h,γ , i.e. we have proved Rouchés theorem.

 Corollary

Under the same hypotheses, If h and f are analytic (no poles) then
Zf ,γ = Zf +h,γ . (12.1.22)

Proof
Since the functions are analytic P f ,γ and P f +h,γ are both 0. So Equation 12.2.18 shows Z f = Zf +h . QED

We think of h as a small perturbation of f .

 Example 12.1.4

Show all 5 zeros of z 5


+ 3z + 1 are inside the curve C 2 : |z| = 2 .
Solution
Let f (z) = z
5
and h(z) = 3z + 1 . Clearly all 5 roots of f (really one root with multiplicity 5) are inside C . Also clearly, 2

|h| < 7 < 32 = |f | on C . The corollary to Rouchés theorem says all 5 roots of f + h = z + 3z + 1 must also be inside the
2
5

curve.

 Example 12.1.5

Show z + 3 + 2e has one root in the left half-plane.


z

Solution
Let f (z) = z + 3 , h(z) = 2e . Consider the contour from −iR to iR along the y -axis and then the left semicircle of radius R
z

back to −iR. That is, the contour C + C shown below. 1 R

12.1.5 https://math.libretexts.org/@go/page/6544
To apply the corollary to Rouchés theorem we need to check that (for R large) |h| < |f | on C 1 + CR . On C , z = iy , so
1

iy
|f (z)| = |3 + iy| ≥ 3, |h(z)| = 2| e | = 2. (12.1.23)

So |h| < |f | on C .1

On C , z = x + iy with x < 0 and |z| = R . So,


R

x+iy x
|f (z)| > R − 3 for R large, |h(z)| = 2| e | = 2e < 2 (since x < 0). (12.1.24)

So |h| < |f | on C .R

The only zero of f ia at z = −3 , which lies inside the contour.


Therefore, by the Corollary to Rouchés theorem, f + h has the same number of roots as f inside the contour, that is 1. Now let
R go to infinity and we see that f + h has only one root in the entire half-plane.

 Theorem 12.1.4: Fundamental Theorem of Algebra

Rouchés theorem can be used to prove the fundamental theorem of algebra as follows.

Proof
Let
n n−1
P (z) = z + an−1 z + . . . +a0 (12.1.25)

be an n th order polynomial. Let f (z) = z and h = P


n
−f . Choose an R such that R > max(1, n|a n−1 |, . Then
. . . , n| a0 |)

on |z| = R we have
R R R
n−1 n−2 n−1 n−2 n
|h| ≤ | an−1 | R + | an−2 | R + . . . +| a0 | ≤ R + R + ...+ <R . (12.1.26)
n n n

On |z| = R we have |f (z)| = R , so we have shown |h| < |f | on the curve. Thus, the corollary to Rouchés theorem says
n

f + h and f have the same number of zeros inside |z| = R . Since we know f has exactly n zeros inside the curve the same

is true for the polynomial f + h . Now let R go to infinity, we’ve shown that f + h has exactly n zeros in the entire plane.

 Note

The proof gives a simple bound on the size of the zeros: they are all have magnitude less than or equal to
max(1, n| a |, . . . , n| a |).
n−1 0

This page titled 12.1: Principle of the Argument is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy
Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

12.1.6 https://math.libretexts.org/@go/page/6544
12.2: Nyquist Criterion for Stability
The Nyquist criterion is a graphical technique for telling whether an unstable linear time invariant system can be stabilized using a
negative feedback loop. We will look a little more closely at such systems when we study the Laplace transform in the next topic.
For this topic we will content ourselves with a statement of the problem with only the tiniest bit of physical context.

 Note
You have already encountered linear time invariant systems in 18.03 (or its equivalent) when you solved constant coefficient
linear differential equations.

System Functions
A linear time invariant system has a system function which is a function of a complex variable. Typically, the complex variable is
denoted by s and a capital letter is used for the system function.
Let G(s) be such a system function. We will make a standard assumption that G(s) is meromorphic with a finite number of (finite)
poles. This assumption holds in many interesting cases. For example, quite often G(s) is a rational function Q(s)/P (s) (Q and P
are polynomials).
We will be concerned with the stability of the system.

 Definition
The system with system function G(s) is called stable if all the poles of G are in the left half-plane. That is, if all the poles of
G have negative real part.

The system is called unstable if any poles are in the right half-plane, i.e. have positive real part.
For the edge case where no poles have positive real part, but some are pure imaginary we will call the system marginally
stable. This case can be analyzed using our techniques. For our purposes it would require and an indented contour along the
imaginary axis. If we have time we will do the analysis.

 Example 12.2.1
s
Is the system with system function G(s) = 2
stable?
(s + 2)(s + 4s + 5)

Solution
The poles are −2, −2 ± i . Since they are all in the left half-plane, the system is stable.

 Example 12.2.2
s
Is the system with system function G(s) = 2 2
stable?
(s − 4)(s + 4s + 5)

Solution
The poles are ±2, −2 ± i . Since one pole is in the right half-plane, the system is unstable.

 Example 12.2.3
s
Is the system with system function G(s) = 2
stable?
(s + 2)(s + 4)

Solution
The poles are −2, ±2i. There are no poles in the right half-plane. Since there are poles on the imaginary axis, the system is
marginally stable.

12.2.1 https://math.libretexts.org/@go/page/6545
Terminology. So far, we have been careful to say ‘the system with system function G(s) '. From now on we will allow ourselves to
be a little more casual and say ‘the system G(s) '. It is perfectly clear and rolls off the tongue a little easier!

Pole-zero Diagrams
We can visualize G(s) using a pole-zero diagram. This is a diagram in the s -plane where we put a small cross at each pole and a
small circle at each zero.

 Example 12.2.4

Give zero-pole diagrams for each of the systems


s s s
G1 (s) = , G1 (s) = , G1 (s) = (12.2.1)
(s + 2)(s2 + 4s + 5) (s2 − 4)(s2 + 4s + 5) (s + 2)(s2 + 4)

Solution
These are the same systems as in the examples just above. We first note that they all have a single zero at the origin. So we put
a circle at the origin and a cross at each pole.

Pole-zero diagrams for the three systems.

bit about stability


This is just to give you a little physical orientation. Given our definition of stability above, we could, in principle, discuss stability
without the slightest idea what it means for physical systems.
The poles of G(s) correspond to what are called modes of the system. A simple pole at s corresponds to a mode y
1 1 (t) =e
s1 t
. The
system is stable if the modes all decay to 0, i.e. if the poles are all in the left half-plane.
Physically the modes tell us the behavior of the system when the input signal is 0, but there are initial conditions. A pole with
positive real part would correspond to a mode that goes to infinity as t grows. It is certainly reasonable to call a system that does
this in response to a zero signal (often called ‘no input’) unstable.
To connect this to 18.03: if the system is modeled by a differential equation, the modes correspond to the homogeneous solutions
y(t) = e , where s is a root of the characteristic equation. In 18.03 we called the system stable if every homogeneous solution
st

decayed to 0. That is, if the unforced system always settled down to equilibrium.

Closed loop systems


If the system with system function G(s) is unstable it can sometimes be stabilized by what is called a negative feedback loop. The
new system is called a closed loop system. Its system function is given by Black's formula
G(s)
GC L (s) = , (12.2.2)
1 + kG(s)

where k is called the feedback factor. We will just accept this formula. Any class or book on control theory will derive it for you.
In this context G(s) is called the open loop system function.
Since G CL is a system function, we can ask if the system is stable.

12.2.2 https://math.libretexts.org/@go/page/6545
 Theorem 12.2.1

The poles of the closed loop system function G C L (s) given in Equation 12.3.2 are the zeros of 1 + kG(s) .

Proof
Looking at Equation 12.3.2, there are two possible sources of poles for G CL .
1. The zeros of the denominator 1 + kG . The theorem recognizes these.
2. The poles of G . Since G is in both the numerator and denominator of G it should be clear that the poles cancel. We
CL

can show this formally using Laurent series. If G has a pole of order n at s then 0

1 n n+1
G(s) = (bn + bn−1 (s − s0 ) + . . . a0 (s − s0 ) + a1 (s − s0 ) + . . . ), (12.2.3)
n
(s − s0 )

where b n ≠0 . So,
1
n
(bn + bn−1 (s − s0 ) + . . . a0 (s − s0 ) + ...)
n
(s − s0 )
GC L (s) =
k
n
1+ (bn + bn−1 (s − s0 ) + . . . a0 (s − s0 ) + ...) (12.2.4)
n
(s − s0 )

n
(bn + bn−1 (s − s0 ) + . . . a0 (s − s0 ) + ...)
=
n n
(s − s0 ) + k(bn + bn−1 (s − s0 ) + . . . a0 (s − s0 ) + ...)

which is clearly analytic at s . (At s it equals b


0 0 n /(kbn ) = 1/k .)

 Example 12.2.5
1
Set the feedback factor k = 1 . Assume a is real, for what values of a is the open loop system G(s) = stable? For what
s+a

values of a is the corresponding closed loop system G C L (s) stable?


(There is no particular reason that a needs to be real in this example. But in physical systems, complex poles will tend to come
in conjugate pairs.)
Solution
G(s) has one pole at s = −a . Thus, it is stable when the pole is in the left half-plane, i.e. for a > 0 .
The closed loop system function is
1/(s + a) 1
GC L (s) = = . (12.2.5)
1 + 1/(s + a) s+a+1

This has a pole at s = −a − 1 , so it's stable if a > −1 . The feedback loop has stabilized the unstable open loop systems with
−1 < a ≤ 0 . (Actually, for a = 0 the open loop is marginally stable, but it is fully stabilized by the closed loop.)

 Note

The algebra involved in canceling the s+a term in the denominators is exactly the cancellation that makes the poles of G

removable singularities in G . CL

 Example 12.2.6
s+1
Suppose G(s) = . Is the open loop system stable? Is the closed loop system stable when k = 2 .
s−1

Solution
G(s) has a pole in the right half-plane, so the open loop system is not stable. The closed loop system function is

12.2.3 https://math.libretexts.org/@go/page/6545
G (s + 1)/(s − 1) s+1
GC L (s) = = = . (12.2.6)
1 + kG 1 + 2(s + 1)/(s − 1) 3s + 1

The only pole is at s = −1/3 , so the closed loop system is stable. This is a case where feedback stabilized an unstable system.

 Example 12.2.7
s−1
G(s) = . Is the open loop system stable? Is the closed loop system stable when k = 2 .
s+1

Solution
The only plot of G(s) is in the left half-plane, so the open loop system is stable. The closed loop system function is
G (s − 1)/(s + 1) s−1
GC L (s) = = = . (12.2.7)
1 + kG 1 + 2(s − 1)/(s + 1) 3s − 1

This has one pole at s = 1/3 , so the closed loop system is unstable. This is a case where feedback destabilized a stable system.
It can happen!

Nyquist Plots
For the Nyquist plot and criterion the curve γ will always be the imaginary s -axis. That is
s = γ(ω) = iω, where − ∞ < ω < ∞. (12.2.8)

For a system G(s) and a feedback factor k , the Nyquist plot is the plot of the curve

w = kG ∘ γ(ω) = kG(iω). (12.2.9)

That is, the Nyquist plot is the image of the imaginary axis under the map w = kG(s) .

 Note
In γ(ω) the variable is a greek omega and in w = G ∘ γ we have a double-u.

 Example 12.2.8
1
Let G(s) = . Draw the Nyquist plot with k = 1 .
s+1

Solution
In the case G(s) is a fractional linear transformation, so we know it maps the imaginary axis to a circle. It is easy to check it is
the circle through the origin with center w = 1/2. You can also check that it is traversed clockwise.

Nyquist plot of G(s) = 1/(s + 1) , with k = 1 .

12.2.4 https://math.libretexts.org/@go/page/6545
 Example 12.2.9

Take G(s) from the previous example. Describe the Nyquist plot with gain factor k = 2 .
Solution
The Nyquist plot is the graph of kG(iω). The factor k = 2 will scale the circle in the previous example by 2. That is, the
Nyquist plot is the circle through the origin with center w = 1 .
In general, the feedback factor will just scale the Nyquist plot.

Nyquist Criterion
The Nyquist criterion gives a graphical method for checking the stability of the closed loop system.

 Theorem 12.2.2 Nyquist criterion

Suppose that G(s) has a finite number of zeros and poles in the right half-plane. Also suppose that G(s) decays to 0 as s goes
to infinity. Then the closed loop system with feedback factor k is stable if and only if the winding number of the Nyquist plot
around w = −1 equals the number of poles of G(s) in the right half-plane.
More briefly,

GC L (s) is stable ⇔ Ind(kG ∘ γ, −1) = PG,RH P (12.2.10)

Here, γ is the imaginary s -axis and PG,RH P is the number o poles of the original open loop system function G(s) in the
right half-plane.

Proof
G_{CL} is stable exactly when all its poles are in the left half-plane. Now, recall that the poles of G_{CL} are exactly the zeros of 1 + kG. So, stability of G_{CL} is exactly the condition that the number of zeros of 1 + kG in the right half-plane is 0.

Let’s work with a familiar contour.

Let γ_R = C_1 + C_R. Note that γ_R is traversed in the clockwise direction. Choose R large enough that the (finite number) of poles and zeros of G in the right half-plane are all inside γ_R. Now we can apply Equation 12.2.4 in the corollary to the argument principle to kG(s) and γ_R to get

−Ind(kG ∘ γ_R, −1) = Z_{1+kG, γ_R} − P_{G, γ_R}   (12.2.11)

(The minus sign is because of the clockwise direction of the curve.) Thus, for all large R

the system is stable  ⇔  Z_{1+kG, γ_R} = 0  ⇔  Ind(kG ∘ γ_R, −1) = P_{G, γ_R}   (12.2.12)

Finally, we can let R go to infinity. The assumption that G(s) decays to 0 as s goes to ∞ implies that in the limit, the entire curve kG ∘ C_R becomes a single point at the origin. So in the limit kG ∘ γ_R becomes kG ∘ γ. QED

Examples using the Nyquist Plot mathlet
The Nyquist criterion is a visual method which requires some way of producing the Nyquist plot. For this we will use one of the
MIT Mathlets (slightly modified for our purposes). Open the Nyquist Plot applet at

http://web.mit.edu/jorloff/www/jmoapplets/nyquist/nyquistCrit.html

Play with the applet, read the help.


Now refresh the browser to restore the applet to its original state. Check the Formula box. The formula is an easy way to read off
the values of the poles and zeros of G(s). In its original state, the applet should have a zero at s = 1 and poles at s = 0.33 ± 1.75i.
The left hand graph is the pole-zero diagram. The right hand graph is the Nyquist plot.

 Example 12.2.10

To get a feel for the Nyquist plot, look at the pole-zero diagram and use the mouse to drag the yellow point up and down the
imaginary axis. Its image under kG(s) will trace out the Nyquist plot.
Notice that when the yellow dot is at either end of the axis its image on the Nyquist plot is close to 0.

 Example 12.2.11

Refresh the page, to put the zero and poles back to their original state. There are two poles in the right half-plane, so the open
loop system G(s) is unstable. With k = 1 , what is the winding number of the Nyquist plot around -1? Is the closed loop
system stable?
Solution
The curve winds twice around -1 in the counterclockwise direction, so the winding number Ind(kG ∘ γ, −1) = 2 . Since the
number of poles of G in the right half-plane is the same as this winding number, the closed loop system is stable.

 Example 12.2.12
With the same poles and zeros, move the k slider and determine what range of k makes the closed loop system stable.
Solution
When k is small the Nyquist plot has winding number 0 around -1. For these values of k, G_{CL} is unstable. As k increases, somewhere between k = 0.65 and k = 0.7 the winding number jumps from 0 to 2 and the closed loop system becomes stable. This continues until k is between 3.10 and 3.20, at which point the winding number becomes 1 and G_{CL} becomes unstable.

Answer: The closed loop system is stable for k (roughly) between 0.7 and 3.10.

 Example 12.2.13

In the previous problem, could you determine analytically the range of k where G_{CL}(s) is stable?
Solution
Yes! This is possible for small systems. It is more challenging for higher order systems, but there are methods that don’t
require computing the poles. In this case, we have
G_{CL}(s) = G(s)/(1 + kG(s)) = [(s − 1)/((s − 0.33)^2 + 1.75^2)] / [1 + k(s − 1)/((s − 0.33)^2 + 1.75^2)] = (s − 1)/((s − 0.33)^2 + 1.75^2 + k(s − 1))

So the poles are the roots of

(s − 0.33)^2 + 1.75^2 + k(s − 1) = s^2 + (k − 0.66)s + 0.33^2 + 1.75^2 − k

For a quadratic with positive coefficients the roots both have negative real part. This happens when
0.66 < k < 0.33^2 + 1.75^2 ≈ 3.17.
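This range is easy to confirm numerically. The sketch below (illustrative only) sweeps k and checks whether the closed loop poles, i.e. the roots of the denominator of G_{CL} worked out above, all have negative real part.

```python
# Sketch: verify the stability range 0.66 < k < 3.17 for Example 12.2.13.
import numpy as np

num = np.poly([1.0])                                  # zero at s = 1
den = np.poly([0.33 + 1.75j, 0.33 - 1.75j]).real      # poles at 0.33 +- 1.75i

def is_stable(k):
    poles = np.roots(np.polyadd(den, k * num))        # poles of G_CL
    return np.all(poles.real < 0)

ks = np.arange(0.0, 4.0, 0.01)
stable = [k for k in ks if is_stable(k)]
print(round(min(stable), 2), round(max(stable), 2))   # about 0.67 and 3.17
```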

 Example 12.2.14

What happens as k goes to 0?


Solution
As k goes to 0, the Nyquist plot shrinks to a single point at the origin. In this case the winding number around -1 is 0 and the
Nyquist criterion says the closed loop system is stable if and only if the open loop system is stable.
This should make sense, since with k = 0,

G_{CL} = G/(1 + kG) = G.

 Example 12.2.15

Make a system with the following zeros and poles:


A pair of zeros at 0.6 ± 0.75i.
A pair of poles at −0.5 ± 2.5i.
A single pole at 0.25.
Is the corresponding closed loop system stable when k = 6 ?
Solution
The answer is no, G_{CL} is not stable. G has one pole in the right half-plane. The mathlet shows the Nyquist plot winds once around w = −1 in the clockwise direction. So the winding number is -1, which does not equal the number of poles of G in the right half-plane.
If we set k = 3 , the closed loop system is stable.

This page titled 12.2: Nyquist Criterion for Stability is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by
Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit
history is available upon request.

12.3: A Bit on Negative Feedback
Given Equation 12.3.2, in 18.04 we can ask if there are any poles in the right half-plane without needing any underlying physical
model. Still, it’s nice to have some sense of where this fits into science and engineering.
In a negative feedback loop the output of the system is looped back and subtracted from the input.

 Example 12.3.1
The heating system in my house is an example of a system stabilized by feedback. The thermostat is the feedback mechanism.
When the temperature outside (input signal) goes down the heat turns on. Without the thermostat it would stay on and overheat
my house. The thermostat turns the heat up or down depending on whether the inside temperature (the output signal) is too low
or too high (negative feedback).

 Example 12.3.2
Walking or balancing on one foot are examples of negative feedback systems. If you feel yourself falling you compensate by
shifting your weight or tensing your muscles to counteract the unwanted acceleration.

This page titled 12.3: A Bit on Negative Feedback is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by
Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit
history is available upon request.

CHAPTER OVERVIEW
13: Laplace Transform
The Laplace transform takes a function of time and transforms it to a function of a complex variable s. Because the transform is
invertible, no information is lost and it is reasonable to think of a function f (t) and its Laplace transform F (s) as two views of the
same phenomenon. Each view has its uses and some features of the phenomenon are easier to understand in one view or the other.
We can use the Laplace transform to transform a linear time invariant system from the time domain to the s-domain. This leads to
the system function G(s) for the system –this is the same system function used in the Nyquist criterion for stability.
One important feature of the Laplace transform is that it can transform analytic problems to algebraic problems. We will see
examples of this for differential equations.
13.1: A brief introduction to linear time invariant systems
13.2: Laplace transform
13.3: Exponential Type
13.4: Properties of Laplace transform
13.5: Differential equations
13.6: Table of Laplace transforms
13.7: System Functions and the Laplace Transform
13.8: Laplace inverse
13.9: Delay and Feedback

This page titled 13: Laplace Transform is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff
(MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

13.1: A brief introduction to linear time invariant systems
Let’s start by defining our terms.
Signal. A signal is any function of time.
System. A system is some machine or procedure that takes one signal as input, does something with it, and produces another signal
as output.
Linear system. A linear system is one that acts linearly on inputs. That is, if f_1(t) and f_2(t) are inputs to the system with outputs y_1(t) and y_2(t) respectively, then the input f_1 + f_2 produces the output y_1 + y_2 and, for any constant c, the input cf_1 produces output cy_1.

This is often phrased in one sentence as: the input c_1 f_1 + c_2 f_2 produces the output c_1 y_1 + c_2 y_2, i.e. a linear combination of inputs produces the corresponding linear combination of outputs.
Time invariance. Suppose a system takes input signal f (t) and produces output signal y(t). The system is called time invariant if
the input signal g(t) = f (t − a) produces output signal y(t − a) .
LTI. We will call a linear time invariant system an LTI system.

 Example 13.1.1
Consider the constant coefficient differential equation
3y′′ + 8y′ + 7y = f(t)   (13.1.1)

This equation models a damped harmonic oscillator, say a mass on a spring with a damper, where f (t) is the force on the mass
and y(t) is its displacement from equilibrium. If we consider f to be the input and y the output, then this is a linear time
invariant (LTI) system.

 Example 13.1.2
There are many variations on this theme. For example, we might have the LTI system
3y′′ + 8y′ + 7y = f′(t)   (13.1.2)

where we call f (t) the input signal and y(t) the output signal.

This page titled 13.1: A brief introduction to linear time invariant systems is shared under a CC BY-NC-SA 4.0 license and was authored,
remixed, and/or curated by Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts
platform; a detailed edit history is available upon request.

13.2: Laplace transform
 Definition

The Laplace transform of a function f (t) is defined by the integral



L(f; s) = ∫_0^∞ e^{−st} f(t) dt,   (13.2.1)

for those s where the integral converges. Here s is allowed to take complex values.

 Important note

The Laplace transform is only concerned with f(t) for t ≥ 0. Generally speaking, we can require f(t) = 0 for t < 0.

 Standard notation

Where the notation is clear, we will use an upper case letter to indicate the Laplace transform, e.g, L(f ; s) = F (s) .

The Laplace transform we defined is sometimes called the one-sided Laplace transform. There is a two-sided version where the
integral goes from −∞ to ∞.

First examples
Let’s compute a few examples. We will also put these results in the Laplace transform table at the end of these notes.

 Example 13.2.1
Let f(t) = e^{at}. Compute F(s) = L(f; s) directly. Give the region in the complex s-plane where the integral converges.

L(e^{at}; s) = ∫_0^∞ e^{at} e^{−st} dt = ∫_0^∞ e^{(a−s)t} dt = e^{(a−s)t}/(a − s) |_0^∞ = 1/(s − a) if Re(s) > Re(a), divergent otherwise.   (13.2.2)

The last formula comes from plugging ∞ into the exponential. This is 0 if Re(a − s) < 0 and undefined otherwise.

 Example 13.2.2
Let f (t) = b . Compute F (s) = L(f ; s) directly. Give the region in the complex s -plane where the integral converges.
L(b; s) = ∫_0^∞ b e^{−st} dt = b e^{−st}/(−s) |_0^∞ = b/s if Re(s) > 0, divergent otherwise.   (13.2.3)

The last formula comes from plugging ∞ into the exponential. This is 0 if Re(−s) < 0 and undefined otherwise.

 Example 13.2.3
Let f (t) = t . Compute F (s) = L(f ; s) directly. Give the region in the complex s -plane where the integral converges.

L(t; s) = ∫_0^∞ t e^{−st} dt = (t e^{−st}/(−s) − e^{−st}/s^2) |_0^∞ = 1/s^2 if Re(s) > 0, divergent otherwise.   (13.2.4)

 Example 13.2.4

Compute
L(cos(ωt)). (13.2.5)

Solution
We use the formula
cos(ωt) = (e^{iωt} + e^{−iωt})/2.   (13.2.6)

So,

L(cos(ωt); s) = (1/(s − iω) + 1/(s + iω))/2 = s/(s^2 + ω^2).   (13.2.7)
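These first few transforms can be double-checked with a computer algebra system. The following SymPy sketch (illustrative only, not part of the original notes) reproduces Equations 13.2.2-13.2.7.

```python
# Sketch: check the basic Laplace transforms with SymPy.
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s')
a, b, w = sp.symbols('a b omega', positive=True)

for f in (sp.exp(a*t), b, t, sp.cos(w*t)):
    F = sp.laplace_transform(f, t, s, noconds=True)
    print(f, '->', sp.simplify(F))
# exp(a*t) -> 1/(s - a),  b -> b/s,  t -> 1/s**2,  cos(omega*t) -> s/(omega**2 + s**2)
```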

Connection to Fourier transform


The Laplace and Fourier transforms are intimately connected. In fact, the Laplace transform is often called the Fourier-Laplace
transform. To see the connection we’ll start with the Fourier transform of a function f (t).

f̂(ω) = ∫_{−∞}^∞ f(t)e^{−iωt} dt.   (13.2.8)

If we assume f(t) = 0 for t < 0, this becomes

f̂(ω) = ∫_0^∞ f(t)e^{−iωt} dt.   (13.2.9)

Now if s = iω then the Laplace transform is

L(f; s) = L(f; iω) = ∫_0^∞ f(t)e^{−iωt} dt.   (13.2.10)

Comparing these two equations we see that f̂(ω) = L(f; iω). We see the transforms are basically the same thing using different notation, at least for functions that are 0 for t < 0.

This page titled 13.2: Laplace transform is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff
(MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

13.3: Exponential Type
The Laplace transform is defined when the integral for it converges. Functions of exponential type are a class of functions for
which the integral converges for all s with Re(s) large enough.

 Definition
We say that f(t) has exponential type a if there exists an M such that |f(t)| < M e^{at} for all t ≥ 0.

 Note

As we’ve defined it, the exponential type of a function is not unique. For example, a function of exponential type 2 is clearly
also of exponential type 3. It’s nice, but not always necessary, to find the smallest exponential type for a function.

 Theorem 13.3.1

If f has exponential type a then L(f ) converges absolutely for Re(s) > a .

Proof
We prove absolute convergence by bounding
|f(t)e^{−st}|.   (13.3.1)

The key here is that Re(s) > a implies Re(a − s) < 0. So, we can write

∫_0^∞ |f(t)e^{−st}| dt ≤ ∫_0^∞ |M e^{(a−s)t}| dt = ∫_0^∞ M e^{Re(a−s)t} dt   (13.3.2)

The last integral clearly converges when Re(a − s) < 0. QED

 Example 13.3.1

Here is a list of some functions of exponential type.


f(t) = e^{at}:  |f(t)| < 2e^{Re(a)t}  (exponential type Re(a))
f(t) = 1:  |f(t)| < 2 = 2e^{0·t}  (exponential type 0)   (13.3.3)
f(t) = cos(ωt):  |f(t)| ≤ 1  (exponential type 0)

In the above, all of the inequalities are for t ≥ 0 .


For f(t) = t, it is clear that for any a > 0 there is an M depending on a such that |f(t)| ≤ M e^{at} for t ≥ 0. In fact, it is a simple calculus exercise to show M = 1/(ae) works. So, f(t) = t has exponential type a for any a > 0.

The same is true of t^n. It's worth pointing out that this follows because, if f has exponential type a and g has exponential type b, then fg has exponential type a + b. So, if t has exponential type a then t^n has exponential type na.
n

This page titled 13.3: Exponential Type is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff
(MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

13.4: Properties of Laplace transform
We have already used the linearity of Laplace transform when we computed L(cos(ωt)) . Let’s officially record it as a property.

 Property 1
The Laplace transform is linear. That is, if a and b are constants and f and g are functions then
L(af + bg) = aL(f ) + bL(g). (13.4.1)

(The proof is trivial –integration is linear.)

 Property 2

A key property of the Laplace transform is that, with some technical details,
Laplace transform transforms derivatives in t to multiplication by s (plus some details).
This is proved in the following theorem.

 Theorem 13.4.1
If f(t) has exponential type a and Laplace transform F(s) then

L(f′(t); s) = sF(s) − f(0),  valid for Re(s) > a.   (13.4.2)

Proof
We prove this using integration by parts.
L(f′; s) = ∫_0^∞ f′(t)e^{−st} dt = f(t)e^{−st} |_0^∞ + ∫_0^∞ s f(t)e^{−st} dt = −f(0) + sF(s).

In the last step we used the fact that at t = ∞, f(t)e^{−st} = 0, which follows from the assumption about exponential type.
Equation 13.5.2 gives us formulas for all derivatives of f .
L(f′′; s) = s^2 F(s) − s f(0) − f′(0)   (13.4.3)

L(f′′′; s) = s^3 F(s) − s^2 f(0) − s f′(0) − f′′(0)   (13.4.4)

Proof. For Equation 13.5.3:

L(f′′; s) = L((f′)′; s) = sL(f′; s) − f′(0) = s(sF(s) − f(0)) − f′(0) = s^2 F(s) − s f(0) − f′(0). QED

The proof of Equation 13.5.4 is similar. Also, similar statements hold for higher order derivatives.

 Note
There is a further complication if we want to consider functions that are discontinuous at the origin or if we want to
allow f (t) to be a generalized function like δ(t). In these cases f (0) is not defined, so our formulas are undefined. The
technical fix is to replace 0 by 0⁻ in the definition and all of the formulas for the Laplace transform. You can learn more about this by taking 18.031.

 Property 3. Theorem 13.4.2


If f (t) has exponential type a , then F (s) is an analytic function for Re(s) > a and

F′(s) = −L(tf(t); s).   (13.4.5)

Proof

We take the derivative of F (s) . The absolute convergence for Re(s) large guarantees that we can interchange the order of
integration and taking the derivative.
F′(s) = d/ds ∫_0^∞ f(t)e^{−st} dt = ∫_0^∞ −t f(t)e^{−st} dt = L(−t f(t); s).

This proves Equation 13.5.5.


Equation 13.5.5 is called the s -derivative rule. We can extend it to more derivatives in s : Suppose L(f ; s) = F (s) . Then,

L(t f(t); s) = −F′(s)   (13.4.6)

L(t^n f(t); s) = (−1)^n F^{(n)}(s)   (13.4.7)

Equation 13.5.6 is the same as Equation 13.5.5 above. Equation 13.5.7 follows from this.

 Example 13.4.1

Use the s-derivative rule and the formula L(1; s) = 1/s to compute the Laplace transform of t^n for n a positive integer.

Solution
Let f(t) = 1 and F(s) = L(f; s). Using the s-derivative rule we get

L(t; s) = L(tf; s) = −F′(s) = 1/s^2
L(t^2; s) = L(t^2 f; s) = (−1)^2 F′′(s) = 2/s^3
L(t^n; s) = L(t^n f; s) = (−1)^n F^{(n)}(s) = n!/s^{n+1}
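This pattern is easy to verify symbolically: differentiating F(s) = 1/s repeatedly and attaching the sign (−1)^n reproduces n!/s^{n+1}. The SymPy sketch below is illustrative only.

```python
# Sketch: check L(t^n; s) = (-1)^n F^(n)(s) = n!/s^(n+1) for F(s) = 1/s.
import sympy as sp

s = sp.symbols('s', positive=True)
F = 1 / s                                     # F(s) = L(1; s)

for n in range(5):
    lhs = (-1)**n * sp.diff(F, s, n)          # (-1)^n F^(n)(s)
    rhs = sp.factorial(n) / s**(n + 1)        # n!/s^(n+1)
    assert sp.simplify(lhs - rhs) == 0
print("s-derivative rule checked for n = 0,...,4")
```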

 Property 4 t-shift rule.

As usual, assume f (t) = 0 for t < 0 . Suppose a > 0 . Then,


L(f(t − a); s) = e^{−as} F(s)   (13.4.8)

Proof
We go back to the definition of the Laplace transform and make the change of variables τ = t −a .
L(f(t − a); s) = ∫_0^∞ f(t − a)e^{−st} dt = ∫_a^∞ f(t − a)e^{−st} dt = ∫_0^∞ f(τ)e^{−s(τ+a)} dτ = e^{−sa} ∫_0^∞ f(τ)e^{−sτ} dτ = e^{−sa} F(s).

The properties in Equations 13.5.1-13.5.8 will be used in examples below. They are also in the table at the end of these notes.

This page titled 13.4: Properties of Laplace transform is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by
Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit
history is available upon request.

13.5: Differential equations
Coverup Method
We are going to use partial fractions and the coverup method. We will assume you have seen partial fractions. If you don't remember them well, or have never seen the coverup method, you should review them before continuing.

 Example 13.5.1
Solve y′′ − y = e^{2t}, y(0) = 1, y′(0) = 1 using the Laplace transform.
Solution
Call L(y) = Y. Applying the Laplace transform to the equation gives

(s^2 Y − s y(0) − y′(0)) − Y = 1/(s − 2)

A little bit of algebra now gives

(s^2 − 1)Y = 1/(s − 2) + s + 1.

So

Y = 1/((s − 2)(s^2 − 1)) + (s + 1)/(s^2 − 1) = 1/((s − 2)(s^2 − 1)) + 1/(s − 1)

Use partial fractions to write

Y = A/(s − 2) + B/(s − 1) + C/(s + 1) + 1/(s − 1).

The coverup method gives A = 1/3, B = −1/2, C = 1/6.

We recognize 1/(s − a) as the Laplace transform of e^{at}, so

y(t) = Ae^{2t} + Be^t + Ce^{−t} + e^t = (1/3)e^{2t} − (1/2)e^t + (1/6)e^{−t} + e^t.
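A quick symbolic check (illustrative only, not part of the original notes) confirms that this y(t) satisfies the differential equation and both initial conditions.

```python
# Sketch: verify y(t) = e^{2t}/3 - e^t/2 + e^{-t}/6 + e^t solves
# y'' - y = e^{2t}, y(0) = 1, y'(0) = 1.
import sympy as sp

t = sp.symbols('t')
y = sp.exp(2*t)/3 - sp.exp(t)/2 + sp.exp(-t)/6 + sp.exp(t)

print(sp.simplify(sp.diff(y, t, 2) - y - sp.exp(2*t)))  # 0
print(y.subs(t, 0), sp.diff(y, t).subs(t, 0))           # 1, 1
```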

 Example 13.5.2
Solve y′′ − y = 1, y(0) = 0, y′(0) = 0.
Solution
The rest (zero) initial conditions are nice because they will not add any terms to the algebra. As in the previous example we
apply the Laplace transform to the entire equation.
s^2 Y − Y = 1/s,  so  Y = 1/(s(s^2 − 1)) = 1/(s(s − 1)(s + 1)) = A/s + B/(s − 1) + C/(s + 1)

The coverup method gives A = −1, B = 1/2, C = 1/2. So,

y = A + Be^t + Ce^{−t} = −1 + (1/2)e^t + (1/2)e^{−t}.

This page titled 13.5: Differential equations is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy
Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is

available upon request.

13.6: Table of Laplace transforms
Properties and Rules
We assume that f (t) = 0 for t < 0 .
Function                          Transform
f(t)                              F(s) = ∫_0^∞ f(t)e^{−st} dt   (Definition)
af(t) + bg(t)                     aF(s) + bG(s)   (Linearity)
e^{at} f(t)                       F(s − a)   (s-shift)
f′(t)                             sF(s) − f(0)
f′′(t)                            s^2 F(s) − s f(0) − f′(0)
f^{(n)}(t)                        s^n F(s) − s^{n−1} f(0) − ⋅⋅⋅ − f^{(n−1)}(0)
t f(t)                            −F′(s)
t^n f(t)                          (−1)^n F^{(n)}(s)
f(t − a)                          e^{−as} F(s)   (t-translation or t-shift)
∫_0^t f(τ) dτ                     F(s)/s   (integration rule)
f(t)/t                            ∫_s^∞ F(σ) dσ

Function                                Transform                        Region of convergence
1                                       1/s                              Re(s) > 0
e^{at}                                  1/(s − a)                        Re(s) > Re(a)
t                                       1/s^2                            Re(s) > 0
t^n                                     n!/s^{n+1}                       Re(s) > 0
cos(ωt)                                 s/(s^2 + ω^2)                    Re(s) > 0
sin(ωt)                                 ω/(s^2 + ω^2)                    Re(s) > 0
e^{at} cos(ωt)                          (s − a)/((s − a)^2 + ω^2)        Re(s) > Re(a)
e^{at} sin(ωt)                          ω/((s − a)^2 + ω^2)              Re(s) > Re(a)
δ(t)                                    1                                all s
δ(t − a)                                e^{−as}                          all s
cosh(kt) = (e^{kt} + e^{−kt})/2         s/(s^2 − k^2)                    Re(s) > k
sinh(kt) = (e^{kt} − e^{−kt})/2         k/(s^2 − k^2)                    Re(s) > k
(1/(2ω^3))(sin(ωt) − ωt cos(ωt))        1/(s^2 + ω^2)^2                  Re(s) > 0
(t/(2ω)) sin(ωt)                        s/(s^2 + ω^2)^2                  Re(s) > 0
(1/(2ω))(sin(ωt) + ωt cos(ωt))          s^2/(s^2 + ω^2)^2                Re(s) > 0
t^n e^{at}                              n!/(s − a)^{n+1}                 Re(s) > Re(a)
1/√(πt)                                 1/√s                             Re(s) > 0
t^a                                     Γ(a + 1)/s^{a+1}                 Re(s) > 0
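Any row of these tables can be spot-checked by numerically evaluating the defining integral ∫_0^∞ f(t)e^{−st} dt at a sample real s. The sketch below is illustrative only; the values s = 2, ω = 3, a = 0.5 are arbitrary choices, not part of the table.

```python
# Sketch: spot-check a few table rows by numerical quadrature.
import numpy as np
from scipy.integrate import quad

def laplace_numeric(f, s):
    val, _ = quad(lambda t: f(t) * np.exp(-s * t), 0, np.inf)
    return val

s, w, a = 2.0, 3.0, 0.5

checks = [
    (lambda t: np.cos(w * t),                 s / (s**2 + w**2)),
    (lambda t: (t / (2 * w)) * np.sin(w * t), s / (s**2 + w**2)**2),
    (lambda t: t**3 * np.exp(a * t),          6 / (s - a)**4),
]
for f, expected in checks:
    print(laplace_numeric(f, s), expected)    # the two columns should agree
```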

This page titled 13.6: Table of Laplace transforms is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by
Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit

history is available upon request.

13.7: System Functions and the Laplace Transform
When we introduced the Nyquist criterion for stability we stated without any justification that the system was stable if all the poles
of the system function G(s) were in the left half-plane. We also asserted that the poles corresponded to exponential modes of the
system. In this section we’ll use the Laplace transform to more fully develop these ideas for differential equations.

Lightning review of 18.03

 Definitions
1. D = d/dt is called a differential operator. Applied to a function f(t) we have

Df = df/dt.   (13.7.1)

We read Df as 'D applied to f .'

 Example 13.7.1
If f(t) = t^3 + 2 then Df = 3t^2, D^2 f = 6t.

2. If P (s) is a polynomial then P (D) is called a polynomial differential operator.

 Example 13.7.2
Suppose P(s) = s^2 + 8s + 7. What is P(D)? Compute P(D) applied to f(t) = t^3 + 2t + 5. Compute P(D) applied to g(t) = e^{2t}.

Solution

P(D) = D^2 + 8D + 7I. (The I in 7I is the identity operator.) To compute P(D)f we compute all the terms and sum them up:

f(t) = t^3 + 2t + 5
Df(t) = 3t^2 + 2   (13.7.2)
D^2 f(t) = 6t

Therefore: (D^2 + 8D + 7I)f = 6t + 8(3t^2 + 2) + 7(t^3 + 2t + 5) = 7t^3 + 24t^2 + 20t + 51.

g(t) = e^{2t}
Dg(t) = 2e^{2t}   (13.7.3)
D^2 g(t) = 4e^{2t}

Therefore: (D^2 + 8D + 7I)g = 4e^{2t} + 8(2)e^{2t} + 7e^{2t} = (4 + 16 + 7)e^{2t} = P(2)e^{2t}.
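The two computations in this example are easy to reproduce with a computer algebra system. The SymPy sketch below is illustrative only, not part of the original notes.

```python
# Sketch: apply P(D) = D^2 + 8D + 7I symbolically and compare with the example.
import sympy as sp

t = sp.symbols('t')

def P_of_D(u):
    return sp.diff(u, t, 2) + 8*sp.diff(u, t) + 7*u

f = t**3 + 2*t + 5
g = sp.exp(2*t)

print(sp.expand(P_of_D(f)))          # 7*t**3 + 24*t**2 + 20*t + 51
print(sp.simplify(P_of_D(g) / g))    # 27  (= P(2) = 4 + 16 + 7)
```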

The substitution rule is a straightforward statement about the derivatives of exponentials.

 Theorem 13.7.1 Substitution rule


For a polynomial differential operator P (D) we have
P(D)e^{st} = P(s)e^{st}.   (13.7.4)

Proof
This is obvious. We 'prove it' by example. Let P(D) = D^2 + 8D + 7I. Then

P(D)e^{at} = a^2 e^{at} + 8a e^{at} + 7e^{at} = (a^2 + 8a + 7)e^{at} = P(a)e^{at}.   (13.7.5)

Let’s continue to work from this specific example. From it we’ll be able to remind you of the general approach to solving
constant coefficient differential equations.

 Example 13.7.3

Suppose P(s) = s^2 + 8s + 7. Find the exponential modes of the equation P(D)y = 0.
Solution
The exponential modes are solutions of the form y(t) = e^{s_0 t}. Using the substitution rule

P(D)e^{s_0 t} = 0 ⇔ P(s_0) = 0.   (13.7.6)

That is, y(t) = e^{s_0 t} is a mode exactly when s_0 is a root of P(s). The roots of P(s) are -1, -7. So the modal solutions are

y_1(t) = e^{−t} and y_2(t) = e^{−7t}.   (13.7.7)

 Example 13.7.4

Redo the previous example using the Laplace transform.


Solution
For this we solve the differential equation with arbitrary initial conditions:
P(D)y = y′′ + 8y′ + 7y = 0;  y(0) = c_1, y′(0) = c_2.   (13.7.8)

Let Y(s) = L(y; s). Applying the Laplace transform to the equation we get

(s^2 Y(s) − s y(0) − y′(0)) + 8(sY(s) − y(0)) + 7Y(s) = 0   (13.7.9)

Algebra:

(s^2 + 8s + 7)Y(s) − s c_1 − c_2 − 8c_1 = 0  ⇔  Y = (s c_1 + 8c_1 + c_2)/(s^2 + 8s + 7)   (13.7.10)

Factoring the denominator and using partial fractions, we get

Y(s) = (s c_1 + 8c_1 + c_2)/(s^2 + 8s + 7) = (s c_1 + 8c_1 + c_2)/((s + 1)(s + 7)) = A/(s + 1) + B/(s + 7).   (13.7.11)

We are unconcerned with the exact values of A and B. Taking the Laplace inverse we get

y(t) = Ae^{−t} + Be^{−7t}.   (13.7.12)

That is, y(t) is a linear combination of the exponential modes.


You should notice that the denominator in the expression for Y (s) is none other than the characteristic polynomial P (s) .

System Function

 Example 13.7.5

With the same P(s) as in Example 13.7.2 solve the inhomogeneous DE with rest initial conditions: P(D)y = f(t), y(0) = 0, y′(0) = 0.

Solution
Taking the Laplace transform of the equation we get
P (s)Y (s) = F (s). (13.7.13)

Therefore

Y(s) = (1/P(s)) F(s)   (13.7.14)

We can't find y(t) explicitly because f (t) isn't specified.


But, we can make the following definitions and observations. Let G(s) = 1/P (s) . If we declare f to be the input and y the
output of this linear time invariant system, then G(s) is called the system function. So, we have

Y (s) = G(s) ⋅ F (s). (13.7.15)

The formula Y = G⋅ F can be phrased as


output = system function × input.
Note well, the roots of P (s) correspond to the exponential modes of the system, i.e. the poles of G(s) correspond to the
exponential modes.
The system is called stable if the modes all decay to 0 as t goes to infinity. That is, if all the poles have negative real part.

 Example 13.7.6

This example is to emphasize that not all system functions are of the form 1/P (s) . Consider the system modeled by the
differential equation

P (D)x = Q(D)f , (13.7.16)

where P and Q are polynomials. Suppose we consider f to be the input and x to be the output. Find the system function.
Solution
If we start with rest initial conditions for x and f then the Laplace transform gives P (s)X(s) = Q(s)F (s) or
X(s) = (Q(s)/P(s)) ⋅ F(s)   (13.7.17)

Using the formulation


output = system function × input.
we see that the system function is
G(s) = Q(s)/P(s).   (13.7.18)

Note that when f(t) = 0 the differential equation becomes P(D)x = 0. If we make the assumption that Q(s)/P(s) is in
reduced form, i.e. P and Q have no common zeros, then the modes of the system (which correspond to the roots of P (s) ) are
still the poles of the system function.

 Comments

All LTI systems have system functions. They are not even all of the form Q(s)/P (s). But, in the s -domain, the output is
always the system function times the input. If the system function is not rational then it may have an infinite number of poles.
Stability is harder to characterize, but under some reasonable assumptions the system will be stable if all the poles are in the
left half-plane.
The system function is also called the transfer function. You can think of it as describing how the system transfers the input to
the output.

This page titled 13.7: System Functions and the Laplace Transform is shared under a CC BY-NC-SA 4.0 license and was authored, remixed,
and/or curated by Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform;
a detailed edit history is available upon request.

13.8: Laplace inverse
Up to now we have computed the inverse Laplace transform by table lookup. For example, L^{−1}(1/(s − a)) = e^{at}. To do this properly we should first check that the Laplace transform has an inverse.
We start with the bad news: Unfortunately this is not strictly true. There are many functions with the same Laplace transform. We
list some of the ways this can happen.
1. If f (t) = g(t) for t ≥ 0 , then clearly F (s) = G(s) . Since the Laplace transform only concerns t ≥ 0 , the functions can differ
completely for t < 0 .
2. Suppose f(t) = e^{at} and

g(t) = f(t) for t ≠ 1, and g(t) = 0 for t = 1.   (13.8.1)

That is, f and g are the same except we arbitrarily assigned them different values at t = 1 . Then, since the integrals won’t notice
the difference at one point, F(s) = G(s) = 1/(s − a). In this sense it is impossible to define L^{−1}(F) uniquely.

The good news is that the inverse exists as long as we consider two functions that only differ on a negligible set of points the same.
In particular, we can make the following claim.

 Theorem 13.8.1

Suppose f and g are continuous and F (s) = G(s) for all s with Re(s) > a for some a . Then f (t) = g(t) for t ≥ 0 .

This theorem can be stated in a way that includes piecewise continuous functions. Such a statement takes more care, which would
obscure the basic point that the Laplace transform has a unique inverse up to some, for us, trivial differences.
We start with a few examples that we can compute directly.

 Example 13.8.1
Let

f(t) = e^{at}.   (13.8.2)

So,

F(s) = 1/(s − a).   (13.8.3)

Show

f(t) = ∑ Res(F(s)e^{st})   (13.8.4)

f(t) = (1/2πi) ∫_{c−i∞}^{c+i∞} F(s)e^{st} ds   (13.8.5)

The sum is over all poles of e^{st}/(s − a). As usual, we only consider t > 0.
Here, c > Re(a) and the integral means the path integral along the vertical line x = c .
Solution
Proving Equation 13.8.4 is straightforward: It is clear that
e^{st}/(s − a)   (13.8.6)

has only one pole, which is at s = a. Since

∑ Res(e^{st}/(s − a), a) = e^{at}   (13.8.7)

we have proved Equation 13.8.4.
Proving Equation 13.8.5 is more involved. We should first check the convergence of the integral. In this case, s = c + iy , so
the integral is
(1/2πi) ∫_{c−i∞}^{c+i∞} F(s)e^{st} ds = (1/2πi) ∫_{−∞}^∞ (e^{(c+iy)t}/(c + iy − a)) i dy = (e^{ct}/2π) ∫_{−∞}^∞ e^{iyt}/(c + iy − a) dy.   (13.8.8)

The (conditional) convergence of this integral follows using exactly the same argument as in the example near the end of Topic 9 on the Fourier inversion formula for f(t) = e^{at}. That is, the integrand is a decaying oscillation, around 0, so its integral is also a decaying oscillation around some limiting value.


Now we use the contour shown below.

We will let R go to infinity and use the following steps to prove Equation 13.8.5.
1. The residue theorem guarantees that if the curve is large enough to contain a then
(1/2πi) ∫_{C_1 − C_2 − C_3 + C_4} e^{st}/(s − a) ds = ∑ Res(e^{st}/(s − a), a) = e^{at}.   (13.8.9)

2. In a moment we will show that the integrals over C_2, C_3, C_4 all go to 0 as R → ∞.
3. Clearly as R goes to infinity, the integral over C_1 goes to the integral in Equation 13.8.5. Putting these steps together we have

e^{at} = lim_{R→∞} (1/2πi) ∫_{C_1 − C_2 − C_3 + C_4} e^{st}/(s − a) ds = (1/2πi) ∫_{c−i∞}^{c+i∞} e^{st}/(s − a) ds   (13.8.10)

Except for proving the claims in step 2, this proves Equation 13.8.5.
To verify step 2 we look at one side at a time.
C_2: C_2 is parametrized by s = γ(u) = u + iR, with −R ≤ u ≤ c. So,

|∫_{C_2} e^{st}/(s − a) ds| = ∫_{−R}^c |e^{(u+iR)t}/(u + iR − a)| du ≤ ∫_{−R}^c e^{ut}/R du = (e^{ct} − e^{−Rt})/(tR).   (13.8.11)

Since c and t are fixed, it’s clear this goes to 0 as R goes to infinity.
The bottom C_4 is handled in exactly the same manner as the top C_2.

C_3: C_3 is parametrized by s = γ(u) = −R + iu, with −R ≤ u ≤ R. So,

|∫_{C_3} e^{st}/(s − a) ds| = ∫_{−R}^R |e^{(−R+iu)t}/(−R + iu − a)| du ≤ ∫_{−R}^R e^{−Rt}/(R + a) du = 2R e^{−Rt}/(R + a).   (13.8.12)

Since a and t > 0 are fixed, it’s clear this goes to 0 as R goes to infinity.

 Example 13.8.2

Repeat the previous example with f(t) = t for t > 0, F(s) = 1/s^2.

This is similar to the previous example. Since F decays like 1/s^2 we can actually allow t ≥ 0.

 Theorem 13.8.2 Laplace inversion 1

Assume f is continuous and of exponential type a . Then for c > a we have


f(t) = (1/2πi) ∫_{c−i∞}^{c+i∞} F(s)e^{st} ds.   (13.8.13)

As usual, this formula holds for t > 0 .

Proof
The proof uses the Fourier inversion formula. We will just accept this theorem for now. Example 13.8.1 above illustrates
the theorem.

 Theorem 13.8.3 Laplace inversion 2

Suppose F (s) has a finite number of poles and decays like 1/s (or faster). Define
f(t) = ∑ Res(F(s)e^{st}, p_k),  where the sum is over all the poles p_k.   (13.8.14)

Then L(f ; s) = F (s)

Proof
Proof given in class. To be added here. The basic ideas are present in the examples above, though it requires a fairly clever
choice of contours.
The integral inversion formula in Equation 13.8.13 can be viewed as writing f(t) as a 'sum' of exponentials. This is extremely useful. For example, for a linear system, if we know how the system responds to input f(t) = e^{at} for all a, then we know how it responds to any input by writing it as a 'sum' of exponentials.
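Here is a small SymPy sketch (illustrative only; the particular F(s) = 1/((s + 1)(s + 2)) is just a convenient test case) of Theorem 13.8.3: form the sum of residues of F(s)e^{st} and check that its Laplace transform returns F(s).

```python
# Sketch: Laplace inversion by residues for F(s) = 1/((s + 1)(s + 2)).
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s')

F = 1 / ((s + 1) * (s + 2))
poles = [-1, -2]

f = sum(sp.residue(F * sp.exp(s * t), s, p) for p in poles)
print(sp.simplify(f))                            # exp(-t) - exp(-2*t)

F_back = sp.laplace_transform(f, t, s, noconds=True)
print(sp.simplify(F_back - F))                   # 0
```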

This page titled 13.8: Laplace inverse is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy Orloff
(MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

13.9: Delay and Feedback
Let f (t) = 0 for t < 0 . Fix a > 0 and let h(t) = f (t − a) . So, h(t) is a delayed version of the signal f (t). The Laplace property
Equation 13.5.8 says
H(s) = e^{−as} F(s),   (13.9.1)

where H and F are the Laplace transforms of h and f respectively.


Now, suppose we have a system with system function G(s). (Again, called the open loop system.) As before, we can feed the output back through the system. But, instead of just multiplying the output by a scalar we can delay it also. This is captured by the feedback factor ke^{−as}.

The system function for the closed loop system is


G_{CL}(s) = G/(1 + ke^{−as} G)   (13.9.2)

Note even if you start with a rational function the system function of the closed loop with delay is not rational. Usually it has an
infinite number of poles.

 Example 13.9.1

Suppose G(s) = 1, a = 1 and k = 1. Find the poles of G_{CL}(s).


Solution
G_{CL}(s) = 1/(1 + e^{−s}).   (13.9.3)

So the poles occur where e^{−s} = −1, i.e. at inπ, where n is an odd integer. There are an infinite number of poles on the imaginary axis.

 Example 13.9.2

Suppose G(s) = 1, a = 1 and k = 1/2. Find the poles of G_{CL}(s). Is the closed loop system stable?
Solution
G_{CL}(s) = 1/(1 + e^{−s}/2).   (13.9.4)

So the poles occur where e^{−s} = −2, i.e. at −log(2) + inπ, where n is an odd integer. Since −log(2) < 0, there are an infinite number of poles in the left half-plane. With all poles in the left half-plane, the system is stable.

 Example 13.9.3

Suppose G(s) = 1, a = 1 and k = 2. Find the poles of G_{CL}(s). Is the closed loop system stable?
Solution
G_{CL}(s) = 1/(1 + 2e^{−s}).   (13.9.5)

So the poles occur where e^{−s} = −1/2, i.e. at log(2) + inπ, where n is an odd integer. Since log(2) > 0, there are an infinite number of poles in the right half-plane. With poles in the right half-plane, the system is not stable.

 Remark

If Re(s) is large enough we can express the system function


G(s) = 1/(1 + ke^{−as})   (13.9.6)

as a geometric series

1/(1 + ke^{−as}) = 1 − ke^{−as} + k^2 e^{−2as} − k^3 e^{−3as} + ...   (13.9.7)

So, for input F (s) , we have output


X(s) = G(s)F(s) = F(s) − ke^{−as} F(s) + k^2 e^{−2as} F(s) − k^3 e^{−3as} F(s) + ...   (13.9.8)

Using the shift formula, Equation 13.5.8, we have


x(t) = f(t) − kf(t − a) + k^2 f(t − 2a) − k^3 f(t − 3a) + ...   (13.9.9)

(This is not really an infinite series because f (t) = 0 for t < 0 .) If the input is bounded and k < 1 then even for large t the
series is bounded. So bounded input produces bounded output –this is also what is meant by stability. On the other hand if
k > 1 , then bounded input can lead to unbounded output –this is instability.
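The boundedness claim can be seen numerically. The sketch below (illustrative only; the unit step input and the values k = 1/2, k = 2, a = 1 are arbitrary choices for the demonstration) sums the series for x(t) and compares the two cases.

```python
# Sketch: x(t) = sum_n (-k)^n f(t - n*a) for a bounded (unit step) input f.
import numpy as np

def f(t):                                   # bounded input: unit step
    return np.where(t >= 0, 1.0, 0.0)

def x(t, k, a=1.0):
    n_max = int(np.max(t) / a) + 1          # terms with t - n*a < 0 vanish
    return sum((-k)**n * f(t - n*a) for n in range(n_max + 1))

t = np.linspace(0, 20, 2001)
print(np.max(np.abs(x(t, 0.5))))   # stays bounded (about 1) for k = 1/2
print(np.max(np.abs(x(t, 2.0))))   # grows without bound for k = 2
```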

This page titled 13.9: Delay and Feedback is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy
Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

CHAPTER OVERVIEW
14: Analytic Continuation and the Gamma Function
In this topic we will look at the Gamma function. This is an important and fascinating function that generalizes factorials from
integers to all complex numbers. We look at a few of its many interesting properties. In particular, we will look at its connection to
the Laplace transform. We will start by discussing the notion of analytic continuation. We will see that we have, in fact, been using
this already without any comment. This was a little sloppy mathematically speaking and we will make it more precise here.
14.1: Analytic Continuation
14.2: Definition and properties of the Gamma function
14.3: Connection to Laplace
14.4: Proofs of (some) properties

Thumbnail: Analytic continuation from U (centered at 1) to V (centered at a=(3+i)/2). (CC BY-SA 4.0 International; Ncsinger via
Wikipedia)

This page titled 14: Analytic Continuation and the Gamma Function is shared under a CC BY-NC-SA 4.0 license and was authored, remixed,
and/or curated by Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform;
a detailed edit history is available upon request.

14.1: Analytic Continuation
If we have a function that is analytic on a region A, we can sometimes extend the function to be analytic on a bigger region.
This is called analytic continuation.

 Example 14.1.1
Consider the function

F(z) = ∫_0^∞ e^{3t} e^{−zt} dt.

We recognize this as the Laplace transform of f(t) = e^{3t} (though we switched the variable from s to z). The integral converges absolutely and F is analytic in the region A = {Re(z) > 3}.

Can we extend F(z) to be analytic on a bigger region B? That is, can we find a region B and a function F̂(z) such that
1. B contains A
2. F^(z) is analytic on B
3. F^(z) agrees with F on A , i.e. F^(z) = F (z) for z ∈ A .
Solution
Yes! We know that F(z) = 1/(z − 3), valid for any z in A. So we can define F̂(z) = 1/(z − 3) for any z in B = C − {3}.

We say that we have analytically continued F on A to F^ on B .

 Note

Usually we don't rename the function. We would just say F(z) defined by Equation 14.2.1 can be continued to F(z) = 1/(z − 3) on B.

 Definition: Analytic Continuation


Suppose f (z) is analytic on a region A . Suppose also that A is contained in a region B . We say that f can be analytically
continued from A to B if there is a function f^(z) such that

1. f^(z) is analytic on B .
2. f^(z) = f (z) for all z in A .

As noted above, we usually just use the same symbol f for the function on A and its continuation to B .

The region A = Re(z) > 0 is contained in B = Re(z) > −1 .

 Note

We used analytic continuation implicitly in, for example, the Laplace inversion formula involving residues of F (s) = L(f ; s) .
Recall that we wrote that for f(t) = e^{3t}, F(s) = 1/(s − 3) and

f(t) = ∑ residues of F.   (14.1.1)
f (t) = ∑ residues of F . (14.1.1)

As an integral, F (s) was defined for Re(s) > 3 , but the residue formula relies on its analytic continuation to C − {3} .

Analytic Continuation is Unique

 Theorem 14.1.1

Suppose f , g are analytic on a connected region A . If f =g on an open subset of A then f =g on all of A .

Proof
Let h = f − g . By hypothesis h(z) = 0 on an open set in A. Clearly this means that the zeros of h are not isolated. Back
in Topic 7 we showed that for analytic h on a connected region A either the zeros are isolated or else h is identically zero
on A. Thus, h is identically 0, which implies f = g on A.

 Corollary

There is at most one way to analytically continue a function from a region A to a connected region B .

Proof
Two analytic continuations would agree on A and therefore must be the same.

Extension. Since the proof of the theorem uses the fact that zeros are isolated, we actually have the stronger statement: if f and g
agree on a nondiscrete subset of A then they are equal. In particular, if f and g are two analytic functions on A and they agree on a
line or ray in A then they are equal.
Here is an example that shows why we need A to be connected in Theorem 14.1.1.

 Example 14.1.2

Suppose A is the plane minus the real axis. Define two functions on A as follows.

f(z) = 1 for z in the upper half-plane, and f(z) = 0 for z in the lower half-plane.

g(z) = 1 for z in the upper half-plane, and g(z) = 1 for z in the lower half-plane.

Both f and g are analytic on A and agree on an open set (the upper half-plane), but they are not the same function.

Here is an example that shows a little care must be taken in applying the corollary.

 Example 14.1.3

Suppose we define f and g as follows

f (z) = log(z) with 0 < θ < 2π

g(z) = log(z) with − π < θ < π

Clearly f and g agree on the first quadrant. But we can’t use the theorem to conclude that f = g everywhere. The problem is
that the regions where they are defined are different. f is defined on C minus the positive real axis, and g is defined on C minus the negative real axis. The region where they are both defined is C minus the real axis, which is not connected.
Because they are both defined on the upper half-plane, we can conclude that they are the same there. (It’s easy to see this is
true.) But (in this case) being equal in the first quadrant doesn’t imply they are the same in the lower half-plane.

This page titled 14.1: Analytic Continuation is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy
Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

14.2: Definition and properties of the Gamma function
 Definition: Gamma Function

The Gamma function is defined by the integral formula



Γ(z) = ∫_0^∞ t^{z−1} e^{−t} dt   (14.2.1)

The integral converges absolutely for Re(z) > 0 .

 Properties
1. Γ(z) is defined and analytic in the region Re(z) > 0 .
2. Γ(n + 1) = n! , for integer n ≥ 0 .
3. Γ(z + 1) = zΓ(z) (functional equation)
This property and Property 2 characterize the factorial function. Thus, Γ(z) generalizes n! to complex numbers z . Some
authors will write Γ(z + 1) = z! .
4. Γ(z) can be analytically continued to be meromorphic on the entire plane with simple poles at 0, −1, −2 .... The residues are
Res(Γ, −m) = (−1)^m / m!   (14.2.2)

5. Γ(z) = [z e^{γz} ∏_{n=1}^∞ (1 + z/n) e^{−z/n}]^{−1}, where γ is Euler's constant

γ = lim_{n→∞} (1 + 1/2 + 1/3 + ⋅⋅⋅ + 1/n − log(n)) ≈ 0.577   (14.2.3)

This property uses an infinite product. Unfortunately we won’t have time, but infinite products represent an entire topic on
their own. Note that the infinite product makes the positions of the poles of Γ clear.
6. Γ(z)Γ(1 − z) = π / sin(πz)

With Property 5 this gives a product formula for sin(πz).


7. Γ(z + 1) ≈ √(2π) z^{z+1/2} e^{−z} for |z| large, Re(z) > 0.
In particular, n! ≈ √(2π) n^{n+1/2} e^{−n}. (Stirling's formula)


8. 2^{2z−1} Γ(z)Γ(z + 1/2) = √π Γ(2z) (Legendre duplication formula)

 Note

These are just some of the many properties of Γ(z) . As is often the case, we could have chosen to define Γ(z) in terms of some
of its properties and derived Equation 14.3.1 as a theorem.

We will prove (some of) these properties below.

 Example 14.2.1

Use the properties of Γ to show that Γ(1/2) = √π and Γ(3/2) = √π/2.

Solution
From Property 2 we have Γ(1) = 0! = 1 . The Legendre duplication formula with z = 1/2 then shows
2^0 Γ(1/2) Γ(1) = √π Γ(1)  ⇒  Γ(1/2) = √π.

Now, using the functional equation Property 3 we get


Γ(3/2) = Γ(1/2 + 1) = (1/2)Γ(1/2) = √π/2.
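These values, and several of the properties listed above, are easy to sanity-check numerically. The sketch below is illustrative only and uses Python's standard math library; the sample point z = 2.7 is an arbitrary choice.

```python
# Sketch: numerical checks of Gamma function properties.
import math

print(math.gamma(0.5), math.sqrt(math.pi))        # Gamma(1/2) = sqrt(pi)
print(math.gamma(1.5), math.sqrt(math.pi) / 2)    # Gamma(3/2) = sqrt(pi)/2

# Functional equation Gamma(z + 1) = z * Gamma(z) at a sample point
z = 2.7
print(math.gamma(z + 1), z * math.gamma(z))

# Legendre duplication: 2^(2z-1) Gamma(z) Gamma(z + 1/2) = sqrt(pi) Gamma(2z)
print(2**(2*z - 1) * math.gamma(z) * math.gamma(z + 0.5),
      math.sqrt(math.pi) * math.gamma(2*z))
```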

This page titled 14.2: Definition and properties of the Gamma function is shared under a CC BY-NC-SA 4.0 license and was authored, remixed,
and/or curated by Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform;
a detailed edit history is available upon request.

14.3: Connection to Laplace
 Claim
For Re(z) > 1 and Re(s) > 0, L(t^{z−1}; s) = Γ(z)/s^z.

Proof

By definition L(t^{z−1}; s) = ∫_0^∞ t^{z−1} e^{−st} dt. It is clear that if Re(z) > 1, then the integral converges absolutely for Re(s) > 0.

Let’s start by assuming that s > 0 is real. Use the change of variable τ = st . The Laplace integral becomes
∫_0^∞ t^{z−1} e^{−st} dt = ∫_0^∞ (τ/s)^{z−1} e^{−τ} dτ/s = (1/s^z) ∫_0^∞ τ^{z−1} e^{−τ} dτ = Γ(z)/s^z.   (14.3.1)

This shows that L(t^{z−1}; s) = Γ(z)/s^z for s real and positive. Since both sides of this equation are analytic on Re(s) > 0,
the extension to Theorem 14.2.1 guarantees they are the same.

 Corollary
Γ(z) = L(t^{z−1}; 1). (Of course, this is also clear directly from the definition of Γ(z) in Equation 14.3.1.)
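The claim can be spot-checked numerically by evaluating the Laplace integral directly. The sketch below is illustrative only; z = 2.5 and s = 1.7 are arbitrary sample values.

```python
# Sketch: check L(t^(z-1); s) = Gamma(z)/s^z numerically.
import math
from scipy.integrate import quad

z, s = 2.5, 1.7
val, _ = quad(lambda t: t**(z - 1) * math.exp(-s * t), 0, math.inf)
print(val, math.gamma(z) / s**z)    # the two numbers should agree
```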

This page titled 14.3: Connection to Laplace is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jeremy
Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

14.4: Proofs of (some) properties
Proofs of (some) properties of Γ
Property 1. This is clear since the integral converges absolutely for Re(z) > 0 .
Property 2. We know (see the Laplace table) L(t^n; s) = n!/s^{n+1}. Setting s = 1 and using the corollary to the claim above we get

Γ(n + 1) = L(t^n; 1) = n!.   (14.4.1)

(We could also prove this formula directly from the integral definition of Γ(z).)
Property 3. We could do this relatively easily using integration by parts, but let’s continue using the Laplace transform. Let
f(t) = t^z. We know

L(f, s) = Γ(z + 1)/s^{z+1}   (14.4.2)

Now assume Re(z) > 0, so f(0) = 0. Then f′ = z t^{z−1} and we can compute L(f′; s) two ways.

L(f′; s) = L(z t^{z−1}; s) = zΓ(z)/s^z   (14.4.3)

L(f′; s) = sL(t^z; s) = Γ(z + 1)/s^z   (14.4.4)

Comparing these two equations we get property 3 for Re(z) > 0 .


Property 4. We’ll need the following notation for regions in the plane.

B_0 = {Re(z) > 0}
B_1 = {Re(z) > −1} − {0}
B_2 = {Re(z) > −2} − {0, −1}   (14.4.5)
B_n = {Re(z) > −n} − {0, −1, ..., −n + 1}

So far we know that Γ(z) is defined and analytic on B_0. Our strategy is to use Property 3 to analytically continue Γ from B_0 to B_n. Along the way we will compute the residues at 0 and the negative integers.
n

Rewrite Property 3 as
Γ(z + 1)
Γ(z) = (14.4.6)
z

The right side of this equation is analytic on B . Since it agrees with Γ(z) on B it represents an analytic continuation from B to
1 0 0

B . We easily compute
1

Res(Γ, 0) = lim zΓ(z) = Γ(1) = 1. (14.4.7)


z→0

Γ(z + 2)
Similarly, Equation 14.5.6 can be expressed as Γ(z + 1) = . So,
z+1

Γ(z + 1) Γ(z + 2)
Γ(z) = = (14.4.8)
z (z + 1)z

The right side of this equation is analytic on B . Since it agrees with Γ on B it is an analytic continuation to B . The residue at −1
2 0 2

is
Γ(1)
Res(Γ, −1) = lim (z + 1)Γ(z) = = −1. (14.4.9)
z→−1 −1

We can iterate this procedure as far as we want

Γ(z) = Γ(z + m + 1)/((z + m)(z + m − 1) ⋅⋅⋅ (z + 1)z)   (14.4.10)

The right side of this equation is analytic on B_{m+1}. Since it agrees with Γ on B_0 it is an analytic continuation to B_{m+1}. The residue at −m is

Res(Γ, −m) = lim_{z→−m} (z + m)Γ(z) = Γ(1)/((−1)(−2)⋅⋅⋅(−m)) = (−1)^m/m!.   (14.4.11)

We’ll leave the proofs of Properties 5-8 to another class!
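The residues just computed are easy to check numerically, since (z + m)Γ(z) approaches (−1)^m/m! as z → −m. The sketch below is illustrative only; the offset 10^{-7} is an arbitrary small number.

```python
# Sketch: numerical check of Res(Gamma, -m) = (-1)^m / m!.
from math import gamma, factorial

eps = 1e-7
for m in range(5):
    z = -m + eps
    approx = (z + m) * gamma(z)          # approximately the residue at -m
    exact = (-1)**m / factorial(m)
    print(m, approx, exact)
```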

This page titled 14.4: Proofs of (some) properties is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by
Jeremy Orloff (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit
history is available upon request.

Index
A converging field H
absolute value (complex number) 7.3: Physical Assumptions and Mathematical harmonic conjugate
Consequences
1.3: Terminology and Basic Arithmetic 6.3: Del notation
coverup method
analytic continuation harmonic functions
13.5: Differential equations
14.1: Analytic Continuation 6: Harmonic Functions
curl 6.2: Harmonic Functions
analytic part
3.4: Grad, curl and div holomorphic
8.4: Taylor Series Examples
arg(z) 9.2: Holomorphic and Meromorphic Functions
1.9: The function arg(z)
D
De Moivre's Theorem I
B 1.13: de Moivre's formula Imaginary number
deleted disk 1.1: Motivation
bilinear transforms
2.2: Open Disks, Open Deleted Disks, and Open incompressibility
11.7: Fractional Linear Transformations Regions
branch 7.3: Physical Assumptions and Mathematical
displacement Consequences
1.5: Polar Coordinates 11.2: Tangent vectors as complex numbers
1.9: The function arg(z) inverse Euler formulas
div 1.12: Inverse Euler formula
branch cut
3.4: Grad, curl and div irrotational flow
1.9: The function arg(z)
2.9: Branch Cuts and Function Composition divergent field 7.3: Physical Assumptions and Mathematical
7.3: Physical Assumptions and Mathematical Consequences
Consequences isolated sigularity
C
9.1: Poles and Zeros
Cauchy’s theorem E
4.5: Examples
isolated zeros
eddy 8.3: Taylor Series
Cauchy's integral formula
7.2: Stationary Flows
5.1: Cauchy's Integral for Functions
Cauchy's Residue theorem
essential singularities L
9.3: Behavior of functions near zeros and poles Laplace’s equation
9.5: Cauchy Residue Theorem
essential singularity 6.2: Harmonic Functions
chain rule
8.9: Poles Laplacian
3.3: Chain rule
Euler's formula 6.2: Harmonic Functions
complex derivative
1.6: Euler's Formula Laurent series
2.5: Derivatives
complex differentiation 8.7: Laurent Series
2.5: Derivatives
F level curves
complex exponential feedback factor 3.5: Level Curves
1.6: Euler's Formula
13.9: Delay and Feedback line integrals
1.7: The Exponential Function Fourier inversion formula 3.6: Line Integrals
complex line integrals 10.7: Fourier transform linear vortex
4.2: Complex Line Integrals Fourier transform 7.6: More Examples with Pretty Pictures
complex log function 10.7: Fourier transform
1.11: The Function log(z) fractional linear transformations M
complex multiplication 11.7: Fractional Linear Transformations Möbius transforms
1.14: Representing Complex Multiplication as fundamental theorem for gradient fields 11.7: Fractional Linear Transformations
Matrix Multiplication 3.6: Line Integrals magnitude (complex number)
Complex Numbers Fundamental Theorem of Algebra 1.3: Terminology and Basic Arithmetic
1.3: Terminology and Basic Arithmetic 1.2: Fundamental Theorem of Algebra maximum dodulus principle
complex potential function 5.5: Amazing consequence of Cauchy’s integral
7.4: Complex Potentials G formula
complex potential of the flow gamma function maximum principle
7.4: Complex Potentials 14.2: Definition and properties of the Gamma 6.5: Maximum Principle and Mean Value Property
complex replacement function mean value property
1.6: Euler's Formula 14.4: Proofs of (some) properties
6.5: Maximum Principle and Mean Value Property
complexification geometric series meromorphic
1.6: Euler's Formula 8.1: Geometric Series
9.2: Holomorphic and Meromorphic Functions
conformal functions grad
11.1: Geometric Definition of Conformal Mappings 3.4: Grad, curl and div
N
conformal maps Green's theorem
norm (complex number)
11.1: Geometric Definition of Conformal Mappings 3.7: Green's Theorem
1.3: Terminology and Basic Arithmetic
continuous functions Nyquist criterion
2.3: Limits and Continuous Functions 12.2: Nyquist Criterion for Stability
Nyquist plot
12.2: Nyquist Criterion for Stability

O R stagnation point
open disk ratio of the geometric series 7.5: Stream Functions
2.2: Open Disks, Open Deleted Disks, and Open 8.1: Geometric Series stationary flow
Regions ratio test 7.2: Stationary Flows
open region 8.2: Convergence of Power Series stream function
2.2: Open Disks, Open Deleted Disks, and Open regular part 7.5: Stream Functions
Regions
8.4: Taylor Series Examples system function
order of the zero regular part (Laurent series) 13.7: System Functions and the Laplace Transform
8.3: Taylor Series
8.7: Laurent Series
removable singularity T
P 8.9: Poles triangle inequality
path Independence residue 1.4: The Complex Plane
4.4: Path Independence 8.9: Poles
Picard's theorem 9.4: Residues U
9.3: Behavior of functions near zeros and poles Riemann mapping theorem uniform flow
piecewise smooth 11.5: Riemann Mapping Theorem 7.2: Stationary Flows
3.7: Green's Theorem Riemann sphere
pole 2.4: The Point at Infinity V
8.9: Poles Rouché’s theorem velocity fields
pole of infinite order 12.1: Principle of the Argument
7.1: Velocity Fields
8.9: Poles
vortex
pole order S 7.2: Stationary Flows
8.9: Poles shearing flow
positively oriented 7.3: Physical Assumptions and Mathematical W
3.7: Green's Theorem Consequences
potential function simple pole winding index
12.1: Principle of the Argument
7.4: Complex Potentials 8.9: Poles
principal part (Laurent series) singular function winding number
3.8: Extensions and Applications of Green’s
8.7: Laurent Series 8.5: Singularities
Theorem
punctured disk singular part 12.1: Principle of the Argument
2.2: Open Disks, Open Deleted Disks, and Open 8.4: Taylor Series Examples
Regions singularities
punctured plane 8.5: Singularities
1.8: Complex Functions as Mappings

Detailed Licensing
Overview
Title: Complex Variables with Applications (Orloff)
Webpages: 129
Applicable Restrictions: Noncommercial
All licenses found:
CC BY-NC-SA 4.0: 87.6% (113 pages)
Undeclared: 12.4% (16 pages)

By Page
Complex Variables with Applications (Orloff) - CC BY-NC-SA 4.0
Front Matter - Undeclared
  TitlePage - Undeclared
  InfoPage - Undeclared
  Table of Contents - Undeclared
  Licensing - Undeclared
Preface - CC BY-NC-SA 4.0
1: Complex Algebra and the Complex Plane - CC BY-NC-SA 4.0
  1.1: Motivation - CC BY-NC-SA 4.0
  1.2: Fundamental Theorem of Algebra - CC BY-NC-SA 4.0
  1.3: Terminology and Basic Arithmetic - CC BY-NC-SA 4.0
  1.4: The Complex Plane - CC BY-NC-SA 4.0
  1.5: Polar Coordinates - CC BY-NC-SA 4.0
  1.6: Euler's Formula - CC BY-NC-SA 4.0
  1.7: The Exponential Function - CC BY-NC-SA 4.0
  1.8: Complex Functions as Mappings - CC BY-NC-SA 4.0
  1.9: The function arg(z) - CC BY-NC-SA 4.0
  1.10: Concise summary of branches and branch cuts - CC BY-NC-SA 4.0
  1.11: The Function log(z) - CC BY-NC-SA 4.0
  1.12: Inverse Euler formula - CC BY-NC-SA 4.0
  1.13: de Moivre's formula - CC BY-NC-SA 4.0
  1.14: Representing Complex Multiplication as Matrix Multiplication - CC BY-NC-SA 4.0
2: Analytic Functions - CC BY-NC-SA 4.0
  2.1: The Derivative - Preliminaries - CC BY-NC-SA 4.0
  2.2: Open Disks, Open Deleted Disks, and Open Regions - CC BY-NC-SA 4.0
  2.3: Limits and Continuous Functions - CC BY-NC-SA 4.0
  2.4: The Point at Infinity - CC BY-NC-SA 4.0
  2.5: Derivatives - CC BY-NC-SA 4.0
  2.6: Cauchy-Riemann Equations - Undeclared
  2.7: Cauchy-Riemann all the way down - Undeclared
  2.8: Gallery of Functions - Undeclared
  2.9: Branch Cuts and Function Composition - Undeclared
  2.10: Appendix - Limits - Undeclared
3: Multivariable Calculus (Review) - CC BY-NC-SA 4.0
  3.1: Terminology and Notation - CC BY-NC-SA 4.0
  3.2: Parametrized curves - CC BY-NC-SA 4.0
  3.3: Chain rule - CC BY-NC-SA 4.0
  3.4: Grad, curl and div - CC BY-NC-SA 4.0
  3.5: Level Curves - CC BY-NC-SA 4.0
  3.6: Line Integrals - CC BY-NC-SA 4.0
  3.7: Green's Theorem - CC BY-NC-SA 4.0
  3.8: Extensions and Applications of Green’s Theorem - CC BY-NC-SA 4.0
4: Line Integrals and Cauchy’s Theorem - CC BY-NC-SA 4.0
  4.1: Introduction to Line Integrals and Cauchy’s Theorem - CC BY-NC-SA 4.0
  4.2: Complex Line Integrals - CC BY-NC-SA 4.0
  4.3: Fundamental Theorem for Complex Line Integrals - CC BY-NC-SA 4.0
  4.4: Path Independence - CC BY-NC-SA 4.0
  4.5: Examples - CC BY-NC-SA 4.0
  4.6: Cauchy's Theorem - CC BY-NC-SA 4.0
  4.7: Extensions of Cauchy's theorem - CC BY-NC-SA 4.0
5: Cauchy Integral Formula - CC BY-NC-SA 4.0
  5.1: Cauchy's Integral for Functions - CC BY-NC-SA 4.0
  5.2: Cauchy’s Integral Formula for Derivatives - CC BY-NC-SA 4.0
  5.3: Proof of Cauchy's integral formula - CC BY-NC-SA 4.0
  5.4: Proof of Cauchy's integral formula for derivatives - CC BY-NC-SA 4.0
  5.5: Amazing consequence of Cauchy’s integral formula - CC BY-NC-SA 4.0
6: Harmonic Functions - CC BY-NC-SA 4.0
  6.2: Harmonic Functions - CC BY-NC-SA 4.0
  6.3: Del notation - CC BY-NC-SA 4.0
  6.4: A second Proof that u and v are Harmonic - CC BY-NC-SA 4.0
  6.5: Maximum Principle and Mean Value Property - CC BY-NC-SA 4.0
  6.6: Orthogonality of Curves - CC BY-NC-SA 4.0
7: Two Dimensional Hydrodynamics and Complex Potentials - CC BY-NC-SA 4.0
  7.1: Velocity Fields - CC BY-NC-SA 4.0
  7.2: Stationary Flows - CC BY-NC-SA 4.0
  7.3: Physical Assumptions and Mathematical Consequences - CC BY-NC-SA 4.0
  7.4: Complex Potentials - CC BY-NC-SA 4.0
  7.5: Stream Functions - CC BY-NC-SA 4.0
  7.6: More Examples with Pretty Pictures - CC BY-NC-SA 4.0
8: Taylor and Laurent Series - CC BY-NC-SA 4.0
  8.1: Geometric Series - CC BY-NC-SA 4.0
  8.2: Convergence of Power Series - CC BY-NC-SA 4.0
  8.3: Taylor Series - CC BY-NC-SA 4.0
  8.4: Taylor Series Examples - CC BY-NC-SA 4.0
  8.5: Singularities - CC BY-NC-SA 4.0
  8.6: Appendix- Convergence - CC BY-NC-SA 4.0
  8.7: Laurent Series - CC BY-NC-SA 4.0
  8.8: Digression to Differential Equations - Undeclared
  8.9: Poles - CC BY-NC-SA 4.0
9: Residue Theorem - CC BY-NC-SA 4.0
  9.1: Poles and Zeros - CC BY-NC-SA 4.0
  9.2: Holomorphic and Meromorphic Functions - CC BY-NC-SA 4.0
  9.3: Behavior of functions near zeros and poles - CC BY-NC-SA 4.0
  9.4: Residues - CC BY-NC-SA 4.0
  9.5: Cauchy Residue Theorem - CC BY-NC-SA 4.0
  9.6: Residue at ∞ - CC BY-NC-SA 4.0
10: Definite Integrals Using the Residue Theorem - CC BY-NC-SA 4.0
  10.1: Integrals of functions that decay - CC BY-NC-SA 4.0
  10.2: Integrals - CC BY-NC-SA 4.0
  10.3: Trigonometric Integrals - CC BY-NC-SA 4.0
  10.4: Integrands with branch cuts - CC BY-NC-SA 4.0
  10.5: Cauchy principal value - CC BY-NC-SA 4.0
  10.6: Integrals over portions of circles - CC BY-NC-SA 4.0
  10.7: Fourier transform - CC BY-NC-SA 4.0
  10.8: Solving DEs using the Fourier transform - CC BY-NC-SA 4.0
11: Conformal Transformations - CC BY-NC-SA 4.0
  11.1: Geometric Definition of Conformal Mappings - CC BY-NC-SA 4.0
  11.2: Tangent vectors as complex numbers - CC BY-NC-SA 4.0
  11.3: Analytic functions are Conformal - CC BY-NC-SA 4.0
  11.4: Digression to harmonic functions - CC BY-NC-SA 4.0
  11.5: Riemann Mapping Theorem - CC BY-NC-SA 4.0
  11.6: Examples of conformal maps and excercises - CC BY-NC-SA 4.0
  11.7: Fractional Linear Transformations - CC BY-NC-SA 4.0
  11.8: Reflection and symmetry - Undeclared
  11.9: Flows around cylinders - CC BY-NC-SA 4.0
  11.10: Solving the Dirichlet problem for harmonic functions - CC BY-NC-SA 4.0
12: Argument Principle - CC BY-NC-SA 4.0
  12.1: Principle of the Argument - CC BY-NC-SA 4.0
  12.2: Nyquist Criterion for Stability - CC BY-NC-SA 4.0
  12.3: A Bit on Negative Feedback - CC BY-NC-SA 4.0
13: Laplace Transform - CC BY-NC-SA 4.0
  13.1: A brief introduction to linear time invariant systems - CC BY-NC-SA 4.0
  13.2: Laplace transform - CC BY-NC-SA 4.0
  13.3: Exponential Type - CC BY-NC-SA 4.0
  13.4: Properties of Laplace transform - CC BY-NC-SA 4.0
  13.5: Differential equations - CC BY-NC-SA 4.0
  13.6: Table of Laplace transforms - CC BY-NC-SA 4.0
  13.7: System Functions and the Laplace Transform - CC BY-NC-SA 4.0
  13.8: Laplace inverse - CC BY-NC-SA 4.0
  13.9: Delay and Feedback - CC BY-NC-SA 4.0
14: Analytic Continuation and the Gamma Function - CC BY-NC-SA 4.0
  14.1: Analytic Continuation - CC BY-NC-SA 4.0
  14.2: Definition and properties of the Gamma function - CC BY-NC-SA 4.0
  14.3: Connection to Laplace - CC BY-NC-SA 4.0
  14.4: Proofs of (some) properties - CC BY-NC-SA 4.0
Back Matter - Undeclared
  Index - Undeclared
  Glossary - Undeclared
  Detailed Licensing - Undeclared