22 Edaf 72
Contents

2 Transforms
  2.1 Fourier transform
    2.1.1 Sinusoidal waveforms
    2.1.2 Fourier integrals
    2.1.3 Conceptual interpretation
  2.2 Delta functions
  2.3 Properties of Fourier transforms
    2.3.1 Differentiation
    2.3.2 Integration
    2.3.3 Multiplication by a constant
    2.3.4 Translation along the x axis
    2.3.5 Multiplication by an exponential
  2.4 Convolution
    2.4.1 Point response function
    2.4.2 Fourier representation of convolution
    2.4.3 An example of convolution: smoothing
  2.5 Laplace transform
    2.5.1 The Laplace transform of functions
    2.5.2 The Laplace transform of derivatives
    2.5.3 Solving differential equations given initial conditions
    2.5.4 The Shift Theorems
    2.5.5 Laplace Transform of a delta function
    2.5.6 Convolution Theorem for Laplace Transforms
Introduction
A transform is a mathematical tool for changing the way something is represented. A transform can
always be reversed. In other words, an original something and its transformed version are always
equivalent; they contain the same information, without anything added or taken away in the process
of moving from one representation to another.
[Figure: the sinusoidal waveforms sin(x) and cos(x) plotted against x, illustrating amplitude and wavelength.]

2.1 Fourier transform

2.1.1 Sinusoidal waveforms
The Fourier transform is a way of representing something as a sum of waveforms. Suppose we have
a stream of numbers which represent the values of a given quantity (e.g. intensity of light, voltage,
temperature, or whatever) at different points in space, or at different times. We will represent this
set of values by a function f (x), where x is the variable (space or time). The Fourier Transform of
f (x) provides us with knowledge of the amplitudes, frequencies, and phases of all the waveforms
which, when added together, would generate f (x). To avoid confusion, frequencies are labelled with
the letter u, and the Fourier Transforms of functions are denoted using upper-case letters. Thus the
Fourier Transform of f (x) is indicated with F (u).
The amplitude (or modulus) of F (u) is a distribution representing waveform amplitude as a function
of frequency u. The figure below shows five sinusoidal (cosine) waveforms with different wavelengths
and amplitudes; by adding them together we obtain a function f (x) with a fairly strong peak around
2π, where peaks in all the individual waveforms happen to coincide.
[Figure: the sum f (x) of the five waveforms plotted against x, showing a strong peak near x = 2π.]
The Fourier Transform of this sum is represented by a graph which shows the amplitude of the five
waveforms plotted against their frequency. Incidentally, anything plotted against frequency is generally
known as a spectrum. The Fourier Transform of the sum of the above set of waveforms (which only
consists of five points!) contains exactly the same information as the sum itself. There is no loss of
information or accuracy in choosing a representation in the x coordinate domain (often referred to as
the real space) or a representation in the u coordinate (often referred to as the reciprocal space).
[Figure: the amplitude spectrum F (u) of the sum, five points plotted against frequency u.]
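The idea of recovering waveform amplitudes from a spectrum can be sketched numerically. The following is a minimal illustration, not taken from the notes: the five amplitudes and integer frequencies are assumed example values, and NumPy's discrete Fourier transform plays the role of the continuous transform.

```python
import numpy as np

# Build f(x) as a sum of five cosine waveforms (amplitudes and integer
# frequencies are assumed example values), then recover the amplitudes
# from the discrete Fourier transform.
N = 512                             # number of samples over one period
x = np.arange(N) / N                # one period, 0 <= x < 1
freqs = [1, 2, 3, 4, 5]             # frequencies u, in cycles per period
amps = [1.0, 0.8, 0.6, 0.4, 0.2]    # amplitudes of the five waveforms
f = sum(a * np.cos(2 * np.pi * u * x) for a, u in zip(amps, freqs))

# The spectrum: |F(u)| scaled by 2/N returns the cosine amplitudes
F = np.fft.rfft(f)
spectrum = 2 * np.abs(F) / N
for u, a in zip(freqs, amps):
    print(u, round(spectrum[u], 3))
```

The five-point spectrum contains exactly the same information as f (x) itself, as described above.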
2.1.2 Fourier integrals
In the section dedicated to Fourier Series it was shown how a (periodic) mathematical function f (x)
can be re-expressed as a sum of cosine and sine functions. In the complex form
f (x) = Σ_{n=−∞}^{∞} c_n e^{jnx}    (2.1)
where the complex coefficients cn describing the sines and cosines are found by solving a set of integrals
as follows
c_n = (1/T) ∫_{−T/2}^{T/2} f (x) e^{−jnx} dx.    (2.2)
where T is the period over which the function f (x) repeats itself.
While the Fourier series is used to represent a periodic function as a set of discrete frequencies, the
Fourier transform enables us to represent a function which is not periodic, and has a continuous range
of frequencies. This is achieved by replacing the series summation with an integral (i.e. replacing
integer n with a continuous frequency variable u). The discrete set of complex coefficients c_n thus
becomes the continuous distribution of waveforms F (u) described above, and we obtain the following
mathematical definitions
of the Fourier transform and its inverse:
F (u) = ∫_{−∞}^{∞} f (x) e^{−2πjux} dx    (2.3)

f (x) = ∫_{−∞}^{∞} F (u) e^{2πjux} du.    (2.4)
The inverse transform enables a set of waveforms to be transformed back into the original function.
These beautifully simple expressions are of immense use in engineering. The Fourier transform and
the inverse Fourier transform are often indicated with a more compact notation, as follows:

F (u) = FT[f (x)]    (2.5)

f (x) = FT^{−1}[F (u)]    (2.6)
Example: the Fourier transform of a rectangular function

Consider a rectangular function of unit height and width a, centred on x = 0: f (x) = 1 for |x| ≤ a/2,
and f (x) = 0 elsewhere. Its Fourier transform is obtained by inserting f (x) = 1 into the Fourier
integral and reducing the limits of integration to ±a/2, since the function f (x) is zero outside this
range:
F (u) = [−1/(2πju)] e^{−2πjux} |_{−a/2}^{a/2}    (2.7)

     = [1/(2πju)] (e^{jπua} − e^{−jπua})    (2.8)

and recalling that sin (x) = (e^{jx} − e^{−jx})/(2j):

F (u) = sin (πau)/(πu).    (2.9)
A function of the form sin (x)/x is known as a sinc function. It has values of zero when πau = ±π (i.e.
when the argument equals ±180 degrees), which occurs when u = ±1/a. We can therefore assign a
width to F (u), the distance between the first two zero-crossings, which is equal to 2/a. This is shown
in the figure below, along with the rectangular function f (x) and its corresponding width.
[Figure: the rectangular function f (x) of width a, and its Fourier transform F (u), a sinc function whose first zero-crossings are separated by 2/a.]
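The result (2.9) is easy to check numerically. The sketch below, with an assumed width a = 2 and a handful of test frequencies, approximates the Fourier integral by a simple Riemann sum over the interval where f (x) is non-zero.

```python
import numpy as np

# Numerically evaluate the Fourier integral of a rectangular pulse of
# width a and compare with the analytic result F(u) = sin(pi*a*u)/(pi*u).
a = 2.0
x = np.linspace(-a / 2, a / 2, 20001)   # f(x) = 1 only inside +/- a/2
dx = x[1] - x[0]

for u in [0.1, 0.3, 0.7, 1.2]:
    # f(x) is even, so only the cosine (real) part of e^{-2 pi j u x} survives
    F_num = np.sum(np.cos(2 * np.pi * u * x)) * dx
    F_exact = np.sin(np.pi * a * u) / (np.pi * u)
    print(u, round(F_num, 3), round(F_exact, 3))
```

Evaluating near u = ±1/a confirms the first zero-crossings of the sinc function.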
Note that in general a Fourier transform F (u) is complex. For each frequency u, the value of F (u)
can have both real (a) and imaginary (jb) components. The amplitude of each waveform at frequency
u is described by the modulus R = (a² + b²)^{1/2}. Meanwhile the phase of each waveform is described
by θ = arctan (b/a). Since a + jb = R(cos θ + j sin θ), the components a and b represent the cosine
and sine components of the waveform. If a ≫ b then the cosine component dominates, and if a ≪ b
then the sine component dominates.
The cosine function is even (i.e. cos (−θ) = cos (θ), and thus symmetric about the line x = 0),
while the sine function is odd (i.e. sin (−θ) = − sin (θ)). Consequently the Fourier transforms of even
functions are entirely real, and the Fourier series of even functions only contain cosine terms. Meanwhile
the Fourier transforms of odd functions are entirely imaginary, and the Fourier series of odd functions
only contain sine terms. In the example above, the rectangular function is even, and thus F (u) is
entirely real.
In general, it is observed that the product of the widths of the real and reciprocal space represen-
tations is conserved:

width of f (x) × width of F (u) ≈ constant

where the term “width” indicates a value that can be attributed to the functions and may be an
approximation. In the example of the rectangular function and sinc shown above, it is clear how we can
assign a width to f (x), but F (u) is a much smoother function and it might not be as obvious which
points to choose for assigning its width (though the zero-crossings are a reasonable choice). This tells us
that narrower functions have broader Fourier Transforms and vice versa. A broader Fourier Transform
implies that more high frequencies are included within it (higher frequencies provide sharper
detail). The rectangular function above had a width of a, whereas the width of its Fourier transform
was approximately 2/a. Thus the product is a constant (2), independent of the value of a.
2.2 Delta functions

The Dirac delta function δ(x − x0) can be pictured as an infinitely tall, infinitely narrow spike located
at x = x0. It is defined by the two properties:

δ(x − x0) = 0 for x ≠ x0    (2.10)

∫_{−∞}^{∞} δ(x − x0) dx = 1    (2.11)

The Dirac delta is a very peculiar function, because no other function defined over the real numbers
has these properties. This unusual function is very useful for representing physical phenomena which
have negligible (≈ 0) duration or spatial extent. Examples include the force of a hammer blow and
the image of a distant star.
The product of a continuous function f (x) and a delta function δ(x − x0 ) is only non-zero at x0 :
∫_{−∞}^{∞} f (x) δ(x − x0) dx = f (x0)    (2.12)

and integrating the product has, in practice, the effect of extracting the value of the function f (x) at
x = x0. The Fourier transform and its inverse for the Dirac delta function are:
FT[δ(x − x0)] = ∫_{−∞}^{∞} δ(x − x0) e^{−2πjux} dx = e^{−2πjux0} = cos (2πux0) − j sin (2πux0)    (2.13)

FT^{−1}[δ(u − u0)] = ∫_{−∞}^{∞} δ(u − u0) e^{2πjux} du = e^{2πju0x} = cos (2πu0x) + j sin (2πu0x)    (2.14)
Note that the result in each case is a single-frequency waveform which has both cosine (real) and sine
(imaginary) components. Note that the frequency of these waveforms depends on the position of the
delta function (i.e. x0 or u0 ). Also note that for x0 = 0 and u0 = 0:
FT[δ(x)] = ∫_{−∞}^{∞} δ(x) e^{−2πjux} dx = e^0 = 1    (2.15)

FT^{−1}[δ(u)] = ∫_{−∞}^{∞} δ(u) e^{2πjux} du = e^0 = 1    (2.16)
thus the Fourier Transform of a delta function centred at x0 = 0 is equal to 1 at all frequencies (i.e.
at all values of u). This tells us that a delta function requires waveforms of infinitely high frequency to
represent it. Likewise, the inverse Fourier Transform of a delta function centred at u0 = 0 is a constant
at all values of x. In other words, a constant is equivalent to a single waveform with zero frequency
(i.e. with an infinite wavelength).
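A discrete analogue of equations 2.13 and 2.15 can be demonstrated with the FFT. This is a sketch with an assumed grid of 64 samples: an impulse at x0 = 0 transforms to 1 at every frequency, while a shifted impulse keeps unit magnitude but acquires a phase proportional to the shift.

```python
import numpy as np

# Discrete analogue of equations 2.13 and 2.15.
N = 64
delta0 = np.zeros(N); delta0[0] = 1.0     # impulse at x0 = 0
F0 = np.fft.fft(delta0)
print(np.allclose(F0, 1.0))               # flat, entirely real spectrum

delta5 = np.zeros(N); delta5[5] = 1.0     # impulse shifted to x0 = 5 samples
F5 = np.fft.fft(delta5)
print(np.allclose(np.abs(F5), 1.0))       # magnitude still 1 at every frequency
```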
It can also be shown that the Fourier transform of a sinusoidal waveform is a delta function,
occurring at the point on the frequency (u) axis corresponding to the frequency of the waveform (i.e.
the function contains just a single frequency component). In practice, the Fourier transform yields two
delta functions at positive and negative frequencies. If the waveform is a pure cosine wave, the delta
functions are real. However, if the waveform is a pure sine wave, the delta functions are imaginary.
FT[cos (2πu0x)] = (1/2) (δ(u − u0) + δ(u + u0))    (2.17)

FT[sin (2πu0x)] = (1/(2j)) (δ(u − u0) − δ(u + u0)) = −(j/2) (δ(u − u0) − δ(u + u0))    (2.18)
These transforms are represented graphically below:
[Figure: three Fourier transform pairs — a cosine wave transforms to two real delta functions at u = ±u0; a sine wave transforms to two imaginary delta functions at u = ±u0; a constant transforms to a single delta function at u = 0.]
2.3 Properties of Fourier transforms
The Fourier integrals (expressions for F (u) and f (x) given in section 2.1.2) can be used to verify the
following properties of Fourier transforms:
2.3.1 Differentiation
FT[df (x)/dx] = j2πu FT[f (x)] = j2πu F (u)    (2.19)
This result can be used to generate results for higher derivatives, such as:
FT[d²f (x)/dx²] = j2πu FT[df (x)/dx] = (j2πu)² F (u).    (2.23)
2.3.2 Integration
FT[∫_{−∞}^{x} f (s) ds] = (1/(j2πu)) F (u) + 2πc δ(u)    (2.24)
where the term 2πcδ(u) represents the Fourier transform of the constant of integration.
2.3.3 Multiplication by a constant

If the variable x is multiplied by a constant a, the transform is scaled accordingly:

FT[f (ax)] = (1/a) F (u/a)    (2.25)

2.3.4 Translation along the x axis

FT[f (x − a)] = e^{−2πjua} F (u)    (2.26)

2.3.5 Multiplication by an exponential

FT[f (x) e^{2πjax}] = F (u − a)    (2.27)
2.4 Convolution
Now that we know a little about the Fourier transform, we can examine one of its most fundamental
applications in physics and engineering, known as convolution.
2.4.1 Point response function

Suppose we wish to record an instantaneous event, represented by a delta function δ(t). No real
measuring device can respond instantaneously, so instead of a delta function our device will record a
smeared-out version of it, known as the point response function.

[Figure: an instantaneous event δ(t), and the broadened point response function p(t) recorded by the measuring device.]

Let us denote this function by p(t). If we attempted to record a whole series of events of arbitrary
amplitude, our actual measurement would consist of a series of point response functions, with each
amplitude dependent on that of the corresponding delta function:

[Figure: a train of delta functions of varying amplitude, and the corresponding train of scaled point response functions.]
If the instantaneous events are spaced very close together, obviously our device will generate a
series of overlapping point response functions added together.
[Figure: a continuous signal s(t), and the measurement m(t) = Σ p(t′ − t), the sum of the overlapping point response functions.]
Now suppose that, instead of a delta function, the signal we wish to measure is a continuous signal
represented by s(t). We can consider that s(t) is composed of an infinite number of closely-spaced delta
functions with a varying amplitude dependent on the shape of s(t). Thus the recorded measurement
of s(t) will consist of an infinite sum of displaced point response functions with amplitude varying
according to s(t).
The observed measurement m(t) is known as the convolution of s(t) with p(t), and can be repre-
sented by the following mathematical expression known as the convolution integral:
m(t′) = ∫_{−∞}^{∞} s(t) p(t′ − t) dt    (2.28)

where the integral denotes the sum, and the (t′ − t) denotes the shift of p(t) along the time axis.
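The discrete counterpart of the convolution integral can be sketched with NumPy. The point response shape and the two event amplitudes below are assumed example values, not taken from the notes.

```python
import numpy as np

# Discrete convolution of an event train with a point response function,
# mirroring equation 2.28.
p = np.array([0.2, 0.5, 1.0, 0.5, 0.2])   # point response (assumed shape)
s = np.zeros(20)
s[3], s[10] = 2.0, 1.0                    # two "events", amplitudes 2 and 1

m = np.convolve(s, p)                     # measured signal m = s * p
print(m[3:8])                             # a copy of p scaled by 2
print(m[10:15])                           # a copy of p scaled by 1
```

Each delta-like event is replaced by a scaled, shifted copy of p(t), exactly as described above.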
2.4.2 Fourier representation of convolution
To avoid having to write out the integral, the convolution of two functions is often represented using
a star symbol, with a much more compact notation as follows:

m(t) = s(t) ∗ p(t)    (2.29)

A property of the Fourier Transform allows us to write the Fourier transform of the convolution of two
functions as the product of the Fourier Transforms of the two functions. This is often a much more
convenient method than solving the integral form. Thus:

FT[s(t) ∗ p(t)] = FT[s(t)] FT[p(t)]    (2.30)

or, also expressed as

M (u) = S(u)P (u)    (2.31)

and by applying the inverse Fourier Transform to both sides of equation 2.30

m(t) = s(t) ∗ p(t) = FT^{−1}[S(u)P (u)]    (2.32)

which means that to obtain the convolution of two functions in real space (m(t)) we can calculate
the Fourier Transforms of the two functions to be convolved, multiply them, and subsequently apply
the inverse Fourier Transform to the result.
To prove this, it is slightly easier to work backwards, starting with the product of two Fourier
transforms, as expressed in equation 2.31. The proof is as follows:

m(t) = ∫_{−∞}^{∞} S(u)P (u) e^{2πjut} du    (2.33)

Substituting the Fourier integral for S(u) gives

m(t) = ∫_{−∞}^{∞} [ ∫_{−∞}^{∞} s(t′) e^{−2πjut′} dt′ ] P (u) e^{2πjut} du    (2.34)

where t′ is just a dummy variable, allowing a variable over which to integrate without introducing
notation ambiguities. We now change the order of integration to obtain:

m(t) = ∫_{−∞}^{∞} [ ∫_{−∞}^{∞} P (u) e^{2πju(t−t′)} du ] s(t′) dt′    (2.35)

The inner integral is the inverse Fourier transform of P (u), evaluated at (t − t′):

p(t − t′) = ∫_{−∞}^{∞} P (u) e^{2πju(t−t′)} du    (2.36)

and therefore

m(t) = ∫_{−∞}^{∞} s(t′) p(t − t′) dt′    (2.37)

which is the convolution integral. Thus we have proven that the Fourier transform of the convolution
of s(t) with p(t) is equal to the product of the Fourier transforms of s(t) and p(t).
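The convolution theorem can be checked numerically. A sketch using random signals: computing the convolution through the discrete Fourier transform (with zero-padding, so the circular convolution matches the linear one) agrees with direct convolution.

```python
import numpy as np

# Numerical check of the convolution theorem (equation 2.30) using the DFT.
rng = np.random.default_rng(0)
s = rng.standard_normal(128)
p = rng.standard_normal(128)

n = len(s) + len(p) - 1                   # length of the full convolution
via_fft = np.real(np.fft.ifft(np.fft.fft(s, n) * np.fft.fft(p, n)))
direct = np.convolve(s, p)
print(np.allclose(via_fft, direct))
```

For long signals this transform route is also far cheaper than evaluating the sum directly, which is why FFT-based convolution is used so widely in practice.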
2.4.3 An example of convolution: smoothing
Often engineers need to remove noise from data by a process known as smoothing. This can be achieved,
for example, by averaging adjacent values in the data. However the smoothing is performed, the process
is equivalent to the removal of higher frequencies from the Fourier transform of the data. Consider the
function f (x) exhibited graphically at the top left of the diagram below, and let us suppose that it has
the Fourier Transform F (u) as shown at the top right. Remember that each value of F (u) represents
the amplitude of the waveform with a frequency u which, when added to all the other waveforms, gives
us f (x).

[Figure: the function f (x) and its Fourier transform F (u); the smoothing function h(x) and its transform H(u); and their products.]
Let us suppose that f (x) is convolved with a function h(x), which is also shown in the diagram.
The function h(x) is quite smooth, and therefore its Fourier transform H(u) (also shown) does not
contain high frequencies. Note how the product of F (u) and H(u) causes the high frequency content
in F (u) to be multiplied by zero. The inverse transform of the product, equal to f (x)∗h(x), thus yields
a version of f (x) with the high frequency information removed.
Note that, strictly speaking, the above process is irreversible: a multiplication by zero cannot be
undone by a division. Nevertheless, engineers commonly employ so-called deconvolution techniques
which attempt to recover the true signal s(t) from a measurement m(t) using knowledge of the point
spread function p(t). However, all such techniques are necessarily approximate: since recovering the
full spectrum of the original signal requires amplifying high frequencies, where noise tends to dominate,
deconvolution techniques must compromise between restoring detail and amplifying noise.
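Smoothing by removing high frequencies can be sketched directly in the frequency domain. The signal, noise level, and cutoff frequency below are all assumed example values.

```python
import numpy as np

# Smooth a noisy signal by zeroing the high-frequency part of its spectrum.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 256, endpoint=False)
clean = np.sin(2 * np.pi * 3 * x)                 # underlying signal
f = clean + 0.3 * rng.standard_normal(256)        # signal plus noise

F = np.fft.rfft(f)
F[10:] = 0.0                  # remove every component above 10 cycles (assumed cutoff)
smooth = np.fft.irfft(F)

# The smoothed signal should lie closer to the clean sine than the noisy one
print(np.abs(smooth - clean).mean() < np.abs(f - clean).mean())
```

As noted above, setting those frequency bins to zero is irreversible: no deconvolution step can recover the discarded high-frequency content exactly.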
2.5 Laplace transform

2.5.1 The Laplace transform of functions

The Laplace transform of a function y(t) is obtained by multiplying it by e^{−st} and integrating over
all positive times:

Y (s) = ∫_0^∞ y(t) e^{−st} dt    (2.39)

We note that the transform is a function of a new variable s, and also that the Laplace transform is
only defined for positive values of t, such that it is always assumed that y(t) ≡ 0 for t < 0. Instead of
writing the integral each time, we shall use the following shorthand notation:
Y (s) = L [y(t)] (2.40)
and similarly we can define an inverse Laplace Transform, which converts Y (s) back into y(t):
y(t) = L−1 [Y (s)] (2.41)
The Laplace transform and the Fourier transform are performed using very similar integrals. There
are just two differences:
a) The Laplace transform integral starts from zero instead of minus infinity (partly due to conven-
tion, because the Laplace transform is often applied to engineering problems which begin at a
zero time);
b) Whereas the Fourier transform integral (section 2.1.2) uses a purely imaginary exponent −2πjut =
−jωt, the Laplace transform integral uses a complex exponent −st, where s is a complex number
we can write as s = a + jω.
From a comparison of the two integrals we can also see that the Laplace transform of a function f (t)
is equal to the Fourier transform of a function f (t)e−at . Thus the Fourier transform is a special case
of the Laplace transform when the real part of the exponent a = 0.
Whereas the Fourier transform tells us which sinusoids are contained in a function, the Laplace
transform tells us which sinusoids and exponentials are contained in a function. Whereas a Fourier
transform is expressed as a function of frequency u (or ω = 2πu), the Laplace transform is expressed as
a function of the complex variable s = a + jω. This is a good match to the basic mathematics of systems
which oscillate, such as the sinusoidal motion of a mass oscillating up and down on the end of a spring.
In many physical systems we find that the oscillation amplitude decays exponentially, through a
process known as damping. The Laplace transform enables us to easily study such systems, whose
behaviour is described by a combination of sinusoids and exponentials, or in other words, oscillations
and damping.
As briefly described in section 2.1.2, the (continuous) Fourier integral can be derived from the
(discrete) Fourier Series. Similarly, the (continuous) Laplace integral can be derived from a simple
(discrete) power series. Consider the following convergent power series:
f (x) = Σ_{k=0}^{∞} a_k x^k.    (2.42)
For the power series to converge, it must be true that 0 < x < 1. Powers of x can easily be converted
to powers of e by substituting x = e^{−s}, where s is a new variable. By taking the natural logarithm of
both sides we find that s = − ln (x). The negative sign ensures that s is positive, because the log of a
number less than 1 must be negative. Thus the power series in equation 2.42 becomes:
f (s) = Σ_{k=0}^{∞} a_k e^{−sk}.    (2.43)
As in the case of the Fourier transform, when the set of coefficients ak is shifted into a continuous
function y(k), the above infinite sum becomes an integral of the form:
f (s) = ∫_0^∞ y(k) e^{−sk} dk.    (2.44)
The usefulness of Laplace transforms stems from their ability to turn a challenging problem
expressed in terms of differential equations into a much easier problem expressed as algebraic equations,
and they are commonly employed to simplify the way that a system is described mathematically.
Note that different conventions are used to denote Laplace transforms. In these notes, the
transform of a function y(t) is denoted using a capital letter Y (s). However, others use a line placed
above the function, such as ȳ(s). We will now illustrate how Laplace transforms are calculated for a
few examples of simple functions.
The Laplace transform of a constant: y(t) = k
Because we must assume that y(t) ≡ 0 for t < 0, we consider a so-called step function rather than a
constant. The Heaviside Step Function is defined as:
u(t) = 0 for t < 0,  u(t) = 1 for t > 0    (2.45)
The Laplace transform of the constant k (i.e. of k u(t)) is then:

L[k u(t)] = ∫_0^∞ k e^{−st} dt = k/s    (2.46)

The Laplace transform of an exponential: y(t) = e^{at}

L[e^{at}] = ∫_0^∞ e^{at} e^{−st} dt = ∫_0^∞ e^{−t(s−a)} dt = [−1/(s − a)] e^{−t(s−a)} |_0^∞ = 1/(s − a)    (2.47)
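Results such as equation 2.47 can be checked symbolically. A sketch using sympy's `laplace_transform` (the `noconds=True` flag suppresses the convergence conditions):

```python
import sympy as sp

# Symbolic check of equation 2.47: L[e^{at}] = 1/(s - a).
t, s, a = sp.symbols('t s a', positive=True)

Y = sp.laplace_transform(sp.exp(a * t), t, s, noconds=True)
print(sp.simplify(Y - 1 / (s - a)) == 0)
```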
Table 2.1: Laplace Transforms

 1   unit step            u(t)                          1/s
 2   impulse              δ(t)                          1
 3   delayed unit step    u(t − a)                      (1/s) e^{−as}
 5   rectangular impulse  u(t) − u(t − a)               (1/s)(1 − e^{−as})
 6   ramp                 u(t) t                        1/s²
 7   n-th power           u(t) tⁿ, n = 1, 2, 3, . . .   n!/s^{n+1}
 8   exponential          u(t) e^{±at}                  1/(s ∓ a)
 9   sine                 u(t) sin (at)                 a/(s² + a²)
10   cosine               u(t) cos (at)                 s/(s² + a²)
11   damped sine          u(t) e^{−at} sin (bt)         b/((s + a)² + b²)
12   damped cosine        u(t) e^{−at} cos (bt)         (s + a)/((s + a)² + b²)
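A few rows of Table 2.1 can be spot-checked symbolically. A sketch verifying rows 6, 8, 9, and 12 with sympy:

```python
import sympy as sp

# Spot-check several rows of Table 2.1 with sympy's laplace_transform.
t, s, a, b = sp.symbols('t s a b', positive=True)

checks = [
    (t, 1 / s**2),                                                   # row 6: ramp
    (sp.exp(-a * t), 1 / (s + a)),                                   # row 8: exponential
    (sp.sin(a * t), a / (s**2 + a**2)),                              # row 9: sine
    (sp.exp(-a * t) * sp.cos(b * t), (s + a) / ((s + a)**2 + b**2)), # row 12: damped cosine
]
for y, Y in checks:
    L = sp.laplace_transform(y, t, s, noconds=True)
    print(sp.simplify(L - Y) == 0)
```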
2.5.2 The Laplace transform of derivatives

By inserting dy/dt into the Laplace integral and integrating by parts, it can be shown that:

L[dy/dt] = s Y (s) − y(0)    (2.57)

which shows that the Laplace Transform of the derivative of y(t) is equal to s multiplied by the Laplace
Transform of y(t), minus y(0), which is also called the initial value. The initial value is just the value
of y(t) at the point t = 0. It can be shown that the transform of the second derivative of y(t) is given
by:
L[d²y/dt²] = s² Y (s) − (dy/dt)(0) − s y(0).    (2.58)
Note that now the solution contains two initial values: the value of the first derivative of y at t = 0,
and the value of y at t = 0. Similar expressions are available for the Laplace Transforms of higher-order
derivatives. In fact there is a general formula which allows the transform of any order of derivative to
be calculated:
L[dⁿy/dtⁿ] = sⁿ Y (s) − [d^{n−1}y/dt^{n−1}](0) − s [d^{n−2}y/dt^{n−2}](0) − s² [d^{n−3}y/dt^{n−3}](0) − · · · − s^{n−1} y(0).    (2.59)
2.5.3 Solving differential equations given initial conditions

Example
We have the following equation which we are told describes the relationship between voltage and time
for an electrical circuit:
dy/dt + 3y = sin (t)    (2.60)
We also know that at a time t = 0 the voltage y was 2 volts. We wish to solve this equation,
which means that we want to find an expression for y(t) that describes how the voltage changes with
time. The first step is to replace each term by its Laplace Transform. The transform of the first two
terms (y and the first derivative of y) were given above. The transform of sin(t), found in a table of
transforms, is given by:
L[sin (at)] = a/(s² + a²)    (2.61)
we thus get
sY (s) − y(0) + 3Y (s) = 1/(s² + 1)    (2.62)
and by rearranging and substituting y(0) = 2, we get:
Y (s) = 2/(s + 3) + 1/((s + 3)(s² + 1)).    (2.63)
Before we can continue, the term on the right needs to be simplified a little further. To accomplish this,
we need to use the method of partial fractions. This simplification gives us:

Y (s) = 2/(s + 3) + 1/(10(s + 3)) − s/(10(s² + 1)) + 3/(10(s² + 1)).    (2.64)
The function y(t) is the inverse Laplace Transform of the above expression:
y(t) = 2 L^{−1}[1/(s + 3)] + (1/10) L^{−1}[1/(s + 3)] − (1/10) L^{−1}[s/(s² + 1)] + (3/10) L^{−1}[1/(s² + 1)].    (2.65)
By inserting each of the four inverse transform terms which can be found in a table, we obtain our
solution:
y(t) = (1/10) (21 e^{−3t} − cos (t) + 3 sin (t))    (2.66)
This is clearly quite a complicated expression. However, obtaining the result was relatively easy and
required some simple algebra and looking up some transforms in a table.
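The solution (2.66) can be verified symbolically by substituting it back into the differential equation and checking the initial condition:

```python
import sympy as sp

# Verify that y(t) = (21 e^{-3t} - cos t + 3 sin t)/10 satisfies
# dy/dt + 3y = sin(t) with y(0) = 2.
t = sp.symbols('t')
y = (21 * sp.exp(-3 * t) - sp.cos(t) + 3 * sp.sin(t)) / 10

residual = sp.simplify(sp.diff(y, t) + 3 * y - sp.sin(t))
print(residual == 0, y.subs(t, 0) == 2)
```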
2.5.4 The Shift Theorems
Translation of transforms
It can be shown that the Laplace transform of y(t) multiplied by a factor e^{at} is given by:

L[y(t) e^{at}] = Y (s − a)    (2.67)

which results in a simple translation of the Laplace Transform Y (s) along the s axis by an amount a.
Example
Use the above theorem to find the Inverse Laplace transform:
" #
1
L−1 2 . (2.68)
(s − 1)
We know (e.g. from section 2.5.1 or from the tables) that L[t] = 1/s², so by applying the shift theorem
with a = 1 we obtain
L[e^t t] = 1/(s − 1)².    (2.69)
and by applying the Inverse Laplace Transform on both sides we get
" #
1
L−1 t
2 = te . (2.70)
(s − 1)
Translation of functions
It can also be shown that the Laplace transform of y(t − a) is equal to the Laplace transform of y(t)
multiplied by a factor e^{−as}:

L[y(t − a)] = e^{−as} Y (s)    (2.71)
Example
Use the above theorem to find the inverse transform:
L^{−1}[e^{−6s}/(s² + 1)].    (2.72)
From our table of Laplace transforms we find that L [sin (t)] = 1/(s2 + 1), and thus
L^{−1}[e^{−6s}/(s² + 1)] = sin (t − 6)    (2.73)
we note that sin (t − 6) must be taken as zero when (t − 6) < 0, and therefore this solution applies
only for t > 6.
2.5.5 Laplace Transform of a delta function

The Laplace transform of a delta function centred at t = t0 follows directly from the definition:

L[δ(t − t0)] = ∫_0^∞ δ(t − t0) e^{−st} dt    (2.74)

and the function inside the integral is only non-zero at t = t0, and therefore

L[δ(t − t0)] = e^{−st0}    (2.75)
2.5.6 Convolution Theorem for Laplace Transforms
The Convolution Theorem for Laplace transforms is expressed as follows:
L^{−1}[F (s)G(s)] = ∫_0^t f (s)g(t − s) ds = ∫_0^t f (t − s)g(s) ds    (2.76)
where F (s) and G(s) are the Laplace transforms of f (t) and g(t), respectively. This enables some
inverse Laplace transforms to be found more easily when the Laplace transform is evidently a product
of two standard transforms.
Example
Use the Convolution theorem to find the inverse transform:
L^{−1}[1/(s²(s + 5))].    (2.77)
First, we note that 1/[s2 (s + 5)] is the product of 1/s2 and 1/(s + 5), two transforms we already know
F (s) = 1/s²  when  f (t) = t    (2.78)

G(s) = 1/(s + 5)  when  g(t) = e^{−5t}    (2.79)
then, to use the Convolution Theorem, we calculate the terms inside one of the integrals:

f (s) = s,  g(t − s) = e^{−5(t−s)}    (2.80)

f (t − s) = (t − s),  g(s) = e^{−5s}    (2.81)

thus
L^{−1}[1/(s²(s + 5))] = ∫_0^t s e^{−5(t−s)} ds    (2.82)

or  L^{−1}[1/(s²(s + 5))] = ∫_0^t (t − s) e^{−5s} ds    (2.83)
Either integral requires solving using integration by parts, and both yield the same answer:
L^{−1}[1/(s²(s + 5))] = (1/25) (5t − 1 + e^{−5t})    (2.84)
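The integral (2.83) and the final result (2.84) can be checked symbolically. In the sketch below the integration dummy is renamed u to avoid a clash with the transform variable s:

```python
import sympy as sp

# Evaluate the convolution integral (2.83) and compare with (2.84).
t, u = sp.symbols('t u', positive=True)

integral = sp.integrate((t - u) * sp.exp(-5 * u), (u, 0, t))
expected = (5 * t - 1 + sp.exp(-5 * t)) / 25
print(sp.simplify(integral - expected) == 0)
```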
Summary
This section has addressed the following key concepts:
• Mathematical transforms used to change the way functions or signals are represented.
• Convolution.
• Smoothing as a convolution process.
• Laplace transforms of functions and derivatives.
• Use of the Laplace transform to solve differential equations.