
ENGF0004

Mathematical Modelling and Analysis II


Topic 2: Transforms

Marco Endrizzi (Medical Physics and Biomedical Engineering)

Contents

2 Transforms
  2.1 Fourier transform
    2.1.1 Sinusoidal waveforms
    2.1.2 Fourier integrals
    2.1.3 Conceptual interpretation
  2.2 Delta functions
  2.3 Properties of Fourier transforms
    2.3.1 Differentiation
    2.3.2 Integration
    2.3.3 Multiplication by a constant
    2.3.4 Translation along the x axis
    2.3.5 Multiplication by an exponential
  2.4 Convolution
    2.4.1 Point response function
    2.4.2 Fourier representation of convolution
    2.4.3 An example of convolution: smoothing
  2.5 Laplace transform
    2.5.1 The Laplace transform of functions
    2.5.2 The Laplace transform of derivatives
    2.5.3 Solving differential equations given initial conditions
    2.5.4 The Shift Theorems
    2.5.5 Laplace Transform of a delta function
    2.5.6 Convolution Theorem for Laplace Transforms
Introduction
A transform is a mathematical tool for changing the way something is represented. A transform can
always be reversed. In other words, an original something and its transformed version are always
equivalent; they contain the same information, without anything added or taken away in the process
of moving from one representation to another.

2.1 Fourier transform


2.1.1 Sinusoidal waveforms
A sinusoidal waveform is one whose shape resembles a sine or cosine wave. It has an amplitude
(height) and a wavelength (the distance between consecutive peaks, valleys, or any two points at the
same relative position in the cycle). The number of peaks per unit distance along the length of the
waveform is called the frequency; wavelength is inversely proportional to frequency. The phase of
the waveform tells us the position of the peaks relative to some fixed point on the horizontal axis (such
as the zero point). A sine wave is said to have a 90 degree (or π/2 radian) phase shift with respect to
a cosine wave.

[Figure: plots of sin(x) and cos(x) over several cycles, annotated with the amplitude (the peak height, 1) and the wavelength (2π, the distance between consecutive peaks); the sine curve is shifted by π/2 relative to the cosine curve.]

The Fourier transform is a way of representing something as a sum of waveforms. Suppose we have
a stream of numbers which represent the values of a given quantity (e.g. intensity of light, voltage,
temperature, or whatever) at different points in space, or at different times. We will represent this
set of values by a function f(x), where x is the variable (space or time). The Fourier Transform of
f(x) provides us with the amplitudes, frequencies, and phases of all the waveforms
which, when added together, would generate f(x). To avoid confusion, frequencies are labelled with
the letter u, and the Fourier Transforms of functions are denoted using upper-case letters. Thus the
Fourier Transform of f(x) is indicated with F(u).
The amplitude (or modulus) of F(u) is a distribution representing waveform amplitude as a function
of frequency u. The figure below shows five sinusoidal (cosine) waveforms with different wavelengths
and amplitudes; by adding them together we obtain a function f(x) with a fairly strong peak around 2π, where peaks
in all the individual waveforms happened to coincide.

[Figure: the sum f(x) of the five waveforms, plotted over 0 to 4π, showing a pronounced peak near x = 2π.]

The Fourier Transform of this sum is represented by a graph which shows the amplitude of the five
waveforms plotted against their frequency. Incidentally, anything plotted against frequency is generally
known as a spectrum. The Fourier Transform of the sum of the above set of waveforms (which only
consists of five points!) contains exactly the same information as the sum itself. There is no loss of
information or accuracy in choosing a representation in the x coordinate domain (often referred to as
the real space) or a representation in the u coordinate (often referred to as the reciprocal space).

[Figure: the amplitude spectrum F(u) of the five-waveform sum: five points plotted against frequency u.]
2.1.2 Fourier integrals
In the section dedicated to Fourier Series it was shown how a (periodic) mathematical function f (x)
can be re-expressed as a sum of cosine and sine functions. In the complex form

f(x) = \sum_{n=-\infty}^{\infty} c_n e^{jnx}    (2.1)

where the complex coefficients c_n describing the sines and cosines are found by solving a set of integrals
as follows:

c_n = \frac{1}{T} \int_{-T/2}^{T/2} f(x) e^{-jnx} \, dx.    (2.2)
where T is the period over which the function f (x) repeats itself.
While the Fourier series is used to represent a periodic function as a set of discrete frequencies, the
Fourier transform enables us to represent a function which is not periodic and has a continuous range
of frequencies. This is achieved by replacing the series summation with an integral (i.e. replacing the
integer n with a continuous frequency variable u). The discrete set of complex coefficients c_n thus becomes the
continuous function F(u) described above, and we obtain the following mathematical definitions
of the Fourier transform and its inverse:

F(u) = \int_{-\infty}^{\infty} f(x) e^{-2\pi jux} \, dx    (2.3)

f(x) = \int_{-\infty}^{\infty} F(u) e^{2\pi jux} \, du.    (2.4)

The inverse transform enables a set of waveforms to be transformed back into the original function.
These beautifully simple expressions are of immense use in engineering. Fourier transform and the
inverse Fourier transforms are often indicated with a more compact notation, as follows:

F(u) = FT[f(x)]   and   f(x) = FT^{-1}[F(u)]    (2.5)
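As a quick numerical illustration (a sketch, not part of the notes: numpy's discrete Fourier transform stands in for the continuous integrals (2.3) and (2.4)), transforming a sampled signal and then inverse-transforming it recovers the original, confirming that no information is lost:

```python
import numpy as np

# Sketch: the DFT is the sampled analogue of equations (2.3) and (2.4);
# a forward transform followed by the inverse recovers the samples exactly.
x = np.linspace(0.0, 1.0, 256, endpoint=False)
f = np.exp(-50.0 * (x - 0.5) ** 2)   # an arbitrary smooth test signal
F = np.fft.fft(f)                    # forward transform, plays the role of F(u)
f_back = np.fft.ifft(F).real         # inverse transform
print(np.allclose(f, f_back))        # True: the transform is reversible
```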

Example: Fourier Transform of rectangular function


Let us look into calculating the Fourier transform of a rectangular function, which can be expressed
mathematically as follows:

f(x) = \begin{cases} 1 & \text{for } -a/2 < x < a/2 \\ 0 & \text{elsewhere} \end{cases}    (2.6)

The Fourier transform of f(x) is obtained by inserting f(x) = 1 into the Fourier integral and reducing
the limits of integration to ±a/2, since the function f(x) is zero outside this range:

F(u) = \left[ \frac{-1}{2\pi ju} e^{-2\pi jux} \right]_{-a/2}^{a/2}    (2.7)

     = \frac{1}{2\pi ju} \left( e^{j\pi ua} - e^{-j\pi ua} \right)    (2.8)

and recalling that \sin(x) = \frac{e^{jx} - e^{-jx}}{2j},

F(u) = \frac{\sin(\pi au)}{\pi u}.    (2.9)
A function of the form sin (x)/x is known as a sinc function. It has values of zero when πau = ±π (i.e.
the argument equals ±180 degrees), which occurs when u = ±1/a. We can therefore assign a width
to F (u), the length between the two first zero-crossings, which is equal to 2/a. This is shown in the
figure below, along with the rectangular function f (x), and its corresponding width.

[Figure: the rectangular function f(x), of width a, and its Fourier transform F(u), a sinc function whose central lobe spans 2/a between the first zero-crossings.]
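The worked example can be checked numerically; the sketch below (sampling grid and test frequencies are arbitrary choices, not from the notes) approximates the Fourier integral (2.3) of the rectangular function by a Riemann sum and compares it against the sinc result (2.9):

```python
import numpy as np

# Riemann-sum check of the rectangular-function example.
a = 2.0
x = np.linspace(-5.0, 5.0, 20001)
dx = x[1] - x[0]
f = np.where(np.abs(x) < a / 2, 1.0, 0.0)   # the rectangular function (2.6)

def ft(u):
    """Riemann-sum approximation of the Fourier integral (2.3) at frequency u."""
    return np.sum(f * np.exp(-2j * np.pi * u * x)) * dx

for u in (0.1, 0.3, 0.7):
    exact = np.sin(np.pi * a * u) / (np.pi * u)   # the sinc of equation (2.9)
    print(f"u = {u}: numeric {ft(u).real:+.4f}, exact {exact:+.4f}")
```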

Note that in general a Fourier transform F(u) is complex. For each frequency u, the value of F(u)
can have both real (a) and imaginary (jb) components. The amplitude of each waveform at frequency
u is described by the modulus R = (a^2 + b^2)^{1/2}. Meanwhile the phase of each waveform is described
by θ = arctan(b/a). Since a + jb = R(cos θ + j sin θ), the components a and b represent the cosine
and sine components of the waveform. If a >> b then the cosine component dominates, and if a << b
then the sine component dominates.
The cosine function is even (i.e. cos(−θ) = cos(θ)), and thus symmetric about the line x = 0,
while the sine function is odd (i.e. sin(−θ) = −sin(θ)). Consequently the Fourier transforms of even
functions are entirely real, and the Fourier series of even functions only contain cosine terms. Meanwhile
the Fourier transforms of odd functions are entirely imaginary, and the Fourier series of odd functions only
contain sine terms. In the example above, the rectangular function is even, and thus F(u) is entirely
real.

2.1.3 Conceptual interpretation


While engineers are not frequently required to calculate the Fourier transforms of specific functions,
it is essential that they have a very good grasp of the concept of representing functions as a spectrum
of frequencies. This concept is explored below for two different physical definitions of the variable x.

a) x is a measure of time (units of seconds)


In this case we would replace x with the more familiar time variable t, and f (t) represents a time-
varying quantity. The Fourier transform of f (t) is then a function of frequency u expressed in units
of Hertz (s−1 ). For example, suppose f (t) represents the intensity of audible sound produced by a
symphony orchestra, recorded over a short period of time. The modulus of the Fourier transform F (u)
would represent the spectrum of frequencies contained within that sound. The values of the modulus
at lower values of u would depend on the loudness of the low-pitched instruments (such as cellos
and tubas), whereas the modulus at higher values of u would depend on the loudness of high-pitched
instruments (such as violins and flutes). Audio systems often display the spectral content of the sound
during recording and playback (typically averaged over discrete bands of frequencies).

b) x is a measure of distance (units of metres)


When the variable x represents distance, the Fourier transform becomes a function of spatial frequency
u expressed in units of inverse metres (m−1 ). For example, suppose f (x) represents the variation in
brightness across a single line of an image. If the image is smooth, the intensity will vary slowly, and
the modulus of F (u) will only be non-zero at lower values of u. If the image contains a lot of sharp
detail, then the modulus of F (u) will be non-zero at much higher values of u. Sharp detail in f (x)
can also be viewed as the signal containing waveforms at high spatial frequencies. This concept will
be explored further below in section 2.4 on Convolution.

In general, it is observed that the product of the widths of the real and reciprocal space representations
is conserved:

width of f(x) × width of F(u) ≈ order 1 (a constant value)    (2.10)

where the term "width" indicates a value that can be attributed to the functions and may be an
approximation. In the example of the rectangular function and sinc shown above, it is clear how we can
assign a width to f(x), but F(u) is a much smoother function and it might not be as obvious which
points to choose for assigning its width (the zero-crossings are a reasonable choice). This tells us that
narrower functions have broader Fourier Transforms and vice versa. A broader Fourier Transform
implies that more high frequencies are included within it (higher frequencies provide sharper
detail). The rectangular function above had a width of a, whereas the width of its Fourier transform
was approximately 2/a. Thus the product is a constant (2), independent of the value of a.

2.2 Delta functions


The Dirac delta function δ(x − x_0) is a function which is equal to zero everywhere except at the position
x = x_0, where its value is infinite in order to satisfy the identity:

\int \delta(x - x_0) \, dx = 1    (2.11)

The Dirac delta is a very peculiar function because no other function defined over the real numbers
has these properties. This unusual function is very useful for representing physical phenomena which
have negligible (≈ 0) duration or spatial extent. Examples include the force of a hammer blow and
the image of a distant star.
The product of a continuous function f(x) and a delta function δ(x − x_0) is only non-zero at x_0:

\int f(x) \delta(x - x_0) \, dx = f(x_0)    (2.12)

and integrating the product has, in practice, the effect of extracting the value of the function f(x) at
x = x_0. The Fourier transform and its inverse for the Dirac delta function are:
FT[\delta(x - x_0)] = \int_{-\infty}^{\infty} \delta(x - x_0) e^{-2\pi jux} \, dx = e^{-2\pi jux_0} = \cos(2\pi u x_0) - j \sin(2\pi u x_0)    (2.13)

FT^{-1}[\delta(u - u_0)] = \int_{-\infty}^{\infty} \delta(u - u_0) e^{2\pi jux} \, du = e^{2\pi j u_0 x} = \cos(2\pi u_0 x) + j \sin(2\pi u_0 x)    (2.14)

Note that the result in each case is a single-frequency waveform which has both cosine (real) and sine
(imaginary) components. Note that the frequency of these waveforms depends on the position of the
delta function (i.e. x0 or u0 ). Also note that for x0 = 0 and u0 = 0:

FT[\delta(x)] = \int_{-\infty}^{\infty} \delta(x) e^{-2\pi jux} \, dx = e^0 = 1    (2.15)

FT^{-1}[\delta(u)] = \int_{-\infty}^{\infty} \delta(u) e^{2\pi jux} \, du = e^0 = 1    (2.16)

thus the Fourier Transform of a delta function centred at x_0 = 0 is equal to 1 at all frequencies (i.e.
at all values of u). This tells us that a delta function requires waveforms of infinitely high frequency to
represent it. Likewise, the inverse Fourier Transform of a delta function centred at u_0 = 0 is a constant.
In other words, a constant (at all values of x) is equivalent to a single waveform with zero
frequency (i.e. with an infinite wavelength).
It can also be shown that the Fourier transform of a sinusoidal waveform is a delta function,
occurring at the point on the frequency (u) axis corresponding to the frequency of the waveform (i.e.
the function contains just a single frequency component). In practice, the Fourier transform yields two
delta functions at positive and negative frequencies. If the waveform is a pure cosine wave, the delta
functions are real. However, if the waveform is a pure sine wave, the delta functions are imaginary.

FT[\cos(2\pi u_0 x)] = \frac{1}{2}\left(\delta(u - u_0) + \delta(u + u_0)\right)    (2.17)

FT[\sin(2\pi u_0 x)] = \frac{1}{2j}\left(\delta(u - u_0) - \delta(u + u_0)\right) = -\frac{j}{2}\left(\delta(u - u_0) - \delta(u + u_0)\right)    (2.18)
These transforms are represented graphically below:

[Figure: three transform pairs. cos(x) transforms to two real delta functions at u = ±u_0; sin(x) transforms to two imaginary delta functions of opposite sign at u = ±u_0; a constant transforms to a single delta function at u = 0.]
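The cosine spectrum of equation (2.17) can be seen numerically in a discrete sketch (assuming unit sampling so that DFT bin k corresponds to frequency k; the negative frequency −u_0 appears at bin N − u_0):

```python
import numpy as np

# Sketch: the DFT of a pure cosine at frequency u0 is two real spikes of
# height 1/2, at bins u0 and N - u0 (the discrete stand-in for +/- u0).
N, u0 = 256, 10
x = np.arange(N) / N
F = np.fft.fft(np.cos(2 * np.pi * u0 * x)) / N
peaks = np.flatnonzero(np.abs(F) > 0.25)
print(peaks, F[u0].real)   # bins 10 and 246; spike height 1/2
```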

2.3 Properties of Fourier transforms
The Fourier integrals (expressions for F (u) and f (x) given in section 2.1.2) can be used to verify the
following properties of Fourier transforms:

2.3.1 Differentiation
 
FT\left[\frac{df(x)}{dx}\right] = j2\pi u \, FT[f(x)] = j2\pi u \, F(u)    (2.19)

The proof is as follows:

FT\left[\frac{df(x)}{dx}\right] = \int_{-\infty}^{\infty} \frac{df}{dx} e^{-2\pi jux} \, dx    (2.20)

and by using integration by parts, \int u \frac{dv}{dx} \, dx = uv - \int v \frac{du}{dx} \, dx:

FT\left[\frac{df(x)}{dx}\right] = \left[f(x) e^{-2\pi jux}\right]_{-\infty}^{\infty} - \int_{-\infty}^{\infty} f(x) \frac{d}{dx} e^{-2\pi jux} \, dx    (2.21)

assuming f(±∞) = 0 so that the integral of f(x) is finite:

FT\left[\frac{df(x)}{dx}\right] = j2\pi u \int_{-\infty}^{\infty} f(x) e^{-2\pi jux} \, dx = j2\pi u \, F(u).    (2.22)

This result can be used to generate results for higher derivatives, such as:

FT\left[\frac{d^2 f(x)}{dx^2}\right] = j2\pi u \, FT\left[\frac{df(x)}{dx}\right] = (j2\pi u)^2 F(u).    (2.23)

Similar proofs can be derived for the properties listed below.
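The differentiation property (2.19) can also be verified numerically; in the sketch below (test function and sample count are arbitrary choices) the derivative of a smooth periodic signal is computed by multiplying its DFT spectrum by j2πu:

```python
import numpy as np

# Numeric check of property (2.19): differentiating a smooth periodic
# signal is equivalent to multiplying its spectrum by j*2*pi*u.
N = 256
x = np.arange(N) / N
f = np.exp(np.cos(2 * np.pi * x))                  # smooth periodic test function
u = np.fft.fftfreq(N, d=1.0 / N)                   # integer frequencies, +/-
df_spectral = np.fft.ifft(2j * np.pi * u * np.fft.fft(f)).real
df_exact = -2 * np.pi * np.sin(2 * np.pi * x) * f  # analytic derivative
print(np.allclose(df_spectral, df_exact))          # True
```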

2.3.2 Integration
FT\left[\int_{-\infty}^{x} f(s) \, ds\right] = \frac{1}{j2\pi u} F(u) + 2\pi c \, \delta(u)    (2.24)

where the term 2πcδ(u) represents the Fourier transform of the constant of integration.

2.3.3 Multiplication by a constant

FT[f(ax)] = \frac{1}{|a|} F\!\left(\frac{u}{a}\right)    (2.25)

2.3.4 Translation along the x axis

FT[f(x + a)] = e^{j2\pi ua} F(u)    (2.26)

2.3.5 Multiplication by an exponential


 
FT[e^{\alpha x} f(x)] = F\!\left(u + \frac{j\alpha}{2\pi}\right)    (2.27)

where α may be real, imaginary, or complex.

2.4 Convolution
Now that we know a little about the Fourier transform, we can examine one of its most fundamental
applications in physics and engineering, known as convolution.

2.4.1 Point response function


Consider a situation where we attempt to make a measurement of an instantaneous event represented
by the Dirac delta function δ(t). Remember that δ(t) only has a non-zero value at t = 0. However,
suppose the device we use to make the measurement is unable to respond infinitely fast, and instead
of recording a delta function, it records a smooth distribution of finite width. We call this distribution
the point-response function of the device.

[Figure: an instantaneous event δ(t), and the corresponding recorded point response p(t), a smooth peak of finite width.]

Let us denote this function by p(t). If we attempted to record a whole series of events of arbitrary
amplitude, our actual measurement would consist of a series of point response functions, each with an
amplitude dependent on that of the corresponding delta function:

t t

If the instantaneous events are spaced very close together, obviously our device will generate a
series of overlapping point response functions added together.

[Figure: closely spaced events s(t), and the resulting measurement m(t) = Σ p(t′ − t), a sum of overlapping point responses.]

Now suppose that, instead of a delta function, the signal we wish to measure is a continuous signal
represented by s(t). We can consider that s(t) is composed of an infinite number of closely-spaced delta
functions with a varying amplitude dependent on the shape of s(t). Thus the recorded measurement
of s(t) will consist of an infinite sum of displaced point response functions with amplitude varying
according to s(t).
The observed measurement m(t) is known as the convolution of s(t) with p(t), and can be repre-
sented by the following mathematical expression known as the convolution integral:
m(t') = \int_{-\infty}^{\infty} s(t) \, p(t' - t) \, dt    (2.28)

where the integral denotes the sum, and the argument (t′ − t) denotes the shift of p(t) along the time axis.
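A discrete sketch of the convolution integral (2.28) (event positions, amplitudes, and the Gaussian point response are illustrative choices, not from the notes): np.convolve evaluates the discrete convolution sum, blurring each event by the point response.

```python
import numpy as np

# Three delta-like events s blurred by a Gaussian point response p.
s = np.zeros(100)
s[[20, 45, 50]] = [1.0, 0.5, 0.8]       # instantaneous events
k = np.arange(-10, 11)
p = np.exp(-0.5 * (k / 3.0) ** 2)       # point response of the device
m = np.convolve(s, p, mode="same")      # recorded measurement m(t')
print(m[20])                            # ~1.0: an isolated event keeps its height
print(m[47] > max(s[45], s[50]))        # True: nearby events overlap and add
```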

2.4.2 Fourier representation of convolution
To avoid having to write out the integral, the convolution of two functions is often represented using
a star symbol, with a much more compact notation as follows:

m(t) = s(t)∗p(t). (2.29)

A property of the Fourier Transform allows us to write the Fourier Transform of the convolution of two
functions as the product of the Fourier Transforms of the two functions. This is often a much more
convenient route than solving the integral form. Thus:

FT[m(t)] = FT[s(t)]FT[p(t)] (2.30)

or also expressed as

M (u) = S(u)P (u) (2.31)

and by applying the inverse Fourier Transform on both sides of equation 2.30

m(t) = FT−1 [FT[s(t)]FT[p(t)]] (2.32)

which means that to obtain the convolution of two functions in the real space (m(t)) we can
calculate the Fourier Transforms of the two functions to be convolved, multiply them, and subsequently
apply the inverse Fourier Transform to the result.
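This recipe is easy to check numerically; the sketch below (random test data; circular convolution, which is what the DFT naturally computes) compares the FFT route against the direct convolution sum:

```python
import numpy as np

# Check of equation (2.32): FFT -> multiply -> inverse FFT reproduces the
# direct (circular) convolution sum evaluated term by term.
rng = np.random.default_rng(0)
s = rng.standard_normal(64)
p = rng.standard_normal(64)
via_fft = np.fft.ifft(np.fft.fft(s) * np.fft.fft(p)).real
direct = np.array([sum(s[j] * p[(k - j) % 64] for j in range(64))
                   for k in range(64)])
print(np.allclose(via_fft, direct))   # True
```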
To prove this it is slightly easier to work backwards, starting with the product of two Fourier
transforms, as expressed in equation 2.31. The proof is as follows:

m(t) = \int_{-\infty}^{\infty} S(u) P(u) e^{2\pi jut} \, du    (2.33)

The function S(u) is the Fourier Transform of s(t), therefore:

m(t) = \int_{-\infty}^{\infty} \left[ \int_{-\infty}^{\infty} s(t') e^{-2\pi jut'} \, dt' \right] P(u) e^{2\pi jut} \, du    (2.34)

where t′ is just a dummy variable, allowing a variable over which to integrate without introducing
notation ambiguities. We now change the order of integration to obtain:

m(t) = \int_{-\infty}^{\infty} \left[ \int_{-\infty}^{\infty} P(u) e^{2\pi ju(t - t')} \, du \right] s(t') \, dt'    (2.35)

The inverse Fourier Transform of P(u) is given by

p(t) = \int_{-\infty}^{\infty} P(u) e^{2\pi jut} \, du    (2.36)

and therefore

p(t - t') = \int_{-\infty}^{\infty} P(u) e^{2\pi ju(t - t')} \, du    (2.37)

which we can now substitute in equation 2.35 to obtain

m(t) = \int_{-\infty}^{\infty} p(t - t') s(t') \, dt' = s(t) ∗ p(t)    (2.38)

which is the convolution integral. Thus we have proven that the Fourier transform of the convolution
of s(t) with p(t) is equal to the product of the Fourier transforms of s(t) and p(t).

2.4.3 An example of convolution: smoothing
Often engineers need to remove noise from data by a process known as smoothing. This can be achieved,
for example, by averaging adjacent values in the data. However the smoothing is performed, the process
is equivalent to the removal of higher frequencies from the Fourier transform of the data. Consider a
function f(x) with Fourier Transform F(u). Remember that each value of F(u) represents
the amplitude of the waveform with a frequency u which, when added to all the other waveforms, gives
us f(x).
Let us suppose that f(x) is convolved with a function h(x). The function h(x) is quite smooth, and
therefore its Fourier transform H(u) does not contain high frequencies. Note how the product of F(u)
and H(u) causes the high frequency content in F(u) to be multiplied by zero. The inverse transform
of the product, equal to f(x) ∗ h(x), thus yields a version of f(x) with the high frequency information
removed.
Note that, strictly speaking, the above process is irreversible: a multiplication by zero cannot be
undone by a division. Nevertheless, engineers commonly employ so-called deconvolution techniques,
which attempt to recover the true signal s(t) from a measurement m(t) using knowledge of the point
spread function p(t). All such techniques are necessarily approximate: as high frequencies are
amplified to recover the full spectrum of the original signal, deconvolution must compromise between
restoring detail and having the final result dominated by amplified noise.
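A minimal numerical sketch of smoothing as frequency-domain multiplication (the signal, noise level, and cutoff frequency are illustrative choices, not from the notes):

```python
import numpy as np

# Smoothing by zeroing high-frequency bins: the "multiply by zero" step.
rng = np.random.default_rng(1)
N = 256
x = np.arange(N) / N
clean = np.sin(2 * np.pi * 3 * x)                    # low-frequency signal
noisy = clean + 0.3 * rng.standard_normal(N)         # broadband noise added
F = np.fft.fft(noisy)
F[np.abs(np.fft.fftfreq(N, d=1.0 / N)) > 8] = 0.0    # crude low-pass H(u)
smoothed = np.fft.ifft(F).real
print(np.std(noisy - clean), np.std(smoothed - clean))  # residual error shrinks
```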

2.5 Laplace transform


2.5.1 The Laplace transform of functions
The Laplace Transform is a very useful tool in mathematical analysis, especially for solving linear
differential equations. Solving such an equation involves finding an expression for the function y(x)
which can be inserted into the differential equation to make it true, and different approaches and
techniques exist for solving certain classes of differential equations. Like any other transform (such
as the Fourier Transform), the Laplace Transform converts a function to an equivalent form, and its
inverse takes us back to the starting point, without loss of content. The Laplace transform of a function
y(t) is defined as follows:
Y(s) = \int_0^{\infty} e^{-st} y(t) \, dt    (2.39)

We note that the transform is a function of a new variable s, and also that the Laplace transform is
only defined for positive values of t, such that it is always assumed that y(t) ≡ 0 for t < 0. Instead of
writing the integral each time, we shall use the following shorthand notation:

Y(s) = L[y(t)]    (2.40)

and similarly we can define an inverse Laplace Transform, which converts Y(s) back into y(t):

y(t) = L^{-1}[Y(s)]    (2.41)
The Laplace transform and the Fourier transform are performed using very similar integrals. There
are just two differences:
a) The Laplace transform integral starts from zero instead of minus infinity (partly due to conven-
tion, because the Laplace transform is often applied to engineering problems which begin at a
zero time);
b) Whereas the Fourier transform integral (section 2.1.2) uses an imaginary exponent 2πjut = jωt
(with ω = 2πu), the Laplace transform integral uses a complex exponent st, where s is a complex
number we can write as s = a + jω.

From a comparison of the two integrals we can also see that the Laplace transform of a function f(t)
is equal to the Fourier transform of the function f(t)e^{−at}. Thus the Fourier transform is a special case
of the Laplace transform in which the real part of the exponent, a, is zero.
Whereas the Fourier transform tells us which sinusoids are contained in a function, the Laplace
transform tells us which sinusoids and exponentials are contained in a function. Whereas a Fourier
transform is expressed as a function of frequency u (or ω = 2πu), the Laplace transform is expressed as
a function of the complex variable s = a + jω. This is a good match to the basic mathematics of systems
which oscillate, such as the sinusoidal motion of a mass oscillating up and down on the end of a spring.
In many physical systems we find that the oscillation amplitude decays exponentially, through a
process known as damping. The Laplace transform enables us to easily study such systems, whose
behaviour is described by a combination of sinusoids and exponentials, or in other words, oscillations
and damping.
As briefly described in section 2.1.2, the (continuous) Fourier integral can be derived from the
(discrete) Fourier Series. Similarly, the (continuous) Laplace integral can be derived from a simple
(discrete) power series. Consider the following convergent power series:

f(x) = \sum_{k=0}^{\infty} a_k x^k.    (2.42)

For the power series to converge, it must be true that 0 < x < 1. Powers of x can easily be converted
to powers of e by substituting x = e^{-s}, where s is a new variable. By taking the natural logarithm of
both sides we find that s = -\ln(x). The negative sign appears because the logarithm of a number less than
1 must be negative (so that s itself is positive). Thus the power series in equation 2.42 becomes:

f(s) = \sum_{k=0}^{\infty} a_k e^{-sk}.    (2.43)

As in the case of the Fourier transform, when the set of coefficients a_k is shifted into a continuous
function y(k), the above infinite sum becomes an integral of the form:

f(s) = \int_0^{\infty} y(k) e^{-sk} \, dk.    (2.44)

The usefulness of Laplace transforms stems from their ability to transform a challenging problem
expressed in terms of differential equations to a much easier problem expressed as algebraic equations,
and they are commonly employed when needing to simplify the way that a system is described math-
ematically. Note that different conventions are used to denote Laplace transforms. In these notes, the
transform of a function y(t) is denoted using a capital letter Y (s). However, others use a line placed
above the function, such as ȳ(s). We will now illustrate how Laplace transforms are calculated for a
few examples of simple functions.

The Laplace transform of a constant: y(t) = k

Because we must assume that y(t) ≡ 0 for t < 0, we consider a so-called step function rather than a
constant. The Heaviside Step Function is defined as:

u(t) = \begin{cases} 0 & \text{for } t < 0 \\ 1 & \text{for } t > 0 \end{cases}    (2.45)

and the Laplace transform of u(t) is:

L[u(t)] = \int_0^{\infty} 1 \cdot e^{-st} \, dt = \left[ -\frac{1}{s} e^{-st} \right]_0^{\infty} = \frac{1}{s}    (2.46)

which holds provided that s > 0. If we multiply u(t) by k, or any other constant value, we obtain
L[k] = k/s.

The Laplace transform of an exponential: y(t) = e^{at}

L[e^{at}] = \int_0^{\infty} e^{at} e^{-st} \, dt = \int_0^{\infty} e^{-t(s-a)} \, dt = \left[ \frac{-1}{s-a} e^{-t(s-a)} \right]_0^{\infty} = \frac{1}{s-a}    (2.47)

which holds provided that s > a.

The Laplace transform of y(t) = t

L[t] = \int_0^{\infty} t e^{-st} \, dt    (2.48)

which we can integrate by parts:

L[t] = \int_0^{\infty} t e^{-st} \, dt    (2.49)
     = \int_0^{\infty} t \frac{d}{dt}\!\left( -\frac{1}{s} e^{-st} \right) dt    (2.50)
     = \left[ -\frac{t}{s} e^{-st} \right]_0^{\infty} - \int_0^{\infty} \left( -\frac{1}{s} e^{-st} \right) dt    (2.51)
     = \left[ -\frac{1}{s^2} e^{-st} \right]_0^{\infty}    (2.52)
     = \frac{1}{s^2}    (2.53)
which holds provided that s > 0. Note that the existence of each of the above Laplace transforms
places a condition on the variable s. Tables of Laplace transforms of common functions are available
in many mathematics textbooks, and online.
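The transforms derived above can also be reproduced with a computer algebra system; a sketch using sympy (assuming its laplace_transform function, with noconds=True so that only F(s) is returned):

```python
import sympy as sp

# Reproduce the transforms of 1, t, and exp(a*t) derived above.
t, s, a = sp.symbols("t s a", positive=True)
print(sp.laplace_transform(sp.Integer(1), t, s, noconds=True))   # 1/s
print(sp.laplace_transform(t, t, s, noconds=True))               # 1/s**2
F_exp = sp.laplace_transform(sp.exp(a * t), t, s, noconds=True)
print(sp.simplify(F_exp - 1 / (s - a)))                          # 0: agrees with (2.47)
```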

2.5.2 The Laplace transform of derivatives


Laplace Transforms of derivatives of a function are related in a simple way to the transform of the
function itself. Consider the Laplace transform of dy/dt:

L\left[\frac{dy}{dt}\right] = \int_0^{\infty} \frac{dy}{dt} e^{-st} \, dt    (2.54)

which can be integrated by parts to obtain

L\left[\frac{dy}{dt}\right] = \left[ y e^{-st} \right]_0^{\infty} - \int_0^{\infty} y \frac{d(e^{-st})}{dt} \, dt    (2.55)
   = -y(0) - (-s) \int_0^{\infty} y e^{-st} \, dt    (2.56)
   = sY(s) - y(0)    (2.57)

Table 2.1: Laplace Transforms

      function name         f(t) = L^{-1}[F(s)]              F(s) = L[f(t)]
  1   unit step             u(t)                             1/s
  2   impulse               δ(t)                             1
  3   delayed unit step     u(t − a)                         e^{−as}/s
  4   delayed impulse       δ(t − a)                         e^{−as}
  5   rectangular impulse   u(t) − u(t − a)                  (1 − e^{−as})/s
  6   ramp                  u(t) t                           1/s^2
  7   n-th power            u(t) t^n  (n = 1, 2, 3, …)       n!/s^{n+1}
  8   exponential           u(t) e^{±at}                     1/(s ∓ a)
  9   sine                  u(t) sin(at)                     a/(s^2 + a^2)
 10   cosine                u(t) cos(at)                     s/(s^2 + a^2)
 11   damped sine           u(t) e^{−at} sin(bt)             b/((s + a)^2 + b^2)
 12   damped cosine         u(t) e^{−at} cos(bt)             (s + a)/((s + a)^2 + b^2)

which shows that the Laplace Transform of the derivative of y(t) is equal to s multiplied by the Laplace
Transform of y(t), minus y(0), which is also called the initial value. The initial value is just the value
of y(t) at the point t = 0. It can be shown that the transform of the second derivative of y(t) is given
by:

L\left[\frac{d^2 y}{dt^2}\right] = s^2 Y(s) - \frac{dy}{dt}(0) - s y(0).    (2.58)

Note that now the solution contains two initial values: the value of the first derivative of y at t = 0,
and the value of y at t = 0. Similar expressions are available for the Laplace Transforms of higher-order
derivatives. In fact there is a general formula which allows the transform of any order of derivative to
be calculated:

L\left[\frac{d^n y}{dt^n}\right] = s^n Y(s) - \frac{d^{n-1} y}{dt^{n-1}}(0) - s \frac{d^{n-2} y}{dt^{n-2}}(0) - s^2 \frac{d^{n-3} y}{dt^{n-3}}(0) - \dots - s^{n-1} y(0).    (2.59)

2.5.3 Solving differential equations given initial conditions


Solving a linear differential equation using Laplace Transforms will now be illustrated using an example.

Example
We have the following equation which we are told describes the relationship between voltage and time
for an electrical circuit:

\frac{dy}{dt} + 3y = \sin(t)    (2.60)
We also know that at a time t = 0 the voltage y was 2 volts. We wish to solve this equation,
which means that we want to find an expression for y(t) that describes how the voltage changes with
time. The first step is to replace each term by its Laplace Transform. The transform of the first two
terms (y and the first derivative of y) were given above. The transform of sin(at), found in a table of
transforms, is given by:

L[\sin(at)] = \frac{a}{s^2 + a^2}    (2.61)

and with a = 1 we thus get

sY(s) - y(0) + 3Y(s) = \frac{1}{s^2 + 1}    (2.62)

and by rearranging and substituting y(0) = 2, we get:

Y(s) = \frac{2}{s + 3} + \frac{1}{(s + 3)(s^2 + 1)}.    (2.63)

Before we can continue, the term on the right needs to be simplified a little further. To accomplish this,
we need to use the method of partial fractions. This simplification gives us:

Y(s) = \frac{2}{s + 3} + \frac{1}{10(s + 3)} - \frac{s}{10(s^2 + 1)} + \frac{3}{10(s^2 + 1)}.    (2.64)

The function y(t) is the inverse Laplace Transform of the above expression:

y(t) = 2 L^{-1}\left[\frac{1}{s + 3}\right] + \frac{1}{10} L^{-1}\left[\frac{1}{s + 3}\right] - \frac{1}{10} L^{-1}\left[\frac{s}{s^2 + 1}\right] + \frac{3}{10} L^{-1}\left[\frac{1}{s^2 + 1}\right].    (2.65)

By inserting each of the four inverse transform terms, which can be found in a table, we obtain our
solution:

y(t) = \frac{1}{10}\left( 21 e^{-3t} - \cos(t) + 3 \sin(t) \right)    (2.66)
This is clearly quite a complicated expression. However, obtaining the result was relatively easy and
required some simple algebra and looking up some transforms in a table.
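The solution (2.66) can be verified by substituting it back into the differential equation; a sketch using sympy:

```python
import sympy as sp

# Verify that y(t) from (2.66) satisfies dy/dt + 3y = sin(t) with y(0) = 2.
t = sp.symbols("t")
y = (21 * sp.exp(-3 * t) - sp.cos(t) + 3 * sp.sin(t)) / 10
print(sp.simplify(sp.diff(y, t) + 3 * y - sp.sin(t)))   # 0: the ODE holds
print(y.subs(t, 0))                                     # 2: the initial condition holds
```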

2.5.4 The Shift Theorems
Translation of transforms
It can be shown that the Laplace transform of y(t) multiplied by a factor e^{at} is given by:

L[y(t) e^{at}] = Y(s - a)    (2.67)

which results in a simple translation of the Laplace Transform Y(s) along the s axis by an amount a.

Example
Use the above theorem to find the inverse Laplace transform:

L^{-1}\left[\frac{1}{(s - 1)^2}\right].    (2.68)

We know (e.g. from section 2.5.1 or from the tables) that L[t] = 1/s^2; by applying the shift theorem
with a = 1 we obtain

L[e^t t] = \frac{1}{(s - 1)^2}    (2.69)

and by applying the inverse Laplace Transform on both sides we get

L^{-1}\left[\frac{1}{(s - 1)^2}\right] = t e^t.    (2.70)

Translation of functions
It can also be shown that the Laplace transform of y(t − a) is equal to the Laplace transform of y(t)
multiplied by a factor e^{−as}:

L[y(t - a)] = e^{-as} Y(s)    (2.71)

In this case y(t) has shifted along the t axis by an amount a.

Example
Use the above theorem to find the inverse transform:

L^{-1}\left[\frac{1}{s^2 + 1} e^{-6s}\right].    (2.72)

From our table of Laplace transforms we find that L[\sin(t)] = 1/(s^2 + 1), and thus

L^{-1}\left[\frac{1}{s^2 + 1} e^{-6s}\right] = \sin(t - 6)    (2.73)

we note that the Laplace transform of sin(t − 6) is undefined when (t − 6) < 0, and therefore this
solution requires that t > 6.
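This example can be checked with sympy; the sketch below assumes its inverse_laplace_transform function, whose result carries an explicit Heaviside step that encodes the t > 6 requirement:

```python
import sympy as sp

# Inverse transform of exp(-6s)/(s^2 + 1); the Heaviside factor makes the
# result vanish for t < 6, matching the note above.
t, s = sp.symbols("t s", positive=True)
expr = sp.inverse_laplace_transform(sp.exp(-6 * s) / (s**2 + 1), s, t)
print(expr)   # sin(t - 6) multiplied by a Heaviside step at t = 6
```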

2.5.5 Laplace Transform of a delta function


The Laplace transform of a delta function δ(t − t_0) is derived as follows (assuming t_0 > 0):

L[\delta(t - t_0)] = \int_0^{\infty} \delta(t - t_0) e^{-st} \, dt    (2.74)

and since the function inside the integral is only non-zero at t = t_0,

L[\delta(t - t_0)] = e^{-st_0}.    (2.75)

2.5.6 Convolution Theorem for Laplace Transforms
The Convolution Theorem for Laplace transforms is expressed as follows:

L^{-1}[F(s)G(s)] = \int_0^t f(\tau) g(t - \tau) \, d\tau = \int_0^t f(t - \tau) g(\tau) \, d\tau    (2.76)

where F(s) and G(s) are the Laplace transforms of f(t) and g(t), respectively, and τ is a dummy
integration variable (kept distinct from the transform variable s). This enables some inverse Laplace
transforms to be found more easily when the Laplace transform is evidently a product of two standard
transforms.

Example
Use the Convolution theorem to find the inverse transform:

L^{-1}\left[\frac{1}{s^2 (s + 5)}\right].    (2.77)

First, we note that 1/[s^2 (s + 5)] is the product of 1/s^2 and 1/(s + 5), two transforms we already know:

F(s) = \frac{1}{s^2}   when   f(t) = t    (2.78)

G(s) = \frac{1}{s + 5}   when   g(t) = e^{-5t}    (2.79)

then, to use the Convolution Theorem, we write down the terms inside one of the integrals:

f(\tau) = \tau   and   g(t - \tau) = e^{-5(t - \tau)}    (2.80)

or   f(t - \tau) = t - \tau   and   g(\tau) = e^{-5\tau}    (2.81)

thus

L^{-1}\left[\frac{1}{s^2 (s + 5)}\right] = \int_0^t \tau e^{-5(t - \tau)} \, d\tau    (2.82)

or   L^{-1}\left[\frac{1}{s^2 (s + 5)}\right] = \int_0^t (t - \tau) e^{-5\tau} \, d\tau    (2.83)

Either integral can be solved using integration by parts, and both yield the same answer:

L^{-1}\left[\frac{1}{s^2 (s + 5)}\right] = \frac{1}{25}\left( 5t - 1 + e^{-5t} \right)    (2.84)
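The result (2.84) can be checked by evaluating the convolution integral (2.82) directly; a sketch using sympy:

```python
import sympy as sp

# Evaluate the convolution integral (2.82) and compare with (2.84).
t, tau = sp.symbols("t tau", positive=True)
conv = sp.integrate(tau * sp.exp(-5 * (t - tau)), (tau, 0, t))
target = (5 * t - 1 + sp.exp(-5 * t)) / 25
print(sp.simplify(conv - target))   # 0: the two expressions agree
```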

Summary
This section has addressed the following key concepts:

• Mathematical transforms to change the way functions or signals are represented.
• Representing functions as a sum of sinusoidal waveforms.
• Use of the Fourier integral to find the spectral content of a function.
• Delta functions.
• Convolution.
• Smoothing as a convolution process.
• Laplace transforms of functions and derivatives.
• Use of the Laplace transform to solve differential equations.
