Control and DSP Lab
Don’ts
• Avoid unnecessary chat or walking about.
• Disfiguring of furniture is prohibited.
• Avoid using cell phones unless absolutely necessary.
• Do not use personal pen drives without permission.
• Do not displace the monitor, keyboard, mouse, etc.
• Avoid late submission of laboratory reports.
IN505 DSP LABORATORY    Semester: V    L-T-P: 0-0-2    Credits: 1
Text book: Digital Signal Processing Using MATLAB, John G Proakis, Vinay K. Ingle,
Cengage Learning.
Student Profile
Name
Roll Number
Department
Year
Student Performance
Sl. No.  Title of the Experiment  (Remarks)
1. Representation of basic signals: i) sine wave, ii) cosine wave, iii) unit impulse, iv) unit step, v) square wave, vi) exponential waveform, vii) sawtooth waveform
2. i) Verification of the sampling theorem, ii) demonstration of the effects of aliasing arising from improper sampling, iii) computation of circular convolution, iv) calculation of the FFT
3. Filtering by convolution: i) design of an analog band pass filter, ii) design of an IIR filter using impulse invariance, iii) design of an FIR low pass filter, iv) FIR high pass filter, v) FIR band stop filter, vi) FIR band pass filter
4. Estimation of power spectral density
5. Application of filtering techniques to a noisy biomedical signal and extraction of its features: mean, median, autocorrelation coefficients
6. i) Introduction to the DSP Kit, ii) implementation of digital FIR and IIR filters on the DSP Starter Kit
Office Use
Checked and found
…………………………………………………
Grade/ Marks
…………………………………………………
Signature
……………………………………………………
EXPERIMENT: 1
TITLE: Representation of basic signals
AIM: To generate and plot the basic signals (sine, cosine, square, exponential, sawtooth, unit step, and unit impulse) in MATLAB.
PROCEDURE:
% sine wave
t=0:0.01:1;
a=2; b=a*sin(2*pi*2*t);
subplot(3,3,1); stem(t,b);
xlabel('time'); ylabel('Amplitude'); title('sine wave');
% cosine wave
t=0:0.01:1;
a=2; b=a*cos(2*pi*2*t);
subplot(3,3,2); stem(t,b);
xlabel('time'); ylabel('Amplitude'); title('cosine wave');
% square wave
t=0:0.01:1;
a=2; b=a*square(2*pi*2*t);
subplot(3,3,3); stem(t,b);
xlabel('time'); ylabel('Amplitude'); title('square wave');
% exponential waveform
t=0:0.01:1;
a=2; b=a*exp(2*pi*2*t);
subplot(3,3,4); stem(t,b);
xlabel('time'); ylabel('Amplitude'); title('exponential wave');
% sawtooth waveform
t=0:0.01:1;
a=2; b=a*sawtooth(2*pi*2*t);
subplot(3,3,5); stem(t,b);
xlabel('time'); ylabel('Amplitude'); title('sawtooth wave');
% unit step
n=-5:5;
a=[zeros(1,5),ones(1,6)];
subplot(3,3,6); stem(n,a);
xlabel('time'); ylabel('amplitude'); title('Unit step');
% unit impulse
n=-5:5;
a=[zeros(1,5),ones(1,1),zeros(1,5)];
subplot(3,3,7); stem(n,a);
xlabel('time'); ylabel('amplitude'); title('Unit impulse');
RESULT:
DISCUSSION:
EXPERIMENT: 2
TITLE: Verification of the sampling theorem, circular convolution, and FFT
AIM: To illustrate the sampling theorem and the effects of aliasing, and to perform circular convolution and the FFT on signals.
THEORY:
The definition of proper sampling is quite simple. Suppose you sample a continuous signal in
some manner. If you can exactly reconstruct the analog signal from the samples, you must have
done the sampling properly. Even if the sampled data appears confusing or incomplete, the key
information has been captured if you can reverse the process.
Figure 3-3 shows several sinusoids before and after digitization. The continuous line represents the
analog signal entering the ADC, while the square markers are the digital signal leaving the ADC.
In (a), the analog signal is a constant DC value, a cosine wave of zero frequency. Since the analog
signal is a series of straight lines between each of the samples, all of the information needed to
reconstruct the analog signal is contained in the digital data. According to our definition, this
is proper sampling.
The sine wave shown in (b) has a frequency of 0.09 of the sampling rate. This might represent, for
example, a 90 cycle/second sine wave being sampled at 1000 samples/second. Expressed in
another way, there are 11.1 samples taken over each complete cycle of the sinusoid. This situation
is more complicated than the previous case, because the analog signal cannot be reconstructed by
simply drawing straight lines between the data points. Do these samples properly represent the
analog signal? The answer is yes, because no other sinusoid, or combination of sinusoids, will
produce this pattern of samples (within the reasonable constraints listed below). These samples
correspond to only one analog signal, and therefore the analog signal can be exactly reconstructed.
Again, an instance of proper sampling.
In (c), the situation is made more difficult by increasing the sine wave's frequency to 0.31 of the
sampling rate. This results in only 3.2 samples per sine wave cycle. Here the samples are so sparse
that they don't even appear to follow the general trend of the analog signal. Do these samples
properly represent the analog waveform? Again, the answer is yes, and for exactly the same
reason. The samples are a unique representation of the analog signal. All of the information
needed to reconstruct the continuous waveform is contained in the digital data. How you go about
doing this will be discussed later in this chapter. Obviously, it must be more sophisticated than
just drawing straight lines between the data points. As strange as it seems, this is proper
sampling according to our definition.
In (d), the analog frequency is pushed even higher to 0.95 of the sampling rate, with a mere 1.05
samples per sine wave cycle. Do these samples properly represent the data? No, they don't! The
samples represent a different sine wave from the one contained in the analog signal. In particular,
the original sine wave of 0.95 frequency misrepresents itself as a sine wave of 0.05 frequency in
the digital signal. This phenomenon of sinusoids changing frequency during sampling is
called aliasing. Just as a criminal might take on an assumed name or identity (an alias), the
sinusoid assumes another frequency that is not its own. Since the digital data is no longer uniquely
related to a particular analog signal, an unambiguous reconstruction is impossible. There is
nothing in the sampled data to suggest that the original analog signal had a frequency of 0.95
rather than 0.05. The sine wave has hidden its true identity completely; the perfect crime has been
committed! According to our definition, this is an example of improper sampling.
This line of reasoning leads to a milestone in DSP, the sampling theorem. Frequently this is
called the Shannon sampling theorem, or the Nyquist sampling theorem, after the authors of 1940s
papers on the topic. The sampling theorem indicates that a continuous signal can be properly
sampled, only if it does not contain frequency components above one-half of the sampling rate.
For instance, a sampling rate of 2,000 samples/second requires the analog signal to be composed
of frequencies below 1000 cycles/second. If frequencies above this limit are present in the signal,
they will be aliased to frequencies between 0 and 1000 cycles/second, combining with whatever
information that was legitimately there.
Two terms are widely used when discussing the sampling theorem: the Nyquist frequency and
the Nyquist rate. Unfortunately, their meaning is not standardized. To understand this, consider
an analog signal composed of frequencies between DC and 3 kHz. To properly digitize this signal
it must be sampled at 6,000 samples/sec (6 kHz) or higher. Suppose we choose to sample at 8,000
samples/sec (8 kHz), allowing frequencies between DC and 4 kHz to be properly represented. In
this situation there are four important frequencies: (1) the highest frequency in the signal, 3 kHz;
(2) twice this frequency, 6 kHz; (3) the sampling rate, 8 kHz; and (4) one-half the sampling rate, 4
kHz. Which of these four is the Nyquist frequency and which is the Nyquist rate? It depends who
you ask! All of the possible combinations are used. Fortunately, most authors are careful to define
how they are using the terms. In this book, they are both used to mean one-half the sampling rate.
Figure 3-4 shows how frequencies are changed during aliasing. The key point to remember is that
a digital signal cannot contain frequencies above one-half the sampling rate (i.e., the Nyquist frequency).
When the frequency of the continuous wave is below the Nyquist rate, the frequency of the
sampled data is a match. However, when the continuous signal's frequency is above the Nyquist
rate, aliasing changes the frequency into something that can be represented in the sampled data.
As shown by the zigzagging line in Fig. 3-4, every continuous frequency above the Nyquist rate
has a corresponding digital frequency between zero and one-half the sampling rate. If there
happens to be a sinusoid already at this lower frequency, the aliased signal will add to it, resulting
in a loss of information. Aliasing is a double curse; information can be lost about the
higher and the lower frequency. Suppose you are given a digital signal containing a frequency of
0.2 of the sampling rate. If this signal were obtained by proper sampling, the original analog
signal must have had a frequency of 0.2. If aliasing took place during sampling, the digital
frequency of 0.2 could have come from any one of an infinite number of frequencies in the analog
signal: 0.2, 0.8, 1.2, 1.8, 2.2, … .
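As a quick numerical check of this idea, the short MATLAB sketch below (not part of the original procedure; the frequencies 0.2·fs and 0.8·fs are chosen only for illustration) samples two sinusoids whose analog frequencies belong to the same alias family and shows that their sample values coincide:
% aliasing check: analog frequencies 0.2*fs and 0.8*fs give identical samples
fs = 1000;                     % sampling rate in Hz (illustrative value)
n  = 0:19;                     % 20 sample indices
x1 = cos(2*pi*0.2*fs*n/fs);    % sinusoid at 0.2 of the sampling rate
x2 = cos(2*pi*0.8*fs*n/fs);    % sinusoid at 0.8 of the sampling rate
disp(max(abs(x1 - x2)));       % difference is only numerical round-off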
Just as aliasing can change the frequency during sampling, it can also change the phase. For
example, look back at the aliased signal in Fig. 3-3d. The aliased digital signal is inverted from
the original analog signal; one is a sine wave while the other is a negative sine wave. In other
words, aliasing has changed the frequency and introduced a 180° phase shift. Only two phase
shifts are possible: 0° (no phase shift) and 180° (inversion). The zero phase shift occurs for analog
frequencies of 0 to 0.5, 1.0 to 1.5, 2.0 to 2.5, etc. An inverted phase occurs for analog frequencies
of 0.5 to 1.0, 1.5 to 2.0, 2.5 to 3.0, and so on.
Now we will dive into a more detailed analysis of sampling and how aliasing occurs. Our overall
goal is to understand what happens to the information when a signal is converted from a
continuous to a discrete form. The problem is, these are very different things; one is a continuous
waveform while the other is an array of numbers. This "apples-to-oranges" comparison makes the
analysis very difficult. The solution is to introduce a theoretical concept called the impulse train.
Figure 3-5a shows an example analog signal. Figure (c) shows the signal sampled by using
an impulse train. The impulse train is a continuous signal consisting of a series of narrow spikes
(impulses) that match the original signal at the sampling instants. Each impulse is infinitesimally
narrow, a concept that will be discussed in Chapter 13. Between these sampling times the value of
the waveform is zero. Keep in mind that the impulse train is a theoretical concept, not a waveform
that can exist in an electronic circuit. Since both the original analog signal and the impulse train
are continuous waveforms, we can make an "apples-apples" comparison between the two.
Now we need to examine the relationship between the impulse train and the discrete signal (an
array of numbers). This one is easy; in terms of information content, they are identical. If one is
known, it is trivial to calculate the other. Think of these as different ends of a bridge crossing
between the analog and digital worlds. This means we have achieved our overall goal once we
understand the consequences of changing the waveform in Fig. 3-5a into the waveform in Fig.
3.5c.
Three continuous waveforms are shown in the left-hand column in Fig. 3-5. The
corresponding frequency spectra of these signals are displayed in the right-hand column. This
should be a familiar concept from your knowledge of electronics; every waveform can be viewed
as being composed of sinusoids of varying amplitude and frequency. Later chapters will discuss
the frequency domain in detail. (You may want to revisit this discussion after becoming more
familiar with frequency spectra).
Figure (a) shows an analog signal we wish to sample. As indicated by its frequency spectrum in
(b), it is composed only of frequency components between 0 and about 0.33 fs, where fs is the
sampling frequency we intend to use. For example, this might be a speech signal that has been
filtered to remove all frequencies above 3.3 kHz. Correspondingly, fs would be 10 kHz (10,000
samples/second), our intended sampling rate.
Sampling the signal in (a) by using an impulse train produces the signal shown in (c), and its
frequency spectrum shown in (d). This spectrum is a duplication of the spectrum of the original
signal. Each multiple of the sampling frequency, fs, 2fs, 3fs, 4fs, etc., has received a copy and
a left-for-right flipped copy of the original frequency spectrum. The copy is called the upper
sideband, while the flipped copy is called the lower sideband. Sampling has
generated new frequencies. Is this proper sampling? The answer is yes, because the signal in (c)
can be transformed back into the signal in (a) by eliminating all frequencies above fs/2. That is, an
analog low-pass filter will convert the impulse train, (c), back into the original analog signal, (a).
If you are already familiar with the basics of DSP, here is a more technical explanation of why
this spectral duplication occurs. (Ignore this paragraph if you are new to DSP). In the time
domain, sampling is achieved by multiplying the original signal by an impulse train of unity
amplitude spikes. The frequency spectrum of this unity amplitude impulse train is also a unity
amplitude impulse train, with the spikes occurring at multiples of the sampling frequency, fs, 2fs,
3fs, 4fs, etc. When two time domain signals are multiplied, their frequency spectra are convolved.
This results in the original spectrum being duplicated to the location of each spike in the impulse
train's spectrum. Viewing the original signal as composed of both positive and negative
frequencies accounts for the upper and lower sidebands, respectively. This is the same as
amplitude modulation, discussed in Chapter 10.
Figure (e) shows an example of improper sampling, resulting from too low a sampling rate. The
analog signal still contains frequencies up to 3.3 kHz, but the sampling rate has been lowered to 5
kHz. Notice that the spectral duplications along the horizontal axis are spaced closer in (f) than in (d). The frequency
spectrum, (f), shows the problem: the duplicated portions of the spectrum have invaded the band
between zero and one-half of the sampling frequency. Although (f) shows these overlapping
frequencies as retaining their separate identity, in actual practice they add together forming a
single confused mess. Since there is no way to separate the overlapping frequencies, information
is lost, and the original signal cannot be reconstructed. This overlap occurs when the analog signal
contains frequencies greater than one-half the sampling rate, that is, we have proven the sampling
theorem.
Convolution is a mathematical way of combining two signals to form a third signal. It is the single
most important technique in Digital Signal Processing. Using the strategy of impulse
decomposition, systems are described by a signal called the impulse response. Convolution is
important because it relates the three signals of interest: the input signal, the output signal, and the
impulse response. This chapter presents convolution from two different viewpoints, called the
input side algorithm and the output side algorithm.
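Since the procedure below concentrates on the sampling part, a one-line illustration of this input / impulse-response / output relationship may help; the sequences used here are arbitrary illustrative values:
% linear convolution: output = input convolved with impulse response
x = [1 2 3 4];          % input signal (illustrative)
h = [1 0.5 0.25];       % impulse response (illustrative)
y = conv(x, h);         % output signal, length = length(x)+length(h)-1
disp(y)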
PROCEDURE:
t=-100:01:100;
fm=0.02;
x=cos(2*pi*t*fm); subplot(2,2,1);
plot(t,x);
xlabel('time in sec');
ylabel('x(t)');
title('continuous time signal');
fs1=0.02;
n=-2:2;
x1=cos(2*pi*fm*n/fs1);
subplot(2,2,2);
stem(n,x1);
hold on
subplot(2,2,2);
plot(n,x1,':');
title('discrete time signal x(n) with fs<2fm');
xlabel('n');
ylabel('x(n)');
fs2=0.04;
n1=-4:4;
x2=cos(2*pi*fm*n1/fs2);
subplot(2,2,3);
stem(n1,x2);
hold on
subplot(2,2,3);
plot(n1,x2,':');
title('discrete time signal x(n) with fs=2fm');
xlabel('n');
ylabel('x(n)');
n2=-50:50;
fs3=0.5;
x3=cos(2*pi*fm*n2/fs3);
subplot(2,2,4);
stem(n2,x3);
hold on
subplot(2,2,4);
plot(n2,x3,':');
xlabel('n');
ylabel('x(n)');
title('discrete time signal x(n) with fs>2fm');
Lab Procedure
a. Consider an analog signal x(t) consisting of three sinusoids of frequencies 1 kHz, 4 kHz, and 6 kHz:
x(t) = cos(2πt) + cos(8πt) + cos(12πt)
where t is in milliseconds. Show that if this signal is sampled at a rate of fs = 5 kHz, it will be aliased with the following signal, in the sense that their sample values will be the same:
xa(t) = 3 cos(2πt)
On the same graph, plot the two signals x(t) and xa(t) versus t in the range 0 ≤ t ≤ 2 msec. To this plot, add the time samples x(tn) and verify that x(t) and xa(t) intersect precisely at these samples. These samples can be evaluated and plotted as follows:
fs = 5; T = 1/fs;
tn = 0:T:2; xn = x(tn);
plot(tn, xn, '.');
b. Repeat part (a) with fs = 10 kHz. In this case, determine the signal xa(t) with which x(t) is aliased. Plot both x(t) and xa(t) on the same graph over the same range 0 ≤ t ≤ 2 msec. Verify again that the two signals intersect at the sampling instants.
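A possible MATLAB sketch for part (a) is given below. It is only one way to produce the required plot; the anonymous-function form of x(t) and xa(t) is an assumption for illustration, not the manual's own listing:
% part (a): x(t) and its 5 kHz alias xa(t) over 0 <= t <= 2 msec
x  = @(t) cos(2*pi*t) + cos(8*pi*t) + cos(12*pi*t);   % t in msec
xa = @(t) 3*cos(2*pi*t);                              % aliased signal
t  = 0:0.001:2;                                       % dense time grid (msec)
fs = 5; T = 1/fs;                                     % sampling rate 5 kHz
tn = 0:T:2; xn = x(tn);                               % sample instants and values
plot(t, x(t), t, xa(t), '--', tn, xn, '.');
xlabel('t (msec)'); legend('x(t)','x_a(t)','samples');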
[Figure: x(t) and xa(t) plotted over 0 ≤ t ≤ 2 msec, with the sample points marked, for fs = 5 kHz (left panel) and fs = 10 kHz (right panel); axes x(t), xa(t) versus t (msec).]
c. Consider a periodic sawtooth wave x(t) with period T0 = 1 sec, shown below.
[Figure: one period of the periodic sawtooth waveform x(t), T0 = 1 sec.]
Mathematically, x(t) is defined as follows over one period, that is, over the time interval 0 ≤ t ≤ 1:
x(t) = 2t − 1,  if 0 < t < 1
x(t) = 0,       if t = 0 or t = 1          (1.1)
where t is in units of seconds. This periodic signal admits a Fourier series expansion containing only sine terms with harmonics at the frequencies fm = m/T0, m = 1, 2, 3, 4, ... Hz:
x(t) = Σ_{m=1}^{∞} bm sin(2πmt) = b1 sin(2πt) + b2 sin(4πt) + b3 sin(6πt) + ···          (1.2)
Using your prior knowledge of Fourier series, prove that the Fourier coefficients are given as follows:
bm = −2/(πm),  m = 1, 2, 3, 4, ...
The reason why the signal x(t) was defined to have the value zero at the discontinuity points has to do with a theorem that states that any finite sum of Fourier series terms will always pass through the mid-points of discontinuities.
The Fourier series of Eq. (1.2) can be thought of as the limit (in the least-squares sense), as M → ∞, of the signal xM(t) consisting only of the first M harmonics.
To plot the above sawtooth waveform in MATLAB, you may use the functions upulse and ustep, which can be downloaded from the lab web page (see the link for the supplementary M-files). The following code segment will generate and plot the above sawtooth waveform over one period:
x = inline('upulse(t-0.5,0,0.5,0) - upulse(t,0,0,0.5)');
t = linspace(0,1,1001);
plot(t, x(t));
In order to plot three periods, use the following example code that adds three shifted copies of the
basic period defined in Eq. (1.1):
t = linspace(0,3,3001);
plot(t, x(t) + x(t-1) + x(t-2));
Repeat the above plot of the sawtooth waveform and xM(t) for the case M = 10, and then for M =
20.
The ripples that you see accumulating near the sawtooth discontinuities as M increases are an
example of the so-called Gibbs phenomenon in Fourier series.
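A minimal sketch of how the partial Fourier sum xM(t) could be formed and compared against the sawtooth is shown below; the loop over harmonics and the plotting style are assumptions, not the manual's own listing:
% partial Fourier sum xM(t) of the sawtooth, using bm = -2/(pi*m)
M  = 10;                          % number of harmonics (try 10, then 20)
t  = linspace(0,1,1001);
xM = zeros(size(t));
for m = 1:M
    xM = xM + (-2/(pi*m))*sin(2*pi*m*t);
end
plot(t, 2*t-1, t, xM, '--');      % sawtooth over one period versus xM(t)
xlabel('t (sec)'); legend('x(t)','x_M(t)');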
d. The sawtooth waveform x(t) is now sampled at the rate of fs = 5 Hz and the resulting samples are reconstructed by an ideal reconstructor. Using the methods of Example 1.4.6 of the text [1], show that the aliased signal xa(t) at the output of the reconstructor will have the form:
xa(t) = a1 sin(2πt) + a2 sin(4πt)
Determine the coefficients a1, a2. On the same graph, plot one period of the sawtooth wave x(t) together with xa(t). Verify that they agree at the five sampling time instants that lie within this period.
e. Assume, next, that the sawtooth waveform x(t) is sampled at the rate of fs = 10 Hz. Show now that the aliased signal xa(t) will have the form:
xa(t) = a1 sin(2πt) + a2 sin(4πt) + a3 sin(6πt) + a4 sin(8πt)
where the coefficients ai are obtained by the condition that the signals x(t) and xa(t) agree at the first four sampling instants tn = nT = n/10 sec, for n = 1, 2, 3, 4. These four conditions can be arranged into a 4×4 matrix equation of the form:
⎡ ∗ ∗ ∗ ∗ ⎤ ⎡ a1 ⎤   ⎡ ∗ ⎤
⎢ ∗ ∗ ∗ ∗ ⎥ ⎢ a2 ⎥ = ⎢ ∗ ⎥
⎢ ∗ ∗ ∗ ∗ ⎥ ⎢ a3 ⎥   ⎢ ∗ ⎥
⎣ ∗ ∗ ∗ ∗ ⎦ ⎣ a4 ⎦   ⎣ ∗ ⎦
Determine the numerical values of the starred entries and explain your approach. Then, using MATLAB, solve this matrix equation for the coefficients ai. Once the ai are known, the signal xa(t) is completely defined.
On the same graph, plot one period of the sawtooth waveform x(t) together with xa(t). Verify that they agree at the 10 sampling time instants that lie within this period. Also, make another plot that displays x(t) and xa(t) over three periods.
[Figure: x(t) and xa(t) over one period, t (sec), for fs = 5 Hz (left panel) and fs = 10 Hz (right panel).]
Please note the following caveat. In reducing the harmonics of the sawtooth waveform to the Nyquist interval [−5, 5] Hz, you may conclude that the Nyquist frequency f = 5 Hz should also be included
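Returning to the 4×4 system of part (e): the starred entries are for the student to determine, but once the agreement condition x(tn) = xa(tn) at tn = n/10 is written out, the system can be assembled and solved in MATLAB along the following lines (the element-wise construction is only a sketch of that condition, not the manual's listing):
% build and solve the 4x4 system for a1..a4 (fs = 10 Hz case)
fs = 10; T = 1/fs;
n  = (1:4)';                       % first four sampling instants
tn = n*T;
A  = sin(2*pi*tn*(1:4));           % A(n,k) = sin(2*pi*k*tn)
b  = 2*tn - 1;                     % x(tn) for the sawtooth, since 0 < tn < 1
a  = A\b;                          % coefficients a1..a4
disp(a')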
iii) Circular Convolution
clc;
clear all;
close all;
x=input('enter the sequence');
h=input('enter the impulse response');
n1=length(x);   % number of samples in x
n2=length(h);   % number of samples in h
n3=n1-n2;
n=max(n1,n2);
% zero-pad the shorter sequence so that both have length n
if(n3>=0)
    h=[h,zeros(1,n3)];
else
    x=[x,zeros(1,-n3)];
end;
% circular convolution: indices wrap around modulo n
for a=1:n
    y(a)=0;
    for i=1:n
        j=a-i+1;
        if(j<=0)
            j=n+j;
        end;
        y(a)=y(a)+x(i)*h(j);
    end;
end;
t=1:n;
stem(t,y);
disp(y);
title('circular convolution');
xlabel('samples');
ylabel('amplitude');
function [Xk] = dft(x,N)
% dft: compute the N-point DFT of x using the DFT matrix
n = [0:1:N-1];
k = [0:1:N-1];
WN = exp(-1i*2*pi/N);     % twiddle factor
nk = n'*k;
WNnk = WN.^nk;
Xk = x*WNnk;

% Example (run from the command window):
%   x = [1 1 1 1 4 5 6 1]; N = 8;
%   Xk = dft(x,N)
RESULT:
Circular Convolution
1. Enter a sequence and impulse response to calculate the result. Perform it taking three
different cases
Sequence x[n]   Impulse response h[n]   Result y[n]   GRAPH
Calculation of FFT
Change the values of x to find the FFT of the signal. Do this for three different signals.
DISCUSSION:
EXPERIMENT: 3
TITLE: Filtering by Convolution
THEORY:
A finite impulse response (FIR) filter is a filter structure that can be used to implement almost any
sort of frequency response digitally. An FIR filter is usually implemented by using a series of
delays, multipliers, and adders to create the filter's output. Figure 1 shows the basic block diagram
for an FIR filter of length N. The delays result in operating on prior input samples. The hk values
are the coefficients used for multiplication, so that the output at time n is the summation of all the
delayed samples multiplied by the appropriate coefficients.
The process of selecting the filter's length and coefficients is called filter design. The goal is to set
those parameters such that certain desired stopband and passband characteristics will result from running the filter. The outputs of the design process are:
• A frequency response plot, like the one shown in Figure 1, which verifies that the filter
meets the desired specifications, including ripple and transition bandwidth.
• The filter's length and coefficients.
The longer the filter (more taps), the more finely the response can be tuned.
With the length, N, and coefficients, float h[N] = { ... }, decided upon, the implementation of the
FIR filter is fairly straightforward. Listing 1 shows how it could be done in C. Running this code
on a processor with a multiply-and-accumulate instruction (and a compiler that knows how to use
it) is essential to achieving a large number of taps.
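Listing 1 itself (the C implementation) is not reproduced in this handout; as a stand-in, the following MATLAB sketch carries out the same tapped-delay-line computation, with illustrative coefficients and input:
% direct-form FIR filtering: y(n) = sum over k of h(k+1)*x(n-k)
h = [0.2 0.2 0.2 0.2 0.2];        % example coefficients (5-tap moving average)
x = randn(1,100);                 % example input samples
N = length(h);
y = zeros(size(x));
for n = 1:length(x)
    for k = 0:N-1
        if n-k >= 1
            y(n) = y(n) + h(k+1)*x(n-k);
        end
    end
end
% the same result is obtained with y = filter(h,1,x)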
The impulse invariance method of IIR filter design is based upon the notion that we can design a
discrete filter whose time-domain impulse response is a sampled version of the impulse response
of a continuous analog filter. If that analog filter (often called the prototype filter) has some
desired frequency response, then our IIR filter will yield a discrete approximation of that desired
response. The impulse response equivalence of this design method is depicted in Figure 2, where
we use the conventional notation of δ(t) to represent an impulse function and hc(t) is the analog
filter's impulse response. We use the subscript "c" in Figure 2 (a) to emphasize the continuous
nature of the analog filter.
Figure 2(b) illustrates the definition of the discrete filter's impulse response: the filter's time-
domain output sequence when the input is a single unity-valued sample (impulse) preceded and
followed by all zero-valued samples. Our goal is to design a digital filter whose impulse response
is a sampled version of the analog filter's continuous impulse response. Implied in the
correspondence of the continuous and discrete impulse responses is the property that we can map
each pole on the s-plane for the analog filter's Hc(s) transfer function to a pole on the z-plane for
the discrete IIR filter's H(z) transfer function. What designers have found is that the impulse
invariance method does yield useful IIR filters, as long as the sampling rate is high relative to the
bandwidth of the signal to be filtered. In other words, IIR filters designed using the impulse
invariance method are susceptible to aliasing problems because practical analog filters cannot be
perfectly band-limited.
PROCEDURE:
i) Analog filter: Band pass filter
clear all;
rp = input('passband ripple (dB)');
rs = input('stopband attenuation (dB)');
fp = input('passband frequency (Hz)');
fs = input('stopband frequency (Hz)');
f = input('sampling frequency (Hz)');
w1 = 2*fp/f;
w2 = 2*fs/f;
[n] = buttord(w1,w2,rp,rs);
wn = [w1,w2];
[b,a] = butter(n,wn,'bandpass');
w = 0:0.1:pi;
[h,p] = freqz(b,a,w);
g = 20*log10(abs(h));
A = angle(h);
subplot(2,2,1); plot(p/pi,g);
ylabel('amplitude (dB)'); xlabel('normalized frequency');
title('magnitude response');
subplot(2,2,2); plot(p/pi,A);
xlabel('normalized frequency'); ylabel('phase');
title('phase response');
ii) IIR filter using impulse invariance: a sixth-order analog Butterworth lowpass filter to a
digital filter using impulse invariance
f = 2;
fs = 10;
[b,a] = butter(6,2*pi*f,'s');
[bz,az] = impinvar(b,a,fs);
freqz(bz,az,1024,fs)
iii) FIR low pass filter design using window functions
clc;
clear all;
wc=input('enter the value of cut off frequency');
N=input('enter the length of the filter');
alpha=(N-1)/2;
eps=0.001;
%Rectangular Window
n=0:1:N-1;
hd=sin(wc*(n-alpha+eps))./(pi*(n-alpha+eps));
hn=hd
w=0:0.01:pi;
h=freqz(hn,1,w);
plot(w/pi,abs(h));
hold on
%Hamming Window
n=0:1:N-1;
wh=0.54-0.46*cos((2*pi*n)/(N-1));
hn=hd.*wh
w=0:0.01:pi;
h=freqz(hn,1,w);
plot(w/pi,abs(h),'ms');
hold off;
hold on
%Hanning Window
n=0:1:N-1;
wh=0.5-0.5*cos((2*pi*n)/(N-1));
hn=hd.*wh
w=0:0.01:pi;
h=freqz(hn,1,w);
plot(w/pi,abs(h),'b');
hold off;
hold on
%Blackman Window
n=0:1:N-1; wh=0.42-0.5*cos((2*pi*n)/(N-1))+0.08*cos((4*pi*n)/(N-1));
hn=hd.*wh
w=0:0.01:pi;
h=freqz(hn,1,w);
plot(w/pi,abs(h),'g');
hold off;
RESULT:
DISCUSSION:
EXPERIMENT: 4
TITLE: Estimation of power spectral density
AIM: To calculate the power spectral density of a signal
THEORY:
The power spectral density (PSD) shows the strength of the variations (energy) of a signal as a function
of frequency. In other words, it shows at which frequencies the variations are strong and at which
frequencies they are weak. The unit of PSD is energy per unit frequency, and the energy within a
specific frequency range is obtained by integrating the PSD over that range.
The PSD can be computed either directly by the FFT method or by computing the autocorrelation
function and then transforming it. The PSD is a very useful tool if you want to know the frequencies and
amplitudes of oscillatory signals in your time series data.
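Before the measurement-style script below, a compact sketch of the direct FFT route may help; the test signal and lengths are illustrative only, and the pwelch call in the final comment is just an alternative built-in estimator:
% PSD of a noisy sinusoid estimated directly from the FFT (periodogram)
fs = 1000; t = 0:1/fs:1-1/fs;
x  = sin(2*pi*100*t) + 0.5*randn(size(t));
N  = length(x);
X  = fft(x);
Pxx = (abs(X).^2)/(fs*N);              % two-sided periodogram (power per Hz)
f   = (0:N-1)*fs/N;
plot(f(1:N/2), 10*log10(Pxx(1:N/2)));  % show 0 .. fs/2 only
xlabel('Frequency (Hz)'); ylabel('PSD (dB/Hz)');
% an equivalent built-in estimate: [Pw,fw] = pwelch(x, [], [], [], fs);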
PROCEDURE:
clc;
clear all;
%% Define Parameters
% sampling frequency (Hz)
fs=100e6;
% length of time-domain signal
L=30e3;
% desired power spectral density (dBm/Hz)
Pd=-100;
% number of FFT points
nfft=2^nextpow2(L);
% frequency plotting vector
f=fs/2*[-1:2/nfft:1-2/nfft];
% create complex white Gaussian noise with total power Pd + 10*log10(fs) dBm (i.e. Pd dBm/Hz)
s=wgn(L,1,Pd+10*log10(fs),1,[],'dBm','complex');
%% Analysis
% analyze spectrum
N=nfft/2+1:nfft;
S=fftshift(fft(s,nfft));
S=abs(S)/sqrt(L*fs);
% time-average for spectrum
Navg=4e2;
b(1:Navg)=1/Navg;
Sa=filtfilt(b,1,S);
% convert to dBm/Hz
S=20*log10(S)+30;
Sa=20*log10(Sa)+30;
%% Plot
figure(1)
plot(f(N)/1e6,S(N));
hold on
plot(f(N)/1e6,Sa(N),'r')
xlabel('Frequency (MHz)')
ylabel('Power Density (dBm/Hz)')
title('PowerSpectral Density')
legend('Noise Spectrum','Time-Averaged Spectrum')
axis([10e-4 fs/2/1e6 -120 -60])
grid on
hold off
RESULT:
DISCUSSION:
EXPERIMENT: 5
TITLE: Application of filtering techniques to a noisy biomedical signal
THEORY:
Mean: It is the sum of the observations divided by the number of observations. It identifies the
central location of the data and is sometimes referred to as the average. The mean is calculated as
follows:
M = (1/N) Σ_{n=1}^{N} X_n
where N and X_n are the length and the nth sample value of the EMG signal, respectively.
Median Frequency: The median frequency (MDF) is estimated from the power spectrum of the
EMG signal. It is the frequency that divides the power spectrum into two regions of equal power:
Σ_{j=1}^{MDF} P_j = Σ_{j=MDF}^{M} P_j = (1/2) Σ_{j=1}^{M} P_j
where P_j is the EMG power spectrum at frequency bin j and M is the length of the frequency bin.
Auto-regressive (AR) coefficients: the EMG signal can be modelled as an auto-regressive process,
x_i = Σ_{p=1}^{P} a_p x_{i−p} + w_i
where a_p are the AR coefficients, P is the model order, and w_i is white noise.
PROCEDURE
2. Apply onset detection followed by the suitable filters implemented in the previous lab sessions to
obtain the pre-processed signal.
3. Find the features from the signals using the following function:
function [FEA_ch1,FEA_ch2]=func(B,e)
% B=denoised reconstructed signal from wavelets
% e= count
ch1=B(:,1);
ch2=B(:,2);
% MEAN::
sum_ch1 = 0;
sum_ch2 = 0;
N = length(ch2);
for i = 1:N
sum_ch1 = sum_ch1 + ch1(i);
sum_ch2 = sum_ch2 + ch2(i);
end
mean_ch1 = (sum_ch1/N);
mean_ch2 = (sum_ch2/N);
% AUTO_REGRESSIVE COEFFICIENT::
p=4;
a1 = arburg(ch1, p);
a2 = arburg(ch2, p);
sum_y= 0;
sum_u = 0;
sum_ch1 = 0;
sum_ch2 = 0;
sum_1 = 0;
sum_2 = 0;
n = length(ch1);
for i = 1:n
sum_ch1 = sum_ch1 + ch1(i);
sum_ch2 = sum_ch2 + ch2(i);
end
q=A(:,1);
for i = 1:n
sum_1 = sum_1 + ch1(i)*q(i);
sum_2 = sum_2 + ch2(i)*q(i);
end
mmnf_ch1 = sum_1/sum_ch1;
mmnf_ch2 = sum_2/sum_ch2;
RESULT:
DISCUSSION:
EXPERIMENT: 6
TITLE: Introduction to the DSP Kit and implementation of digital FIR and IIR filters
THEORY:
The hardware experiments in the DSP lab are carried out on the Texas Instruments
TMS320C6713 DSP Starter Kit (DSK), based on the TMS320C6713 floating point DSP running
at 225 MHz. The basic clock cycle instruction time is 1/(225 MHz)= 4.44 nanoseconds. During
each clock cycle, up to eight instructions can be carried out in parallel, achieving up to 8×225 =
1800 million instructions per second (MIPS). DSP Spectrum Analyzer Trainer has been specially
designed for learning how to use a Spectrum Analyzer for Frequency and Level Measurements
and Tracking Generator applications like HF Filters and Amplifier Response, Channel
Modulation, Mixers, study of Mobile and Cordless signals and study of Harmonics in sine, square
& triangular waves. TV & FM transmitted signals can be analyzed on Spectrum Analyzer using
this Trainer and audio output can be heard on the speaker mounted inside the trainer.
Application I
Filter Responses : Four filters, very precisely designed for checking filter responses on Spectrum
Analyzer.
Application II
Application III
Harmonic Display & Analysis: A function generator of frequency range approx. 40 kHz - 400 kHz
(Low) and 200 kHz - 2 MHz (High) having sine, square, and triangular outputs with frequency
variation and output level variation (max. 1 Vpp). Any one output can be given to the Spectrum
Analyzer for harmonic display.
Application V Live signal Application : Four types of signal can be seen on Spectrum Analyzer
1. Mobile phone forward and reverse link frequency 2. Cordless phone Transmitting and receiving
frequencies 3. FM Radio Reception (Demodulated Audio output) 4. TV signal Reception
(Demodulated Audio output) Demodulated output from Spectrum analyzer is given to the built- in
speaker via Audio amplifier. A built in telescopic Antenna is provided with Spectrum Analyzer
Power Consumption : 2 VA (approximately)
Connection Diagram:
Observing the Frequency response of Low Pass Filter:
Connection Diagram:
iii) Implementation of Digital FIR and IIR filters on DSP Starter Kit
Lab requirements:
i) DSK Board.
ii) CCS software installed on the computer.
iii) Oscilloscope.
THEORY
The moving average filter is widely used in DSP and arguably is the easiest of all digital filters to
understand. It is particularly effective at removing (high frequency) random noise from a signal or
at smoothing a signal. The moving average filter operates by taking the arithmetic mean of a
number of past input samples in order to produce each output sample. This may be represented by
the equation
y(n) = (1/N) Σ_{k=0}^{N−1} x(n − k)
where x(n) represents the nth sample of an input signal and y(n) the nth sample of the filter
output. The moving average filter is an example of convolution using a very simple filter kernel or
impulse response comprising N coefficients, each of value 1/N. The above equation may be
thought of as a particularly simple case of the more general convolution sum implemented by a
finite impulse response filter; that is,
y(n) = Σ_{i=0}^{N−1} h(i) x(n − i)
where the FIR filter coefficients h(i) are samples of the filter impulse response and in the case of
the moving average filter each is equal to 1 /N . As far as implementation is concerned, at the nth
sampling instant we multiply N past input samples individually by 1/N and sum the N products.
Program average.c, listed in Figure 4.1, uses this approach, even though it is not the most
computationally efficient. The value of N defined near the start of the source file determines the
number of previous input samples to be averaged. A more rigorous method of assessing the
magnitude frequency response of the filter is to use a signal generator and an oscilloscope or
spectrum analyzer to measure its gain at different individual frequencies. By using this method, it
is straightforward to identify the distinct notches in the magnitude frequency response at 1600 Hz
(corresponding to the 52 tone in test file mefsin.wav that is stored in folder average.c ) and at
3200 Hz.
The theoretical frequency response of the filter can be found using Matlab by running the
following two lines:
>> [H W]=freqz([0.2 0.2 0.2 0.2 0.2],1);
>> plot(W*4000/pi,20*log10(abs(H)))
PROCEDURE:
1. Design such a filter with MATLAB using the following values: fs = 8 kHz, f0 = 2 kHz,
and filter order M = 50. Then, using the built-in MATLAB function freqz, or the
textbook function dtft, calculate and plot in dB the magnitude response of the filter
over the frequency interval 0 ≤ f ≤ 4 kHz.
2. The designed 51-long impulse response coefficient vector h can be exported into a
data file, h.dat, in a form that is readable by a C program by the following MATLAB
command:
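The exact export command is not reproduced in this copy of the handout; one plausible way to write the 51 coefficients to an ASCII data file that a C program can read is sketched below (the -ascii save format is an assumption, not necessarily the manual's original command):
% export the impulse response h (a 51-element vector) to an ASCII data file
h = h(:);                                 % make sure h is a column vector
save('h.dat', 'h', '-ascii', '-double');  % one coefficient per line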
The following complete C program called firex.c implements this example on the C6713
processor. The program reads the impulse response vector from the data file h.dat, and defines a
linear 51- dimensional buffer array. The FIR filtering operation is based on the function fir().
void main() {
int i;
while(1);
Create and build a project for this program. You will need to add the file fir.c to the project. Using
the following MATLAB code (same as in the aliasing example of Lab-2), generate a signal
consisting of a 1-kHz segment, followed by a 3-kHz segment, followed by another 1-kHz
segment, where all segments have duration of 1 sec:
x1 = A * cos(2*pi*n*f1/fs);
x2 = A * cos(2*pi*n*f2/fs);
x3 = A * cos(2*pi*n*f3/fs);
sound([x1,x2,x3], fs);
First, set the parameter on=0 so that the filtering operation is bypassed. Send the above signal into
the line input of the DSK and listen to the output. Then, set on=1 to turn the filter on, recompile
and run the program, and send the same signal in. The middle 3-kHz segment should not be heard,
since it lies in the filter’s stopband.
c. Create breakpoints at the read_inputs and write_outputs lines of the isr() function, and start the
profile clock. Run the program and record the number of cycles between reading the input
samples and writing the computed outputs.
4. Uncomment the appropriate lines in the above program to implement the circular buffer version
using the function firc(). You will need to add it to your project. Recompile and run your
program with the same input.
Then, repeat part (c) and record the number of computation cycles.
In comparing the computational costs of the various implementations, you will notice that
for this example, the linear buffer version is far more efficient, in contrast to what you observed in the
case of multiple delays. The reason is that the function pwrap() gets called for each tap of the FIR
filter, whereas in the case of multiple delays, it was essentially called once.
The circular buffer implementation of FIR filters can indeed be made far more efficient than the
linear buffer case if one used an assembly language version that takes advantage of the built-in
circular addressing capability of the C6713 processor.
RESULT:
DISCUSSION:
CONTROL SYSTEM LABORATORY
Experiment No. 1: Using MATLAB for Control Systems.
Objectives: This lab provides an introduction to MATLAB in the first part. The lab
also provides a tutorial on vectors, matrices, arrays, script writing, and the programming aspects of
MATLAB from a control systems viewpoint.
List of Equipment/Software
Following equipment/software is required:
• MATLAB
Category Soft-Experiment
Deliverables
A complete lab report including the following:
• Summarized learning outcomes.
• MATLAB scripts and their results should be reported properly.
Part I: Introduction to MATLAB
Objective: The objective of this exercise is to introduce students to the concept
of mathematical programming using the software called MATLAB. One shall study how to define
variables and matrices, see how one can plot results, and write simple MATLAB codes.
Vectors
a=[1 2 3];
disp(a)
Matrices:
a=[1 2 3;4 5 6;7 8 9];
disp(a)
Matrix operations
i) Multiplication of a matrix by a constant
a=[1 2 3;4 5 6;7 8 9];
b=4*a;
disp(b)
ii) Perform addition of two matrices.
a=[1 2;3 4];
b=[5 6;7 8];
c=a+b;
disp(c)
iii) Perform multiplication of two matrices.
a=[1 2 3;4 5 6;7 8 9];
b=[4 5 3;7 6 8;8 7 6];
c=a*b;
disp(c)
iv) Find inverse of square matrix.
a=[1 2 3;4 5 6;7 8 9];   % note: this particular matrix is singular, so inv() will warn and return Inf entries
b=inv(a);
disp(b)
The first line of a function M-file starts with the keyword ‘function’. It gives the function
name and order of arguments. In this case, there is one input argument and one output
argument. The next several lines, up to the first blank or executable line, are comment
lines that provide the help text. These lines are printed when you type ‘help fact’. The first
line of the help text is the H1 line, which MATLAB displays when you use the ‘lookfor’
command or request help on a directory. The rest of the file is the executable MATLAB
code defining the function.
The variables n and f introduced in the body of the function, as well as the variables on the
first line are all local to the function; they are separate from any variables in the MATLAB
workspace. This example illustrates one aspect of MATLAB functions that is not
ordinarily found in other programming languages—a variable number of arguments. Many
M-files work this way. If no output argument is supplied, the result is stored in ans. If the
second input argument is not supplied, the function computes a default value.
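The function M-file being described (the factorial example 'fact') is not reproduced in this copy; a minimal sketch of such a file, written to match the description above, could look like this:
function f = fact(n)
% FACT  Compute the factorial of n.
%   f = fact(n) returns the product 1*2*...*n.
f = prod(1:n);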
Flow Control:
Conditional Control – if, else, switch
This section covers those MATLAB functions that provide conditional program control. if,
else, and elseif. The if statement evaluates a logical expression and executes a group of
statements when the expression is true. The optional elseif and else keywords provide for the
execution of alternate groups of statements. An end keyword, which matches the if,
terminates the last group of statements.
The groups of statements are delineated by the four keywords-no braces or brackets are involved
as given below.
if <condition><statements>;
elseif<condition><statements>;
end
It is important to understand how relational operators and if statements work with matrices.
When you want to check for equality between two variables, you might use
if A == B, ...
This is valid MATLAB code, and does what you expect when A and B are scalars. But when
A and B are matrices, A == B does not test if they are equal, it tests where they are equal; the
result is another matrix of 0's and 1's showing element-by-element equality. (In fact, if A and
B are not the same size, then A == B is an error.)
Example of this type is
if A > B
'greater'
elseif A < B
'less'
elseif A == B
'equal'
else
error('Unexpected situation')
end
The ‘for’ loop is used to repeat a group of statements a fixed, predetermined number of times.
A matching ‘end’ delineates the statements. The syntax is as follows:
for <index> = <start>:<increment>:<end>
<statements>;
end
while:
The ‘while’ loop repeats a group of statements an indefinite number of times under control of a
logical condition. The condition is tested before each pass, so the statements are skipped entirely if
the condition is false at the start. A matching ‘end’ delineates the statements. The syntax of the ‘while’
loop is as follows:
while <condition>
<statements>;
end
continue:
The continue statement passes control to the next iteration of the for loop or while loop in which it
appears, skipping any remaining statements in the body of the loop. The same holds true for
continue statements in nested loops. That is, execution continues at the beginning of the loop in
which the continue statement was encountered.
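A short script tying these constructs together is shown below; the loop bounds and condition are arbitrary illustrative choices:
% small flow-control example: if, for, while, continue
total = 0;
for k = 1:10
    if mod(k,2) == 0
        continue;          % skip even k and move to the next iteration
    end
    total = total + k;     % accumulates the sum of odd numbers 1..10
end
count = 0;
while total > 0
    total = total - 7;     % the condition is tested before each pass
    count = count + 1;
end
disp([count total])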
Experiment No. 2: Response of second order systems using MATLAB:
Objective: The objective of this exercise will be to study the performance characteristics
ofsecond order systems using MATLAB.
List of Equipment/Software
Following equipment/software is required:
• MATLAB
The differential equation for the above Mass-Spring system can be derived as follows
M d²x/dt² + B dx/dt + K x(t) = F(t)          (i)
the transfer function representation of the system is given by
TF = output/input = X(s)/F(s) = 1/(M s² + B s + K)
The above system is known as a second order system.
The generalized notation for a second order system described above can be written as
Y(s) = [ωn² / (s² + 2ζωn s + ωn²)] R(s)          (ii)
The response of the second order system is shown below.
The performance measures could be described as follows:
Rise Time: The time for a system to respond to a step input and attains a response equal to a
percentage of the magnitude of the input. The 0-100% rise time, Tr, measures the time to
100% of the magnitude of the input. Alternatively, Tr1, measures the time from 10% to 90%
of the response to the step input.
Peak Time: The time for a system to respond to a step input and rise to peak response.
Overshoot:The amount by which the system output response proceeds beyond the
desiredresponse. It is calculated as
P.O. = [(Mp − fv)/fv] × 100%
where Mp is the peak value of the time response, and fv is the final value of the response.
Settling Time: The time required for the system’s output to settle within a certain
percentage of the input amplitude (which is usually taken as 2%). Then, settling time, Ts, is
calculated as
Ts = 4/(ζωn)
Ex. Effect of damping ratio ζ on performance measures. For a single-loop secondorder
feedback system given below
Find the step response of the system for values of ωn = 1 and ζ = 0.1, 0.4, 0.7, 1.0 and 2.0.
Plot all the results in the same figure window and fill the following table.
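A possible script for this exercise is sketched below; it assumes the Control System Toolbox functions tf and step, and simply loops over the requested damping ratios:
% step responses of wn^2/(s^2 + 2*zeta*wn*s + wn^2) for several zeta
wn   = 1;
zeta = [0.1 0.4 0.7 1.0 2.0];
t    = 0:0.01:20;
hold on
for i = 1:length(zeta)
    G = tf(wn^2, [1 2*zeta(i)*wn wn^2]);   % closed-loop second order system
    step(G, t);
end
hold off
legend('\zeta = 0.1','\zeta = 0.4','\zeta = 0.7','\zeta = 1.0','\zeta = 2.0');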
Experiment No. 3: Simulation of systems using SIMULINK.
An important feature of a numerical simulation is the ease with which parameters can be
varied and the results observed directly. MATLAB is used in a supporting role to initialize
parameter values and to produce plots of the system response. Also MATLAB is used for
multiple runs for varying system parameters. Only a small subset of the functions of
MATLAB will be considered during these labs.
SIMULINK
Simulink provides access to an extensive set of blocks that accomplish a wide range of
functions useful for the simulation and analysis of dynamic systems. The blocks are
grouped into libraries, by general classes of functions.
• Mathematical functions such as summers and gains are in the Math library.
• Integrators are in the Continuous library.
• Constants, common input functions, and clock can all be found in the Sources
library.
• Scope, To Workspace blocks can be found in the Sinks library.
Simulink is a graphical interface that allows the user to create programs that are actually
run in MATLAB. When these programs run, they create arrays of the variables defined in
Simulink that can be made available to MATLAB for analysis and/or plotting. The
variables to be used in MATLAB must be identified by Simulink using a “To
Workspace” block, which is found in the Sinks library. (When using this block, open its
dialog box and specify that the save format should be Matrix, rather than the default,
which is called Structure.) The Sinks library also contains a Scope, which allows
variables to be displayed as the simulated system responds to an input. This is most
useful when studying responses to repetitive inputs.
Simulink uses blocks to write a program. Blocks are arranged in various libraries
according to their functions. Properties of the blocks and the values can be changed in the
associated dialog boxes.
GENERAL INSTRUCTIONS FOR WRITING A SIMULINK PROGRAM
To create a simulation in Simulink, follow the steps:
• Start MATLAB.
• Start Simulink.
• Open the libraries that contain the blocks you will need. These usually will
include the Sources, Sinks, Math and Continuous libraries, and possibly others.
• Open a new Simulink window.
• Drag the needed blocks from their library folders to that window. The Math
library, for example, contains the Gain and Sum blocks.
• Arrange these blocks in an orderly way corresponding to the equations to be solved.
• Interconnect the blocks by dragging the cursor from the output of one block to the
input of another block. Interconnecting branches can be made by right-clicking on
an existing branch.
• Double-click on any block having parameters that must be established, and set
these parameters. For example, the gain of all Gain blocks must be set. The number
and signs of the inputs to a Sum block must be established. The parameters of any
source blocks should also be set in this way.
• It is necessary to specify a stop time for the solution. This is done by clicking on
the Simulation > Parameters entry on the Simulink toolbar.
At the Simulation > Parameters entry, several parameters can be selected in this dialog
box, but the default values of all of them should be adequate for almost all of the
exercises. If the response before time zero is needed, it can be obtained by setting the
Start time to a negative value. It may be necessary in some problems to reduce the
maximum integration step size used by the numerical algorithm. If the plots of the results
of a simulation appear “choppy” or composed of straight-line segments when they should
be smooth, reducing the max step size permitted can solve this problem.
Mass-Spring System Model
Consider the Mass-Spring system used in the previous exercise as shown in the figure.
Where K is the spring constant, B is the viscous friction coefficient, x(t) is the displacement
and F(t) is the applied force:
The differential equation for the above Mass-Spring system can then be written as follows
M d²x/dt² + B dx/dt + K x(t) = F(t)          (1)
Exercise 1:Modelling of a second order system
Construct a Simulink diagram to calculate the response of the Mass-Spring system. The
input force increases from 0 to 8 N at t = 1 s. The parameter values are M = 2 kg, K= 16
N/m, and B =4 N.s/m.
Steps:
• Draw the free body diagram.
• Write the modelling equation from the free body diagram
• Solve the equations for the highest derivative of the output.
• Draw a block diagram to represent this equation.
• Draw the corresponding Simulink diagram.
• Use Step block to provide the input F(t).
• In the Step block, set the initial and final values and the time at which the step
occurs.
• Use the “To Workspace” blocks for t, F(t), x, and v in order to allow
MATLAB to plot the desired responses. Set the save format to array in block
parameters.
• Select the duration of the simulation to be 10 seconds from the
Simulation > Parameters entry on the toolbar
Steps:
Perform the following steps. Use the same input force as in Exercise 1.
• Begin the simulation with B = 4 N-s/m, but with the input applied at t = 0
• Plot the result.
• Rerun it with B = 8 N.s/m.
• Hold the first plot active, by the command hold on
• Reissue the plot command plot(t,x), the second plot will superimpose on the first.
• Repeat for B = 12 N-s/m and for B = 25 N-s/m
• Release the plot by the command hold off
• Show your result.
Running SIMULINK from MATLAB command prompt
If a complex plot is desired, in which several runs are needed with different parameters,
this can be done using the command called "sim". The "sim" command runs the Simulink model
file from the MATLAB command prompt. Multiple runs with several plots can be
accomplished by executing ex1_model (to load parameters) followed by the given M-file.
Entering the command ex1_plots in the command window results in multiple runs with
varying values of B and will plot the results.
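A sketch of what such an M-file might contain is given below; the model name ex1_model, the To Workspace variable names t and x, and the way B is read from the workspace are all assumptions about how the Simulink diagram was set up:
% multiple Simulink runs with different damping coefficients B (sketch)
Bvals = [4 8 12 25];
hold on
for i = 1:length(Bvals)
    B = Bvals(i);              % the Simulink blocks read B from the workspace
    sim('ex1_model');          % run the model; t and x come from To Workspace blocks
    plot(t, x);
end
hold off
xlabel('time (s)'); ylabel('x (m)');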
Experiment No.4: Effect of Feedback on disturbance & Control System Design.
Objective:The objective of this exercise will be to study the effect of feedback on
theresponse of the system to step input and step disturbance taking the practical example of
English Channel boring machine and design a control system taking in account performance
measurement.
List of Equipment/Software
Following equipment/software is required:
• MATLAB
Category Soft - Experiment
Deliverables
A complete lab report including the following:
• Summarized learning outcomes.
• The Simulink model.
• MATLAB scripts and results for Exercise 1.
Overview:
The construction of the tunnel under the English Channel from France to Great Britain
began in December 1987. The first connection of the boring tunnels from each country was
achieved in November 1990. The tunnel is 23.5 miles long and bored 200 feet below sea
level. Costing $14 billion, it was completed in 1992 making it possible for a train to travel
from London to Paris in three hours.
The machines, operating from both ends of the channel, bored towards the middle. To link up
accurately in the middle of the channel, a laser guidance system kept the machines precisely
aligned. A model of the boring machine control is shown in the figure, where Y(s) is the
actual angle of direction of travel of the boring machine and R(s) is the desired angle. The
effect of load on the machine is represented by the disturbance, Td(s).
Exercise 1.
Perform the following: a) Get the transfer function from R (s) to Y (s)
b) Get the transfer function from D (s) to Y (s)
c) Generate system response using Matlab; K= 10, 20, 50, 100; due to a unit step input–
r(t)
d) Generate system response using Matlab; K= 10, 20, 50, 100; due to a unit step
disturbance–d(t)
e) For each case find the percentage overshoot (% O.S.), rise time, settling time, and steady-state
value of y(t)
f) Compare the results of two cases.
g) Investigate the effect of changing the controller gain on the influence of the
disturbance on the system output.
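Since the block diagram itself is not reproduced here, the sketch below only illustrates the mechanics of parts (c) and (d): form the two closed-loop transfer functions with feedback() and plot step responses for each gain K. The plant G(s) used is a placeholder that must be replaced by the one read off the figure:
% response to step input and step disturbance for several gains K (sketch)
s = tf('s');
G = 1/(s*(s+1));                 % placeholder plant; take the actual G(s) from the figure
for K = [10 20 50 100]
    T_ry = feedback(K*G, 1);     % transfer function from R(s) to Y(s)
    T_dy = feedback(G, K);       % transfer function from Td(s) to Y(s)
    figure;
    subplot(2,1,1); step(T_ry); title(['step input response, K = ' num2str(K)]);
    subplot(2,1,2); step(T_dy); title(['step disturbance response, K = ' num2str(K)]);
end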
Exercise 2.
Design a second order feedback system based on performances.
For the motor system given above, we need to design feedback such that the overshoot is
limited and there is less oscillatory nature in the response based on the specifications
provided in the table. Assume no disturbance (D(s)=0).
Ka 20 30 50 60 80
Percentage
overshoot
Settling time
Experiment No.5: Introduction to PID controller
Objective: Study the three term (PID) controller and its effects on the feedback loop response.
Investigate the characteristics of the each of proportional (P), the integral (I), and the derivative
(D) controls, and how to use them to obtain a desired response.
List of Equipment/Software
• MATLAB
Category Soft - Experiment
Deliverables A complete lab report including the following:
• Summarized learning outcomes.
• Controller design and parameters for each of the given exercises.
Introduction: Consider the following unity feedback system:
Kp = Proportional gain
Ki = Integral gain
Kd = Derivative gain
First, let's take a look at how the PID controller works in a closed-loop system using the
schematic shown above. The variable (e) represents the tracking error, the difference
between the desired input value (R) and the actual output (Y). This error signal (e) will be
sent to the PID controller, and the controller computes both the derivative and the integral
of this error signal. The signal (u) just past the controller is now equal to the proportional
gain (KP) times the magnitude of the error plus the integral gain (KI) times the integral of
the error plus the derivative gain (KD) times the derivative of the error.
Example Problem:
Suppose we have a simple mass, spring, and damper problem.
The TF of a mass, spring and damper system is given by,
X(s)/F(s) = 1/(s² + 10s + 20)
The goal of this problem is to show you how each of Kp, Ki and Kd contributes to obtaining a fast rise time, minimal overshoot, and no steady-state error.
The DC gain of the plant transfer function is 1/20, so 0.05 is the final value of the output to a
unit step input. This corresponds to the steady-state error of 0.95, quite large indeed.
Furthermore, the rise time is about one second, and the settling time is about 1.5 seconds.
Let's design a controller that will reduce the rise time, reduce the settling time, and
eliminates the steady-state error.
Proportional-Integral-Derivative control:
Now, let's take a look at a PID controller. The transfer function of the PID controller is:
C(s) = Kp + Ki/s + Kd·s
The plant is represented by 1/(s² + 10s + 20), so the closed-loop transfer function of the given
system with the PID controller is:
X(s)/F(s) = (Kd s² + Kp s + Ki) / (s³ + (10 + Kd)s² + (20 + Kp)s + Ki)
After several trial and error runs, the gains Kp=350, Ki=300, and Kd=50 provided the
desired response. To confirm, enter the following commands to an m-file and run it in the
command window. One should get the following step response.
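The m-file commands referred to above are not included in this copy; a sketch that reproduces the described closed-loop step response with Kp = 350, Ki = 300, Kd = 50 is given below (it assumes the Control System Toolbox functions tf, feedback, and step):
% closed-loop step response of the mass-spring-damper with a PID controller
Kp = 350; Ki = 300; Kd = 50;
s  = tf('s');
P  = 1/(s^2 + 10*s + 20);        % plant
C  = Kp + Ki/s + Kd*s;           % PID controller
T  = feedback(C*P, 1);           % closed-loop transfer function
t  = 0:0.01:2;
step(T, t);
title('PID control: Kp = 350, Ki = 300, Kd = 50');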
Exercise.
Consider a process given below to be controlled by a PID controller,
Gp(s) = 400 / (s(s + 48.5))
a) Obtain the unit step response of Gp(s).
b) Try PI controllers with (Kp=2, 10, 100), and Ki=Kp/10. Investigate the unit step
response in each case, compare the results and comment.
c) Let Kp=100, Ki=10, and add a derivative term with (Kd=0.1, 0.9, 2). Investigate the
unit step response in each case, compare the results and comment.
Based on your results in parts b) and c) above what do you conclude as a suitable
PID controller for this process and give your justification.
Experiment No.6. Use MATLAB to check the Controllability and Observability of the system
described by the following state space equation:
ẋ = [ 0   1   0 ;  0   0   1 ;  −2  −4  −3 ] x  +  [ 1  0 ;  0  1 ;  −1  1 ] u
y = [ 0  1  −1 ;  1  2  1 ] x
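The check asked for in this experiment can be carried out directly with the built-in functions ctrb, obsv, and rank, as sketched below using the matrices given above:
% controllability and observability check for the given state-space model
A = [0 1 0; 0 0 1; -2 -4 -3];
B = [1 0; 0 1; -1 1];
C = [0 1 -1; 1 2 1];
Q = ctrb(A,B);                   % controllability matrix [B AB A^2B]
V = obsv(A,C);                   % observability matrix [C; CA; CA^2]
n = size(A,1);
if rank(Q) == n, disp('system is controllable'), else, disp('system is NOT controllable'), end
if rank(V) == n, disp('system is observable'),   else, disp('system is NOT observable'),   end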
Controllability:
Complete state controllability (or simply controllability if no other context is given) describes the
ability of an external input to move the internal state of a system from any initial state to any
other final state in a finite time interval.
A state x0 is controllable at time t0 if for some finite time t1 there exists an input u(t) that
transfers the state x(t) from x0 to the origin at time t1.
A system is called controllable at time t0 if every state x0 in the state-space is controllable.
Kalman’s Test for checking Controllability is as follows,
A linear time invariant continuous system described by the state equation
𝑥̇ = 𝐴𝑥 + 𝐵𝑈 (i)
𝑌 = 𝐶𝑥 (ii)
is completely controllable if and only if the rank of the controllability matrix
Q = [B  AB  A²B  ⋯  Aⁿ⁻¹B]
is equal to n.
Observability:
The state-variables of a system might not be able to be measured for any of the following
reasons:
1. The location of the particular state variable might not be physically accessible (a
capacitor or a spring, for instance).
2. There are no appropriate instruments to measure the state variable, or the state-variable
might be measured in units for which there does not exist any measurement device.
3. The state-variable is a derived "dummy" variable that has no physical meaning.
If things cannot be directly observed, for any of the reasons above, it can be necessary to
calculate or estimate the values of the internal state variables, using only the input/output
relation of the system, and the output history of the system from the starting time. In other words,
we must ask whether or not it is possible to determine what the inside of the system (the internal
system states) is like, by only observing the outside performance of the system (input and
output)? We can provide the following formal definition of mathematical observability:
A system with an initial state, x(t0) is observable if and only if the value of the initial
state can be determined from the system output y(t) that has been observed through the
time interval t0<t<tf . If the initial state cannot be so determined, the system
is unobservable.
Complete Observability
A system is said to be completely observable if all the possible initial states of the
system can be observed. Systems that fail these criteria are said to be unobservable.
Observability Test:
The necessary and sufficient condition for the system to be completely observable is that the
observability matrix
V = [C; CA; CA²; ⋯; CAⁿ⁻¹]
has rank equal to n.