Q - MSC EE - Lab Simulation 2

The document outlines a laboratory experiment for studying signal enhancement using LMS and NLMS algorithms in MATLAB. It includes objectives, equipment, theoretical background, and detailed programming steps for generating signals, creating noise, and applying adaptive filters. A rubric for evaluating analytical skills, proficiency in using tools, and observation abilities is also provided.

Uploaded by

aritra27021986
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
15 views8 pages

Q - MSC EE - Lab Simulation 2

The document outlines a laboratory experiment for studying signal enhancement using LMS and NLMS algorithms in MATLAB. It includes objectives, equipment, theoretical background, and detailed programming steps for generating signals, creating noise, and applying adaptive filters. A rubric for evaluating analytical skills, proficiency in using tools, and observation abilities is also provided.

Uploaded by

aritra27021986
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 8

COURSE: Advanced Digital Signal Processing (MET 1113)

LAB EXPERIMENT 2 (CLO 2, PLO 1, PLO 2, PLO 6, P7): Study of Signal Enhancement
using LMS and NLMS Algorithms

Rubric for the lab report

Scoring scale: 5 = Excellent, 4 = Very Good, 3 = Good, 2 = Poor, 1 = Very Poor.

Criteria:
1. Analytical Skills: ability to study signal enhancement using the LMS and NLMS algorithms (5 = fully able; 4 = reasonably able; 3 = not quite able; 2 = poor ability; 1 = very poor ability).
2. Proficiency in Using Tools: ability to run the program code using the MATLAB software (same scale).
3. Observation: ability to show the output results and waveform (same scale).
FACULTY OF ENGINEERING AND BUILT
ENVIRONMENT

MASTER OF SCIENCE IN ELECTRICAL AND


ELECTRONICS ENGINEERING

ADVANCED DIGITAL SIGNAL PROCESSING


MET 1113

LABORATORY MANUAL

STUDENT NAME :
STUDENT ID :

EXPERIMENT 2 (CLO 2, PLO 1, PLO 2, PLO 6, P7):


STUDY OF SIGNAL ENHANCEMENT USING LMS AND NLMS ALGORITHMS

1.0 Objectives

1. To study signal enhancement using the LMS and NLMS algorithms.


2. To show the output waveform using the MATLAB software.

2.0 Equipment
Software: MATLAB

3.0 Theory

This experiment uses the least mean square (LMS) and normalized LMS (NLMS) algorithms to extract a desired signal from a noise-corrupted signal by adaptively filtering out the noise.
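The manual does not spell out the update rules themselves. As an illustration outside MATLAB, a minimal NumPy sketch of one LMS and one NLMS iteration (our own notation, not the toolbox implementation) might look like:

```python
import numpy as np

def lms_step(w, u, d, mu):
    """One LMS iteration: w weights, u regressor vector, d desired sample."""
    e = d - w @ u                # error between desired and filter output
    return w + mu * e * u, e     # fixed step size mu

def nlms_step(w, u, d, mu, eps=1e-8):
    """One NLMS iteration: step normalized by the instantaneous input power."""
    e = d - w @ u
    return w + (mu / (eps + u @ u)) * e * u, e
```

NLMS divides the step by the input power u·u, which makes the convergence speed largely independent of the input signal level.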

4.0 Program
4.1 Create the Signals for Adaptation
The desired signal (the output from the process) is a sinusoid with 1000 samples per
frame. Run the code program below:

>> sine = dsp.SineWave('Frequency',375,'SampleRate',8000,'SamplesPerFrame',1000)


>> s = sine();

To perform adaptation, the filter requires two signals:


1. A reference signal.
2. A noisy signal that contains both the desired signal and an added noise component.
4.2 Generate the Noise Signal
Create an autoregressive noise signal (defined as v1). In autoregressive noise, the value at time t depends only on previous values and a random disturbance. Run the code program below:
>> v = 0.8*randn(sine.SamplesPerFrame,1); % Random noise part.
>> ar = [1,1/2]; % Autoregression coefficients.
>> ARfilt = dsp.IIRFilter('Numerator',1,'Denominator',ar)
>> v1 = ARfilt(v);
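With denominator coefficients [1, 1/2], the dsp.IIRFilter call above implements the recursion v1[n] = v[n] - 0.5*v1[n-1]. A sketch of the same all-pole recursion in NumPy (illustrative only, not the toolbox code):

```python
import numpy as np

def ar1_noise(v, a1=0.5):
    """All-pole filter 1/(1 + a1*z^-1): v1[n] = v[n] - a1*v1[n-1]."""
    v1 = np.empty_like(v)
    prev = 0.0
    for n, vn in enumerate(v):
        prev = vn - a1 * prev
        v1[n] = prev
    return v1
```

Each output sample depends only on the current disturbance and the previous output, which is what makes the noise autoregressive.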
4.3 Corrupt the Desired Signal to Create a Noisy Signal
To generate the noisy signal that contains both the desired signal and the noise, add
the noise signal v1 to the desired signal s. The noise-corrupted sinusoid x is:

>> x = s + v1;

Adaptive filter processing seeks to recover s from x by removing v1. To complete the set of signals needed, the adaptation process also requires a reference signal.

4.4 Create a Reference Signal


Define a moving-average signal v2 that is correlated with v1. The signal v2 is the reference signal for this program. Run the code program given below.

>> ma = [1, -0.8, 0.4, -0.2];


>> MAfilt = dsp.FIRFilter('Numerator',ma)
>> v2 = MAfilt(v);

4.5 Construct Two Adaptive Filters


Two similar, sixth-order adaptive filters — LMS and NLMS — form the basis of this
example. Run the code program below:
>> L = 7;
>> lms = dsp.LMSFilter(L,'Method','LMS')
>> nlms = dsp.LMSFilter(L,'Method','Normalized LMS')

4.6 Choose the Step Size


LMS-like algorithms have a step size that determines the amount of correction applied as
the filter adapts from one iteration to the next. A step size that is too small increases the
time for the filter to converge on a set of coefficients. A step size that is too large might
cause the adapting filter to diverge and never reach convergence. In this case, the
resulting filter might not be stable.
As a rule of thumb, smaller step sizes improve the accuracy with which the filter
converges to match the characteristics of the unknown system, at the expense of the time
it takes to adapt.
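The tradeoff can be seen in a toy one-tap identification sketch in NumPy (illustrative values, not from the manual): a modest step converges, while an oversized step makes the error grow without bound.

```python
import numpy as np

def final_sq_error(mu, n_iter=300, seed=0):
    """Identify a one-tap system with LMS; return the last squared error."""
    rng = np.random.default_rng(seed)
    w, w_true, e = 0.0, 0.7, 0.0
    for _ in range(n_iter):
        u = rng.standard_normal()
        e = w_true * u - w * u   # error for this sample
        w += mu * e * u          # LMS update
    return e * e
```

With a white unit-variance input, a step size around 0.1 shrinks the error steadily, whereas a step size of 3 is far beyond the stability bound and the weight diverges.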
The maxstep function of the dsp.LMSFilter object determines the maximum step size for each LMS adaptive filter algorithm that still ensures the filter converges to a solution. The step size is commonly denoted µ. Run the code program below:
>> [mumaxlms,mumaxmselms] = maxstep(lms,x)
>> [mumaxnlms,mumaxmsenlms] = maxstep(nlms,x)
4.7 Set the Adapting Filter Step Size
The first output of the maxstep function is the value needed for the mean of the
coefficients to converge, while the second output is the value needed for the mean
squared coefficients to converge. Choosing a large step size often causes large variations
from the convergence values, so generally choose smaller step sizes. Run the code
program below:
>> lms.StepSize = mumaxmselms/30
>> nlms.StepSize = mumaxmsenlms/20

4.8 Filter with the Adaptive Filters


You have set up the parameters of the adaptive filters and are now ready to filter the
noisy signal. The reference signal v2 is the input to the adaptive filters. x is the desired
signal in this configuration.
Through adaptation, the filter output y tries to emulate x as closely as possible. Since v2 is correlated only with the noise component v1 of x, y can really only emulate v1. The error signal, that is, the desired signal x minus the actual output y, is therefore an estimate of the part of x that is uncorrelated with v2: the signal s to be extracted from x. Run the code program below:
>> [~,elms,wlms] = lms(v2,x);
>> [~,enlms,wnlms] = nlms(v2,x);
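Putting the pieces together outside MATLAB: a NumPy sketch of the same noise-cancellation configuration (same signal and filter coefficients as above; a plain-Python LMS loop and an assumed step size of 0.01 stand in for dsp.LMSFilter):

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, mu = 5000, 7, 0.01
s = np.sin(2 * np.pi * 375 / 8000 * np.arange(N))   # desired 375 Hz sinusoid
v = 0.8 * rng.standard_normal(N)                    # white driving noise

v1 = np.empty(N)                                    # AR noise: v1[n] = v[n] - 0.5*v1[n-1]
prev = 0.0
for k in range(N):
    prev = v[k] - 0.5 * prev
    v1[k] = prev

v2 = np.convolve(v, [1.0, -0.8, 0.4, -0.2])[:N]     # correlated MA reference
x = s + v1                                          # noisy signal

w = np.zeros(L)                                     # 7-tap LMS noise canceller
u = np.zeros(L)                                     # regressor: recent v2 samples
e = np.empty(N)                                     # error = estimate of s
for k in range(N):
    u = np.roll(u, 1)
    u[0] = v2[k]
    y = w @ u                                       # filter output tracks v1
    e[k] = x[k] - y
    w += mu * e[k] * u
```

After the filter adapts, the error e should lie much closer to the clean sinusoid s than the noisy signal x does.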

4.9 Compute the Optimal Solution


For comparison, compute the optimal FIR Wiener filter. Run the code program below:
>> reset(MAfilt);
>> bw = firwiener(L-1,v2,x); % Optimal FIR Wiener filter
>> MAfilt = dsp.FIRFilter('Numerator',bw)
>> yw = MAfilt(v2); % Estimate of x using Wiener filter
>> ew = x - yw; % Estimate of actual sinusoid
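firwiener solves the Wiener-Hopf (normal) equations R·w = p built from the correlations of the reference and the noisy signal. A NumPy stand-in using sample correlations and a direct solve (illustrative only, not the toolbox implementation):

```python
import numpy as np

def fir_wiener(order, u, d):
    """Order-`order` FIR Wiener filter from sample correlations: solve R w = p."""
    N = len(u)
    lags = order + 1
    r = np.array([u[:N - k] @ u[k:] for k in range(lags)]) / N   # autocorrelation of u
    p = np.array([u[:N - k] @ d[k:] for k in range(lags)]) / N   # cross-correlation of u and d
    R = np.array([[r[abs(i - j)] for j in range(lags)] for i in range(lags)])
    return np.linalg.solve(R, p)
```

When the desired signal is exactly an FIR-filtered version of the input, the Wiener solution recovers those FIR coefficients, which is why the adaptive filters are compared against it later.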

4.10 Plot the Results


Plot the resulting denoised sinusoid for each filter — the Wiener filter, the LMS adaptive
filter, and the NLMS adaptive filter — to compare the performance of the various
techniques. Run the code program below:
>> n = (1:1000)';
>> plot(n(900:end),[ew(900:end), elms(900:end),enlms(900:end)])
>> legend('Wiener filter denoised sinusoid',...
'LMS denoised sinusoid','NLMS denoised sinusoid')
>> xlabel('Time index (n)')
>> ylabel('Amplitude')
As a reference point, include the noisy signal as a dotted line in the plot. Run the code
program below:
>> hold on
>> plot(n(900:end),x(900:end),'k:')
>> xlabel('Time index (n)')
>> ylabel('Amplitude')
>> hold off

4.11 Compare the Final Coefficients


Finally, compare the Wiener filter coefficients with the coefficients of the adaptive filters.
While adapting, the adaptive filters try to converge to the Wiener coefficients. Run the
code program below:
>> [bw.' wlms wnlms]

4.12 Reset the Filter Before Filtering


You can reset the internal filter states at any time by calling the reset function on the filter object. For instance, after resetting the objects, these successive calls reproduce the earlier output. Run the code program below:
>> [ylms,elms,wlms] = lms(v2,x);
>> [ynlms,enlms,wnlms] = nlms(v2,x);

If you do not reset the filter object, the filter uses the final states and coefficients from the
previous run as the initial conditions and data set for the next run.
4.13 Investigate Convergence Through Learning Curves
To analyze the convergence of the adaptive filters, use the learning curves. The toolbox
provides methods to generate the learning curves, but you need more than one iteration
of the experiment to obtain significant results.
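The idea of averaging over realizations can be sketched in NumPy (a toy system-identification setup, not the manual's signals): each run's squared error is noisy, but the average over runs traces out the learning curve.

```python
import numpy as np

def lms_learning_curve(n_runs=25, n_samp=600, mu=0.05, seed=0):
    """Average the squared error over independent runs to estimate the MSE curve."""
    rng = np.random.default_rng(seed)
    w_true = np.array([0.5, -0.3, 0.2])
    mse = np.zeros(n_samp)
    for _ in range(n_runs):
        w = np.zeros(3)
        u = np.zeros(3)
        for k in range(n_samp):
            u = np.roll(u, 1)
            u[0] = rng.standard_normal()
            d = w_true @ u + 0.1 * rng.standard_normal()  # desired + noise floor
            e = d - w @ u
            w += mu * e * u
            mse[k] += e * e
    return mse / n_runs
```

The curve starts near the initial error power and decays toward the noise floor plus a small excess MSE, which is the shape the toolbox's msesim and msepred functions quantify.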
This demonstration uses 25 sample realizations of the noisy sinusoids. Run the code
program below:
>> reset(ARfilt)
>> reset(sine);
>> release(sine);
>> n = (1:5000)';
>> sine.SamplesPerFrame = 5000

>> s = sine();
>> nr = 25;
>> v = 0.8*randn(sine.SamplesPerFrame,nr);
>> ARfilt = dsp.IIRFilter('Numerator',1,'Denominator',ar)
>> v1 = ARfilt(v);
>> x = repmat(s,1,nr) + v1;
>> reset(MAfilt);
>> MAfilt = dsp.FIRFilter('Numerator',ma)

>> v2 = MAfilt(v);

4.14 Compute the Learning Curves


Now compute the mean squared error. To speed things up, compute the error only every 10 samples.
First, reset the adaptive filters so that they do not reuse the coefficients and states from the earlier runs. Then plot the learning curves for the LMS and NLMS adaptive filters. Run the code program below:
>> reset(lms);
>> reset(nlms);
>> M = 10; % Decimation factor
>> mselms = msesim(lms,v2,x,M);
>> msenlms = msesim(nlms,v2,x,M);
>> plot(1:M:n(end),mselms,'b',1:M:n(end),msenlms,'g')
>> legend('LMS learning curve','NLMS learning curve')
>> xlabel('Time index (n)')
>> ylabel('MSE')

4.15 Compute the Theoretical Learning Curves


For the LMS and NLMS algorithms, functions in the toolbox help you compute the
theoretical learning curves, along with the minimum mean squared error (MMSE), the
excess mean squared error (EMSE), and the mean value of the coefficients.
MATLAB might take some time to calculate the curves. The figure shown after the code
plots the predicted and actual LMS curves. Run the code program below:
>> reset(lms);
>> [mmselms,emselms,meanwlms,pmselms] = msepred(lms,v2,x,M);
>> x = 1:M:n(end);
>> y1 = mmselms*ones(500,1);
>> y2 = emselms*ones(500,1);
>> y3 = pmselms;
>> y4 = mselms;
>> plot(x,y1,'m',x,y2,'b',x,y3,'k',x,y4,'g')
>> legend('MMSE','EMSE','Predicted LMS learning curve',...
'LMS learning curve')
>> xlabel('Time index (n)')
>> ylabel('MSE')
5.0 Observation
Output Results:

Program        Display of the output results


4.1

4.2

4.3

4.4

4.5

4.6

4.7

4.8
4.9

4.10
4.11

4.12
4.13

4.14
4.15

6.0 Conclusion
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
