Library of Congress Cataloging-in-Publication Data
Embree, Paul M.
C algorithms for real-time DSP / Paul M. Embree
p. cm.
Includes bibliographical references.
ISBN 0-13-337353-3
1. C (Computer program language)  2. Computer algorithms.  3. Real-time data processing.  I. Title.
QA76.73.C15E63 1995
621.382'2'028552—dc20    95-407
CIP
Acquisitions editor: Karen Gettman
Cover design: Judith Leeds Design
Manufacturing buyer: Alexis R. Heydt
Compositor/Production services: Pine Tree Composition, Inc.

© 1995 by Prentice Hall PTR
Prentice-Hall, Inc.
A Simon & Schuster Company
Upper Saddle River, New Jersey 07458
All rights reserved. No part of this book may be reproduced, in any form or by any means, without permission in writing from the publisher.

The publisher offers discounts on this book when ordered in bulk quantities. For more information contact:

Corporate Sales Department
Prentice Hall PTR
One Lake Street
Upper Saddle River, New Jersey 07458
Phone: 800-382-3419
Fax: 201-236-7141
email: corpsales@prenhall.com

Printed in the United States of America

10 9 8 7 6 5 4 3 2 1

ISBN: 0-13-337353-3

Prentice Hall International (UK) Limited, London
Prentice Hall of Australia Pty. Limited, Sydney
Prentice Hall Canada, Inc., Toronto
Prentice Hall Hispanoamericana, S.A., Mexico
Prentice Hall of India Private Limited, New Delhi
Prentice Hall of Japan, Inc., Tokyo
Simon & Schuster Asia Pte. Ltd., Singapore
Editora Prentice Hall do Brasil, Ltda., Rio de Janeiro
CONTENTS

PREFACE

CHAPTER 1  DIGITAL SIGNAL PROCESSING FUNDAMENTALS
1.1 SEQUENCES
1.1.1 The Sampling Function
1.1.2 Sampled Signal Spectra
1.1.3 Spectra of Continuous Time and Discrete Time Signals
1.2 LINEAR TIME-INVARIANT OPERATORS
1.2.1 Causality
1.2.2 Difference Equations
1.2.3 The z-Transform Description of Linear Operators
1.2.4 Frequency Domain Transfer Function of an Operator
1.2.5 Frequency Response from the z-Transform Description
1.3 DIGITAL FILTERS
1.3.1 Finite Impulse Response (FIR) Filters
1.3.2 Infinite Impulse Response (IIR) Filters
1.3.3 Examples of Filter Responses
1.3.4 Filter Specifications
1.4 DISCRETE FOURIER TRANSFORMS
1.4.1 Form
1.4.2 Properties
1.4.3 Power Spectrum
1.4.4 Averaged Periodograms
1.4.5 The Fast Fourier Transform (FFT)
1.4.6 An Example of the FFT
1.5 NONLINEAR OPERATORS
1.5.1 µ-Law and A-Law Compression
1.6 PROBABILITY AND RANDOM PROCESSES
1.6.1 Basic Probability
1.6.2 Random Variables
1.6.3 Mean, Variance, and Gaussian Random Variables
1.6.4 Quantization of Sequences
1.6.5 Random Processes, Autocorrelation, and Spectral Density
1.6.6 Modeling Real-World Signals with AR Processes
1.7 ADAPTIVE FILTERS AND SYSTEMS
1.7.1 Wiener Filter Theory
1.7.2 LMS Algorithms
1.8 REFERENCES

CHAPTER 2  C PROGRAMMING FUNDAMENTALS
2.1 THE ELEMENTS OF REAL-TIME DSP PROGRAMMING
2.2 VARIABLES AND DATA TYPES
2.2.1 Types of Numbers
2.2.2 Arrays
2.3 OPERATORS
2.3.1 Assignment Operators
2.3.2 Arithmetic and Bitwise Operators
2.3.3 Combined Operators
2.3.4 Logical Operators
2.3.5 Operator Precedence and Type Conversion
2.4 PROGRAM CONTROL
2.4.1 Conditional Execution: if-else
2.4.2 The switch Statement
2.4.3 Single-Line Conditional Expressions
2.4.4 Loops: while, do-while, and for
2.4.5 Program Jumps: break, continue, and goto
2.5 FUNCTIONS
2.5.1 Defining and Declaring Functions
2.5.2 Storage Class, Privacy, and Scope
2.5.3 Function Prototypes
2.6 MACROS AND THE C PREPROCESSOR
2.6.1 Conditional Preprocessor Directives
2.6.2 Aliases and Macros
2.7 POINTERS AND ARRAYS
2.7.1 Special Pointer Operators
2.7.2 Pointers and Dynamic Memory Allocation
2.7.3 Arrays of Pointers
2.8 STRUCTURES
2.8.1 Declaring and Referencing Structures
2.8.2 Pointers to Structures
2.8.3 Complex Numbers
2.9 COMMON C PROGRAMMING PITFALLS
2.9.1 Array Indexing
2.9.2 Failure to Pass-by-Address
2.9.3 Misusing Pointers
2.10 NUMERICAL C EXTENSIONS
2.10.1 Complex Data Types
2.10.2 Iteration Operators
2.11 COMMENTS ON PROGRAMMING STYLE
2.11.1 Software Quality
2.11.2 Structured Programming
2.12 REFERENCES

CHAPTER 3  DSP MICROPROCESSORS IN EMBEDDED SYSTEMS
3.1 TYPICAL FLOATING-POINT DIGITAL SIGNAL PROCESSORS
3.1.1 AT&T DSP32C and DSP3210
3.1.2 Analog Devices ADSP-210XX
3.1.3 Texas Instruments TMS320C3X and TMS320C40
3.2 TYPICAL PROGRAMMING TOOLS FOR DSP
3.2.1 Basic C Compiler Tools
3.2.2 Memory Map and Memory Bandwidth Considerations
3.2.3 Assembly Language Simulators and Emulators
3.3 ADVANCED C SOFTWARE TOOLS FOR DSP
3.3.1 Source Level Debuggers
3.3.2 Assembly-C Language Interfaces
3.3.3 Numeric C Compilers
3.4 REAL-TIME SYSTEM DESIGN CONSIDERATIONS
3.4.1 Physical Input/Output (Memory Mapped, Serial, Polled)
3.4.2 Interrupts and Interrupt-Driven I/O
3.4.3 Efficiency of Real-Time Compiled Code
3.4.4 Multiprocessor Architectures

CHAPTER 4  REAL-TIME FILTERING
4.1 REAL-TIME FIR AND IIR FILTERS
4.1.1 FIR Filter Function
4.1.2 FIR Filter Coefficient Calculation
4.1.3 IIR Filter Function
4.1.4 Real-Time Filtering Example
4.2 FILTERING TO REMOVE NOISE
4.2.1 Gaussian Noise Generation
4.2.2 Signal-to-Noise Ratio Improvement
4.3 SAMPLE RATE CONVERSION
4.3.1 FIR Interpolation
4.3.2 Real-Time Interpolation Followed by Decimation
4.3.3 Real-Time Sample Rate Conversion
4.4 FAST FILTERING ALGORITHMS
4.4.1 Fast Convolution Using FFT Methods
4.4.2 Interpolation Using the FFT
4.5 OSCILLATORS AND WAVEFORM SYNTHESIS
4.5.1 IIR Filters as Oscillators
4.5.2 Table-Generated Waveforms
4.6 REFERENCES

CHAPTER 5  REAL-TIME DSP APPLICATIONS
5.1 FFT POWER SPECTRUM ESTIMATION
5.1.1 Speech Spectrum Analysis
5.1.2 Doppler Radar Processing
5.2 PARAMETRIC SPECTRAL ESTIMATION
5.2.1 ARMA Modeling of Signals
5.2.2 AR Frequency Estimation
5.3 SPEECH PROCESSING
5.3.1 Speech Compression
5.3.2 ADPCM (G.722)
5.4 MUSIC PROCESSING
5.4.1 Equalization and Noise Removal
5.4.2 Pitch-Shifting
5.4.3 Music Synthesis
5.5 ADAPTIVE FILTER APPLICATIONS
5.5.1 LMS Signal Enhancement
5.5.2 Frequency Tracking with Noise
5.6 REFERENCES

APPENDIX—DSP FUNCTION LIBRARY AND PROGRAMS

INDEX
PREFACE
Digital signal processing techniques have become the method of choice in signal process-
ing as digital computers have increased in speed, convenience, and availability. As
microprocessors have become less expensive and more powerful, the number of DSP ap-
plications which have become commonly available has exploded. Thus, some DSP
microprocessors can now be considered commodity products. Perhaps the most visible
high volume DSP applications are the so called “multimedia” applications in digital
audio, speech processing, digital video, and digital communications. In many cases, these
applications contain embedded digital signal processors where a host CPU works in a
loosely coupled way with one or more DSPs to control the signal flow or DSP algorithm
behavior at a real-time rate. Unfortunately, the development of signal processing algo-
rithms for these specialized embedded DSPs is still difficult and often requires special-
ized training in a particular assembly language for the target DSP.
The tools for developing new DSP algorithms are slowly improving as the need to design new DSP applications more quickly becomes important. The C language is proving itself to be a valuable programming tool for real-time computationally intensive software tasks. C has high-level language capabilities (such as structures, arrays, and functions) as well as low-level assembly language capabilities (such as bit manipulation, direct hardware input/output, and macros) which makes C an ideal language for embedded DSP. Most of the manufacturers of digital signal processing devices (such as Texas Instruments, AT&T, Motorola, and Analog Devices) provide C compilers, simulators, and emulators for their parts. These C compilers offer standard C language with extensions for DSP to allow for very efficient code to be generated. For example, an inline assembly language capability is usually provided in order to optimize the performance of time critical parts of an application. Because the majority of the code is C, an application can be transferred to another processor much more easily than an all assembly language program.
This book is constructed in such a way that it will be most useful to the engineer who is familiar with DSP and the C language, but who is not necessarily an expert in both. All of the example programs in this book have been tested using standard C compilers in the UNIX and MS-DOS programming environments. In addition, the examples have been compiled utilizing the real-time programming tools of specific real-time embedded DSP microprocessors (Analog Devices' ADSP-21020 and ADSP-21062; Texas Instruments' TMS320C30 and TMS320C40; and AT&T's DSP32C) and then tested with real-time hardware using real world signals. All of the example programs presented in the text are provided in source code form on the IBM PC floppy disk included with the book.
The text is divided into several sections. Chapters 1 and 2 cover the basic principles of digital signal processing and C programming. Readers familiar with these topics may wish to skip one or both chapters. Chapter 3 introduces the basic real-time DSP programming techniques and typical programming environments which are used with DSP microprocessors. Chapter 4 covers the basic real-time filtering techniques which are the cornerstone of one-dimensional real-time digital signal processing. Finally, several real-time DSP applications are presented in Chapter 5, including speech compression, music signal processing, radar signal processing, and adaptive signal processing techniques.

The floppy disk included with this text contains C language source code for all of the DSP programs discussed in this book. The floppy disk has a high density format and was written by MS-DOS. The appendix and the [Link] files on the floppy disk provide more information about how to compile and run the C programs. These programs have been tested using Borland's TURBO C (version 3 and greater) as well as Microsoft C (versions 6 and greater) for the IBM PC. Real-time DSP platforms using the Analog Devices ADSP-21020 and the ADSP-21062, the Texas Instruments TMS320C30, and the AT&T DSP32C have been used extensively to test the real-time performance of the algorithms.
ACKNOWLEDGMENTS

I thank the following people for their generous help: Laura Mercs for help in preparing the electronic manuscript and the software for the DSP32C; the engineers at Analog Devices (in particular Steve Cox, Marc Hoffman, and Hans Rempel) for their review of the manuscript as well as hardware and software support; Texas Instruments for hardware and software support; Jim Bridges at Communication Automation & Control, Inc.; and Talal Itani at Domain Technologies, Inc.

Paul M. Embree
TRADEMARKS

IBM and IBM PC are trademarks of the International Business Machines Corporation.
MS-DOS and Microsoft C are trademarks of the Microsoft Corporation.
TURBO C is a trademark of Borland International.
UNIX is a trademark of American Telephone and Telegraph Corporation.
DSP32C and DSP3210 are trademarks of American Telephone and Telegraph Corporation.
TMS320C30, TMS320C31, and TMS320C40 are trademarks of Texas Instruments Incorporated.
ADSP-21020, ADSP-21060, and ADSP-21062 are trademarks of Analog Devices Incorporated.
CHAPTER 1

DIGITAL SIGNAL PROCESSING FUNDAMENTALS
Digital signal processing begins with a digital signal which appears to the computer as a sequence of digital values. Figure 1.1 shows an example of a digital signal processing operation or simple DSP system. There is an input sequence x(n), the operator O{ }, and an output sequence, y(n). A complete digital signal processing system may consist of many operations on the same sequence as well as operations on the result of operations. Because digital sequences are processed, all operators in DSP are discrete time operators (as opposed to continuous time operators employed by analog systems). Discrete time operators may be classified as time-varying or time-invariant and linear or nonlinear. Most of the operators described in this text will be time-invariant with the exception of adaptive filters, which are discussed in Section 1.7. Linearity will be discussed in Section 1.2 and several nonlinear operators will be introduced in Section 1.5.

Operators are applied to sequences in order to effect the following results:

(1) Extract parameters or features from the sequence.
(2) Produce a similar sequence with particular features enhanced or eliminated.
(3) Restore the sequence to some earlier state.
(4) Encode or compress the sequence.
This chapter is divided into several sections. Section 1.1 deals with sequences of numbers: where and how they originate, their spectra, and their relation to continuous signals. Section 1.2 describes the common characteristics of linear time-invariant operators, which are the most often used in DSP. Section 1.3 discusses the class of operators called digital filters. Section 1.4 introduces the discrete Fourier transform (DFTs and FFTs). Section 1.5 describes the properties of commonly used nonlinear operators. Section 1.6 covers basic probability theory and random processes and discusses their application to signal processing. Finally, Section 1.7 discusses the subject of adaptive digital filters.

FIGURE 1.1 DSP operation: the input sequence x(n) is mapped by the operator O{ } to the output sequence y(n).
1.1 SEQUENCES

In order for the digital computer to manipulate a signal, the signal must have been sampled at some interval. Figure 1.2 shows an example of a continuous function of time which has been sampled at intervals of T seconds. The resulting set of numbers is called a sequence. If the continuous time function was x(t), then the samples would be x(nT) for n, an integer extending over some finite range of values. It is common practice to normalize the sample interval to 1 and drop it from the equations. The sequence then becomes x(n). Care must be taken, however, when calculating power or energy from the sequences. The sample interval, including units of time, must be reinserted at the appropriate points in the power or energy calculations.
FIGURE 1.2 Sampling.

A sequence as a representation of a continuous time signal has the following important characteristics:
(1) The signal is sampled. It has finite value at only discrete points in time.
(2) The signal is truncated outside some finite length representing a finite time interval.
(3) The signal is quantized. It is limited to discrete steps in amplitude, where the step size, and therefore the accuracy (or signal fidelity), depends on how many steps are available in the A/D converter and on the arithmetic precision (number of bits) of the digital signal processor or computer.

In order to understand the nature of the results that DSP operators produce, these characteristics must be taken into account. The effect of sampling will be considered in Section 1.1.1. Truncation will be considered in the section on the discrete Fourier transform (Section 1.4) and quantization will be discussed in Section 1.6.4.
1.1.1 The Sampling Function

The sampling function is the key to traveling between the continuous time and discrete time worlds. It is called by various names: the Dirac delta function, the sifting function, the singularity function, and the sampling function among them. It has the following properties:

Property 1.   \int_{-\infty}^{\infty} f(t)\,\delta(t-\tau)\,dt = f(\tau).   (1.1)

Property 2.   \int_{-\infty}^{\infty} \delta(t-\tau)\,dt = 1.   (1.2)

In the equations above, \tau can be any real number.
To see how this function can be thought of as the ideal sampling function, first consider the realizable sampling function \Delta(t), illustrated in Figure 1.3. Its pulse width is one unit of time and its amplitude is one unit of amplitude. It clearly exhibits Property 2 of the sampling function. When \Delta(t) is multiplied by the function to be sampled, however, the \Delta(t) sampling function chooses not a single instant in time but a range from -1/2 to +1/2. As a result, Property 1 of the sampling function is not met. Instead the following integral would result:

\int_{-\infty}^{\infty} f(t)\,\Delta(t)\,dt = \int_{-1/2}^{+1/2} f(t)\,dt.   (1.3)

This can be thought of as a kind of smearing of the sampling process across a band which is related to the pulse width of \Delta(t). A better approximation to the sampling function would be a function \Delta(t) with a narrower pulse width. As the pulse width is narrowed, however, the amplitude must be increased. In the limit, the ideal sampling function must have infinitely narrow pulse width so that it samples at a single instant in time, and infinitely large amplitude so that the sampled signal still contains the same finite energy.
Figure 1.2 illustrates the sampling process at sample intervals of T. The resulting time waveform can be written

x_s(t) = x(t) \sum_{n=-\infty}^{\infty} \delta(t - nT).   (1.4)

FIGURE 1.3 Realizable sampling function.

The waveform that results from this process is impossible to visualize due to the infinite amplitude and zero width of the ideal sampling function. It may be easier to picture a somewhat less than ideal sampling function (one with very small width and very large amplitude) multiplying the continuous time waveform.
It should be emphasized that x_s(t) is a continuous time waveform made from the superposition of an infinite set of continuous time signals x(t)\delta(t - nT). It can also be written

x_s(t) = \sum_{n=-\infty}^{\infty} x(nT)\,\delta(t - nT)   (1.5)

since the sampling function gives a nonzero multiplier only at the values t = nT. In this last equation, the sequence x(nT) makes its appearance. This is the set of numbers or samples on which almost all DSP is based.
1.1.2 Sampled Signal Spectra

Using Fourier transform theory, the frequency spectrum of the continuous time waveform x(t) can be written

X(f) = \int_{-\infty}^{\infty} x(t)\,e^{-j2\pi ft}\,dt   (1.6)
and the time waveform can be expressed in terms of its spectrum as

x(t) = \int_{-\infty}^{\infty} X(f)\,e^{j2\pi ft}\,df.   (1.7)

Since this is true for any continuous function of time, x(t), it is also true for x_s(t):

X_s(f) = \int_{-\infty}^{\infty} x_s(t)\,e^{-j2\pi ft}\,dt.   (1.8)

Replacing x_s(t) by the sampling representation,

X_s(f) = \int_{-\infty}^{\infty} \left[ \sum_{n=-\infty}^{\infty} x(nT)\,\delta(t - nT) \right] e^{-j2\pi ft}\,dt.   (1.9)

The order of the summation and integration can be interchanged and Property 1 of the sampling function applied to give

X_s(f) = \sum_{n=-\infty}^{\infty} x(nT)\,e^{-j2\pi fnT}.   (1.10)

This equation is the exact form of a Fourier series representation of X_s(f), a periodic function of frequency having period 1/T. The coefficients of the Fourier series are x(nT) and they can be calculated from the following integral:

x(nT) = T \int_{-1/2T}^{+1/2T} X_s(f)\,e^{j2\pi fnT}\,df.   (1.11)

The last two equations are a Fourier series pair which allow calculation of either the time signal or frequency spectrum in terms of the opposite member of the pair. Notice that the use of the problematic signal x_s(t) is eliminated and the sequence x(nT) can be used instead.
1.1.3 Spectra of Continuous Time and Discrete Time Signals

By evaluating Equation (1.7) at t = nT and setting the result equal to the right-hand side of Equation (1.11), the following relationship between the two spectra is obtained:

x(nT) = \int_{-\infty}^{\infty} X(f)\,e^{j2\pi fnT}\,df = T \int_{-1/2T}^{+1/2T} X_s(f)\,e^{j2\pi fnT}\,df.   (1.12)

The right-hand side of Equation (1.7) can be expressed as the infinite sum of a set of integrals with finite limits

x(nT) = \sum_{m=-\infty}^{\infty} \int_{(2m-1)/2T}^{(2m+1)/2T} X(f)\,e^{j2\pi fnT}\,df.   (1.13)

By changing variables to \lambda = f - m/T (substituting f = \lambda + m/T and df = d\lambda),

x(nT) = \sum_{m=-\infty}^{\infty} \int_{-1/2T}^{+1/2T} X\!\left(\lambda + \frac{m}{T}\right) e^{j2\pi\lambda nT}\,e^{j2\pi mn}\,d\lambda.   (1.14)

Moving the summation inside the integral, recognizing that e^{j2\pi mn} (for all integers m and n) is equal to 1, and equating everything inside the integral to the similar part of Equation (1.11) give the following relation:

X_s(f) = \frac{1}{T} \sum_{m=-\infty}^{\infty} X\!\left(f + \frac{m}{T}\right).   (1.15)
Equation (1.15) shows that the sampled time frequency spectrum is equal to an infinite sum of shifted replicas of the continuous time frequency spectrum overlaid on each other. The shift of the replicas is equal to the sample frequency, 1/T. It is interesting to examine the conditions under which the two spectra are equal to each other, at least for a limited range of frequencies. In the case where there are no spectral components of frequency greater than 1/2T in the original continuous time waveform, the two spectra
FIGURE 1.4 Spectra of sampled signals. (a) Input spectrum. (b) Sampled spectrum. (c) Reconstructed spectrum.
are equal over the frequency range f = -1/2T to f = +1/2T. Of course, the sampled time spectrum will repeat this same set of amplitudes periodically for all frequencies, while the continuous time spectrum is identically zero for all frequencies outside the specified range.
The Nyquist sampling criterion is based on the derivation just presented and asserts that a continuous time waveform, when sampled at a frequency greater than twice the maximum frequency component in its spectrum, can be reconstructed completely from the sampled waveform. Conversely, if a continuous time waveform is sampled at a frequency lower than twice its maximum frequency component, a phenomenon called aliasing occurs. If a continuous time signal is reconstructed from an aliased representation, distortions will be introduced into the result and the degree of distortion is dependent on the degree of aliasing. Figure 1.4 shows the spectra of sampled signals without aliasing and with aliasing. Figure 1.5 shows the reconstructed waveforms of an aliased signal.
FIGURE 1.5 Aliasing in the time domain. (a) Input continuous time signal. (b) Sampled signal. (c) Reconstructed signal.
1.2 LINEAR TIME-INVARIANT OPERATORS

The most commonly used DSP operators are linear and time-invariant (or LTI). The linearity property is stated as follows:

Given x(n), a finite sequence, and O{ }, an operator in n-space, let

y(n) = O\{x(n)\}.   (1.16)

If

x(n) = a\,x_1(n) + b\,x_2(n)   (1.17)

where a and b are constants with respect to n, then, if O{ } is a linear operator,

y(n) = a\,O\{x_1(n)\} + b\,O\{x_2(n)\}.   (1.18)
The time-invariant property means that if

y(n) = O\{x(n)\}

then the shifted version gives the same response, or

y(n-m) = O\{x(n-m)\}.   (1.19)

Another way to state this property is that if x(n) is periodic with period N such that

x(n+N) = x(n)

then if O{ } is a time-invariant operator in n-space,

O\{x(n+N)\} = O\{x(n)\}.
Next, the LTI properties of the operator O{ } will be used to derive an expression and method of calculation for O{x(n)}. First, the impulse sequence can be used to represent x(n) in a different manner:

x(n) = \sum_{m=-\infty}^{\infty} x(m)\,u_0(n-m).   (1.20)

This is because

u_0(n-m) = \begin{cases} 1, & m = n \\ 0, & \text{otherwise}. \end{cases}   (1.21)

The impulse sequence acts as a sampling or sifting function on the function x(m), using the dummy variable m to sift through and find the single desired value x(n). Now this somewhat devious representation of x(n) is substituted into the operator Equation (1.16):
y(n) = O\left\{ \sum_{m=-\infty}^{\infty} x(m)\,u_0(n-m) \right\}.   (1.22)

Recalling that O{ } operates only on functions of n and using the linearity property,

y(n) = \sum_{m=-\infty}^{\infty} x(m)\,O\{u_0(n-m)\}.   (1.23)

Every operator has a set of outputs that are its response when an impulse sequence is applied to its input. The impulse response is represented by h(n), so that

h(n) = O\{u_0(n)\}.   (1.24)

This impulse response is a sequence that has special significance for O{ }, since it is the sequence that occurs at the output of the block labeled O{ } in Figure 1.1 when an impulse sequence is applied at the input. By time invariance it must be true that

h(n-m) = O\{u_0(n-m)\}   (1.25)

so that

y(n) = \sum_{m=-\infty}^{\infty} x(m)\,h(n-m).   (1.26)

Equation (1.26) states that y(n) is equal to the convolution of x(n) with the impulse response h(n). By substituting m = n - p into Equation (1.26) an equivalent form is derived:

y(n) = \sum_{p=-\infty}^{\infty} h(p)\,x(n-p).   (1.27)

It must be remembered that m and p are dummy variables and are used for purposes of the summation only. From the equations just derived it is clear that the impulse response completely characterizes the operator O{ } and can be used to label the block representing the operator as in Figure 1.6.

FIGURE 1.6 Impulse response representation of an operator.
1.2.1 Causality

In the mathematical descriptions of sequences and operators thus far, it was assumed that the impulse responses of operators may include values that occur before any applied input stimulus. This is the most general form of the equations and has been suitable for the development of the theory to this point. However, it is clear that no physical system can produce an output in response to an input that has not yet been applied. Since DSP operators and sequences have their basis in physical systems, it is more useful to consider that subset of operators and sequences that can exist in the real world.

The first step in representing realizable sequences is to acknowledge that any sequence must have started at some time. Thus, it is assumed that any element of a sequence in a realizable system whose time index is less than zero has a value of zero. Sequences which start at times later than this can still be represented, since an arbitrary number of their beginning values can also be zero. However, the earliest true value of any sequence must be at a value of n that is greater than or equal to zero. This attribute of sequences and operators is called causality, since it allows all attributes of the sequence to be caused by some physical phenomenon. Clearly, a sequence that has already existed for infinite time lacks a cause, as the term is generally defined.
Thus, the convolution relation for causal operators becomes:

y(n) = \sum_{m=0}^{\infty} h(m)\,x(n-m).   (1.28)

This form follows naturally since the impulse response is a sequence and can have no values for m less than zero.
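Equation (1.28) translates almost directly into C. The following sketch (an illustration, not one of the book's library routines; the names and the truncation of the impulse response to hlen samples are assumptions) computes one output sample of the causal convolution:

/* Illustrative evaluation of one sample of Equation (1.28); h[] is a
   causal impulse response truncated to hlen samples and x[] holds the
   input sequence, assumed zero before index 0. */
float causal_conv(const float *h, int hlen, const float *x, int n)
{
    float y = 0.0f;
    int m;
    for (m = 0; m < hlen; m++) {
        if (n - m >= 0) {               /* x(n-m) is zero for n-m < 0 */
            y += h[m] * x[n - m];
        }
    }
    return y;
}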
1.2.2 Difference Equations

All discrete time, linear, causal, time-invariant operators can be described in theory by the Nth order difference equation

\sum_{m=0}^{N-1} a_m\,y(n-m) = \sum_{p=0}^{N-1} b_p\,x(n-p)   (1.29)

where x(n) is the stimulus for the operator and y(n) is the result or output of the operator. The equation remains completely general if all coefficients are normalized by the value of a_0, giving

y(n) + \sum_{m=1}^{N-1} a_m\,y(n-m) = \sum_{p=0}^{N-1} b_p\,x(n-p)   (1.30)

and the equivalent form

y(n) = \sum_{p=0}^{N-1} b_p\,x(n-p) - \sum_{m=1}^{N-1} a_m\,y(n-m)   (1.31)
or

y(n) = b_0\,x(n) + b_1\,x(n-1) + b_2\,x(n-2) + \cdots + b_{N-1}\,x(n-N+1) - a_1\,y(n-1) - a_2\,y(n-2) - \cdots - a_{N-1}\,y(n-N+1).   (1.32)

To represent an operator properly may require a very high value of N, and for some complex operators N may have to be infinite. In practice, the value of N is kept within limits manageable by a computer; there are often approximations made of a particular operator to make N an acceptable size.

In Equations (1.30) and (1.31) the terms y(n-m) and x(n-p) are shifted or delayed versions of the functions y(n) and x(n), respectively. For instance, Figure 1.7 shows a sequence x(n) and x(n-3), which is the same sequence delayed by three sample periods. Using this delaying property and Equation (1.32), a structure or flow graph can be constructed for the general form of a discrete time LTI operator. This structure is shown in Figure 1.8. Each of the boxes is a delay element with unity gain. The coefficients are shown next to the legs of the flow graph to which they apply. The circles enclosing the summation symbol (Σ) are adder elements.
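Equation (1.32) also maps directly onto C. The sketch below is illustrative rather than one of the book's library routines; it assumes that x[] and y[] hold the full input and output histories and that a_0 has been normalized to 1:

/* One output sample of Equation (1.32) from N-1 past inputs and
   N-1 past outputs; a[0] is assumed normalized to 1 and is unused. */
float difference_equation(const float *b, const float *a, int N,
                          const float *x, const float *y, int n)
{
    float out = 0.0f;
    int k;
    for (k = 0; k < N; k++)        /* feedforward terms  b_k * x(n-k) */
        if (n - k >= 0) out += b[k] * x[n - k];
    for (k = 1; k < N; k++)        /* feedback terms    -a_k * y(n-k) */
        if (n - k >= 0) out -= a[k] * y[n - k];
    return out;
}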
1.2.3 The z-Transform Description of Linear Operators

There is a linear transform—called the z-transform—which is as useful to discrete time analysis as the Laplace transform is to continuous time analysis. Its definition is

Z\{x(n)\} = \sum_{n=-\infty}^{\infty} x(n)\,z^{-n}   (1.33)

where the symbol Z{ } stands for "z-transform of," and the z in the equation is a complex number. One of the most important properties of the z-transform is its relationship to time delay in sequences.

FIGURE 1.7 Shifting of a sequence: x(n) and the delayed sequence x(n-3).

FIGURE 1.8 Flow graph structure of linear operators.

To show this property take a sequence, x(n), with a z-transform as follows:
X(z) = \sum_{n=-\infty}^{\infty} x(n)\,z^{-n}.   (1.34)

A shifted version of this sequence has a z-transform:

Z\{x(n-p)\} = \sum_{n=-\infty}^{\infty} x(n-p)\,z^{-n}.   (1.35)

By letting m = n - p, substitution gives:

Z\{x(n-p)\} = \sum_{m=-\infty}^{\infty} x(m)\,z^{-(m+p)}   (1.36)

= z^{-p} \sum_{m=-\infty}^{\infty} x(m)\,z^{-m}.   (1.37)

But comparing the summation in this last equation to Equation (1.33) for the z-transform of x(n), it can be seen that

Z\{x(n-p)\} = z^{-p}\,Z\{x(n)\} = z^{-p}\,X(z).   (1.38)
This property of the z-transform can be applied to the general equation for LTI operators as follows:

Z\left\{ y(n) + \sum_{m=1}^{N-1} a_m\,y(n-m) \right\} = Z\left\{ \sum_{p=0}^{N-1} b_p\,x(n-p) \right\}.   (1.39)

Since the z-transform is a linear transform, it possesses the distributive and associative properties. Equation (1.39) can be simplified as follows:

Z\{y(n)\} + \sum_{m=1}^{N-1} a_m\,Z\{y(n-m)\} = \sum_{p=0}^{N-1} b_p\,Z\{x(n-p)\}.   (1.40)

Using the shift property of the z-transform (Equation (1.38)),

Y(z) + \sum_{m=1}^{N-1} a_m\,z^{-m}\,Y(z) = \sum_{p=0}^{N-1} b_p\,z^{-p}\,X(z)   (1.41)

Y(z)\left[ 1 + \sum_{m=1}^{N-1} a_m\,z^{-m} \right] = X(z) \sum_{p=0}^{N-1} b_p\,z^{-p}.   (1.42)

Finally, Equation (1.42) can be rearranged to give the transfer function in the z-transform domain:

H(z) = \frac{Y(z)}{X(z)} = \frac{\sum_{p=0}^{N-1} b_p\,z^{-p}}{1 + \sum_{m=1}^{N-1} a_m\,z^{-m}}.   (1.43)

Using Equation (1.41), Figure 1.8 can be redrawn in the z-transform domain and this structure is shown in Figure 1.9. The flow graphs are identical if it is understood that a multiplication by z^{-1} in the transform domain is equivalent to a delay of one sampling time interval in the time domain.

FIGURE 1.9 Flow graph structure for the z-transform of an operator.
1.2.4 Frequency Domain Transfer Function of an Operator

Taking the Fourier transform of both sides of Equation (1.28) (which describes any LTI causal operator) results in the following:

F\{y(n)\} = \sum_{m=0}^{\infty} h(m)\,F\{x(n-m)\}.   (1.44)

Using one of the properties of the Fourier transform,

F\{x(n-m)\} = e^{-j2\pi fm}\,F\{x(n)\}.   (1.45)

From Equation (1.45) it follows that

Y(f) = \sum_{m=0}^{\infty} h(m)\,e^{-j2\pi fm}\,X(f),   (1.46)

or dividing both sides by X(f),

\frac{Y(f)}{X(f)} = \sum_{m=0}^{\infty} h(m)\,e^{-j2\pi fm},   (1.47)

which is easily recognized as the Fourier transform of the series h(m). Rewriting this equation:

\frac{Y(f)}{X(f)} = H(f) = F\{h(m)\}.   (1.48)

Figure 1.10 shows the time domain block diagram of Equation (1.48) and Figure 1.11 shows the Fourier transform (or frequency domain) block diagram and equation. The frequency domain description of a linear operator is often used to describe the operator. Most often it is shown as an amplitude and a phase angle plot as a function of the variable f (sometimes normalized with respect to the sampling rate, 1/T).
1.2.5 Frequency Response from the z-Transform Description

Recall the Fourier transform pair

X_s(f) = \sum_{n=-\infty}^{\infty} x(nT)\,e^{-j2\pi fnT}   (1.49)

and

x(nT) = T \int_{-1/2T}^{+1/2T} X_s(f)\,e^{j2\pi fnT}\,df.   (1.50)

FIGURE 1.10 Time domain block diagram of LTI system: y(n) = \sum_m h(m)\,x(n-m).

FIGURE 1.11 Frequency domain block diagram of LTI system: Y(f) = H(f)\,X(f).
In order to simplify the notation, the value of T, the period of the sampling waveform, is normalized to be equal to one.

Now compare Equation (1.49) to the equation for the z-transform of x(n) as follows:

X(z) = \sum_{n=0}^{\infty} x(n)\,z^{-n}.   (1.51)

Equations (1.49) and (1.51) are equal for sequences x(n) which are causal (i.e., x(n) = 0 for all n < 0) if z is set as follows:

z = e^{j2\pi f}.   (1.52)

A plot of the locus of values for z in the complex plane described by Equation (1.52) is shown in Figure 1.12. The plot is a circle of unit radius. Thus, the z-transform of a causal sequence, x(n), when evaluated on the unit circle in the complex plane, is equivalent to the frequency domain representation of the sequence. This is one of the properties of the z-transform which make it very useful for discrete signal analysis.

FIGURE 1.12 The unit circle in the z-plane.
Summarizing the last few paragraphs, the impulse response of an operator is simply a sequence, h(m), and the Fourier transform of this sequence is the frequency response of the operator. The z-transform of the sequence h(m), called H(z), can be evaluated on the unit circle to yield the frequency domain representation of the sequence. This can be written as follows:

H(z)\big|_{z=e^{j2\pi f}} = H(f).   (1.53)
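Equations (1.43) and (1.53) together suggest a simple numerical way to obtain a filter's frequency response: substitute z = e^{j2\pi f} into the transfer function and evaluate the complex sums. The sketch below does this for the magnitude response; the function and its calling convention are assumptions for illustration, not routines from this text:

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Evaluate |H(f)| from Equation (1.43) at z = exp(j*2*pi*f), where f is
   the frequency normalized to the sampling rate (0 <= f <= 0.5). */
float magnitude_response(const float *b, int nb,   /* numerator b[0..nb-1]   */
                         const float *a, int na,   /* denominator a[1..na-1] */
                         float f)
{
    float nre = 0.0f, nim = 0.0f;   /* numerator:   sum of b_p z^-p  */
    float dre = 1.0f, dim = 0.0f;   /* denominator: 1 + sum a_m z^-m */
    int k;
    for (k = 0; k < nb; k++) {
        nre += b[k] * (float)cos(2.0 * M_PI * f * k);
        nim -= b[k] * (float)sin(2.0 * M_PI * f * k);
    }
    for (k = 1; k < na; k++) {
        dre += a[k] * (float)cos(2.0 * M_PI * f * k);
        dim -= a[k] * (float)sin(2.0 * M_PI * f * k);
    }
    return (float)sqrt((nre*nre + nim*nim) / (dre*dre + dim*dim));
}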
1.3 DIGITAL FILTERS

The linear operators that have been presented and analyzed in the previous sections can be thought of as digital filters. The concept of filtering is an analogy between the action of a physical strainer or sifter and the action of a linear operator on sequences when the operator is viewed in the frequency domain. Such a filter might allow certain frequency components of the input to pass unchanged to the output while blocking other components. Naturally, any such action will have its corresponding result in the time domain. This view of linear operators opens a wide area of theoretical analysis and provides increased understanding of the action of digital systems.

There are two broad classes of digital filters. Recall the difference equation for a general operator:

y(n) = \sum_{q=0}^{Q-1} b_q\,x(n-q) - \sum_{p=1}^{P-1} a_p\,y(n-p).   (1.54)

Notice that the infinite sums have been replaced with finite sums. This is necessary in order that the filters can be physically realizable.

The first class of digital filters have a_p equal to 0 for all p. The common name for filters of this type is finite impulse response (FIR) filters, since their response to an impulse dies away in a finite number of samples. These filters are also called moving average (or MA) filters, since the output is simply a weighted average of the input values:

y(n) = \sum_{q=0}^{Q-1} b_q\,x(n-q).   (1.55)

There is a window of these weights (b_q) that takes exactly the Q most recent values of x(n) and combines them to produce the output.

The second class of digital filters are infinite impulse response (IIR) filters. This class includes both autoregressive (AR) filters and the most general form, autoregressive moving average (ARMA) filters. In the AR case all b_q for q = 1 to Q-1 are set to 0:

y(n) = x(n) - \sum_{p=1}^{P-1} a_p\,y(n-p).   (1.56)

For ARMA filters, the more general Equation (1.54) applies. In either type of IIR filter, a single impulse at the input can continue to provide output of infinite duration with a given set of coefficients. Stability can be a problem for IIR filters, since with poorly chosen coefficients, the output can grow without bound for some inputs.
1.3.1 Finite Impulse Response (FIR) Filters

Restating the general equation for FIR filters:

y(n) = \sum_{q=0}^{Q-1} b_q\,x(n-q).   (1.57)

Comparing this equation with the convolution relation for linear operators,

y(n) = \sum_{m=0}^{\infty} h(m)\,x(n-m),

one can see that the coefficients in an FIR filter are identical to the elements in the impulse response sequence if this impulse response is finite in length:

b_q = h(q) for q = 0, 1, 2, 3, \ldots, Q-1.

This means that if one is given the impulse response sequence for a linear operator with a finite impulse response, one can immediately write down the FIR filter coefficients. However, as was mentioned at the start of this section, filter theory looks at linear operators primarily from the frequency domain point of view. Therefore, one is most often given the desired frequency domain response and asked to determine the FIR filter coefficients.

There are a number of methods for determining the coefficients for FIR filters given the frequency domain response. The two most popular FIR filter design methods are listed and described briefly below.
1. Use of the DFT on the sampled frequency response. In this method the required frequency response of the filter is sampled at a frequency interval of 1/T, where T is the time between samples in the DSP system. The inverse discrete Fourier transform (see section 1.4) is then applied to this sampled response to produce the impulse response of the filter. Best results are usually achieved if a smoothing window is applied to the frequency response before the inverse DFT is performed. A simple method to obtain FIR filter coefficients based on the Kaiser window is described in section 4.1.2 in chapter 4.

2. Optimal mini-max approximation using linear programming techniques. There is a well-known program written by Parks and McClellan (1973) that uses the REMEZ exchange algorithm to produce an optimal set of FIR filter coefficients, given the required frequency response of the filter. The Parks-McClellan program is available on the IEEE digital signal processing tape or as part of many of the filter design packages available for personal computers. The program is also printed in several DSP texts (see Elliot 1987 or Rabiner and Gold 1975). The program REMEZ.C is a C language implementation of the Parks-McClellan program and is included on the enclosed disk. An example of a filter designed using the REMEZ program is shown at the end of section 4.1.2 in chapter 4.

The design of digital filters will not be considered in detail here. Interested readers may wish to consult references listed at the end of this chapter giving complete descriptions of all the popular techniques.
The frequency response of FIR filters can be investigated by using the transfer function developed for a general linear operator:

H(z) = \frac{Y(z)}{X(z)} = \frac{\sum_{q=0}^{Q-1} b_q\,z^{-q}}{1 + \sum_{p=1}^{P-1} a_p\,z^{-p}}.   (1.58)

Notice that the sums have been made finite to make the filter realizable. Since for FIR filters the a_p are all equal to 0, the equation becomes:

H(z) = \frac{Y(z)}{X(z)} = \sum_{q=0}^{Q-1} b_q\,z^{-q}.   (1.59)

The Fourier transform or frequency response of the transfer function is obtained by letting z = e^{j2\pi f}, which gives

H(f) = H(z)\big|_{z=e^{j2\pi f}} = \sum_{q=0}^{Q-1} b_q\,e^{-j2\pi fq}.   (1.60)

This is a polynomial in powers of z^{-1}, or a sum of products of the form

H(z) = b_0 + b_1 z^{-1} + b_2 z^{-2} + b_3 z^{-3} + \cdots + b_{Q-1} z^{-(Q-1)}.

There is an important class of FIR filters for which this polynomial can be factored into a product of sums of the form

H(z) = \prod_{m} \left[ z^{-2} + \alpha_m z^{-1} + \beta_m \right] \prod_{n} \left[ z^{-1} + \gamma_n \right].   (1.61)

This expression for the transfer function makes explicit the values of the variable z^{-1} which cause H(z) to become zero. These points are simply the roots of the quadratic equation

0 = z^{-2} + \alpha_m z^{-1} + \beta_m,

which in general provides complex conjugate zero pairs, and the values \gamma_n which provide single zeros.
In many communication and image processing applications it is essential to have filters whose transfer functions exhibit a phase characteristic that changes linearly with a change in frequency. This characteristic is important because it is the phase transfer relationship that gives minimum distortion to a signal passing through the filter. A very useful feature of FIR filters is that for a simple relationship of the coefficients b_q, the resulting filter is guaranteed to have a linear phase response. The derivation of the relationship which provides a linear phase filter follows.

A linear phase relationship to frequency means that

H(f) = |H(f)|\,e^{j(\alpha f + \beta)},

where \alpha and \beta are constants. If the transfer function of a filter can be separated into a real function of f multiplied by a phase factor e^{j(\alpha f + \beta)}, then this transfer function will exhibit linear phase.

Taking the FIR filter transfer function:

H(z) = b_0 + b_1 z^{-1} + b_2 z^{-2} + \cdots + b_{Q-1} z^{-(Q-1)}

and replacing z by e^{j2\pi f} to give the frequency response

H(f) = b_0 + b_1 e^{-j2\pi f} + b_2 e^{-j2\pi f(2)} + \cdots + b_{Q-1} e^{-j2\pi f(Q-1)}.

Factoring out the factor e^{-j2\pi f(Q-1)/2} and letting \zeta equal (Q-1)/2 gives

H(f) = e^{-j2\pi f\zeta} \left\{ b_0 e^{j2\pi f\zeta} + b_1 e^{j2\pi f(\zeta-1)} + b_2 e^{j2\pi f(\zeta-2)} + \cdots + b_{Q-1} e^{-j2\pi f\zeta} \right\}.

Combining the coefficients with complex conjugate phases and placing them together in brackets,

H(f) = e^{-j2\pi f\zeta} \left\{ \left[ b_0 e^{j2\pi f\zeta} + b_{Q-1} e^{-j2\pi f\zeta} \right] + \left[ b_1 e^{j2\pi f(\zeta-1)} + b_{Q-2} e^{-j2\pi f(\zeta-1)} \right] + \left[ b_2 e^{j2\pi f(\zeta-2)} + b_{Q-3} e^{-j2\pi f(\zeta-2)} \right] + \cdots \right\}.

If each pair of coefficients inside the brackets is set equal as follows:

b_0 = b_{Q-1}, \quad b_1 = b_{Q-2}, \quad b_2 = b_{Q-3}, \ \text{etc.},

each term in brackets becomes a cosine function and the linear phase relationship is achieved. This is a common characteristic of FIR filter coefficients.
1.3.2 Infinite Impulse Response (IIR) Filters

Repeating the general equation for IIR filters:

y(n) = \sum_{q=0}^{Q-1} b_q\,x(n-q) - \sum_{p=1}^{P-1} a_p\,y(n-p).

The z-transform of the transfer function of an IIR filter is

H(z) = \frac{\sum_{q=0}^{Q-1} b_q\,z^{-q}}{1 + \sum_{p=1}^{P-1} a_p\,z^{-p}}.

No simple relationship exists between the coefficients of the IIR filter and the impulse response sequence such as that which exists in the FIR case. Also, obtaining linear phase IIR filters is not a straightforward coefficient relationship as is the case for FIR filters. However, IIR filters have an important advantage over FIR structures: In general, IIR filters require fewer coefficients to approximate a given filter frequency response than do FIR filters. This means that results can be computed faster on a general purpose computer or with less hardware in a special purpose design. In other words, IIR filters are computationally efficient. The disadvantage of the recursive realization is that IIR filters are much more difficult to design and implement. Stability, roundoff noise, and sometimes phase nonlinearity must be considered carefully in all but the most trivial IIR filter designs.
The direct form IIR filter realization shown in Figure 1.9, though simple in appearance, can have severe response sensitivity problems because of coefficient quantization, especially as the order of the filter increases. To reduce these effects, the transfer function is usually decomposed into second order sections and then realized as cascade sections. The C language implementation given in section 4.1.3 uses single precision floating-point numbers in order to avoid coefficient quantization effects associated with fixed-point implementations that can cause instability and significant changes in the transfer function.

IIR digital filters can be designed in many ways, but by far the most common IIR design method is the bilinear transform. This method relies on the existence of a known s-domain transfer function (or Laplace transform) of the filter to be designed. The s-domain filter coefficients are transformed into equivalent z-domain coefficients for use in an IIR digital filter. This might seem like a problem, since s-domain transfer functions are just as hard to determine as z-domain transfer functions. Fortunately, Laplace transform methods and s-domain transfer functions were developed many years ago for designing analog filters as well as for modeling mechanical and even biological systems. Thus, many tables of s-domain filter coefficients are available for almost any type of filter function (see the references for a few examples). Also, computer programs are available to generate coefficients for many of the common filter types (see the books by Jong, Antoniou, Stearns (1993), Embree (1991), or one of the many filter design packages available for personal computers). Because of the vast array of available filter tables, the large number of filter types, and because the design and selection of a filter requires careful examination of all the requirements (passband ripple, stopband attenuation, as well as phase response in some cases), the subject of s-domain IIR filter design will not be covered in this book. However, several IIR filter designs with exact z-domain coefficients are given in the examples in section 4.1 and on the enclosed disk.
1.3.3 Examples of Filter Responses

As an example of the frequency response of an FIR filter with very simple coefficients, take the following moving average difference equation:

y(n) = 0.11\,x(n) + 0.22\,x(n-1) + 0.34\,x(n-2) + 0.22\,x(n-3) + 0.11\,x(n-4).

One would suspect that this filter would be a lowpass type by inspection of the coefficients, since a constant (DC) value at the input will produce that same value at the output. Also, since all coefficients are positive, it will tend to average adjacent values of the signal.

FIGURE 1.13 FIR lowpass response.
The response of this FIR filter is shown in Figure 1.13. It is indeed lowpass and the nulls in the stop band are characteristic of discrete time filters in general.

As an example of the simplest IIR filter, take the following difference equation:

y(n) = x(n) + y(n-1).

Some contemplation of this filter's response to some simple inputs (like constant values, 0, 1, and so on) will lead to the conclusion that it is an integrator. For zero input, the output holds at a constant value forever. For any constant positive input greater than zero, the output grows linearly with time. For any constant negative input, the output decreases linearly with time. The frequency response of this filter is shown in Figure 1.14.
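Both example filters can be coded directly from their difference equations. The sketch below keeps past samples in static variables; the function names and this state arrangement are illustrative choices, not the real-time filter functions developed in chapter 4:

/* The two example filters of this section, coded one sample at a time. */

static float fir_lowpass(float in)      /* 5-term moving average */
{
    static float x1 = 0.0f, x2 = 0.0f, x3 = 0.0f, x4 = 0.0f;
    float out = 0.11f*in + 0.22f*x1 + 0.34f*x2 + 0.22f*x3 + 0.11f*x4;
    x4 = x3; x3 = x2; x2 = x1; x1 = in; /* shift the delay line */
    return out;
}

static float iir_integrator(float in)   /* y(n) = x(n) + y(n-1) */
{
    static float y1 = 0.0f;             /* previous output */
    y1 = in + y1;
    return y1;
}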
FIGURE 1.14 IIR integrator response.

1.3.4 Filter Specifications

As mentioned previously, filters are generally specified by their performance in the frequency domain, both amplitude and phase response as a function of frequency. Figure 1.15 shows a lowpass filter magnitude response characteristic. The filter gain has
been normalized to be roughly 1.0 at low frequencies and the sampling rate is normalized to unity. The figure illustrates the most important terms associated with filter specifications.

The region where the filter allows the input signal to pass to the output with little or no attenuation is called the passband. In a lowpass filter, the passband extends from frequency f = 0 to the start of the transition band, marked as frequency f_pass in Figure 1.15. The transition band is that region where the filter smoothly changes from passing the signal to stopping the signal. The end of the transition band occurs at the stopband frequency, f_stop. The stopband is the range of frequencies over which the filter is specified to attenuate the signal by a given factor. Typically, a filter will be specified by the following parameters:

(1) Passband ripple—2δ in the figure.
(2) Stopband attenuation—1/A.
(3) Transition start and stop frequencies—f_pass and f_stop.
(4) Cutoff frequency—f_cutoff. The frequency at which the filter gain is some given factor lower than the nominal passband gain. This may be -1 dB, -3 dB, or other gain value close to the passband gain.

Computer programs that calculate filter coefficients from frequency domain magnitude response parameters use the above list or some variation as the program input.

FIGURE 1.15 Magnitude response of normalized lowpass filter.
1.4 DISCRETE FOURIER TRANSFORMS

So far, the Fourier transform has been used several times to develop the characteristics of sequences and linear operators. The Fourier transform of a causal sequence is:

F\{x(n)\} = X(f) = \sum_{n=0}^{\infty} x(n)\,e^{-j2\pi fn}   (1.62)

where the sample time period has been normalized to 1 (T = 1). If the sequence is of limited duration (as must be true to be of use in a computer) then

X(f) = \sum_{n=0}^{N-1} x(n)\,e^{-j2\pi fn}   (1.63)

where the sampled time domain waveform is N samples long. The inverse Fourier transform is

F^{-1}\{X(f)\} = x(n) = \int_{-1/2}^{+1/2} X(f)\,e^{j2\pi fn}\,df.   (1.64)

Since X(f) is periodic with period 1/T = 1, the integral can be taken over any full period. Therefore,

x(n) = \int_{0}^{1} X(f)\,e^{j2\pi fn}\,df.   (1.65)
1.4.1 Form

These representations for the Fourier transform are accurate but they have a major drawback for digital applications—the frequency variable is continuous, not discrete. To overcome this problem, both the time and frequency representations of the signal must be approximated.

To create a discrete Fourier transform (DFT) a sampled version of the frequency waveform is used. This sampling in the frequency domain is equivalent to convolution in the time domain with the following time waveform:

h(t) = \sum_{r=-\infty}^{\infty} \delta(t - rT).

This creates duplicates of the sampled time domain waveform that repeat with period T. This T is equal to the T used above in the time domain sequence. Next, by using the same number of samples in one period of the repeating frequency domain waveform as in one period of the time domain waveform, a DFT pair is obtained that is a good approximation to the continuous variable Fourier transform pair. The forward discrete Fourier transform is

X(k) = \sum_{n=0}^{N-1} x(n)\,e^{-j2\pi kn/N}   (1.66)
and the inverse discrete Fourier transform is

x(n) = \frac{1}{N} \sum_{k=0}^{N-1} X(k)\,e^{j2\pi kn/N}.   (1.67)

For a complete development of the DFT by both graphical and theoretical means, see the text by Brigham (chapter 6).
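For small N, Equation (1.66) can be evaluated directly, as in the sketch below. This brute-force form requires on the order of N^2 operations (the FFT of section 1.4.5 is much faster for large N); the array-based calling convention is an assumption for illustration:

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Direct evaluation of Equation (1.66): xr/xi are the real and imaginary
   parts of the input sequence, Xr/Xi receive the N-point DFT. */
void dft(const float *xr, const float *xi, float *Xr, float *Xi, int N)
{
    int k, n;
    for (k = 0; k < N; k++) {
        Xr[k] = 0.0f;
        Xi[k] = 0.0f;
        for (n = 0; n < N; n++) {
            double w = -2.0 * M_PI * k * n / N;    /* kernel exponent */
            Xr[k] += (float)(xr[n] * cos(w) - xi[n] * sin(w));
            Xi[k] += (float)(xr[n] * sin(w) + xi[n] * cos(w));
        }
    }
}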
1.4.2 Properties

This section describes some of the properties of the DFT. The corresponding paragraph numbers in the book The Fast Fourier Transform by Brigham (1974) are indicated. Due to the sampling theorem it is clear that no frequency higher than 1/2T can be represented by X(k). However, the values of k extend to N-1, which corresponds to a frequency nearly equal to the sampling frequency 1/T. This means that for a real sequence, the values of k from N/2 to N-1 are aliased and, in fact, the amplitudes of these values of X(k) are

|X(k)| = |X(N-k)|, \quad \text{for } k = N/2 \text{ to } N-1.   (1.68)

This corresponds to Properties 8-11 and 8-14 in Brigham.

The DFT is a linear transform as is the z-transform so that the following relationships hold:

If

x(n) = \alpha\,a(n) + \beta\,b(n),

where \alpha and \beta are constants, then

X(k) = \alpha\,A(k) + \beta\,B(k),

where A(k) and B(k) are the DFTs of the time functions a(n) and b(n), respectively. This corresponds to Property 8-1 in Brigham.
The DFT also displays a similar attribute under time shifting as the z-transform. If X(k) is the DFT of x(n) then

DFT\{x(n-p)\} = \sum_{n=0}^{N-1} x(n-p)\,e^{-j2\pi kn/N}.

Now define a new variable m = n - p so that n = m + p. This gives

DFT\{x(n-p)\} = \sum_{m=-p}^{N-1-p} x(m)\,e^{-j2\pi km/N}\,e^{-j2\pi kp/N},

which is equivalent to the following:

DFT\{x(n-p)\} = e^{-j2\pi kp/N}\,X(k).   (1.69)

This corresponds to Property 8-5 in Brigham. Remember that for the DFT it is assumed that the sequence x(m) goes on forever repeating its values based on the period n = 0 to N-1. So the meaning of the negative time arguments is simply that

x(-p) = x(N-p), \quad \text{for } p = 0 \text{ to } N-1.
1.4.3 Power Spectrum

The DFT is often used as an analysis tool for determining the spectra of input sequences. Most often the amplitude of a particular frequency component in the input signal is desired. The DFT can be broken into amplitude and phase components as follows:

X(f) = X_{real}(f) + j\,X_{imag}(f)   (1.70)

X(f) = |X(f)|\,e^{j\theta(f)}   (1.71)

where |X(f)| = \sqrt{X_{real}^2 + X_{imag}^2} and \theta(f) = \tan^{-1}\!\left[\frac{X_{imag}}{X_{real}}\right].

The power spectrum of the signal can be determined using the signal spectrum times its conjugate as follows:

X(k)\,X^{*}(k) = |X(k)|^2 = X_{real}^2 + X_{imag}^2.   (1.72)
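In code, Equation (1.72) is one multiply and one add per frequency bin. A minimal sketch, with array names assumed, that could be applied to the output of the dft() sketch of section 1.4.1:

/* Power spectrum per Equation (1.72): P[k] = |X(k)|^2, computed from the
   real and imaginary parts of a DFT result. */
void power_spectrum(const float *Xr, const float *Xi, float *P, int N)
{
    int k;
    for (k = 0; k < N; k++)
        P[k] = Xr[k]*Xr[k] + Xi[k]*Xi[k];   /* X(k) X*(k) */
}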
There are some problems with using the DFT as a spectrum analysis tool, however. The problem of interest here concerns the assumption made in deriving the DFT that the sequence was a single period of a periodically repeating waveform. For almost all sequences there will be a discontinuity in the time waveform at the boundaries between these pseudo periods. This discontinuity will result in very high-frequency components in the resulting waveform. Since these components can be much higher than the sampling theorem limit of 1/2T (or half the sampling frequency) they may be aliased into the middle of the spectrum developed by the DFT.

The technique used to overcome this difficulty is called windowing. The problem to be overcome is the possible discontinuity at the edges of each period of the waveform. Since for a general purpose DFT algorithm there is no way to know the degree of discontinuity at the boundaries, the windowing technique simply reduces the sequence amplitude at the boundaries. It does this in a gradual and smooth manner so that no new discontinuities are produced, and the result is a substantial reduction in the aliased frequency components. This improvement does not come without a cost. Because the window is modifying the sequence before a DFT is performed, some reduction in the fidelity of the spectral representation must be expected. The result is somewhat reduced resolution of closely spaced frequency components. The best windows achieve the maximum reduction of spurious (or aliased) signals with the minimum degradation of spectral resolution.

There are a variety of windows, but they all work essentially the same way: Attenuate the sequence elements near the boundaries (near n = 0 and n = N-1) and compensate by increasing the values that are far away from the boundaries. Each window has its own individual transition from the center region to the outer elements. For a comparison of window performance see the references listed at the end of this chapter. (For example, see Harris (1983).)
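As an illustration of the technique, the sketch below applies a Hamming window in place before a DFT is taken. The 0.54 and 0.46 coefficients are the standard Hamming values; they are not taken from this text, and any of the windows compared in the references could be substituted:

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Multiply an N-point sequence by a Hamming window in place, tapering
   the values near n = 0 and n = N-1 toward zero. */
void hamming_window(float *x, int N)
{
    int n;
    for (n = 0; n < N; n++)
        x[n] *= 0.54f - 0.46f * (float)cos(2.0 * M_PI * n / (N - 1));
}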
1.4.4 Averaged Periodograms

Because signals are always associated with noise—either due to some physical attribute of the signal generator or external noise picked up by the signal source—the DFT of a single sequence from a continuous time process is often not a good indication of the true spectrum of the signal. The solution to this dilemma is to take multiple DFTs from successive sequences from the same signal source and take the time average of the power spectrum. If a new DFT is taken each NT seconds and successive DFTs are labeled with superscripts,

\text{Power Spectrum} = \frac{1}{M} \sum_{i=1}^{M} \left[ \left(X_{real}^{i}\right)^2 + \left(X_{imag}^{i}\right)^2 \right].   (1.73)

Clearly, the spectrum of the signal cannot be allowed to change significantly during the interval t = 0 to t = M(NT).
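A sketch of Equation (1.73), reusing the dft() example of section 1.4.1 on M consecutive N-point records and averaging the resulting power spectra (the scratch-buffer calling convention and the 1/M normalization shown are assumptions):

/* Averaged periodogram per Equation (1.73): x[] holds M*N real samples;
   xi, Xr, Xi are caller-supplied scratch buffers of length N and P[]
   receives the averaged power spectrum. Uses the dft() sketch above. */
void averaged_periodogram(const float *x, int N, int M,
                          float *xi, float *Xr, float *Xi, float *P)
{
    int i, k;
    for (k = 0; k < N; k++) {
        xi[k] = 0.0f;               /* real input: imaginary part is zero */
        P[k] = 0.0f;
    }
    for (i = 0; i < M; i++) {
        dft(x + i * N, xi, Xr, Xi, N);
        for (k = 0; k < N; k++)
            P[k] += (Xr[k]*Xr[k] + Xi[k]*Xi[k]) / (float)M;
    }
}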
1.4.5 The Fast Fourier Transform (FFT)

The fast Fourier transform (or FFT) is a very efficient algorithm for computing the DFT of a sequence. It takes advantage of the fact that many computations are repeated in the DFT due to the periodic nature of the discrete Fourier kernel e^{-j2\pi kn/N}. The form of the DFT is

X(k) = \sum_{n=0}^{N-1} x(n)\,e^{-j2\pi kn/N}.   (1.74)

By letting W^{nk} = e^{-j2\pi nk/N}, Equation (1.74) becomes

X(k) = \sum_{n=0}^{N-1} x(n)\,W^{nk}.   (1.75)

Now, W^{(n+qN)(k+rN)} = W^{nk} for all q, r that are integers, due to the periodicity of the Fourier kernel.

Next break the DFT into two parts as follows:

X(k) = \sum_{n=0}^{N/2-1} x(2n)\,W_N^{2nk} + \sum_{n=0}^{N/2-1} x(2n+1)\,W_N^{(2n+1)k}   (1.76)

where the subscript N on the Fourier kernel represents the size of the sequence.

By representing the even elements of the sequence x(n) by x_{ev} and the odd elements by x_{od}, the equation can be rewritten

X(k) = \sum_{n=0}^{N/2-1} x_{ev}(n)\,W_{N/2}^{nk} + W_N^{k} \sum_{n=0}^{N/2-1} x_{od}(n)\,W_{N/2}^{nk}.   (1.77)

Now there are two expressions in the form of DFTs, so Equation (1.77) can be simplified as follows:

X(k) = X_{ev}(k) + W_N^{k}\,X_{od}(k).   (1.78)

Notice that only DFTs of N/2 points need be calculated to find the value of X(k). Since the index k must go to N-1, however, the periodic property of the even and odd DFTs is used. In other words,

X_{ev}(k) = X_{ev}\!\left(k - \tfrac{N}{2}\right) \quad \text{for} \quad \tfrac{N}{2} \le k \le N-1.

1.4.6 An Example of the FFT

FIGURE 1.18 Output of 16-point FFT.
It can be shown that the integrand in the two integrals above integrates to 0 unless the argument of the exponential is 0. If the argument of the exponential is zero, the result is two infinite spikes, one at f = f_0 and the other at f = -f_0. These are delta functions in the frequency domain. Using these results, and remembering that the impulse sequence is the digital analog of the delta function, the results for the FFT seem more plausible. It is still left to explain why k = 12 should be equivalent to f = -f_0. Referring back to the development of the DFT, it was necessary at one point for the frequency spectrum to become periodic with period f_s. Also, in the DFT only positive indices are used. Combining these two facts, one can obtain the results shown in Figure 1.18.
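Applying the decimation of Equation (1.78) recursively until only one-point DFTs remain gives the radix-2 FFT. The sketch below is illustrative only and uses the C99 complex type for brevity (the FFT routines supplied with this book are written with C structures instead); N must be a power of two, and the scratch-buffer arrangement is an arbitrary choice:

#include <math.h>
#include <complex.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Recursive radix-2 decimation-in-time FFT following Equation (1.78).
   x[] holds N complex samples and is overwritten with the DFT;
   tmp[] is scratch space of the same length. N must be a power of 2. */
static void fft_rec(float complex *x, float complex *tmp, int N)
{
    int k;
    if (N <= 1)
        return;
    for (k = 0; k < N / 2; k++) {       /* separate even and odd samples */
        tmp[k]         = x[2 * k];
        tmp[k + N / 2] = x[2 * k + 1];
    }
    fft_rec(tmp, x, N / 2);             /* X_ev(k), using x[] as scratch */
    fft_rec(tmp + N / 2, x, N / 2);     /* X_od(k)                       */
    for (k = 0; k < N / 2; k++) {       /* combine: X(k) = X_ev + W^k X_od */
        float complex w = cexpf(-2.0f * (float)M_PI * I * (float)k / (float)N);
        x[k]         = tmp[k] + w * tmp[k + N / 2];
        x[k + N / 2] = tmp[k] - w * tmp[k + N / 2];
    }
}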
1.5 NONLINEAR OPERATORS

Most of this book is devoted to linear operators and linear signal processing because these are the most commonly used techniques in DSP. However, there are several nonlinear operators that are very useful in one-dimensional DSP. This section introduces the simple class of nonlinear operators that compress or clip the input to derive the output sequence.

There is often a need to reduce the number of significant bits in a quantized sequence. This is sometimes done by truncation of the least significant bits. This process is advantageous because it is linear: The quantization error is increased uniformly over the entire range of values of the sequence. There are many applications, however, where the need for accuracy in quantization is considerably less at high signal values than at low signal values. This is true in telephone voice communications, where the human ear's ability to differentiate between amplitudes of sound waves decreases with the amplitude of the sound. In these cases, a nonlinear function is applied to the signal and the resulting output range of values is quantized uniformly with the available bits.
This process is illustrated in Figure 1.19. First, the input signal is shown in Figure
1.19(a). The accuracy is 12 bits and the range is 0 to 4.095 volts, so each quantization
level represents 1 mV. It is necessary because of some system consideration (such as
transmission bandwidth) to reduce the number of bits in each word to 8. Figure 1.19(b)
shows that the resulting quantization levels are 16 times as coarse. Figure 1.19(c) shows
the result of applying a linear-logarithmic compression to the input signal. In this type of
compression the low-level signals (out to some specified value) are unchanged from the
input values. Beginning at a selected level, say $f_{in} = a$, a logarithmic function is applied.
The form of the function might be

$f_{out} = a + A \log_{10}(1 + f_{in} - a),$

so that at $f_{in} = a$ the output also equals a, and A is chosen to place the maximum value of
$f_{out}$ at the desired point.
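A minimal C sketch of this compressor (an illustration only; the breakpoint a and scale A are assumed parameters chosen by the system designer):

#include <math.h>

/* Linear-logarithmic compression: pass low-level samples unchanged,
   compress samples above the breakpoint a logarithmically. */
double compress(double f_in, double a, double A)
{
    if (f_in <= a)
        return f_in;                          /* linear region */
    return a + A * log10(1.0 + f_in - a);     /* logarithmic region */
}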
A simpler version of the same process is shown in Figure 1.20. Instead of applying
a logarithmic function from the point f = a onward, the output values for f > a are all the
same. This is an example of clipping. A region of interest is defined and any values out-
side the region are given a constant output.
1.5.1 μ-Law and A-Law Compression

There are two other compression laws worth listing because of their use in telephony:
the μ-law and A-law conversions. The μ-law conversion is defined as follows:

$F(f_{in}) = \mathrm{sgn}(f_{in})\, \frac{\ln(1 + \mu |f_{in}|)}{\ln(1 + \mu)},$   (1.80)

where sgn() is a function that takes the sign of its argument, and μ is the compression pa-
rameter (255 for North American telephone transmission). The input value $f_{in}$ must be
normalized to lie between −1 and +1. The A-law conversion equations are as follows:

$F(f_{in}) = \mathrm{sgn}(f_{in})\, \frac{A |f_{in}|}{1 + \ln(A)}$   (1.81)

for $|f_{in}|$ between 0 and 1/A, and

$F(f_{in}) = \mathrm{sgn}(f_{in})\, \frac{1 + \ln(A |f_{in}|)}{1 + \ln(A)}$

for $|f_{in}|$ between 1/A and 1. In these equations, A is the compression parameter (87.6 for
European telephone transmission).
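As an illustration, a sketch of Equation (1.80) in C (not the bit-exact G.711 table conversion used in actual telephone equipment):

#include <math.h>

/* mu-law compression of a sample normalized to [-1, +1];
   mu = 255 for North American telephony. */
double mulaw_compress(double f_in, double mu)
{
    double sgn = (f_in < 0.0) ? -1.0 : 1.0;
    return sgn * log(1.0 + mu * fabs(f_in)) / log(1.0 + mu);
}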
FIGURE 1.19 (a) Linear 12-bit ADC. (b) Linear 8-bit ADC. (c) Nonlinear compression.
(Axes: output in integer counts versus input in volts.)

An extreme version of clipping is used in some applications of image processing to
produce binary pictures. In this technique a threshold is chosen (usually based on a his-
togram of the picture elements) and any image element with a value higher than the threshold
is set to 1 and any element with a value lower than the threshold is set to zero. In this way the
significant bits are reduced to only one. Pictures properly thresholded can produce excel-
lent outlines of the most interesting objects in the image, which simplifies further pro-
cessing considerably.
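A sketch of this thresholding operation on an 8-bit image (illustrative only; the threshold t is assumed to have been chosen from the image histogram):

/* Set each pixel to 1 if above the threshold, 0 otherwise. */
void threshold_image(unsigned char *image, int npixels, unsigned char t)
{
    for (int i = 0; i < npixels; i++)
        image[i] = (image[i] > t) ? 1 : 0;
}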
FIGURE 1.20 Clipping to 8 bits. (Axes: output in integer counts versus input in volts.)
1.6 PROBABILITY AND RANDOM PROCESSES

The signals of interest in most signal-processing problems are embedded in an environ-
ment of noise and interference. The noise may be due to spurious signals picked up dur-
ing transmission (interference), due to the noise characteristics of the electronics that
receives the signal, or due to a number of other sources. To deal effectively with noise in a sig-
nal, some model of the noise or of the signal plus noise must be used. Most often a proba-
bilistic model is used, since the noise is, by nature, unpredictable. This section introduces
the concepts of probability and randomness that are basic to digital signal processing and
gives some examples of the way a composite signal of interest plus noise is modeled.
1.6.1 Basic Probability

Probability begins by defining the probability of an event labeled A as P(A). Event A can
be the result of a coin toss, the outcome of a horse race, or any other result of an activity
that is not completely predictable. There are three attributes of this probability P(A):

(1) $P(A) \ge 0$. This simply means that any result will either have a positive chance of
occurrence or no chance of occurrence.

(2) P(all possible outcomes) = 1. This indicates that some result among those possible
is bound to occur, a probability of 1 being certainty.

(3) For $\{A_i\}$, where $A_i \cap A_j = \emptyset$, $P(\cup_i A_i) = \sum_i P(A_i)$. For a set of events, $\{A_i\}$, where
the events are mutually disjoint (no two can occur as the result of a single trial of
the activity), the probability of any one of the events occurring is equal to the sum
of their individual probabilities.
With probability defined in this way, the discussion can be extended to joint and
conditional probabilities. Joint probability is defined as the probability of occurrence of a
specific set of two or more events as the result of a single trial of an activity. For instance,
the probability that horse A will finish third and horse B will finish first in a particular
horse race is a joint probability. This is written:

$P(A \cap B) = P(A \text{ and } B) = P(AB).$   (1.82)

Conditional probability is defined as the probability of occurrence of an event A given
that B has occurred. The probability assigned to event A is conditioned by some knowl-
edge of event B. This is written

$P(A \text{ given } B) = P(A|B).$   (1.83)

If this conditional probability, P(A|B), and the probability of B are both known, the proba-
bility of both of these events occurring (joint probability) is

$P(AB) = P(A|B)\,P(B).$   (1.84)

So the conditional probability is multiplied by the probability of the condition (event B)
to get the joint probability. Another way to write this equation is

$P(A|B) = \frac{P(AB)}{P(B)}.$   (1.85)

This is another way to define conditional probability once joint probability is understood.
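As a quick worked example (with hypothetical numbers, not from the text): if P(B) = 0.5 and P(A|B) = 0.2, Equation (1.84) gives P(AB) = 0.2 × 0.5 = 0.1, and dividing back by P(B) as in Equation (1.85) recovers P(A|B) = 0.1/0.5 = 0.2.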
1.6.2 Random Variables

In signal processing, the probability of a signal taking on a certain value or lying in a cer-
tain range of values is often desired. The signal in this case can be thought of as a random
variable (an element whose set of possible values is the set of outcomes of the activity).
For instance, for the random variable X, the following set of events, which could occur,
may exist:

Event A: X takes on the value of 5 (X = 5)
Event B: X = 19
Event C: X = 1.66
etc.

This is a useful set of events for discrete variables that can only take on certain specified
values. A more practical set of events for continuous variables associates each event with
the variable lying within a range of values. A cumulative distribution function
for a random variable can be defined as follows:

$F(x) = P(X \le x).$   (1.86)
This cumulative distribution function, then, is a monotonically increasing function of the
independent variable x and is valid only for the particular random variable, X. Figure 1.21
shows an example of a distribution function for a random variable. If F(x) is differenti-
ated with respect to x, the probability density function (or PDF) for X is obtained, repre-
sented as follows:

$p(x) = \frac{dF(x)}{dx}.$   (1.87)

Integrating p(x) gives the distribution function back again as follows:

$F(x) = \int_{-\infty}^{x} p(\lambda)\, d\lambda.$   (1.88)

Since F(x) is always monotonically increasing, p(x) must be always positive or zero.
Figure 1.22 shows the density function for the distribution of Figure 1.21. The utility of
these functions can be illustrated by determining the probability that the random variable
X lies between a and b. By using probability Property 3 from above,

$P(a < X \le b) = P(X \le b) - P(X \le a) = F(b) - F(a) = \int_{a}^{b} p(x)\, dx.$
FIGURE 1.24 Quantization and reconstruction of a signal.
FIGURE 1.25 Quantization operation showing decision and reconstruction levels.
$\epsilon = E\{(f - \hat{f})^2\} = \int_{-\infty}^{\infty} (f - \hat{f})^2\, p(f)\, df,$

and if the signal range is broken up into the segments between decision levels $d_j$ and $d_{j+1}$,
then

$\epsilon = E\{(f - \hat{f})^2\} = \sum_{j} \int_{d_j}^{d_{j+1}} (f - r_j)^2\, p(f)\, df,$

where $r_j$ is the reconstruction level for the jth segment.
Numerical solutions can be determined that minimize ε for several common probability
densities. The most common assumption is a uniform density (p(f) equals 1/N for all val-
ues of f, where N is the number of decision intervals). In this case, the decision levels are
uniformly spaced throughout the interval and the reconstruction levels are centered be-
tween decision levels. This method of quantization is almost universal in commercial
analog-to-digital converters. For this case the error in the analog-to-digital converter out-
put is uniformly distributed from −½ of the least significant bit to +½ of the least signifi-
cant bit. If it is assumed that the value of the least significant bit is unity, then the mean
squared error due to this uniform quantization is given by:

$\mathrm{var}\{\epsilon\} = \int_{-1/2}^{+1/2} (f - \hat{f})^2\, p(f)\, df = \int_{-1/2}^{+1/2} f^2\, df = \frac{1}{12},$
since p(f) = 1 from −½ to +½. This mean squared error gives the equivalent variance, or
noise power, added to the original continuous analog samples as a result of the uniform
quantization. If it is further assumed that the quantization error can be modeled as a sta-
tionary, uncorrelated white noise process (which is a good approximation when the number
of quantization levels is greater than 16), then a maximum signal-to-noise ratio (SNR)
can be defined for a quantization process of B bits ($2^B$ quantization levels) as follows:

$\mathrm{SNR} = 10 \log_{10}(V^2 / \mathrm{var}\{\epsilon\}) = 10 \log_{10}(12 V^2),$

where $V^2$ is the total signal power. For example, if a sinusoid is sampled with a peak am-
plitude of $2^{B-1}$, then $V^2 = 2^{2B}/8$, giving the signal-to-noise ratio for a full-scale sinusoid as

$\mathrm{SNR} = 10 \log_{10}(1.5 \cdot 2^{2B}) = 6.02B + 1.76.$

This value of SNR is often referred to as the theoretical signal-to-noise ratio for a B-bit
analog-to-digital converter. Because the analog circuits in a practical analog-to-digital
converter always add some additional noise, the SNR of a real-world converter is always
less than this value.
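A small, self-contained check of this formula (illustrative only):

#include <math.h>
#include <stdio.h>

/* Print the theoretical quantization SNR, 6.02B + 1.76 dB,
   for a few converter word lengths. */
int main(void)
{
    for (int B = 8; B <= 20; B += 4) {
        double snr = 10.0 * log10(1.5 * pow(2.0, 2.0 * B));
        printf("%2d bits: %6.2f dB\n", B, snr);
    }
    return 0;
}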
1.6.5 Random Processes, Autocorrelation,
and Spectral Density

A random process is a function composed of random variables. An example is the ran-
dom process f(t). For each value of t, the process f(t) can be considered a random vari-
able. For t = a there is a random variable f(a) that has a probability density, an expected
value (or mean), and a variance as defined in section 1.6.3. In a two-dimensional image,
the function would be f(x,y), where x and y are spatial variables. A two-dimensional ran-
dom process is usually called a random field. Each f(a,b) is a random variable.

One of the important aspects of a random process is the way in which the random
variables at different points in the process are related to each other. The concept of joint
probability is extended to distribution and density functions. A joint probability distribu-
tion is defined as

$F(s,t) = P(S \le s, T \le t)$ (where s and t are some constants),

and the corresponding density function is defined as

$p(s,t) = \frac{\partial^2 F(s,t)}{\partial s\, \partial t}.$

The integral relationship between distribution and density in this case is

$F(s,t) = \int_{-\infty}^{s} \int_{-\infty}^{t} p(\alpha, \beta)\, d\beta\, d\alpha.$   (1.96)
In section 1.6.3 it was shown that the correlation of two random variables is the expected
value of their product. The autocorrelation of a random process is the expected value of
the products of the random variables which make up the process. The symbol for autocor-
relation is $R_{ff}(t_1, t_2)$ for the function f(t) and the definition is

$R_{ff}(t_1, t_2) = E[f(t_1)\, f(t_2)]$   (1.98)

$= \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \alpha \beta\, p_{ff}(\alpha, \beta;\, t_1, t_2)\, d\alpha\, d\beta,$   (1.99)

where $p_{ff}(\alpha, \beta;\, t_1, t_2)$ is the joint probability density of $f(t_1)$ and $f(t_2)$. By including α and β
in the parentheses, the dependence of $p_{ff}$ on these variables is made explicit.

In the general case, the autocorrelation can have different values for each value of
$t_1$ and $t_2$. However, there is an important special class of random processes called station-
ary processes for which the form of the autocorrelation is somewhat simpler. In station-
ary random processes, the autocorrelation is only a function of the difference between the
two time variables. For stationary processes

$R_{ff}(t_2, t_1) = R_{ff}(\tau) = E\{f(t - \tau)\, f(t)\},$   (1.100)

where $\tau = t_2 - t_1$.
In section 1.6.6 the continuous variable theory presented here is extended to discrete vari-
ables and the concept of modeling real world signals is introduced.
1.6.6 Modeling Real-World Signals with AR Processes

By its nature, a noise process cannot be specified as a function of time in the way a deter-
ministic signal can. Usually a noise process can be described with a probability function
and the first and second moments of the process. Although this is only a partial character-
ization, a considerable amount of analysis can be performed using moment parameters
alone. The first moment of a process is simply its average or mean value. In this section,
all processes will have zero mean, simplifying the algebra and derivations but providing
results for the most common set of processes.

The second moment is the autocorrelation of the process

$r(n, n-k) = E\{u(n)\, u(n-k)\}, \quad \text{for } k = 0, \pm 1, \pm 2, \ldots$

The processes considered here are stationary to second order. This means that the first
and second order statistics do not change with time. This allows the autocorrelation to be
represented by

$r(n, n-k) = r(k), \quad \text{for } k = 0, \pm 1, \pm 2, \ldots,$

since it is a function only of the time difference between samples and not the time vari-
able itself. In any process, an important member of the set of autocorrelation values is
r(0), which is

$r(0) = E\{u(n)\, u(n)\} = E\{u^2(n)\},$   (1.101)

which is the mean square value of the process. For a zero mean process this is equal to
the variance of the signal:

$r(0) = \mathrm{var}\{u\}.$   (1.102)
‘The process can be represented by a vector u(n) where
‘u(n)
un)
u(n) =| u(n~2) (1.103)
u(n—~M +1)
‘Then the autocorrelation can be represented in matrix form
R
{a(n (n)} (1.104)
(0) ra) 7Q).. (m=)
1) 0) rO.. :
w-2) rt) 1(0)...
7 n-1)
70)
HOM+D OM 42) no)
The second moment of a noise process is important because it is directly related to
the power spectrum of the process. The relationship is

$S(f) = \sum_{k=-N+1}^{N-1} r(k)\, e^{-j2\pi f k},$   (1.105)

which is the discrete Fourier transform (DFT) of the autocorrelation of the process, r(k).
Thus, the autocorrelation is the time domain description of the second order statistics, and
the power spectral density, S(f), is the frequency domain representation. This power
spectral density can be modified by discrete time filters.
Discrete time filters may be classified as autoregressive (AR), moving average
(MA), or a combination of the two (ARMA). Examples of these filter structures and the
z-transforms of each of their impulse responses are shown in Figure 1.26. It is theoreti-
cally possible to create any arbitrary output stochastic process from an input white noise
Gaussian process using a filter of sufficiently high (possibly infinite) order.

Referring again to the three filter structures in Figure 1.26, it is possible to create
any arbitrary transfer function H(z) with any one of the three structures. However, the or-
ders of the realizations will be very different for one structure as compared to another.
For instance, an infinite order MA filter may be required to duplicate an Mth order AR
filter.

FIGURE 1.26 AR, MA, and ARMA filter structures.

One of the most basic theorems of adaptive and optimal filter theory is the Wold
decomposition. This theorem states that any real-world process can be decomposed into a
deterministic component (such as a sum of sine waves at specified amplitudes, phases,
and frequencies) and a noise process. In addition, the theorem states that the noise
process can be modeled as the output of a linear filter excited at its input by a white noise
signal.
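As an illustration of this modeling idea, the following C sketch generates an order-2 AR process by driving an all-pole difference equation with white noise. The coefficients a1 and a2 are assumed for the example, and gauss() is a hypothetical zero-mean, unit-variance noise source (not defined here):

/* Generate n samples of an AR(2) process:
   y(n) = w(n) - a1*y(n-1) - a2*y(n-2), with w(n) white noise. */
double gauss(void);   /* assumed white Gaussian noise generator */

void ar2_process(double *y, int n, double a1, double a2)
{
    double y1 = 0.0, y2 = 0.0;          /* y(n-1) and y(n-2) */
    for (int i = 0; i < n; i++) {
        y[i] = gauss() - a1*y1 - a2*y2; /* all-pole (AR) difference equation */
        y2 = y1;
        y1 = y[i];
    }
}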
1.7 ADAPTIVE FILTERS AND SYSTEMS
The problem of determining the optimum linear filter was solved by Norbert Wiener and
others. The solution is referred to as the Wiener filter and is discussed in section 1.7.1.
Adaptive filters and adaptive systems attempt to find an optimum set of filter parameters
(often by approximating the Wiener optimum filter) based on the time-varying input and
output signals. In this section, adaptive filters and their application in closed-loop adap-
tive systems are discussed briefly. Closed-loop adaptive systems are distinguished from
open-loop systems by the fact that in a closed-loop system the adaptive processor is con-
trolled based on information obtained from the input signal and the output signal of the
processor. Figure 1.27 illustrates a basic adaptive system consisting of a processor that is
controlled by an adaptive algorithm, which is in turn controlled by a performance calcula-
tion algorithm that has direct knowledge of the input and output signals.

Closed-loop adaptive systems have the advantage that the performance calculation
algorithm can continuously monitor the input signal (x) and the output signal (y) and de-
termine if the performance of the system is within acceptable limits. However, because
several feedback loops may exist in this adaptive structure, the automatic optimization al-
gorithm may be difficult to design, and the system may become unstable or may result in
nonunique and/or nonoptimum solutions. In other situations, the adaptation process may
not converge, leading to a system with grossly poor performance. In spite of these possi-
ble drawbacks, closed-loop adaptive systems are widely used in communications, digital
storage systems, radar, sonar, and biomedical systems.
The general adaptive system shown in Figure 1.27(a) can be applied in several
ways. The most common application is prediction, where the desired signal (d) is the ap-
plication-provided input signal and a delayed version of the input signal is provided to the
input of the adaptive processor (x) as shown in Figure 1.27(b). The adaptive processor
must then try to predict the current input signal in order to reduce the error signal (e) to-
ward a mean squared value of zero. Prediction is often used in signal encoding (for exam-
ple, speech compression), because if the next values of a signal can be accurately pre-
dicted, then these samples need not be transmitted or stored. Prediction can also be used
to reduce noise or interference and therefore enhance the signal quality if the adaptive
processor is designed to only predict the signal and ignore random noise elements or
known interference patterns.

As shown in Figure 1.27(c), another application of adaptive systems is system
modeling of an unknown or difficult-to-characterize system. The desired signal (d) is the
unknown system's output, and the input to the unknown system and the adaptive proces-
sor (x) is a broadband test signal (perhaps white Gaussian noise).

FIGURE 1.27 (a) Closed-loop adaptive system; (b) prediction; (c) system modeling.

After adaptation, the
unknown system is modeled by the final transfer function of the adaptive processor. By
using an AR, MA, or ARMA adaptive processor, different system models can be ob-
tained. The magnitude of the error (e) can be used to judge the relative success of each
model.
1.7.1 Wiener Filter Theory

The problem of determining the optimum linear filter given the structure shown in Figure
1.28 was solved by Norbert Wiener and others. The solution is referred to as the Wiener
filter. The statement of the problem is as follows:

Determine a set of coefficients, $w_k$, that minimize the mean of the squared error of
the filtered output as compared to some desired output. The error is written

$e(n) = d(n) - \sum_{k=1}^{M} w_k^{*}\, u(n - k + 1),$   (1.106)

or in vector form

$e(n) = d(n) - \mathbf{w}^{H} \mathbf{u}(n).$   (1.107)

The mean squared error is a function of the tap weight vector w chosen and is written

$J(\mathbf{w}) = E\{e(n)\, e^{*}(n)\}.$   (1.108)
FIGURE 1.28 Wiener filter problem.
Substituting in the expression for e(n) gives

$J(\mathbf{w}) = E\{d(n) d^{*}(n) - d(n)\mathbf{u}^{H}(n)\mathbf{w} - \mathbf{w}^{H}\mathbf{u}(n) d^{*}(n) + \mathbf{w}^{H}\mathbf{u}(n)\mathbf{u}^{H}(n)\mathbf{w}\}$   (1.109)

$J(\mathbf{w}) = \mathrm{var}\{d\} - \mathbf{p}^{H}\mathbf{w} - \mathbf{w}^{H}\mathbf{p} + \mathbf{w}^{H}\mathbf{R}\mathbf{w},$   (1.110)

where $\mathbf{p} = E\{\mathbf{u}(n) d^{*}(n)\}$, the vector that is the cross correlation between
the desired signal and each element of the input vector.
In order to minimize J(w) with respect to w, the tap weight vector, one must set the de-
rivative of J(w) with respect to w equal to zero. This will give an equation which, when
solved for w, gives $\mathbf{w}_0$, the optimum value of w. Setting the total derivative equal to zero gives

$-2\mathbf{p} + 2\mathbf{R}\mathbf{w}_0 = 0$   (1.111)

or

$\mathbf{R}\mathbf{w}_0 = \mathbf{p}.$   (1.112)

If the matrix R is invertible (nonsingular), then $\mathbf{w}_0$ can be solved as

$\mathbf{w}_0 = \mathbf{R}^{-1}\mathbf{p}.$   (1.113)

So the optimum tap weight vector depends on the autocorrelation of the input
process and the cross correlation between the input process and the desired output.
Equation (1.113) is called the normal equation because a filter derived from this equation
will produce an error that is orthogonal (or normal) to each element of the input vector.
This can be written

$E\{\mathbf{u}(n)\, e_0^{*}(n)\} = 0.$   (1.114)
It is helpful at this point to consider what must be known to solve the Wiener filter
problem:

(1) The M × M autocorrelation matrix of u(n), the input vector
(2) The cross correlation vector between u(n) and d(n), the desired response.

It is clear that knowledge of any individual u(n) will not be sufficient to calculate
these statistics. One must take the ensemble average, E{ }, to form both the autocorrela-
tion and the cross correlation. In practice, a model is developed for the input process and
from this model the second order statistics are derived.
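For a concrete (hypothetical) illustration, a two-tap real Wiener filter can be solved in closed form once the model supplies r(0), r(1) and the cross correlations p(0), p(1); this sketch simply writes out $\mathbf{w}_0 = \mathbf{R}^{-1}\mathbf{p}$ for the 2 × 2 Toeplitz case:

/* Solve R w = p for a two-tap real Wiener filter, where
   R = [r0 r1; r1 r0] and p = [p0; p1]. */
void wiener2(double r0, double r1, double p0, double p1, double w[2])
{
    double det = r0*r0 - r1*r1;          /* determinant of R */
    w[0] = ( r0*p0 - r1*p1) / det;       /* first row of R^{-1} p */
    w[1] = (-r1*p0 + r0*p1) / det;       /* second row of R^{-1} p */
}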
A legitimate question at this point is: What is d(n)? It depends on the problem. One ex-
ample of the use of Wiener filter theory is in linear predictive filtering. In this case, the de-
sired signal is the next value of u(n), the input. The actual u(n) is always available one sam-
ple after the prediction is made, and this gives the ideal check on the quality of the prediction.
1.7.2 LMS Algorithms

The LMS algorithm is the simplest and most used adaptive algorithm in use today. In this
brief section, the LMS algorithm as it is applied to the adaptation of time-varying FIR fil-
ters (MA systems) and IIR filters (adaptive recursive filters or ARMA systems) is de-
scribed. A detailed derivation, justification, and convergence properties can be found in
the references.

For the adaptive FIR system the transfer function is described by

$y(n) = \sum_{q=0}^{Q-1} b_q(k)\, x(n-q),$   (1.115)

where $b_q(k)$ indicates the time-varying coefficients of the filter. With an FIR filter the
mean squared error performance surface in the multidimensional space of the filter coef-
ficients is a quadratic function and has a single minimum mean squared error (MMSE).
The coefficient values at the optimal solution are called the MMSE solution. The goal of
the adaptive process is to adjust the filter coefficients in such a way that they move from
their current position toward the MMSE solution. If the input signal changes with time,
the adaptive system must continually adjust the coefficients to follow the MMSE solu-
tion. In practice, the MMSE solution is often never reached.

The LMS algorithm updates the filter coefficients based on the method of steepest
descent. This can be described in vector notation as follows:

$\mathbf{B}_{k+1} = \mathbf{B}_k - \mu \nabla_k,$   (1.116)

where $\mathbf{B}_k$ is the coefficient column vector, μ is a parameter that controls the rate of con-
vergence, and the gradient is approximated as

$\nabla_k = -2 e_k \mathbf{X}_k,$   (1.117)

where $\mathbf{X}_k$ is the input signal column vector and $e_k$ is the error signal as shown in Figure
1.27. Thus, the basic LMS algorithm can be written as

$\mathbf{B}_{k+1} = \mathbf{B}_k + 2\mu e_k \mathbf{X}_k.$   (1.118)

The selection of the convergence parameter must be done carefully, because if it is
too small the coefficient vector will adapt very slowly and may not react to changes in the
input signal. If the convergence parameter is too large, the system will adapt to noise in
the signal and may never converge to the MMSE solution.

For the adaptive IIR system the transfer function is described by

$y(n) = \sum_{q=0}^{Q-1} b_q(k)\, x(n-q) - \sum_{p=1}^{P-1} a_p(k)\, y(n-p),$   (1.119)

where $b_q(k)$ and $a_p(k)$ indicate the time-varying coefficients of the filter. With an IIR filter,
the mean squared error performance surface in the multidimensional space of the filter
coefficients is not a quadratic function and can have multiple minimums that may cause
the adaptive algorithm to never reach the MMSE solution. Because the IIR system has
poles, the system can become unstable if the poles ever move outside the unit circle dur-
ing the adaptive process. These two potential problems are serious disadvantages of adap-
tive recursive filters that limit their application and complexity. For this reason, most ap-
plications are limited to a small number of poles. The LMS algorithm can again be used
to update the filter coefficients based on the method of steepest descent. This can be de-
scribed in vector notation as follows:

$\mathbf{W}_{k+1} = \mathbf{W}_k - \mathbf{M} \nabla_k,$   (1.120)

where $\mathbf{W}_k$ is the coefficient column vector containing the a and b coefficients, and $\mathbf{M}$ is a di-
agonal matrix containing convergence parameters for the a coefficients and $\nu_0$ through
$\nu_{Q-1}$ that control the rate of convergence of the b coefficients. In this case, the gradient
is approximated as

$\nabla_k = -2 e_k\, [\alpha_0 \ldots \alpha_{Q-1}\;\; \beta_1 \ldots \beta_{P-1}]^{T},$   (1.121)

where $e_k$ is the error signal as shown in Figure 1.27, and

$\alpha_n(k) = x(k-n) + \sum_{q} b_q(k)\, \alpha_n(k-q)$   (1.122)

$\beta_n(k) = y(k-n) + \sum_{p} a_p(k)\, \beta_n(k-p).$   (1.123)

The selection of the convergence parameters must be done carefully because if they
are too small the coefficient vector will adapt very slowly and may not react to changes in
the input signal. If the convergence parameters are too large, the system will adapt to
noise in the signal or may become unstable. The proposed new location of the poles
should also be tested before each update to determine if an unstable adaptive filter is
about to be used. If an unstable pole location is found, the update should not take place,
and the next update value may lead to a better solution.

1.8 REFERENCES

BRIGHAM, E. (1974). The Fast Fourier Transform. Englewood Cliffs, NJ: Prentice Hall.
CLARKSON, P. (1993). Optimal and Adaptive Signal Processing. FL: CRC Press.
ELLIOTT, D.F. (Ed.). (1987). Handbook of Digital Signal Processing. San Diego, CA: Academic Press.
EMBREE, P. and KIMBLE, B. (1991). C Language Algorithms for Digital Signal Processing. Englewood Cliffs, NJ: Prentice Hall.
HARRIS, F. (1978). On the Use of Windows for Harmonic Analysis with the Discrete Fourier Transform. Proceedings of the IEEE, 66, (1), 51-83.
HAYKIN, S. (1986). Adaptive Filter Theory. Englewood Cliffs, NJ: Prentice Hall.
MCCLELLAN, J., PARKS, T. and RABINER, L.R. (1973). A Computer Program for Designing Optimum FIR Linear Phase Digital Filters. IEEE Transactions on Audio and Electroacoustics, AU-21, (6), 505-526.
MOLER, C., LITTLE, J. and BANGERT, S. (1987). PC-MATLAB User's Guide. Sherbourne, MA: The Math Works.
OPPENHEIM, A. and SCHAFER, R. (1975). Digital Signal Processing. Englewood Cliffs, NJ: Prentice Hall.
OPPENHEIM, A. and SCHAFER, R. (1989). Discrete-time Signal Processing. Englewood Cliffs, NJ: Prentice Hall.
PAPOULIS, A. (1965). Probability, Random Variables and Stochastic Processes. New York: McGraw-Hill.
RABINER, L. and GOLD, B. (1975). Theory and Application of Digital Signal Processing. Englewood Cliffs, NJ: Prentice Hall.
STEARNS, S. and DAVID, R. (1988). Signal Processing Algorithms. Englewood Cliffs, NJ: Prentice Hall.
STEARNS, S. and DAVID, R. (1993). Signal Processing Algorithms in FORTRAN and C. Englewood Cliffs, NJ: Prentice Hall.
VAIDYANATHAN, P. (1993). Multirate Systems and Filter Banks. Englewood Cliffs, NJ: Prentice Hall.
WIDROW, B. and STEARNS, S. (1985). Adaptive Signal Processing. Englewood Cliffs, NJ: Prentice Hall.
CHAPTER 2

C PROGRAMMING FUNDAMENTALS

The purpose of this chapter is to provide the programmer with a complete overview of
the fundamentals of the C programming language that are important in DSP applications.
In particular, text manipulation, bitfields, enumerated data types, and unions are not dis-
cussed, because they have limited utility in the majority of DSP programs. Readers with
C programming experience may wish to skip the bulk of this chapter, with the possible
exception of the more advanced concepts related to pointers and structures presented in
sections 2.7 and 2.8. The proper use of pointers and data structures in C can make a DSP
program easier to write and much easier for others to understand. Example DSP programs
in this chapter and those which follow will clarify the importance of pointers and data
structures in DSP programs.
2.1 THE ELEMENTS OF REAL-TIME DSP PROGRAMMING

The purpose of a programming language is to provide a tool so that a programmer can
easily solve a problem involving the manipulation of some type of information. Based on
this definition, the purpose of a DSP program is to manipulate a signal (a special kind of
information) in such a way that the program solves a signal-processing problem. To do
this, a DSP programming language must have five basic elements:

(1) A method of organizing different types of data (variables and data types)
(2) A method of describing the operations to be done (operators)
(3) A method of controlling the operations performed based on the results of operations (program control)
(4) A method of organizing the data and the operations so that a sequence of program steps can be executed from anywhere in the program (functions and data structures)
(5) A method to move data back and forth between the outside world and the program (input/output)

These five elements are required for efficient programming of DSP algorithms. Their im-
plementation in C is described in the remainder of this chapter.
As a preview of the C programming language, a simple real-time DSP program is
shown in Listing 2.1. It illustrates each of the five elements of DSP programming. The
listing is divided into six sections as indicated by the comments in the program. This sim-
ple DSP program gets a series of numbers from an input source such as an A/D converter
(the function getinput() is not shown, since it would be hardware specific) and deter-
mines the average and variance of the numbers which were sampled. In signal-processing
terms, the output of the program is the DC level and total AC power of the signal.

The first line of Listing 2.1, main(), declares that the program called main, which
has no arguments, will be defined after the next left brace ({ on the next line). The main
program (called main because it is executed first and is responsible for the main control
of the program) is declared in the same way as the functions. Between the left brace on
the second line and the right brace half way down the page (before the line that starts
float average ...) are the statements that form the main program. As shown in this
example, all statements in C end in a semicolon (;) and may be placed anywhere on the
input line. In fact, all spaces and carriage control characters are ignored by most C com-
pilers. Listing 2.1 is shown in a format intended to make it easier to follow and modify.

The third and fourth lines of Listing 2.1 are statements declaring the functions
(average, variance, sqrt) that will be used in the rest of the main program (the
function sqrt() is defined in the standard C library as discussed in the Appendix). This
first section of Listing 2.1 relates to program organization (element four of the above
list). The beginning of each section of the program is indicated by comments in the pro-
gram source code (i.e., /* section 1 */). Most C compilers allow any sequence of
characters (including multiple lines and, in some cases, nested comments) between the
/* and */ delimiters.
Section two of the program declares the variables to be used. Some variables are
declared as single floating-point numbers (such as ave and var); some variables are de-
clared as single integers (such as i and count); and some variables are ar-
rays (such as signal[100]). This program section relates to element one, data organi-
zation.

Section three reads 100 floating-point values into an array called signal using a for
loop (similar to a DO loop in FORTRAN). This loop is inside an infinite while loop
that is common in real-time programs. For every 100 samples, the program will display
the results and then get another 100 samples. Thus, the results are displayed in real-time.
This section relates to element five (input/output) and element three (program control).

Section four of the example program uses the functions average and variance
main()                                     /* section 1 */
{
    float average(),variance(),sqrt();     /* declare functions */

    float signal[100],ave,var;             /* section 2 */
    int count,i;                           /* declare variables */

    while(1) {
        for(count = 0 ; count < 100 ; count++) {    /* section 3 */
            signal[count] = getinput();             /* read input signal */
        }

        ave = average(signal,count);                /* section 4 */
        var = variance(signal,count);               /* calculate results */

        printf("\n\nAverage = %f",ave);             /* section 5 */
        printf("  Variance = %f",var);              /* print results */
        printf("  Std Dev = %f",sqrt(var));         /* print standard deviation */
    }
}

float average(float array[],int size)      /* section 6 */
{                                          /* function calculates average */
    int i;
    float sum = 0.0;                       /* initialize and declare sum */
    for(i = 0 ; i < size ; i++)
        sum = sum + array[i];              /* calculate sum */
    return(sum/size);                      /* return average */
}

float variance(float array[],int size)     /* function calculates variance */
{
    int i;                                 /* declare local variables */
    float ave;
    float sum = 0.0;                       /* initialize sum of signal */
    float sum2 = 0.0;                      /* sum of signal squared */
    for(i = 0 ; i < size ; i++) {
        sum = sum + array[i];              /* calculate both sums */
        sum2 = sum2 + array[i]*array[i];
    }
    ave = sum/size;                        /* calculate average */
    return((sum2 - sum*ave)/(size-1));     /* return variance */
}

Listing 2.1 Example C program that calculates the average and variance of
a sequence of numbers.
to calculate the statistics to be printed. The variables ave and var are used to store the
results, and the library function printf is used to display the results. This part of the
program relates to element four (functions and data structures) because the operations de-
fined in functions average and variance are executed and stored.

Section five uses the library function printf to display the results ave, var,
and also calls the function sqrt in order to display the standard deviation. This part
of the program relates to element four (functions) and element five (input/output), be-
cause the operations defined in function sqrt are executed and the results are also dis-
played.

The two functions, average and variance, are defined in the remaining part of
Listing 2.1. This last section relates primarily to element two (operators), since the de-
tailed operation of each function is defined in the same way that the main program was
defined. The function and argument types are defined and the local variables to be used in
each function are declared. The operations required by each function are then defined, fol-
lowed by a return statement that passes the result back to the main program.
2.2 VARIABLES AND DATA TYPES

All programs work by manipulating some kind of information. A variable in C is defined
by declaring that a sequence of characters (the variable identifier or name) is to be
treated as a particular predefined type of data. An identifier may be any sequence of char-
acters (usually with some length restrictions) that obeys the following three rules:

(1) All identifiers start with a letter or an underscore (_).
(2) The rest of the identifier can consist of letters, underscores, and/or digits.
(3) The rest of the identifier does not match any of the C keywords. (Check compiler
implementation for a list of these.)

In particular, C is case sensitive, making the variables Average, AVERAGE, and
AVeRaGe all different.

The C language supports several different data types that represent integers
(declared int), floating-point numbers (declared float or double), and text data (de-
clared char). Also, arrays of each variable type and pointers of each type may be de-
clared. The first two types of numbers will be covered first, followed by a brief introduc-
tion to arrays (covered in more detail with pointers in section 2.7). The special treatment
of text using character arrays and strings will be discussed in section 2.2.3.
2.2.1 Types of Numbers

A C program must declare the variable before it is used in the program. There are several
types of numbers used depending on the format in which the numbers are stored (float-
ing-point format or integer format) and the accuracy of the numbers (single-precision ver-
sus double-precision floating-point, for example). The following example program illus-
trates the use of five different types of numbers:
main()
{
    int i;       /* size dependent on implementation */
    short j;     /* 16 bit integer */
    long k;      /* 32 bit integer */
    float a;     /* single precision floating-point */
    double b;    /* double precision floating-point */

    k = 72000;
    j = k;
    i = k;
    b = 0.1;
    a = b;
    printf("\n%ld %d %d\n%20.15f\n%20.15f",k,j,i,b,a);
}
Three types of integer numbers (int, short int, and long int) and two types of
floating-point numbers (float and double) are illustrated in this example. The actual
sizes (in terms of the number of bytes used to store the variable) of these five types de-
pend upon the implementation; all that is guaranteed is that a short int variable will
not be larger than a long int and a double will be twice as large as a float. The
size of a variable declared as just int depends on the compiler implementation. It is nor-
mally the size most conveniently manipulated by the target computer, thereby making
programs using ints the most efficient on a particular machine. However, if the size of
the integer representation is important in a program (as it often is), then declaring vari-
ables as int could make the program behave differently on different machines. For ex-
ample, on a 16-bit machine, the above program would produce the following results:

72000 6464 6464
 0.100000000000000
 0.100000001490116

But on a 32-bit machine (using 32-bit ints), the output would be as follows:

72000 6464 72000
 0.100000000000000
 0.100000001490116

Note that in both cases the long and short variables, k and j (the first two numbers dis-
played), are the same, while the third number, indicating the int, differs. In both cases,
the value 6464 is obtained by masking the lower 16 bits of the 32-bit k value. Also, in both
cases, the floating-point representation of 0.1 with 32 bits (float) is accurate to eight dec-
imal places (seven places is typical). With 64 bits it is accurate to at least 15 places.

Thus, to make a program truly portable, the program should contain only short
int and long int declarations (these may be abbreviated short and long). In addi-
tion to the five types illustrated above, the three ints can be declared as unsigned by
preceding the declaration with unsigned. Also, as will be discussed in more detail in the
next section concerning text data, a variable may be declared to be only one byte long by
declaring it a char (signed or unsigned). The following table gives the typical sizes
and ranges of the different variable types for a 32-bit machine (such as a VAX) and a 16-
bit machine (such as the IBM PC).
Variable          16-bit Machine                     32-bit Machine
Declaration       Size (bits)  Range                 Size (bits)  Range

char              8            -128 to 127           8            -128 to 127
unsigned char     8            0 to 255              8            0 to 255
int               16           -32768 to 32767       32           -2.1e9 to 2.1e9
unsigned int      16           0 to 65535            32           0 to 4.3e9
short             16           -32768 to 32767       16           -32768 to 32767
unsigned short    16           0 to 65535            16           0 to 65535
long              32           -2.1e9 to 2.1e9       32           -2.1e9 to 2.1e9
unsigned long     32           0 to 4.3e9            32           0 to 4.3e9
float             32           about +/- 1e38        32           about +/- 1e38
double            64           about +/- 1e308       64           about +/- 1e308
2.2.2 Arrays

Almost all high-level languages allow the definition of indexed lists of a given data type,
commonly referred to as arrays. In C, all data types can be declared as an array simply by
placing the number of elements to be assigned to the array in brackets after the array
name. Multidimensional arrays can be defined simply by appending more brackets con-
taining the array size in each dimension. Any N-dimensional array is defined as follows:

type name[size1][size2] ... [sizeN];

For example, each of the following statements is a valid array definition:

unsigned int list[10];
double input[5];
short int x[2000];
char input_buffer[20];
unsigned char image[256][256];
int matrix[4][3][2];
Note that the array definition unsigned char image[256][256] could define an
8-bit, 256 by 256 image plane where a grey scale image is represented by values from 0
to 255. The last definition defines a three-dimensional matrix in a similar fashion.

>>   arithmetic shift right (number of bits is operand)
The unary bitwise NOT operator, which inverts all the bits in the operand, is imple-
mented with the ~ symbol. For example, if i is declared as an unsigned int, then
i = ~0; sets i to the maximum integer value for an unsigned int.
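A small runnable check of this (illustrative; the printed value assumes 32-bit unsigned ints):

#include <stdio.h>

int main(void)
{
    unsigned int i = ~0;   /* all bits set */
    printf("%u\n", i);     /* prints 4294967295 on a 32-bit unsigned int */
    return 0;
}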
2.3.3 Combined Operators

C allows operators to be combined with the assignment operator (=) so that almost any
statement of the form

<variable> = <variable> <operator> <expression>;

can be replaced with

<variable> <operator>= <expression>;

where <variable> represents the same variable name in all cases. For example, the
following pairs of expressions involving x and y perform the same function:

x = x + y;    x += y;
x = x - y;    x -= y;
x = x * y;    x *= y;
x = x / y;    x /= y;
x = x % y;    x %= y;

In many cases, the left-hand column of statements will result in a more readable and eas-
ier to understand program. For this reason, use of combined operators is often avoided.
Unfortunately, some compiler implementations may generate more efficient code if the
combined operator is used.
2.3.4 Logical Operators

Like all C expressions, an expression involving a logical operator also has a value. A log-
ical operator is any operator that gives a result of true or false. This could be a compari-
son between two values, or the result of a series of ANDs and ORs. If the result of a logi-
cal operation is true, it has a nonzero value; if it is false, it has the value 0. Loops and if
statements (covered in section 2.4) check the result of logical operations and change pro-
gram flow accordingly. The nine logical operators are as follows:

<     less than
<=    less than or equal to
==    equal to
>=    greater than or equal to
>     greater than
!=    not equal to
&&    logical AND
||    logical OR
!     logical NOT (unary operator)

Note that == can easily be confused with the assignment operator (=) and will result in a
valid expression because the assignment also has a value, which is then interpreted as
true or false. Also, && and || should not be confused with their bitwise counterparts (&
and |), as this may result in hard-to-find logic problems, because the bitwise results may
not give true or false when expected.
2.3.5 Operator Precedence and Type Conversion

Like all computer languages, C has an operator precedence that defines which operators in
an expression are evaluated first. If this order is not desired, then parentheses can be used
to change the order. Thus, things in parentheses are evaluated first and items of equal
precedence are evaluated from left to right. The operators contained in the parentheses or
expression are evaluated in the following order (listed by decreasing precedence):

++ --        increment, decrement
-            unary minus
* / %        multiplication, division, modulus
+ -          addition, subtraction
<< >>        shift left, shift right
< <= >= >    relational with less than or greater than
== !=        equal, not equal
&            bitwise AND
^            bitwise exclusive OR
|            bitwise OR
&&           logical AND
||           logical OR

Statements and expressions using the operators just described should normally use vari-
ables and constants of the same type. If, however, you mix types, C doesn't stop dead
(like Pascal) or produce a strange unexpected result (like FORTRAN). Instead, C uses a
set of rules to make type conversions automatically. The two basic rules are:

(1) If an operation involves two types, the value with a lower rank is converted to the
type of higher rank. This process is called promotion and the ranking from highest
to lowest type is double, float, long, int, short, and char. Unsigned of each of the
types outranks the individual signed type.

(2) In an assignment statement, the final result is converted to the type of the variable
that is being assigned. This may result in promotion or demotion, where the value is
truncated to a lower ranking type.

Usually these rules work quite well, but sometimes the conversions must be stated
explicitly in order to demand that a conversion be done in a certain way. This is accom-
plished by type casting the quantity by placing the name of the desired type in parenthe-
ses before the variable or expression. Thus, if i is an int, then the statement
i = 10*(1.55+1.67); would set i to 32 (the truncation of 32.2), while the statement
i = 10*((int)1.55+1.67); would set i to 26 (the truncation of 26.7, since
(int)1.55 is truncated to 1).
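The two statements can be checked directly with a small illustrative program:

#include <stdio.h>

int main(void)
{
    int i;
    i = 10*(1.55 + 1.67);        /* 10 * 3.22 = 32.2, truncated to 32 */
    printf("%d\n", i);
    i = 10*((int)1.55 + 1.67);   /* (int)1.55 is 1; 10 * 2.67 = 26.7 -> 26 */
    printf("%d\n", i);
    return 0;
}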
2.4 PROGRAM CONTROL

The large set of operators in C allows a great deal of programming flexibility for DSP ap-
plications. Programs that must perform fast binary or logical operations can do so without
using special functions to do the bitwise operations. C also has a complete set of program
control features that allow conditional execution or repetition of statements based on the
result of an expression. Proper use of these control structures is discussed in section
2.11.2, where structured programming techniques are considered.

2.4.1 Conditional Execution: if-else

In C, as in many languages, the if statement is used to conditionally execute a series of
statements based on the result of an expression. The if statement has the following
generic format:

if (value)
    statement1;
else
    statement2;

where value is any expression that results in (or can be converted to) an integer value.
If value is nonzero (indicating a true result), then statement1 is executed; otherwise,
statement2 is executed. Note that the result of an expression used for value need
not be the result of a logical operation; all that is required is that the expression results in
a zero value when statement2 should be executed instead of statement1. Also, the