
Filter Design

Jose Krause Perin

Stanford University

July 30, 2018

Last lecture

- Two's complement is a fixed-point representation that represents fractions as integers
- There's an inherent trade-off between roundoff noise and overflow/clipping
- FIR systems remain stable after coefficient quantization
- Linear-phase FIR systems remain linear phase after coefficient quantization, since the impulse response remains symmetric
- Coefficient quantization may lead to instability in IIR systems, as poles may move outside the unit circle
- Like quantization noise, roundoff noise is modeled as additive, uniformly distributed white noise that is independent of the input signal (the linear noise model)
- Roundoff noise is minimized by performing quantization only after accumulation, but this requires (2B + 1)-bit adders
- In FIR structures the equivalent roundoff noise at the output is white
- IIR structures shape the roundoff noise
- The least noisy IIR structure depends on the system
- Cascade and parallel forms are used to mitigate total roundoff noise
Practice and theory

In practice:

xc(t) → [ADC] → x[n] → [Digital Signal Processor] → y[n] → [DAC] → yc(t)

DSP theory:

xc(t), Xc(jΩ) → [C-to-D (Sampling)] → x[n], X(ejω) → [LTI System h[n] ↔ H(ejω)] → y[n], Y(ejω) → [D-to-C (Reconstruction)] → yr(t), Yr(jΩ)

The overall continuous-time response of the chain is Heq(jΩ).
Digital filter design

We'll cover two different design problems.

1. Digital filter design from an analog filter
Given a continuous-time LTI filter defined by heq(t) ⟷ Heq(s), how do we obtain the corresponding discrete-time filter h[n] ⟷ H(z) such that

H(e^{jΩT}) ≈ Heq(jΩ), |Ω| < Ωs/2

Design techniques:
- Impulse invariance
- Bilinear transformation

Design by impulse invariance can result in either FIR or IIR filters, whereas the bilinear transformation generally results in IIR filters.
Digital filter design

2. Digital FIR filter design from specifications
How do we find an FIR H(z) such that H(ejω) best approximates a desired frequency response Hd(ejω)? This is essentially a polynomial curve-fitting problem.

[Figure: tolerance scheme for Hd(ejω): passband ripple between 1 − δ1 and 1 + δ1 up to ωp, a transition band from ωp to ωs, and stopband ripple below δ2 from ωs to π.]

Design techniques:
- Window method
- Optimal filter design
  - Parks-McClellan algorithm
  - Least-squares algorithm
Outline

Design from Analog Filter
- Impulse Invariance
- Bilinear Transformation
- Classic filters

Design from Specifications
- Window method
- Optimal FIR filter design
  - Parks-McClellan Algorithm
  - Least-squares Algorithm
- Examples
Digital processing of analog signals

[Diagram: xc(t) → C-to-D (Sampling) → x[n] → LTI System h[n] ↔ H(ejω) → y[n] → D-to-C (Reconstruction) → yr(t); overall continuous-time response Heq(jΩ).]

As long as there is no aliasing and the reconstruction filter is the ideal lowpass filter, these equalities hold:

Heq(jΩ) = H(e^{jΩT}) for |Ω| < π/T, and Heq(jΩ) = 0 for |Ω| > π/T   (from DSP to analog)

H(e^{jω}) = Heq(jω/T), |ω| < π   (from analog to DSP)

In practice, these are good approximations.
Impulse invariance

Question: how do we design h[n] ⟷ H(z) if we know heq(t) ⟷ Heq(s)?

[Diagram: xc(t) → C-to-D (Sampling) → x[n] → LTI System h[n] ↔ H(ejω) → y[n] → D-to-C (Reconstruction) → yr(t); overall response Heq(jΩ).]

Design h[n] by sampling heq(t) with period T:

h[n] = T heq(nT)   (impulse invariance)

The scaling factor T compensates for the 1/T attenuation in the frequency domain due to sampling.

The resulting h[n] depends on the sampling period T.
Impulse invariance example: lowpass Butterworth filter

Butterworth filters are maximally flat in the passband and are monotonic overall. The downside of Butterworth filters is their relatively slow roll-off.

For this example, consider the following 6th-order continuous-time lowpass Butterworth filter:

Heq(s) = 0.12093 / [(s² + 0.3640s + 0.4945)(s² + 0.9945s + 0.4945)(s² + 1.3585s + 0.4945)]
Impulse invariance example: lowpass Butterworth filter

To design an FIR filter by impulse invariance we must:
1. Obtain the continuous-time impulse response heq(t) ⟷ Heq(s) (impulse in Matlab).
2. Sample and scale heq(t) with period T and keep only the first M + 1 samples:

h[n] = T heq(nT) for n = 0, …, M, and h[n] = 0 otherwise   (for causal heq(t))

The samples h[n] are the FIR filter coefficients. M is typically chosen to satisfy some energy criterion; for instance, the samples must contain 95% of the signal energy.

[Figure: heq(t) sampled with period T, keeping the first 25 samples, t = 0, T, …, 24T.]
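The two steps above can be sketched in Python, with scipy.signal.impulse standing in for Matlab's impulse. The first-order prototype Heq(s) = 1/(s + 1), the period T, and the order M below are hypothetical choices for illustration, not the Butterworth filter from the slide:

```python
import numpy as np
from scipy import signal

# Hypothetical analog prototype: Heq(s) = 1/(s + 1), so heq(t) = e^{-t}
b_s, a_s = [1.0], [1.0, 1.0]
T, M = 0.1, 80                        # sampling period and FIR order (illustrative)

# Steps 1-2: sample and scale the continuous-time impulse response
t = np.arange(M + 1) * T              # t = 0, T, ..., MT
_, heq = signal.impulse((b_s, a_s), T=t)
h = T * heq                           # h[n] = T heq(nT), zero beyond n = M

# Energy criterion check: fraction of the sampled-response energy kept by truncation
total = T**2 * np.sum(np.exp(-2 * T * np.arange(10000)))   # analytic reference
frac = np.sum(h**2) / total
```

Here M·T spans eight time constants of heq(t), so the truncated FIR keeps essentially all of the energy, comfortably satisfying a 95% criterion.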
Impulse invariance example: lowpass Butterworth filter

[Figure: magnitude (dB) and phase (rad) of Heq(jΩ) and of the 25-sample FIR H(ejω) over 0 ≤ ω ≤ π.]

Questions:
1. What would happen if we take fewer samples (smaller M)?
2. What would happen if we decrease the sampling period, e.g., T2 = 0.5T?
Impulse invariance example: lowpass Butterworth filter

- Designing FIR filters by impulse invariance is straightforward. Plus, FIR systems have the implementation advantages discussed in lectures 7 and 8.
- Problem: it may require prohibitively many samples to achieve good accuracy.
- IIR systems generally offer better accuracy while requiring fewer operations (coefficients).

To design an IIR filter by impulse invariance we must:
1. Invert the Laplace transform Heq(s) using partial fraction expansion to obtain heq(t) analytically (function residue in Matlab).
2. Sample heq(t): h[n] = T heq(nT).
3. Calculate the z-transform H(z) of h[n].
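A Python sketch of these three steps, with scipy.signal.residue standing in for Matlab's residue, and scipy.signal.invresz recombining the resulting z-domain partial fractions. The second-order Heq(s) = 1/((s + 1)(s + 2)) is a hypothetical example chosen because its impulse response heq(t) = e^{−t} − e^{−2t} is known in closed form:

```python
import numpy as np
from scipy import signal

T = 0.5
b_s, a_s = [1.0], [1.0, 3.0, 2.0]     # Heq(s) = 1/((s+1)(s+2)), hypothetical

# Step 1: partial fractions, Heq(s) = sum_i r_i/(s - p_i)
r, p, _ = signal.residue(b_s, a_s)

# Steps 2-3: h[n] = T sum_i r_i e^{p_i T n}, whose z-transform is
# H(z) = sum_i (T r_i)/(1 - e^{p_i T} z^{-1}); invresz recombines the terms
bz, az = signal.invresz(T * r, np.exp(p * T), [])
bz, az = np.real(bz), np.real(az)     # residues/poles come in conjugate pairs
```

Each analog pole p_i maps to a digital pole e^{p_i T}, which is why stable analog poles (left half-plane) give stable digital poles (inside the unit circle).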
Impulse invariance example: lowpass Butterworth filter

For the 6th-order Butterworth example:

H(z) = (0.2871 − 0.4466z^-1)/(1 − 1.2971z^-1 + 0.6949z^-2)
     + (−2.1428 + 1.1455z^-1)/(1 − 1.0691z^-1 + 0.3699z^-2)
     + (1.8557 − 0.6303z^-1)/(1 − 0.9972z^-1 + 0.2570z^-2)

[Figure: pole-zero diagram of H(z); all poles lie inside the unit circle.]
Impulse invariance example: lowpass Butterworth filter

[Figure: magnitude (dB) and phase (rad) of Heq(jΩ) and of the IIR H(ejω) over 0 ≤ ω ≤ π.]

- IIR systems achieve better accuracy while requiring fewer operations (coefficients) than FIR systems.
- As with FIR systems, if we change the sampling frequency the behavior of the filter changes.
Bilinear transformation

Another way to answer the question: how do we design h[n] ⟷ H(z) given heq(t) ⟷ Heq(s)?

The bilinear transformation maps the left half of the s-plane into the interior of the unit circle in the z-plane (and the jΩ axis onto the unit circle itself):

s = (2/T)·(1 − z^-1)/(1 + z^-1)   (bilinear transformation)

[Figure: the jΩ axis of the s-plane maps onto the unit circle of the z-plane; the left half-plane maps to its interior.]
Bilinear transformation

To design a digital filter from an analog filter using the bilinear transformation, we simply make the following change of variables:

H(z) = Heq(s) evaluated at s = (2/T)·(1 − z^-1)/(1 + z^-1)

The resulting H(z) is generally IIR.

The bilinear transformation method is easier and more systematic than the impulse invariance method.

In Matlab: [bz, az] = bilinear(bs, as, 1/T)
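In Python, scipy.signal.bilinear performs the same substitution, with fs playing the role of 1/T as in the Matlab call above. The first-order Heq(s) = 1/(s + 1) is a hypothetical example; the check at the end verifies the frequency-warping relation discussed on the next slide:

```python
import numpy as np
from scipy import signal

T = 0.5
b_s, a_s = [1.0], [1.0, 1.0]                 # Heq(s) = 1/(s + 1), hypothetical
bz, az = signal.bilinear(b_s, a_s, fs=1/T)   # s -> (2/T)(1 - z^-1)/(1 + z^-1)

# The digital response equals the analog one at the warped frequency
# Omega = (2/T) tan(omega/2), i.e. omega = 2 arctan(Omega T / 2)
w = np.linspace(0, 0.9 * np.pi, 64)
_, Hz = signal.freqz(bz, az, worN=w)
_, Hs = signal.freqs(b_s, a_s, worN=(2/T) * np.tan(w/2))
```

Because s = 0 maps exactly to z = 1, the DC gains agree exactly; everywhere else the two responses agree only after warping the frequency axis.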
Frequency warping

Evaluating z on the unit circle is equivalent to evaluating s on the imaginary axis jΩ:

jΩ = (2/T)·(1 − e^{-jω})/(1 + e^{-jω}) = j(2/T) tan(ω/2)

This results in the following relation:

ω = 2 arctan(ΩT/2)   (frequency warping)

Problem: with the bilinear transformation we no longer have the linear relation ω = ΩT. This is known as frequency warping.

[Figure: ω = 2 arctan(ΩT/2) plotted against ΩT; the curve is approximately linear near the origin and saturates at ±π.]
Bilinear transformation example: lowpass Butterworth filter

Revisiting the example of the 6th-order lowpass Butterworth filter, to obtain H(z) we simply make:

H(z) = Heq(s) evaluated at s = (2/T)·(1 − z^-1)/(1 + z^-1)

[Figure: pole-zero diagram of H(z), with a multiplicity-6 zero at z = −1 and six poles inside the unit circle.]
Bilinear transformation example: lowpass Butterworth filter

[Figure: magnitude (dB) and phase (rad) of Heq(jΩ) and of H(ejω) obtained by the bilinear transformation with T = 0.5 and T = 2.]

- Similarly to impulse invariance, the resulting frequency response depends on the sampling period T.
- Frequency warping leads to the disagreement between the continuous-time and discrete-time filters for ω > 0.3π.
Frequency pre-warping

Frequency pre-warping mitigates the distortion caused by frequency warping by scaling s so that H(e^{jΩpT}) = Heq(jΩp) (no distortion) at some specified frequency Ωp:

H(z) = Heq(s) evaluated at s = [Ωp/tan(ΩpT/2)]·(1 − z^-1)/(1 + z^-1)   (bilinear transformation with frequency pre-warping)

Ωp is chosen so that H(ejω) will preserve a particular characteristic of Heq(jΩ), e.g., Ωp is made equal to the 3-dB bandwidth.

In Matlab: [bz, az] = bilinear(bs, as, 1/T, Wp/(2*pi))
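scipy.signal.bilinear has no pre-warp argument, but the pre-warped substitution is just the plain bilinear transformation with an effective rate 2·fs = Ωp/tan(ΩpT/2), so it can be sketched as below. The first-order Heq(s) = 1/(s + 1) and the numbers T and Ωp are hypothetical:

```python
import numpy as np
from scipy import signal

T, Wp = 0.5, 2.0                      # sampling period and pre-warp frequency
b_s, a_s = [1.0], [1.0, 1.0]          # Heq(s) = 1/(s + 1), hypothetical

# s = [Wp/tan(Wp T/2)] (1 - z^-1)/(1 + z^-1)  ==  bilinear with 2*fs = Wp/tan(Wp T/2)
fs_eff = Wp / (2 * np.tan(Wp * T / 2))
bz, az = signal.bilinear(b_s, a_s, fs=fs_eff)

# No distortion at the specified frequency: H(e^{j Wp T}) = Heq(j Wp)
_, Hz = signal.freqz(bz, az, worN=[Wp * T])
_, Hs = signal.freqs(b_s, a_s, worN=[Wp])
```

At z = e^{jΩpT} the pre-warped substitution gives s = jΩp exactly, which is what the test verifies.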
Bilinear transformation example: lowpass Butterworth filter

Example of bilinear transformation with frequency pre-warping:
- Ωp = 0.6π for T = 2
- Ωp = 0.2π for T = 0.5

[Figure: magnitude (dB) and phase (rad) of Heq(jΩ) and of H(ejω) for the two pre-warped designs; each design matches Heq(jΩ) exactly at its Ωp.]
Common terminology

[Figure: tolerance scheme for Hd(ejω) with passband ripple between 1 − δ1 and 1 + δ1, a transition band between ωp and ωs, and stopband ripple δ2.]

Terminology:
- The filter order is equal to the largest power of z^-1 or z
- δ1: passband ripple
- δ2: stopband ripple (stopband attenuation)
- ωp: passband edge frequency
- ωs: stopband edge frequency
Classic filters

- Butterworth: monotonic in both the passband and the stopband.
  Matlab: butter(order, w3dB/pi)
- Chebyshev type I: equiripple in the passband, monotonic in the stopband.
  Matlab: cheby1(order, passband ripple, wp/pi)
- Chebyshev type II: equiripple in the stopband, monotonic in the passband.
  Matlab: cheby2(order, stopband attenuation, ws/pi)
- Elliptic: equiripple in both the passband and the stopband.
  Matlab: ellip(order, passband ripple, stopband attenuation, wp/pi)
- Bessel: maximally linear phase response (constant group delay).
  Matlab: besself (continuous time only)

In general (and in Matlab) these filters are first designed in continuous time as H(s), and then converted to discrete time as H(z) using the bilinear transformation with frequency pre-warping.
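The scipy.signal counterparts of these Matlab commands follow the same argument pattern, with band edges normalized so that 1 is the Nyquist frequency; like Matlab, they design in continuous time and apply the pre-warped bilinear transformation internally. A sketch using the specs from the comparison on the next slides (6th order, band edge π/2, 1 dB ripple, 30 dB attenuation):

```python
import numpy as np
from scipy import signal

order, wn = 6, 0.5        # 6th order, band edge pi/2 (normalized: 0.5)
rp, rs = 1.0, 30.0        # 1 dB passband ripple, 30 dB stopband attenuation

b_butter, a_butter = signal.butter(order, wn)        # wn = 3-dB frequency
b_cheb1,  a_cheb1  = signal.cheby1(order, rp, wn)    # wn = passband edge
b_cheb2,  a_cheb2  = signal.cheby2(order, rs, wn)    # wn = stopband edge
b_ellip,  a_ellip  = signal.ellip(order, rp, rs, wn) # wn = passband edge

# Butterworth is exactly -3 dB at the specified band edge
_, H = signal.freqz(b_butter, a_butter, worN=[wn * np.pi])
```

Because the pre-warping is exact at the band edge, the Butterworth gain there is exactly 1/√2, and the Chebyshev I passband magnitude stays within its 1-dB ripple band.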
Comparison of classic filters

- All are 6th-order filters designed to have a 3-dB bandwidth of ≈ π/2.
- Ripple was set to 1 dB in the passband.
- Stopband attenuation was set to 30 dB.

[Figure: magnitude responses (dB) of the Butterworth, Chebyshev I, Chebyshev II, Elliptic, and Bessel designs.]
Comparison of classic filters

- All are 6th-order filters designed to have a 3-dB bandwidth of ≈ π/2.
- Ripple was set to 1 dB in the passband and stopband.
- Stopband attenuation was set to 30 dB.

[Figure: phase responses (rad) of the Butterworth, Chebyshev I, Chebyshev II, Elliptic, and Bessel designs.]
From lowpass to highpass, bandpass, and bandstop

Outline

Design from Analog Filter
- Impulse Invariance
- Bilinear Transformation
- Classic filters

Design from Specifications
- Window method
- Optimal FIR filter design
  - Parks-McClellan Algorithm
  - Least-squares Algorithm
- Examples
Digital FIR filter design from specifications

How do we find an FIR H(z) such that H(ejω) best approximates a desired frequency response Hd(ejω)? This is essentially a polynomial curve-fitting problem.

[Figure: tolerance scheme for Hd(ejω): passband ripple 1 ± δ1 up to ωp, transition band, stopband ripple δ2 beyond ωs.]

Design techniques:
- Window method
- Optimal filter design
  - Parks-McClellan algorithm
  - Least squares
Window method

An easy way to design an FIR filter to match a desired frequency response Hd(ejω) is to calculate the inverse DTFT of Hd(ejω) and truncate the result to a reasonable number of samples (similar to impulse invariance):

hd[n] = (1/2π) ∫_{−π}^{π} Hd(e^{jω}) e^{jωn} dω   (inverse DTFT)

Then we truncate it to at most M + 1 samples:

h[n] = hd[n] for n = 0, 1, …, M, and h[n] = 0 otherwise   (truncated sequence)

Another way to write the truncation is

h[n] = w[n]hd[n], where w[n] = 1 for n = 0, 1, …, M and 0 otherwise

w[n] is the window sequence, which in this case is the rectangular window.
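A minimal Python sketch of the method for an ideal lowpass Hd, whose inverse DTFT is known in closed form (a sinc, here delayed by M/2 so the truncated filter is causal and symmetric). The order M, cutoff ωc, and the choice of a Hamming rather than rectangular window are illustrative:

```python
import numpy as np
from scipy import signal

M, wc = 50, 0.4 * np.pi
n = np.arange(M + 1)

# inverse DTFT of an ideal lowpass with linear phase e^{-j w M/2}
hd = wc/np.pi * np.sinc(wc * (n - M/2) / np.pi)

# truncation written as a window product; Hamming instead of rectangular
h = hd * np.hamming(M + 1)

_, H = signal.freqz(h, worN=np.array([0.1 * np.pi, 0.8 * np.pi]))
```

The Hamming window trades a wider transition band for much smaller ripples than plain truncation, so the response is close to 1 deep in the passband and close to 0 deep in the stopband.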
Window method

Representing the truncation as h[n] = w[n]hd[n] gives us an easy way to understand what happens in the frequency domain.

Multiplication in the time domain means convolution in the frequency domain:

H(e^{jω}) = (1/2π) W(e^{jω}) ∗ Hd(e^{jω})
          = (1/2π) ∫_{−π}^{π} Hd(e^{jθ}) W(e^{j(ω−θ)}) dθ   (convolution)

Problem: H(ejω) will not be equal to Hd(ejω). Instead, it will be a smeared version of the desired response Hd(ejω).
Revisiting the Gibbs phenomenon

Time domain:

hlpf[n] = sin(ωc n)/(πn) = (ωc/π) sinc(ωc n/π)

Frequency domain, truncated to |n| ≤ M:

H_M(e^{jω}) = Σ_{n=−M}^{M} [sin(ωc n)/(πn)] e^{−jωn}

[Figure: hlpf[n] and H_M(ejω) for M = 7 and M = 19; as M grows, the ripples near the discontinuity at ωc narrow but do not disappear.]
Revisiting the Gibbs phenomenon

- The Gibbs phenomenon appears when we truncate the impulse response of the ideal lowpass filter (or any system whose DTFT has a discontinuity).
- In lecture 1, we attributed this to convergence issues of the DTFT for non-absolutely-summable sequences. The DTFT of the sinc converges only in the mean-square sense, not uniformly.
- Another way to view the Gibbs phenomenon is as a result of windowing:

H(e^{jω}) = (1/2π) W(e^{jω}) ∗ Hd(e^{jω})   (convolution)

- In this case the desired response Hd(ejω) is the ideal lowpass filter, and the window function is

w[n] = 1 for n = −M, …, M, and 0 otherwise
  ⟺  W(e^{jω}) = sin(ω(2M + 1)/2) / sin(ω/2)
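The closed form for W(e^{jω}) follows from summing the geometric series Σ_{n=−M}^{M} e^{−jωn}; a quick numerical check of the identity:

```python
import numpy as np

M = 7
n = np.arange(-M, M + 1)
w = np.linspace(0.1, 3.0, 40)        # avoid w = 0, where the formula is 0/0

# direct DTFT of the rectangular window w[n] = 1 for |n| <= M
W_direct = np.exp(-1j * np.outer(w, n)).sum(axis=1)

# closed form from the slide
W_closed = np.sin(w * (2*M + 1) / 2) / np.sin(w / 2)
```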
Rectangular window

[Figure: W(ejω) for the rectangular window with M = 7 and M = 19; the main lobe narrows as M grows while the relative sidelobe level stays roughly constant.]
Rectangular window

From Fourier transform theory, we can show that the rectangular window produces the H(ejω) that best matches Hd(ejω) in the mean-square sense. That is, the mean-square error

(1/2π) ∫_{−π}^{π} |H(e^{jω}) − Hd(e^{jω})|² dω

is minimized when w[n] is the rectangular window.

Question: are there other windows w[n] that mitigate the issues caused by discontinuities without excessively increasing the mean-square error?
Commonly used windows

Rectangular:
w[n] = 1 for 0 ≤ n ≤ M; 0 otherwise

Bartlett (triangular), M even:
w[n] = 2n/M for 0 ≤ n ≤ M/2; 2 − 2n/M for M/2 < n ≤ M; 0 otherwise

Hann:
w[n] = 0.5 − 0.5 cos(2πn/M) for 0 ≤ n ≤ M; 0 otherwise

Hamming:
w[n] = 0.54 − 0.46 cos(2πn/M) for 0 ≤ n ≤ M; 0 otherwise

Blackman:
w[n] = 0.42 − 0.5 cos(2πn/M) + 0.08 cos(4πn/M) for 0 ≤ n ≤ M; 0 otherwise
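These formulas coincide with NumPy's built-in windows of length M + 1 (NumPy uses the window length minus one in the denominator), which gives a quick sanity check of the definitions:

```python
import numpy as np

M = 20
n = np.arange(M + 1)

hann     = 0.5  - 0.5  * np.cos(2*np.pi*n/M)
hamming  = 0.54 - 0.46 * np.cos(2*np.pi*n/M)
blackman = 0.42 - 0.5  * np.cos(2*np.pi*n/M) + 0.08 * np.cos(4*np.pi*n/M)
bartlett = np.where(n <= M//2, 2*n/M, 2 - 2*n/M)   # M even
```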
Commonly used windows

Time domain: all windows are symmetric about M/2.

[Figure: the rectangular, Bartlett (triangular), Hann, Hamming, and Blackman windows over 0 ≤ n ≤ M.]

Note: n is discrete. These curves were plotted as continuous functions just for easier visualization.

We will revisit windows when talking about spectrum analysis (lecture 12).
Linear phase in filters designed by windowing

If the window is causal and symmetric and the desired impulse response hd[n] is causal and symmetric, then it follows that

w[n] = ±w[M − n]   (causal and symmetric window)
hd[n] = ±hd[M − n]   (causal and symmetric hd[n])
h[n] = w[n]hd[n] = ±w[M − n]hd[M − n] = ±h[M − n]   (causal and symmetric h[n])

Therefore, h[n] is either even- or odd-symmetric, and consequently H(ejω) has generalized linear phase.
Kaiser window

It's typically desired that the window be maximally concentrated around ω = 0 (small sidelobe area). The Kaiser window offers a nearly optimal trade-off between main-lobe width and side-lobe area:

w[n] = I0(β √(1 − (n − α)²/α²)) / I0(β) for 0 ≤ n ≤ L − 1, and 0 otherwise,

where α = (L − 1)/2, β is a design parameter, and I0(·) is the modified Bessel function of the first kind and order 0.

See section 7.5.3 of the textbook for recommendations on values of β for lowpass filter design.
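NumPy's np.kaiser implements exactly this formula (with scipy.special.i0 as the modified Bessel function), so the definition can be checked directly; L and β below are arbitrary:

```python
import numpy as np
from scipy.special import i0   # modified Bessel function of the first kind, order 0

L, beta = 21, 6.0
alpha = (L - 1) / 2
n = np.arange(L)

w = i0(beta * np.sqrt(1 - ((n - alpha)/alpha)**2)) / i0(beta)
```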
Summary of FIR filter design by the window method

1. From the desired frequency response Hd(ejω), calculate the desired impulse response hd[n].
2. Choose the filter order M and the window w[n]. Then

h[n] = hd[n]w[n] for n = 0, …, M, and 0 otherwise   (for hd[n] causal)

The Kaiser window depends on the parameters β and M. Other windows depend only on M.
3. Linear phase is guaranteed if hd[n] and w[n] are symmetric.

In Matlab, fir1 uses the Hamming window by default. Other windows can be passed as parameters:

>> fir1(M, wc/pi, 'lowpass', kaiser(M+1, beta))

designs a lowpass FIR filter of order M and cutoff frequency ωc using the window method with a Kaiser window with parameter β.
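A Python counterpart of the fir1 call is scipy.signal.firwin, which takes the window as a (name, parameter) tuple and normalizes the cutoff so that 1 is the Nyquist frequency. The values of M, ωc, and β here are illustrative:

```python
import numpy as np
from scipy import signal

M, wc, beta = 50, 0.4 * np.pi, 8.0
h = signal.firwin(M + 1, wc/np.pi, window=('kaiser', beta))

# symmetric taps -> linear phase; firwin normalizes the DC gain to 1
_, H = signal.freqz(h, worN=[0.0, 0.8 * np.pi])
```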
Optimal FIR filter design

- Though straightforward, filter design by windowing is sub-optimal in the sense that it compromises accuracy for better handling of discontinuities in Hd(ejω).
- More importantly, there was no well-defined metric to evaluate filters.

A sensible choice of evaluation metric is the weighted error

E(ω) = W(ω)[Hd(e^{jω}) − H(e^{jω})],   (weighted error)

where 0 ≤ W(ω) ≤ 1 is the weight function.

- Generally, we choose either W(ω) = 1 or W(ω) = 0 over a given frequency band.
- Making W(ω) = 0 over a band means that we don't care about the error in that band. Generally, we choose W(ω) = 0 around discontinuities of Hd(ejω), i.e., transition bands.
Optimal FIR filter design

[Figure: tolerance scheme with W(ω) = 1 in the passband (ω ≤ ωp), W(ω) = 0 in the "don't care" transition band (ωp < ω < ωs), and W(ω) = 1 in the stopband (ω ≥ ωs).]
Matrix notation

It is hard to build efficient algorithms that deal with continuous ω, so we sample the weighted error E(ω) at a set of N frequencies {ω1, …, ωN} and write everything in matrix notation:

E(ω) = W(ω)[Hd(e^{jω}) − H(e^{jω})]   (continuous weighted error)
e = W(d − Qh)   (matrix notation)

- e is the error vector: ei = E(ωi)
- W is a diagonal matrix with Wii = W(ωi)
- d is the desired frequency response vector: di = Hd(e^{jωi})
- h is the vector of FIR filter coefficients: hi = h[i]. This is the vector we want to find.

Note: if the filter has linear phase, h[n] is symmetric, so we only need to compute the coefficients h[0], …, h[⌊M/2⌋].
Matrix notation

- Q is the N × (M/2 + 1) matrix

Q = [ 2cos(ω1 M/2)   2cos(ω1 (M/2 − 1))   …   2cos(ω1)   1
      2cos(ω2 M/2)   2cos(ω2 (M/2 − 1))   …   2cos(ω2)   1
      ⋮              ⋮                        ⋮          ⋮
      2cos(ωN M/2)   2cos(ωN (M/2 − 1))   …   2cos(ωN)   1 ]

for h[n] even-symmetric and M even. This comes from the relation

H(e^{jω}) = Σ_{m=0}^{M} h[m] e^{−jωm} = e^{−jωM/2} ( h[M/2] + Σ_{n=0}^{M/2−1} 2h[n] cos(ω(M/2 − n)) )   (DTFT of symmetric FIR h[n])

Note: in the matrix Q we have disregarded the term e^{−jωM/2}. This way the matrix Q is purely real. Ignoring the term e^{−jωM/2} is equivalent to disregarding the constraint that the filter must be causal. This is not a problem because we can always time-shift the result to make it causal.

Questions: how would the matrix Q change for h[n] even-symmetric and M odd? What about h[n] odd-symmetric?
Generalized linear phase in optimal FIR filter design

Even symmetry, h[n] = h[M − n]:

H(e^{jω}) = e^{−jωM/2} ( h[M/2] + Σ_{n=0}^{M/2−1} 2h[n] cos(ω(M/2 − n)) ),   M even

H(e^{jω}) = e^{−jωM/2} Σ_{n=0}^{(M−1)/2} 2h[n] cos(ω(M/2 − n)),   M odd

Odd symmetry, h[n] = −h[M − n]:

H(e^{jω}) = e^{−jωM/2} Σ_{n=0}^{M/2−1} 2j h[n] sin(ω(M/2 − n)),   M even (the center tap h[M/2] must be zero)

H(e^{jω}) = e^{−jωM/2} Σ_{n=0}^{(M−1)/2} 2j h[n] sin(ω(M/2 − n)),   M odd
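These identities can be checked numerically against the direct DTFT; the 7-tap h below (M = 6, even symmetry) is an arbitrary example:

```python
import numpy as np

M = 6
h = np.array([0.1, -0.2, 0.3, 0.5, 0.3, -0.2, 0.1])   # h[n] = h[M - n], arbitrary
w = np.linspace(0.1, 3.0, 25)

# direct DTFT: H(e^{jw}) = sum_m h[m] e^{-jwm}
H_direct = np.exp(-1j * np.outer(w, np.arange(M + 1))) @ h

# even-symmetry, M even formula: linear-phase factor times a real amplitude
amp = h[M//2] + sum(2*h[n]*np.cos(w*(M//2 - n)) for n in range(M//2))
H_formula = np.exp(-1j * w * M/2) * amp
```

The real amplitude `amp` is exactly the quantity the matrix product Qh computes on a frequency grid.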
Optimal FIR filter design

Question: how do we find the coefficients h[0], …, h[⌊M/2⌋] (the vector h)?

Two algorithms:

1. Parks-McClellan algorithm: minimizes the maximum weighted error

min_{h[n]} max_ω |E(ω)|   (min-max problem)
min_h max_i |ei|   (in matrix notation)

firpm in Matlab.

2. Least squares: minimizes the mean-square weighted error

min_{h[n]} (1/2π) ∫_{−π}^{π} |E(ω)|² dω   (least squares)
min_h ||e||²₂   (in matrix notation)

firls in Matlab.
Parks-McClellan algorithm

The Parks-McClellan algorithm finds the filter coefficients that minimize the maximum weighted error:

min_{h[n]} max_ω |E(ω)|   (min-max problem)
min_h max_i |ei|   (in matrix notation)

- This problem is also known as the Chebyshev approximation problem.
- Traditionally, it is solved by using the alternation theorem and the Remez exchange algorithm to iteratively find the impulse response that minimizes the maximum weighted error over a set of closed intervals in the frequency domain.
- We can also recast the problem as a linear program and use standard convex optimization packages to solve it.
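The Remez-exchange route is available in Python as scipy.signal.remez (the counterpart of Matlab's firpm). Band edges are given in units of the sampling rate fs, and the transition band is simply left out of the band list, making it a don't-care region. The lowpass spec below is illustrative:

```python
import numpy as np
from scipy import signal

numtaps = 51
# passband 0..0.2, stopband 0.3..0.5 (fs = 1); 0.2..0.3 is the don't-care band
h = signal.remez(numtaps, [0, 0.2, 0.3, 0.5], [1, 0], fs=1)

# evaluate at one passband frequency (f = 0.1) and one stopband frequency (f = 0.35)
w = np.array([0.1, 0.35]) * 2 * np.pi
_, H = signal.freqz(h, worN=w)
```

For this generous transition width and filter length, the equiripple error is tiny, so the response is very close to 1 in the passband and to 0 in the stopband.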
Parks-McClellan algorithm as a linear program

min_h max_i |ei|   (min-max problem)

We can rewrite this optimization problem as

min u   subject to   −u ≤ ei ≤ u, i = 1, …, N   (equivalent linear program)

where u is a dummy scalar variable and e = W(d − Qh).

In CVX for Matlab:

cvx_begin
  variable u(1)
  variable h(floor(M/2)+1)
  minimize u
  subject to -u <= W*(d - Q*h) <= u
cvx_end

It will return u and the vector h, which is what we really want.
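The same linear program can be written with scipy.optimize.linprog instead of CVX (assuming a SciPy recent enough to have the 'highs' solver). Everything below — filter order, band edges, grid density — is an illustrative lowpass spec; the unknowns are h[0], …, h[M/2] for an even-symmetric filter, stacked with the scalar u:

```python
import numpy as np
from scipy.optimize import linprog

M = 20                                   # even filter order; K unknowns
K = M//2 + 1
wp, ws = 0.3 * np.pi, 0.5 * np.pi

# frequency grid; drop the transition band, i.e. the rows where W(w) = 0
w = np.linspace(0, np.pi, 400)
w = w[(w <= wp) | (w >= ws)]
d = np.where(w <= wp, 1.0, 0.0)          # desired amplitude response

# Q[i, j] = 2 cos(w_i (M/2 - j)); last column is 1 (the center tap h[M/2])
Q = np.ones((len(w), K))
for j in range(K - 1):
    Q[:, j] = 2 * np.cos(w * (M//2 - j))

# min u  subject to  -u <= d - Q h <= u   (W = 1 on the kept grid)
ones = np.ones((len(w), 1))
c = np.zeros(K + 1); c[-1] = 1.0         # objective picks out the dummy u
A_ub = np.block([[-Q, -ones], [Q, -ones]])
b_ub = np.concatenate([-d, d])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)]*K + [(0, None)], method='highs')

h_half = res.x[:K]
u = res.x[-1]                            # the minimized maximum error
h = np.concatenate([h_half, h_half[-2::-1]])   # full even-symmetric h[n]
```

As with CVX, the solver returns both u (the achieved min-max error) and h; mirroring h_half restores the full linear-phase impulse response.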
Least-squares algorithm

The least-squares algorithm finds the filter coefficients that minimize the mean-square weighted error:

min_{h[n]} (1/2π) ∫_{−π}^{π} |E(ω)|² dω   (mean-square weighted error)
min_h ||W(d − Qh)||²₂   (in matrix notation)
min_h ||Ah − b||²₂   (change of variables A = WQ and b = Wd)

Problems of the form min_h ||Ah − b||²₂ are called least-squares problems, and they have the analytical solution

h = A†b,   (least-squares solution)

where A† = (A^H A)^{−1} A^H is the Moore-Penrose pseudoinverse (pinv in Matlab).

Note: A^H = (A*)^T is the Hermitian (conjugate) transpose, since A could be complex.
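In Python the closed-form solution is one line with np.linalg.pinv (the counterpart of Matlab's pinv). The lowpass grid below is an illustrative spec reusing the linear-phase Q of the previous slides, with W = I on the kept grid:

```python
import numpy as np

M = 20
K = M//2 + 1
wp, ws = 0.3 * np.pi, 0.5 * np.pi

w = np.linspace(0, np.pi, 400)
w = w[(w <= wp) | (w >= ws)]            # rows with W(w) = 0 are simply dropped
d = np.where(w <= wp, 1.0, 0.0)

Q = np.ones((len(w), K))
for j in range(K - 1):
    Q[:, j] = 2 * np.cos(w * (M//2 - j))

A, b = Q, d                              # A = W Q, b = W d with W = I
h = np.linalg.pinv(A) @ b                # h = A^† b
```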
Example: optimal bandpass FIR design

We want to design an FIR bandpass filter with the desired response Hd(ejω) below. The weight function is zero in the transition bands; hence, we don't care about the error in those regions.

[Figure: desired bandpass response Hd(ejω) with passband between 0.5π and 0.7π and transition bands of width Δω on either side, where W(ω) = 0.]

See the code on Canvas/Files/Matlab/optimal fir design example.m.


Non-linear phase FIR filter design using least squares

Many applications do not require linear-phase FIR filters. In fact, in some applications the filter must have non-linear phase, e.g., linear equalization (HW#5).

To design non-linear-phase FIR filters using the least-squares algorithm, we just need to redefine the N × (M + 1) matrix Q:

Q = [ 1   e^{−jω1}   e^{−j2ω1}   …   e^{−jMω1}
      1   e^{−jω2}   e^{−j2ω2}   …   e^{−jMω2}
      ⋮   ⋮          ⋮               ⋮
      1   e^{−jωN}   e^{−j2ωN}   …   e^{−jMωN} ]

where ω1, …, ωN are evenly spaced frequencies in the interval [−π, π].

Note that

(Qh)_k = Σ_{m=0}^{M} h[m] e^{−jωk m} = H(e^{jωk})   (the DTFT of h[m] at frequency ωk)

Therefore, the matrix-vector product Qh gives H(ejω) at the N frequencies ω1, …, ωN.
Non-linear phase FIR filter design using least squares

With the redefined matrix Q, we apply the least-squares algorithm as usual:

h = A†b   (least-squares solution)

where A = WQ and b = Wd.

Important: d and W have to be defined on the same frequencies used in calculating Q.

If Hd(ejω) is Hermitian symmetric, i.e., Hd(e^{jω}) = Hd*(e^{−jω}), then h will be purely real.
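A sketch of this construction in Python: Q is built from complex exponentials on a full-circle grid, and the desired response is Hermitian symmetric (an ideal lowpass with a hypothetical group delay of 5 samples), so the resulting h comes out purely real. M, N, ωc, and the delay are illustrative:

```python
import numpy as np

M, N = 20, 256
w = np.linspace(-np.pi, np.pi, N, endpoint=False)

# Q[i, m] = e^{-j w_i m}: the matrix-vector product Qh evaluates the DTFT of h
Q = np.exp(-1j * np.outer(w, np.arange(M + 1)))

# Hermitian-symmetric target: ideal lowpass with linear phase e^{-j 5 w}
wc = 0.4 * np.pi
d = np.where(np.abs(w) <= wc, np.exp(-1j * 5 * w), 0)

h = np.linalg.pinv(Q) @ d      # least squares with W = I
```

Because Hd here satisfies Hd(e^{jω}) = Hd*(e^{−jω}), the imaginary part of h is zero up to rounding, as the slide predicts.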
Example: predicting band-limited signals

Question: how do we predict the next sample of a signal from its previous samples?

[Figure: samples x[1], …, x[10] of a signal, with the next sample x[11] unknown.]
Example: predicting band-limited signals

Suppose our band-limited signal is such that |X(ejω)| = 0 for ωc < |ω| ≤ π, and let H(z) be a high-pass filter whose stopband covers the signal band |ω| ≤ ωc, so that the filter output is approximately zero.

Mathematically,

e[n] = Σ_{m=0}^{M} h[m]x[n − m]   (filter output)

0 ≈ h[0]x[n] + Σ_{m=1}^{M} h[m]x[n − m]

x[n] ≈ −(1/h[0]) Σ_{m=1}^{M} h[m]x[n − m]   (prediction based on the M previous samples)

Conclusion: designing a good predictive filter for band-limited signals boils down to designing a good high-pass filter.

This method was first proposed by Vaidyanathan in 1987.
Example: predicting band-limited noise

This is an example of prediction of Gaussian noise with PSD

Φxx(e^{jω}) ≈ 1 for |ω| ≤ 0.7π, and 0 for 0.7π < |ω| ≤ π,

using a least-squares high-pass filter with M = 30.

[Figure: magnitude response (dB) of the high-pass prediction filter, and a comparison of the noise samples with their predicted values for n = 0, …, 100; the predictions track the noise closely.]
Summary

Impulse invariance
- The impulse response of the continuous-time system is sampled and scaled by T. In FIR implementations the impulse response is truncated to a specified number of samples. In IIR implementations the discrete-time system is obtained analytically.

Bilinear transformation
- The bilinear transformation maps the left half of the s-plane into the unit circle in the z-plane. This non-linear mapping leads to frequency warping, which can be mitigated by frequency pre-warping. Oversampling also mitigates frequency warping.

FIR filter design by windowing
- Design by windowing is almost an art form.
- The Kaiser window is a nearly optimal choice.

Optimal FIR filter design
- Optimal FIR filters minimize some characteristic of the weighted error.
- The Parks-McClellan method minimizes the maximum weighted error.
- The least-squares method minimizes the mean-square weighted error.
