DSP Lab
Markus Kuhn
Computer Laboratory
http://www.cl.cam.ac.uk/teaching/0809/DSP/
Signals
→ flow of information
Electronics (unlike optics) can only deal easily with time-dependent signals, therefore spatial
signals, such as images, are typically first converted into a time signal with a scanning process
(TV, fax, etc.).
2
Signal processing
Signals may have to be transformed in order to
→ amplify or filter out embedded information
→ detect patterns
→ prepare the signal to survive a transmission channel
→ prevent interference with other signals sharing a medium
→ undo distortions contributed by a transmission channel
→ compensate for sensor deficiencies
→ find information encoded in a different domain
To do so, we also need
→ methods to measure, characterise, model and simulate transmission channels
→ mathematical tools that split common channels and transformations into easily manipulated building blocks
3
Analog electronics
4
Digital signal processing
Analog/digital and digital/analog converter, CPU, DSP, ASIC, FPGA.
Advantages:
→ noise is easy to control after initial quantization
→ highly linear (within limited dynamic range)
→ complex algorithms fit into a single chip
→ flexibility, parameters can easily be varied in software
→ digital processing is insensitive to component tolerances, aging,
environmental conditions, electromagnetic interference
But:
→ discrete-time processing artifacts (aliasing)
→ can require significantly more power (battery, cooling)
→ digital clock and switching cause interference
5
8
Objectives
By the end of the course, you should be able to
→ apply basic properties of time-invariant linear systems
→ understand sampling, aliasing, convolution, filtering, the pitfalls of
spectral estimation
→ explain the above in time and frequency domain representations
→ use filter-design software
→ visualise and discuss digital filters in the z-domain
→ use the FFT for convolution, deconvolution, filtering
→ implement, apply and evaluate simple DSP applications in MATLAB
→ apply transforms that reduce correlation between several signal sources
→ understand and explain limits in human perception that are exploited by lossy compression techniques
→ provide a good overview of the principles and characteristics of several widely-used compression techniques and standards for audio-visual signals
9
Textbooks
→ R.G. Lyons: Understanding digital signal processing. Prentice-Hall, 2004. (£45)
→ A.V. Oppenheim, R.W. Schafer: Discrete-time signal processing. 2nd ed., Prentice-Hall, 1999. (£47)
→ J. Stein: Digital signal processing – a computer science perspective. Wiley, 2000. (£74)
→ S.W. Smith: Digital signal processing – a practical guide for engineers and scientists. Newnes, 2003. (£40)
→ K. Steiglitz: A digital signal processing primer – with applications to digital audio and computer music. Addison-Wesley, 1996. (£40)
→ Sanjit K. Mitra: Digital signal processing – a computer-based approach. McGraw-Hill, 2002. (£38)
10
Sequences and systems
A discrete sequence {xn}_{n=−∞}^{∞} is a sequence of numbers

. . . , x−2, x−1, x0, x1, x2, . . .

in which each xn can, for example, be the value of a continuous waveform x(t) sampled at time ts · n, with sampling period ts = 1/fs:

xn = x(ts · n) = x(n/fs)
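For example, a minimal MATLAB sketch of such a sampling process (frequency, sampling rate and length chosen arbitrarily here):

fs = 8000; ts = 1/fs;        % sampling frequency and interval (assumed values)
n  = 0:31;
xn = sin(2*pi*1000*n*ts);    % discrete sequence {xn}: a 1 kHz tone sampled at 8 kHz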
11
Properties of sequences
A sequence {xn } is
absolutely summable ⇔ Σ_{n=−∞}^{∞} |xn| < ∞

square summable ⇔ Σ_{n=−∞}^{∞} |xn|² < ∞

periodic ⇔ ∃k > 0 : ∀n ∈ Z : xn = xn+k

A square-summable sequence is also called an energy signal, and its energy

Σ_{n=−∞}^{∞} |xn|²

exists.
Special sequences
Unit-step sequence:

un = 0 for n < 0,  un = 1 for n ≥ 0

Impulse sequence:

δn = 1 for n = 0,  δn = 0 for n ≠ 0
δn = un − un−1
13
15
16
Constant-coefficient difference equations
Of particular practical interest are causal linear time-invariant systems
of the form

yn = b0 · xn − Σ_{k=1}^{N} ak · yn−k

where the ak and bm are constant coefficients.

Block diagram representation of sequence operations:

Addition: xn, x′n → xn + x′n
Multiplication by constant a: xn → a · xn
Delay (z⁻¹): xn → xn−1

[Block diagram: xn is scaled by b0, and the delayed output values yn−1, yn−2, yn−3 are fed back via the coefficients −a1, −a2, −a3.]
17
or

yn = Σ_{m=0}^{M} bm · xn−m

[Block diagram: a tapped delay line xn, xn−1, xn−2, xn−3, weighted with b0, b1, b2, b3 and summed into yn.]

or the combination of both:

Σ_{k=0}^{N} ak · yn−k = Σ_{m=0}^{M} bm · xn−m

[Block diagram: direct form I — feed-forward taps b1, b2, b3 on xn−1, xn−2, xn−3 and feedback taps −a1, −a2, −a3 on yn−1, yn−2, yn−3.]
The MATLAB function filter is an efficient implementation of the last variant.
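As an illustrative sketch (coefficients chosen arbitrarily), the difference equation yn = xn + 0.5·yn−1 − 0.25·yn−2 can be applied to an input sequence x with:

b = 1; a = [1 -0.5 0.25];   % filter implements a(1)*y(n) = b*x(n) - a(2)*y(n-1) - ...
y = filter(b, a, x);        % x: input sequence (assumed given)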
18
Convolution
All linear time-invariant (LTI) systems can be represented in the form
yn = Σ_{k=−∞}^{∞} ak · xn−k
19
Convolution examples
[Figure: six example sequences A–F and convolution results such as A∗B and A∗C.]
20
Properties of convolution
For arbitrary sequences {pn }, {qn }, {rn } and scalars a, b:
→ Convolution is associative
({pn } ∗ {qn }) ∗ {rn } = {pn } ∗ ({qn } ∗ {rn })
→ Convolution is commutative
{pn } ∗ {qn } = {qn } ∗ {pn }
→ Convolution is linear
{pn } ∗ {a · qn + b · rn } = a · ({pn } ∗ {qn }) + b · ({pn } ∗ {rn })
→ The impulse sequence (slide 13) is neutral under convolution
{pn } ∗ {δn } = {δn } ∗ {pn } = {pn }
→ Sequence shifting is equivalent to convolving with a shifted
impulse
{pn−d } = {pn } ∗ {δn−d }
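These properties are easy to check numerically; for example, commutativity with MATLAB's conv (example sequences chosen arbitrarily):

p = [1 2 3]; q = [4 5];
conv(p, q)                  % [4 13 22 15]
conv(q, p)                  % identical result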
21
Exercise 2
Prove that convolution is (a) commutative and (b) associative.
23
Exercise 5 (a) Find a pair of sequences {an } and {bn }, where each one
contains at least three different values and where the convolution {an }∗{bn }
results in an all-zero sequence.
(b) Does every LTI system T have an inverse LTI system T −1 such that
{xn } = T −1 T {xn } for all sequences {xn }? Why?
24
Direct form I and II implementations
[Block diagrams: direct form I cascades the feed-forward taps b0 . . . b3 with the feedback taps −a1 . . . −a3 and the scaling factor a0⁻¹, using separate delay chains for xn−1 . . . xn−3 and yn−1 . . . yn−3; direct form II applies the scaling and feedback section first, so that both sections can share a single delay chain.]

Convolution: optics example

If a lens is focused at the wrong distance, the resulting image is the convolution of the sharp image with a disk-shaped point-spread function h (disk radius r = as/(2f), where a is the aperture, f the focal distance and s the distance between image plane and focal plane):

h(x, y) = 1/(r²π) for x² + y² ≤ r²,  h(x, y) = 0 for x² + y² > r²

Original image I, blurred image B = I ∗ h, i.e.

B(x, y) = ∫∫ I(x − x′, y − y′) · h(x′, y′) dx′ dy′

[Figure: stars image I, point-spread function h, blurred result B = I ∗ h; lens diagram with image plane, focal plane, aperture a and distances s, f.]
26
Convolution: electronics example
[Figure: RC low-pass circuit — input Uin across resistor R and capacitor C in series, output Uout across C; step response Uout(t) rising towards Uin; frequency response dropping to Uin/√2 at ω = 1/(RC) (= 2πf).]
Any passive network (R, L, C) convolves its input voltage Uin with an
impulse response function h, leading to Uout = Uin ∗ h, that is
Uout(t) = ∫_{−∞}^{∞} Uin(t − τ) · h(τ) · dτ
In this example:
(Uin − Uout)/R = C · dUout/dt,   h(t) = (1/(RC)) · e^{−t/(RC)} for t ≥ 0,  h(t) = 0 for t < 0
27
The sum of two sine waves of equal frequency ω but arbitrary amplitude and phase, A1 · sin(ωt + ϕ1) + A2 · sin(ωt + ϕ2), is again a sine wave A · sin(ωt + ϕ) of that frequency, with

tan ϕ = (A1 sin ϕ1 + A2 sin ϕ2) / (A1 cos ϕ1 + A2 cos ϕ2)

[Phasor diagram: vectors of length A1 at angle ϕ1 and A2 at angle ϕ2 add up to a vector of length A at angle ϕ.]

Sine waves of any phase can be formed from sin and cos alone:

A · sin(ωt + ϕ) = a · sin(ωt) + b · cos(ωt)

with a = A · cos(ϕ), b = A · sin(ϕ) and A = √(a² + b²), tan ϕ = b/a.
28
Note: Convolution of a discrete sequence {xn } with another sequence
{yn } is nothing but adding together scaled and delayed copies of {xn }.
(Think of {yn } decomposed into a sum of impulses.)
If {xn } is a sampled sine wave of frequency f , so is {xn } ∗ {yn }!
=⇒ Sine-wave sequences form a family of discrete sequences
that is closed under convolution with arbitrary sequences.
The same applies for continuous sine waves and convolution.
Sine waves are orthogonal to each other: the scalar product of sin(ω1t + ϕ1) and sin(ω2t + ϕ2) vanishes ⇐⇒ ω1 ≠ ω2 ∨ ϕ1 − ϕ2 = (2k + 1)π/2 (k ∈ Z)
They can be used to form an orthogonal function basis for a transform.
The term “orthogonal” is used here in the context of an (infinitely dimensional) vector space, where the “vectors” are functions of the form f : R → R (or f : R → C) and the scalar product is defined as f · g = ∫_{−∞}^{∞} f(t) · g(t) dt.
29
32
Complex phasors
Amplitude and phase are two distinct characteristics of a sine function
that are inconvenient to keep separate notationally.
Complex functions (and discrete sequences) of the form
A · e j(ωt+ϕ) = A · [cos(ωt + ϕ) + j · sin(ωt + ϕ)]
(where j2 = −1) are able to represent both amplitude and phase in
one single algebraic object.
Thanks to complex multiplication, we can also incorporate in one single
factor both a multiplicative change of amplitude and an additive change
of phase of such a function. This makes discrete sequences of the form
xn = e jωn
eigensequences with respect to an LTI system T , because for each ω,
there is a complex number (eigenvalue) H(ω) such that
T {xn } = H(ω) · {xn }
In the notation of slide 30, where the argument of H is the base, we would write H(e jω ).
33
34
Another notation, in the continuous case:

F{h(t)}(ω) = H(e^{jω}) = ∫_{−∞}^{∞} h(t) · e^{−jωt} dt

F⁻¹{H(e^{jω})}(t) = h(t) = (1/2π) · ∫_{−∞}^{∞} H(e^{jω}) · e^{jωt} dω

and for discrete sequences:

F⁻¹{H(e^{jω})}(n) = hn = (1/2π) · ∫_{−π}^{π} H(e^{jω}) · e^{jωn} dω
35
Time scaling:

x(at) •−◦ (1/|a|) · X(f/a)

Frequency scaling:

(1/|a|) · x(t/a) •−◦ X(af)
36
Time shifting:

x(t − ∆t) •−◦ X(f) · e^{−2πjf∆t}

Frequency shifting:

x(t) · e^{2πj∆ft} •−◦ X(f − ∆f)
37
Discrete form:
Fourier transform:

F{δ(t)}(ω) = ∫_{−∞}^{∞} δ(t) · e^{−jωt} dt = e^0 = 1

F⁻¹{1}(t) = (1/2π) · ∫_{−∞}^{∞} 1 · e^{jωt} dω = δ(t)
http://mathworld.wolfram.com/DeltaFunction.html
41
[Figure: two example spectra, each with components at ±f0.]
As any real-valued signal x(t) can be represented as a combination of sine and cosine functions,
the spectrum of any real-valued signal will show the symmetry X(e jω ) = [X(e− jω )]∗ , where ∗
denotes the complex conjugate (i.e., negated imaginary part).
42
Fourier transform symmetries
We call a function x(t) even if x(−t) = x(t), and odd if x(−t) = −x(t).
43
[Figure: baseband spectrum X(f) convolved (∗) with a pair of carrier impulses at ±fc yields the modulated spectrum Y(f).]
The spectrum of the baseband signal in the interval −fl < f < fl is
shifted by the modulation to the intervals ±fc − fl < f < ±fc + fl .
How can such a signal be demodulated?
44
Sampling using a Dirac comb
The loss of information in the sampling process that converts a continuous function x(t) into a discrete sequence {xn} defined by

xn = x(ts · n) = x(n/fs)

can be modelled by multiplying x(t) with a comb of Dirac impulses s(t) = Σ_{n=−∞}^{∞} δ(t − ts · n), to obtain the sampled function x̂(t) = x(t) · s(t).
The function x̂(t) now contains exactly the same information as the
discrete sequence {xn }, but is still in a form that can be analysed using
the Fourier transform on continuous functions.
45
[Figure: time domain — x(t) · s(t) = x̂(t), with the Dirac comb s(t) of period 1/fs; frequency domain — X(f) ∗ S(f) = X̂(f), where S(f) is a Dirac comb of period fs, so X̂(f) repeats X(f) at all multiples of fs.]
49
[Figure: sampled spectrum X̂(f); a reconstruction filter selects the copy of the spectrum inside the double-sided bandwidth |f| < fs/2.]
Anti-aliasing and reconstruction filters both suppress frequencies outside |f | < fs /2.
50
Reconstruction of a continuous band-limited waveform
The ideal anti-aliasing filter for eliminating any frequency content above
fs /2 before sampling with a frequency of fs has the Fourier transform
H(f) = 1 if |f| < fs/2, 0 if |f| > fs/2;  i.e. H(f) = rect(ts · f).

This leads, after an inverse Fourier transform, to the impulse response

h(t) = fs · sin(πtfs)/(πtfs) = (1/ts) · sinc(t/ts).
The original band-limited signal can be reconstructed by convolving
this with the sampled signal x̂(t), which eliminates the periodicity of
the frequency domain introduced by the sampling process:
x(t) = h(t) ∗ x̂(t)
Note that sampling h(t) gives the impulse function: h(t) · s(t) = δ(t).
51
[Figure: reconstruction by interpolation — the sampled signal, the scaled and shifted sin(x)/x pulses, and the resulting interpolation.]
53
Reconstruction filters
The mathematically ideal form of a reconstruction filter for suppressing
aliasing frequencies interpolates the sampled signal xn = x(ts · n) back
into the continuous waveform
x(t) = Σ_{n=−∞}^{∞} xn · sin(π(t − ts · n)·fs) / (π(t − ts · n)·fs)
55
h(t) = fs · (sin(πtfs/2)/(πtfs/2)) · cos(2πtfs · (2n+1)/4) = (n+1)·fs · (sin(πt(n+1)fs)/(πt(n+1)fs)) − n·fs · (sin(πtnfs)/(πtnfs))

[Figure: impulse response and frequency response of this band-pass reconstruction filter, centred at (2n+1)·fs/4, shown for n = 2.]
56
Exercise 9 Reconstructing a sampled baseband signal:
Why should the first filter have a lower cut-off frequency than the second?
57
[Figure: a periodic signal equals a base waveform convolved (∗) with an impulse comb of period 1/fp in the time domain; its spectrum Ẋ(f) = X(f) · P(f) is therefore sampled, i.e. non-zero only at multiples of fp.]
59
[Figure: sampling again — x(t) · s(t) = x̂(t) in the time domain, X(f) ∗ S(f) = X̂(f) in the frequency domain.]
60
Continuous vs discrete Fourier transform
• Sampling a continuous signal makes its spectrum periodic
• A periodic signal has a sampled spectrum
A signal ẍ(t) that is both periodic and sampled has a spectrum Ẍ(f) that is both sampled and periodic, so a single period of n samples in each domain carries all the information. The two periods are related by the discrete Fourier transform, in matrix form:

Fn · (x0, x1, x2, . . . , xn−1)ᵀ = (X0, X1, X2, . . . , Xn−1)ᵀ

(1/n) · Fn∗ · (X0, X1, X2, . . . , Xn−1)ᵀ = (x0, x1, x2, . . . , xn−1)ᵀ
62
Discrete Fourier Transform visualized
[Figure: the 8-point DFT as a matrix–vector product — the 8×8 matrix of sampled basis phasors times (x0, . . . , x7)ᵀ gives (X0, . . . , X7)ᵀ; multiplying (X0, . . . , X7)ᵀ with the conjugate matrix and scaling by 1/8 recovers (x0, . . . , x7)ᵀ.]
64
Fast Fourier Transform (FFT)
Fn{xi}_{i=0}^{n−1}(k) = Σ_{i=0}^{n−1} xi · e^{−2πj·ik/n}

= Σ_{i=0}^{n/2−1} x2i · e^{−2πj·ik/(n/2)} + e^{−2πj·k/n} · Σ_{i=0}^{n/2−1} x2i+1 · e^{−2πj·ik/(n/2)}

= F_{n/2}{x2i}(k) + e^{−2πj·k/n} · F_{n/2}{x2i+1}(k)                 for k < n/2

= F_{n/2}{x2i}(k − n/2) + e^{−2πj·k/n} · F_{n/2}{x2i+1}(k − n/2)     for k ≥ n/2
The DFT over n-element vectors can be reduced to two DFTs over n/2-element vectors plus n multiplications and n additions, leading to log2 n rounds and n log2 n additions and multiplications overall, compared to n² for the equivalent matrix multiplication.
A high-performance FFT implementation in C with many processor-specific optimizations and
support for non-power-of-2 sizes is available at http://www.fftw.org/.
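A minimal recursive MATLAB sketch of this decimation-in-time recursion (not optimized; myfft is a hypothetical helper name; x must be a column vector whose length is a power of 2):

function X = myfft(x)
  n = length(x);
  if n == 1
    X = x;
  else
    E = myfft(x(1:2:end));          % DFT of even-indexed samples
    O = myfft(x(2:2:end));          % DFT of odd-indexed samples
    w = exp(-2i*pi*(0:n/2-1)'/n);   % twiddle factors e^(-2*pi*j*k/n)
    X = [E + w.*O; E - w.*O];       % the k < n/2 and k >= n/2 halves
  end
end

Its output can be compared against MATLAB's built-in fft.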
65
To optimize the calculation of a single real-valued FFT, use this trick to calculate the two half-size real-valued FFTs that occur in the first round.
66
Fast complex multiplication
Calculating the product of two complex numbers as

(a + jb) · (c + jd) = (ac − bd) + j·(ad + bc)

involves four real-valued multiplications and two additions. The alternative calculation

α = a · (c + d),  β = d · (a + b),  γ = c · (b − a);  (a + jb) · (c + jd) = (α − β) + j·(α + γ)

provides the same result with three multiplications and five additions.
The latter may perform faster on CPUs where multiplications take three
or more times longer than additions.
This trick is most helpful on simpler microcontrollers. Specialized signal-processing CPUs (DSPs)
feature 1-clock-cycle multipliers. High-end desktop processors use pipelined multipliers that stall
where operations depend on each other.
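A quick numerical check of the three-multiplication scheme above (values chosen arbitrarily):

a = 3; b = 4; c = -2; d = 5;
alpha = a*(c + d); beta = d*(a + b); gamma = c*(b - a);
[alpha - beta, alpha + gamma]   % real and imaginary part: -26, 7
(a + 1i*b) * (c + 1i*d)         % reference result: -26 + 7i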
67
FFT-based convolution
Calculating the convolution of two finite sequences {xi}_{i=0}^{m−1} and {yi}_{i=0}^{n−1} of lengths m and n via

zi = Σ_{j=max{0, i−(n−1)}}^{min{m−1, i}} xj · yi−j,   0 ≤ i < m + n − 1
takes mn multiplications.
Can we apply the FFT and the convolution theorem to calculate the
convolution faster, in just O(m log m + n log n) multiplications?
{zi } = F −1 (F{xi } · F{yi })
There is obviously no problem if this condition is fulfilled:
{xi } and {yi } are periodic, with equal period lengths
In this case, the fact that the DFT interprets its input as a single period
of a periodic signal will do exactly what is needed, and the FFT and
inverse FFT can be applied directly as above.
68
In the general case, measures have to be taken to prevent a wrap-over:
−1
A B F [F(A)⋅F(B)]
−1
A’ B’ F [F(A’)⋅F(B’)]
Both sequences are padded with zero values to a length of at least m+n−1.
This ensures that the start and end of the resulting sequence do not overlap.
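In MATLAB, this zero padding happens implicitly when a transform length of at least m + n − 1 is requested (a sketch, assuming row vectors x and y):

L = length(x) + length(y) - 1;            % minimum length that avoids wrap-over
z = real(ifft(fft(x, L) .* fft(y, L)));   % real() discards rounding residue
% for comparison: conv(x, y) gives the same result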
69
70
To convolve a long signal with a short impulse response, the input can be split into consecutive blocks that are processed separately (overlap-add method). Each block is zero-padded at both ends and then convolved as before:
[Figure: three consecutive blocks, each convolved (∗) with the same short sequence.]
The regions originally added as zero padding are, after convolution, aligned
to overlap with the unpadded ends of their respective neighbour blocks.
The overlapping parts of the blocks are then added together.
71
Deconvolution
A signal u(t) was distorted by convolution with a known impulse response h(t) (e.g., through a transmission channel or a sensor problem). The “smeared” result s(t) was recorded.
Can we undo the damage and restore (or at least estimate) u(t)?
[Figure: example signals and images convolved (∗) with an impulse response, giving the recorded “smeared” versions.]
72
The convolution theorem turns the problem into one of multiplication:
s(t) = ∫ u(t − τ) · h(τ) · dτ
s = u∗h
F{s} = F{u} · F{h}
F{u} = F{s}/F{h}
u = F −1 {F{s}/F{h}}
In practice, we also record some noise n(t) (quantization, etc.):
c(t) = s(t) + n(t) = ∫ u(t − τ) · h(τ) · dτ + n(t)
Typical workarounds:
→ Modify the Fourier transform of the impulse response, such that |F{h}(f)| > ε for some experimentally chosen threshold ε.
→ If estimates of the signal spectrum |F{s}(f )| and the noise
spectrum |F{n}(f )| can be obtained, then we can apply the
“Wiener filter” (“optimal filter”)
W(f) = |F{s}(f)|² / (|F{s}(f)|² + |F{n}(f)|²)
before deconvolution:
ũ = F⁻¹{W · F{c} / F{h}}
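A 1-D MATLAB sketch of the threshold workaround combined with this inverse filtering (h and c assumed given; eps0 is the experimentally chosen threshold):

H = fft(h, length(c));                       % transfer function, zero-padded
small = abs(H) < eps0;
H(small) = eps0 * exp(1i*angle(H(small)));   % enforce |F{h}| > eps0, keep phase
u_est = real(ifft(fft(c) ./ H));             % estimate of the original signal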
Exercise 11 Use MATLAB to deconvolve the blurred stars from slide 26.
The files stars-blurred.png with the blurred-stars image and stars-psf.png with the impulse
response (point-spread function) are available on the course-material web page. You may find
the MATLAB functions imread, double, imagesc, circshift, fft2, ifft2 of use.
Try different ways to control the noise (see above) and distortions near the margins (windowing). [The MATLAB image processing toolbox provides ready-made “professional” functions deconvwnr, deconvreg, deconvlucy, edgetaper, for such tasks. Do not use these, except perhaps to compare their outputs with the results of your own attempts.]
74
Spectral estimation
[Figure: a sine wave at 4 × fs/32 and the magnitude of its 32-point DFT — all energy in a single bin; a sine wave at 4.61 × fs/32 and its DFT — energy leaks into all bins.]
75
[Figure: DFT magnitude as a function of input frequency, as an input sine wave sweeps from the frequency of DFT bin 15 to that of bin 16.]
The leakage of energy to other frequency bins not only blurs the estimated spectrum. The peak amplitude also changes significantly as the frequency of a tone changes from that associated with one output bin to the next, a phenomenon known as scalloping. In the above graphic, an input sine wave gradually changes from the frequency of bin 15 to that of bin 16 (only positive frequencies shown).
77
Windowing
[Figure: a sine wave and its DFT, showing strong leakage; the same sine wave multiplied with a window function, and its DFT with reduced leakage.]
78
The reason for the leakage and scalloping losses is easy to visualize with the
help of the convolution theorem:
The operation of cutting a sequence of the size of the DFT input vector out
of a longer original signal (the one whose continuous Fourier spectrum we
try to estimate) is equivalent to multiplying this signal with a rectangular
function. This destroys all information and continuity outside the “window”
that is fed into the DFT.
Multiplication with a rectangular window of length T in the time domain is
equivalent to convolution with sin(πf T )/(πf T ) in the frequency domain.
The subsequent interpretation of this window as a periodic sequence by
the DFT leads to sampling of this convolution result (sampling meaning
multiplication with a Dirac comb whose impulses are spaced fs /n apart).
Where the window length was an exact multiple of the original signal period, sampling of the sin(πf T)/(πf T) curve leads to a single Dirac pulse, and the windowing causes no distortion. In all other cases, the effects of the convolution become visible in the frequency domain as leakage and scalloping losses.
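The effect is easy to reproduce in MATLAB (a sketch; hann requires the Signal Processing Toolbox):

n = 0:31;
x = sin(2*pi*4.61*n/32);             % frequency falls between two DFT bins
X_rect = abs(fft(x));                % rectangular window: strong leakage
X_hann = abs(fft(x .* hann(32)'));   % lower side lobes, wider main lobe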
79
[Figure: time-domain shapes of the rectangular, triangular, Hann and Hamming windows, and their frequency responses — magnitude (dB) over normalized frequency (×π rad/sample).]
81
83
[Figure: frequency-domain construction — the desired response convolved (∗) with a Dirac comb gives the periodic frequency response of the sampled filter, restricted to |f| < fs/2.]

[Figure: z-plane zeros, 30-sample impulse response, and magnitude (dB) and phase over normalized frequency (×π rad/sample) of the designed FIR low-pass filter.]
order: n = 30, cutoff frequency (−6 dB): fc = 0.25 × fs /2, window: Hamming
88
We truncate the ideal, infinitely-long impulse response by multiplication with a window sequence. In the frequency domain, this will convolve the rectangular frequency response of the ideal low-pass filter with the frequency characteristic of the window. The width of the main lobe determines the width of the transition band, and the side lobes cause ripples in the passband and stopband.
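This windowed-sinc design is what the MATLAB function fir1 performs (Hamming window by default); a sketch matching the parameters of the preceding example:

b = fir1(30, 0.25);    % order 30, cutoff 0.25 x fs/2
freqz(b, 1)            % magnitude and phase response
y = filter(b, 1, x);   % apply to a sequence x (assumed given)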
[Figure: a band-pass response H(f) covering fl < |f| < fh equals an ideal low-pass response with cutoff (fh − fl)/2 convolved (∗) with impulses at ±(fh + fl)/2.]
89
Exercise 12 Explain the difference between the DFT, FFT, and FFTW.
Example of polynomial multiplication (equivalent to convolving the coefficient sequences {1, 2, 3} and {2, 1}):

(1 + 2v + 3v²) · (2 + 1v)
  =  2 + 4v + 6v²
   +      1v + 2v² + 3v³
  =  2 + 5v + 8v² + 3v³
91
92
Example of polynomial division:
1/(1 − av) = 1 + av + a²v² + a³v³ + · · · = Σ_{n=0}^{∞} aⁿvⁿ

[Long division: 1 ÷ (1 − av) yields quotient terms 1, av, a²v², . . . with remainders av, a²v², a³v³, . . .]
93
The z-transform
The z-transform of a sequence {xn } is defined as:
X(z) = Σ_{n=−∞}^{∞} xn · z⁻ⁿ
Note that it differs only in the sign of the exponent from the polynomial representation discussed on the preceding slides.
Recall that the above X(z) is exactly the factor with which an exponential sequence {zⁿ} is multiplied, if it is convolved with {xn}:

{zⁿ} ∗ {xn} = {yn}
⇒ yn = Σ_{k=−∞}^{∞} z^{n−k} · xk = zⁿ · Σ_{k=−∞}^{∞} z⁻ᵏ · xk = zⁿ · X(z)
94
The z-transform defines for each sequence a continuous complex-valued surface over the complex plane C. For finite sequences, its value is always defined across the entire complex plane. For infinite sequences, it can be shown that the z-transform converges only for the region
lim_{n→∞} |xn+1 / xn| < |z| < lim_{n→−∞} |xn+1 / xn|
The z-transform identifies a sequence unambiguously only in conjunction with a given region of convergence. In other words, there exist different sequences that have the same expression as their z-transform, but that converge for different amplitudes of z.
95
H(z) = (b0 + b1·z⁻¹ + b2·z⁻² + · · · + bm·z⁻ᵐ) / (a0 + a1·z⁻¹ + a2·z⁻² + · · · + ak·z⁻ᵏ)

(bm ≠ 0, ak ≠ 0) which can also be written as

H(z) = (zᵏ · Σ_{l=0}^{m} bl·z^{m−l}) / (zᵐ · Σ_{l=0}^{k} al·z^{k−l}).

Numerator and denominator can be factored into terms (z − cl) and (z − dl), where the cl are the non-zero positions of zeros (H(cl) = 0) and the dl are the non-zero positions of the poles (i.e., z → dl ⇒ |H(z)| → ∞) of H(z). Except for a constant factor, H(z) is entirely characterized by the position of these zeros and poles.
As with the Fourier transform, convolution in the time domain corresponds to complex multiplication in the z-domain:

{xn} •−◦ X(z), {yn} •−◦ Y(z)  ⇒  {xn} ∗ {yn} •−◦ X(z) · Y(z)

Delaying a sequence corresponds in the z-domain to multiplication with a power of z⁻¹:

{xn−∆n} •−◦ X(z) · z^{−∆n}
97
This example is the amplitude plot of

H(z) = 0.8 / (1 − 0.2·z⁻¹) = 0.8·z / (z − 0.2)

which features a zero at 0 and a pole at 0.2.

[Figure: surface plot of |H(z)| over the complex z-plane, with the pole at z = 0.2 visible as a peak; block diagram of the corresponding system yn = 0.8·xn + 0.2·yn−1.]
98
H(z) = z/(z − 0.7) = 1/(1 − 0.7·z⁻¹)

[Figure: z-plane pole at 0.7, zero at 0; exponentially decaying impulse response.]

H(z) = z/(z − 0.9) = 1/(1 − 0.9·z⁻¹)

[Figure: z-plane pole at 0.9; more slowly decaying impulse response.]
99
H(z) = z/(z − 1) = 1/(1 − z⁻¹)

[Figure: pole at 1, on the unit circle; the impulse response is a constant unit step (accumulator).]

H(z) = z/(z − 1.1) = 1/(1 − 1.1·z⁻¹)

[Figure: pole at 1.1, outside the unit circle; the impulse response grows exponentially (unstable system).]
100
H(z) = z² / ((z − 0.9·e^{jπ/6}) · (z − 0.9·e^{−jπ/6})) = 1 / (1 − 1.8·cos(π/6)·z⁻¹ + 0.9²·z⁻²)

[Figure: complex-conjugate pole pair at 0.9·e^{±jπ/6}; decaying oscillating impulse response.]

H(z) = z² / ((z − e^{jπ/6}) · (z − e^{−jπ/6})) = 1 / (1 − 2·cos(π/6)·z⁻¹ + z⁻²)

[Figure: pole pair on the unit circle; sustained oscillating impulse response.]
101
H(z) = z² / ((z − 0.9·e^{jπ/2}) · (z − 0.9·e^{−jπ/2})) = 1 / (1 − 1.8·cos(π/2)·z⁻¹ + 0.9²·z⁻²) = 1 / (1 + 0.9²·z⁻²)

[Figure: pole pair at ±0.9j; impulse response oscillates at a quarter of the sampling frequency while decaying.]

H(z) = z/(z + 1) = 1/(1 + z⁻¹)

[Figure: pole at −1; impulse response alternates +1, −1, +1, . . . at the Nyquist frequency.]
102
IIR Filter design techniques
The design of a filter starts with specifying the desired parameters (e.g., pass-band and stop-band edge frequencies, tolerable pass-band ripple, and minimum stop-band attenuation). The designer can then trade off conflicting goals such as a small transition band, a low order, a low ripple amplitude, or even an absence of ripples.

Design techniques for making these tradeoffs for analog filters (involving capacitors, resistors, coils) can also be used to design digital IIR filters:
Butterworth filters
Have no ripples, gain falls monotonically across the pass and transition band. Within the passband, the gain drops slowly down to √(1/2) (−3 dB). Outside the passband, it drops asymptotically by a factor 2^N per octave (N · 20 dB/decade).
104
Chebyshev type I filters
Distribute the gain error uniformly throughout the passband (equiripples) and drop off monotonically outside.

Chebyshev type II filters

Distribute the gain error uniformly throughout the stopband (equiripples) and drop off monotonically in the passband.

Elliptic filters (Cauer filters)

Distribute the gain error as equiripples both in the passband and stopband. This type of filter is optimal in terms of the combination of the passband-gain tolerance, stopband-gain tolerance, and transition-band width that can be achieved at a given filter order.
All these filter design techniques are implemented in the MATLAB Signal Processing Toolbox in
the functions butter, cheby1, cheby2, and ellip, which output the coefficients an and bn of the
difference equation that describes the filter. These can be applied with filter to a sequence, or
can be visualized with zplane as poles/zeros in the z-domain, with impz as an impulse response,
and with freqz as an amplitude and phase spectrum. The commands sptool and fdatool
provide interactive GUIs to design digital filters.
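For example, the order-5 Butterworth design shown on one of the following slides can be reproduced with:

[b, a] = butter(5, 0.25);   % 5th order, cutoff 0.25 x fs/2
freqz(b, a)                 % amplitude and phase response
zplane(b, a)                % pole/zero plot
impz(b, a)                  % impulse response
y = filter(b, a, x);        % apply to a sequence x (assumed given)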
105
Butterworth filter design example

[Figure: z-plane poles/zeros, impulse response, magnitude (dB) and phase responses.]
order: 1, cutoff frequency (−3 dB): 0.25 × fs /2
106
Butterworth filter design example
[Figure: z-plane poles/zeros, impulse response, magnitude (dB) and phase responses.]
order: 5, cutoff frequency (−3 dB): 0.25 × fs /2
107
Chebyshev type I filter design example

[Figure: z-plane poles/zeros, impulse response, magnitude (dB) and phase responses.]
order: 5, cutoff frequency: 0.5 × fs /2, pass-band ripple: −3 dB
108
Chebyshev type II filter design example
[Figure: z-plane poles/zeros, impulse response, magnitude (dB) and phase responses.]
order: 5, cutoff frequency: 0.5 × fs /2, stop-band ripple: −20 dB
109
Elliptic filter design example

[Figure: z-plane poles/zeros, impulse response, magnitude (dB) and phase responses.]
order: 5, cutoff frequency: 0.5 × fs /2, pass-band ripple: −3 dB, stop-band ripple: −20 dB
110
Exercise 14 Draw the direct form II block diagrams of the causal infinite-impulse response filters described by the following z-transforms and write down a formula describing their time-domain impulse responses:
(a) H(z) = 1 / (1 − (1/2)·z⁻¹)

(b) H′(z) = (1 − (1/4)⁴·z⁻⁴) / (1 − (1/4)·z⁻¹)

(c) H″(z) = 1/2 + (1/4)·z⁻¹ + (1/2)·z⁻²
111
112
Random sequences and noise
A discrete random sequence {xn } is a sequence of numbers
. . . , x−2 , x−1 , x0 , x1 , x2 , . . .
where each value xn is the outcome of a random variable xn in a
corresponding sequence of random variables
. . . , x−2 , x−1 , x0 , x1 , x2 , . . .
Such a collection of random variables is called a random process. Each individual random variable xn is characterized by its probability distribution function

Pxn(a) = Prob(xn ≤ a)
and the entire random process is characterized completely by all joint
probability distribution functions
Pxn1 ,...,xnk (a1 , . . . , ak ) = Prob(xn1 ≤ a1 ∧ . . . ∧ xnk ≤ ak )
for all possible sets {xn1 , . . . , xnk }.
113
117
and
φyy(k) = Σ_{i=−∞}^{∞} φxx(k − i) · chh(i),  where chh(k) = Σ_{i=−∞}^{∞} hi+k · hi  and  φxx(k) = E(xn+k · x∗n).
118
In other words:

{yn} = {hn} ∗ {xn}  ⇒  {φyy(n)} = {chh(n)} ∗ {φxx(n)}  and  Φyy(f) = |H(f)|² · Φxx(f)

Similarly:

{yn} = {hn} ∗ {xn}  ⇒  {φyx(n)} = {hn} ∗ {φxx(n)}  and  Φyx(f) = H(f) · Φxx(f)
White noise
A random sequence {xn } is a white noise signal, if mx = 0 and
Φxx (f ) = σx2 .
119
Application example:
Where an LTI system {yn} = {hn} ∗ {xn} can be observed to operate on white noise {xn} with φxx(k) = σx²·δk, the crosscorrelation between input and output will reveal the impulse response of the system: φyx(k) = σx² · hk.
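A MATLAB sketch of such a measurement (xcorr from the Signal Processing Toolbox; the system coefficients b, a are assumed given):

N = 1e5;
x = randn(1, N);                 % white noise with variance 1
y = filter(b, a, x);             % observed output of the system under test
[phi, lags] = xcorr(y, x, 32);   % phi/N approximates the crosscorrelation
h_est = phi(lags >= 0) / N;      % estimate of the first 33 impulse-response samples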
120
DFT averaging

[Figure: the DFT of a single 64-sample window of a noisy signal; the average of the absolute DFT values of 1000 such windows; and the average of the complex DFT values of the same 1000 windows, taken before the absolute value.]

121
The rightmost figure was generated from the same set of 1000 windows,
but this time the complex values of the DFTs were averaged before the
absolute value was taken. This is called coherent averaging and, because
of the linearity of the DFT, identical to first averaging the 1000 windows
and then applying a single DFT and taking its absolute value. The windows
start 64 samples apart. Only periodic waveforms with a period that divides
64 are not averaged away. This periodic averaging step suppresses both the
noise and the second sine wave.
Periodic averaging

If a zero-mean signal {xi} has a periodic component with period p, the periodic component can be isolated by periodic averaging:

x̄i = lim_{k→∞} (1/(2k + 1)) · Σ_{n=−k}^{k} xi+pn

Periodic averaging corresponds in the time domain to convolution with a Dirac comb Σn δi−pn. In the frequency domain, this means multiplication with a Dirac comb that eliminates all frequencies but multiples of 1/p.
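A finite approximation of this averaging, sketched in MATLAB for a sequence x and period p:

p = 64; k = floor(length(x) / p);
xbar = mean(reshape(x(1:p*k), p, k), 2);   % average corresponding samples of k periods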
122
Image, video and audio compression
Structure of modern audiovisual communication systems:
[Diagram: signal → perceptual coding → entropy coding → channel coding → channel (subject to noise) → channel decoding → entropy decoding → perceptual decoding → display → human senses.]
123
Literature
→ D. Salomon: A guide to data compression methods.
ISBN 0387952608, 2002.
126
Entropy coding review – Huffman
Entropy: H = Σ_{α∈A} p(α) · log2(1/p(α)) = 2.3016 bit

[Figure: Huffman code tree over the symbols u, v, w, x, y, z, built by repeatedly merging the two least probable subtrees; mean codeword length: 2.35 bit.]

Huffman’s algorithm constructs an optimal code-word tree for a set of symbols with known probability distribution. It iteratively picks the two elements of the set with the smallest probability and combines them into a tree by adding a common root. The resulting tree goes back into the set, labeled with the sum of the probabilities of the elements it combines. The algorithm terminates when less than two elements are left.
127
[Figure: arithmetic-coding example — the unit interval is recursively subdivided in proportion to the symbol probabilities; the code interval narrows step by step (boundary values 0.0, 0.55, 0.5745, 0.5822 in this example).]
128
Arithmetic coding
Several advantages compared to Huffman coding: the code length can approach the entropy limit even where symbol probabilities are not powers of 1/2, and the probability model can be changed adaptively during encoding and decoding.
Predictive coding:

[Diagram: the encoder transmits the prediction error g(t) = f(t) − P(f(t−1), f(t−2), . . .); the decoder reconstructs f(t) = g(t) + P(f(t−1), f(t−2), . . .).]
Based on the counted numbers nblack and nwhite of how often each pixel value has been encountered so far in each of the 1024 contexts, the probability for the next pixel being black is estimated as
pblack = (nblack + 1) / (nwhite + nblack + 2)
The encoder updates its estimate only after the newly counted pixel has
been encoded, such that the decoder knows the exact same statistics.
Joint Bi-level Expert Group: International Standard ISO 11544, 1993.
Example implementation: http://www.cl.cam.ac.uk/~mgk25/jbigkit/
132
Statistical dependence
Random variables X, Y are dependent iff ∃x, y:
P(X = x ∧ Y = y) ≠ P(X = x) · P(Y = y).
Application
Where x is the value of the next symbol to be transmitted and y is
the vector of all symbols transmitted so far, accurate knowledge of the
conditional probability P (X = x | Y = y) will allow a transmitter to
remove all redundancy.
An application example of this approach is JBIG, but there y is limited
to 10 past single-bit pixels and P (X = x | Y = y) is only an estimate.
133
[Figure: scatter plots of the values of neighbouring pixels at several distances (e.g. 4 and 8), axes 0–256; the correlation decreases with distance.]
136
Covariance and correlation
We define the covariance of two random variables X and Y as

Cov(X, Y) = E[(X − E(X)) · (Y − E(Y))] = E(X · Y) − E(X) · E(Y)
137
Covariance Matrix
For a random vector X = (X1, X2, . . . , Xn) ∈ Rⁿ we define the covariance matrix

Cov(X) = E[(X − E(X)) · (X − E(X))ᵀ],  with (Cov(X))i,j = Cov(Xi, Xj)
138
Decorrelation by coordinate transform
[Figure: neighbour-pixel value pairs before and after a decorrelating coordinate transform, with the resulting probability distribution and entropy.]
139
For an affine transform Y = A · X + b of a random vector X:

E(Y) = A · E(X) + b
Cov(Y) = A · Cov(X) · Aᵀ

Proof: The first equation follows from the linearity of the expected-value operator E(·), as does E(A · X · B) = A · E(X) · B for matrices A, B. With that, we can transform

Cov(Y) = E[(Y − E(Y)) · (Y − E(Y))ᵀ]
= E[A · (X − E(X)) · (X − E(X))ᵀ · Aᵀ]
= A · E[(X − E(X)) · (X − E(X))ᵀ] · Aᵀ
= A · Cov(X) · Aᵀ
140
Quick review: eigenvectors and eigenvalues
We are given a square matrix A ∈ Rn×n . The vector x ∈ Rn is an
eigenvector of A if there exists a scalar value λ ∈ R such that
Ax = λx.
We convert this set of equations into matrix notation using the matrix B = (b1, b2, . . . , bn) that has these eigenvectors as columns and the diagonal matrix D = diag(λ1, λ2, . . . , λn) that consists of the corresponding eigenvalues:
Cov(X)B = BD
142
B is orthonormal, that is BB T = I.
Multiplying the above from the right with B T leads to the spectral
decomposition
Cov(X) = BDB T
of the covariance matrix. Similarly multiplying instead from the left
with B T leads to
B T Cov(X)B = D
and therefore shows with
Cov(B T X) = D
that the eigenvector matrix B T is the wanted transform.
The Karhunen-Loève transform (also known as Hotelling transform
or Principal Component Analysis) is the multiplication of a correlated
random vector X with the orthonormal eigenvector matrix B T from the
spectral decomposition Cov(X) = BDB T of its covariance matrix.
This leads to a decorrelated random vector B T X whose covariance
matrix is diagonal.
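A MATLAB sketch of the KLT, for a data matrix X with one variable per row and one observation per column (X assumed given):

m  = size(X, 2);
Xc = X - repmat(mean(X, 2), 1, m);   % remove the mean of each variable
C  = (Xc * Xc') / (m - 1);           % sample covariance matrix
[B, D] = eig(C);                     % C = B*D*B' with orthonormal B
Y  = B' * Xc;                        % decorrelated components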
143
147
[Figure: covariance matrix C of an example image and the matrix U with its eigenvector columns.]
148
[Figure: matrix U′ with normalised KLT eigenvector columns, next to the matrix with Discrete Cosine Transform base vector columns.]

The Discrete Cosine Transform (DCT)

S(u) = (C(u)/√(N/2)) · Σ_{x=0}^{N−1} s(x) · cos((2x + 1)uπ / (2N))

with

C(u) = 1/√2 for u = 0,  C(u) = 1 for u > 0

is an orthonormal transform:

Σ_{x=0}^{N−1} (C(u)/√(N/2)) · cos((2x + 1)uπ/(2N)) · (C(u′)/√(N/2)) · cos((2x + 1)u′π/(2N)) = 1 for u = u′, 0 for u ≠ u′
150
The 2-dimensional variant of the DCT applies the 1-D transform on
both rows and columns of an image:
S(u, v) = (C(u)/√(N/2)) · (C(v)/√(N/2)) · Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} s(x, y) · cos((2x + 1)uπ/(2N)) · cos((2y + 1)vπ/(2N))

s(x, y) = Σ_{u=0}^{N−1} Σ_{v=0}^{N−1} (C(u)/√(N/2)) · (C(v)/√(N/2)) · S(u, v) · cos((2x + 1)uπ/(2N)) · cos((2y + 1)vπ/(2N))
A range of fast algorithms have been found for calculating 1-D and
2-D DCTs (e.g., Ligtenberg/Vetterli).
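The whole-image DCT experiments on the following slides can be sketched in MATLAB (dct2/idct2 from the Image Processing Toolbox; the file name is hypothetical):

I = double(imread('image.png'));
S = dct2(I);
t = sort(abs(S(:)));
S(abs(S) < t(round(0.8*end))) = 0;   % discard the smallest 80% of coefficients
I2 = idct2(S);                       % reconstructed approximation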
151
Whole-image DCT
[Figure: DCT coefficient magnitudes of an example image, logarithmic colour scale.]
152
Whole-image DCT, 80% coefficient cutoff
[Figure: reconstruction after setting the smallest 80% of the DCT coefficients to zero.]
153
Whole-image DCT, 90% coefficient cutoff

[Figure: reconstruction after setting the smallest 90% of the DCT coefficients to zero.]
154
Whole-image DCT, 95% coefficient cutoff
[Figure: reconstruction after setting the smallest 95% of the DCT coefficients to zero.]
155
[Figure: reconstruction after an even higher DCT coefficient cutoff.]
156
Base vectors of 8×8 DCT
[Figure: the 64 base images of the 8×8 DCT, indexed by horizontal frequency v = 0 . . . 7 and vertical frequency u = 0 . . . 7.]
157
158
The n-point Discrete Fourier Transform (DFT) can be viewed as a device that
sends an input signal through a bank of n non-overlapping band-pass filters, each
reducing the bandwidth of the signal to 1/n of its original bandwidth.
According to the sampling theorem, after a reduction of the bandwidth by 1/n,
the number of samples needed to reconstruct the original signal can equally be
reduced by 1/n. The DFT splits a wide-band signal represented by n input signals
into n separate narrow-band samples, each represented by a single sample.
A Discrete Wavelet Transform (DWT) can equally be viewed as such a frequency-
band splitting device. However, with the DWT, the bandwidth of each output signal
is proportional to the highest input frequency that it contains. High-frequency
components are represented in output signals with a high bandwidth, and therefore
a large number of samples. Low-frequency signals end up in output signals with
low bandwidth, and are correspondingly represented with a low number of samples.
As a result, high-frequency information is preserved with higher spatial resolution
than low-frequency information.
Both the DFT and the DWT are linear orthogonal transforms that preserve all
input information in their output without adding anything redundant.
As with the DFT, the 1-dimensional DWT can be extended to 2-D images by transforming both rows and columns (the order of which happens first is not relevant).
159
The four DAUB4 wavelet filter coefficients c0, . . . , c3 satisfy, among other conditions:

c3 − c2 + c1 − c0 = 0
0·c3 − 1·c2 + 2·c1 − 3·c0 = 0
162
Discrete Wavelet Transform compression
[Figure: images reconstructed after truncating 80% resp. 90% of the coefficients of a 2-D DAUB8 DWT.]
163
Psychophysics of perception
Sensation limit (SL) = lowest intensity stimulus that can still be perceived
Difference limit (DL) = smallest perceivable stimulus difference at given
intensity level
Weber’s law
Difference limit ∆φ is proportional to the intensity φ of the stimulus (except for a small correction constant a, to describe deviation of experimental results near SL):

∆φ = c · (φ + a)
Fechner’s scale
Define a perception intensity scale ψ using the sensation limit φ0 as
the origin and the respective difference limit ∆φ = c · φ as a unit step.
The result is a logarithmic relationship between stimulus intensity and
scale value:
φ
ψ = logc
φ0
164
Fechner’s scale matches older subjective intensity scales that follow
differentiability of stimuli, e.g. the astronomical magnitude numbers
for star brightness introduced by Hipparchos (≈150 BC).
Stevens’ law
A sound that is 20 DL over SL is perceived as more than twice as loud
as one that is 10 DL over SL, i.e. Fechner’s scale does not describe
well perceived intensity. A rational scale attempts to reflect subjective
relations perceived between different values of stimulus intensity φ.
Stevens observed that such rational scales ψ follow a power law:
ψ = k · (φ − φ0)^a
165
Decibel
Communications engineers often use logarithmic units:
→ Quantities often vary over many orders of magnitude → difficult
to agree on a common SI prefix
→ Quotient of quantities (amplification/attenuation) usually more
interesting than difference
→ Signal strength usefully expressed as field quantity (voltage,
current, pressure, etc.) or power, but quadratic relationship
between these two (P = U 2 /R = I 2 R) rather inconvenient
→ Weber/Fechner: perception is logarithmic
Plus: Using magic special-purpose units has its own odd attractions (→ typographers, navigators)

Power ratios are expressed as 10 · log10(P1/P0) dB and field-quantity ratios as 20 · log10(F1/F0) dB, so that both agree (P ∝ F²).
Cb = (B − Y)/2.0 + 0.5,  Cr = (R − Y)/1.6 + 0.5

[Figure: the (Cb, Cr) colour plane shown at several luminance values, e.g. Y = 0.7, 0.9, 0.99.]
173
Each curve represents a loudness level in phon. At 1 kHz, the loudness unit
phon is identical to dBSPL and 0 phon is the sensation limit.
174
Sound waves cause vibration in the eardrum. The three smallest human bones in
the middle ear (malleus, incus, stapes) provide an “impedance match” between air
and liquid and conduct the sound via a second membrane, the oval window, to the
cochlea. Its three chambers are rolled up into a spiral. The basilar membrane that
separates the two main chambers decreases in stiffness along the spiral, such that
the end near the stapes vibrates best at the highest frequencies, whereas for lower
frequencies that amplitude peak moves to the far end.
175
177
masking.wav
Twelve sequences, each with twelve probe-tone pulses and a 1200 Hz
masking tone during pulses 5 to 8.
Probing tone frequency and relative masking tone amplitude: probing tones at 700 Hz, 1300 Hz and 1900 Hz are each tested with masking tone amplitudes of 10 dB, 20 dB, 30 dB and 40 dB.
178
Audio demo: loudness.wav
[Figure: loudness-matching results (first and second series) plotted against the 0 dBA sensation-limit curve; sound pressure level in dB SPL over frequency from 40 Hz to 16 kHz.]
179
[Figure: further measurement results, dB SPL over frequency from 40 Hz to 16 kHz.]
180
Quantization
[Figure: staircase characteristics of a uniform/linear quantizer and of a non-uniform quantizer, over the input range −6 . . . 6.]

µ-law:

y = V · log(1 + µ|x|/V) / log(1 + µ) · sgn(x)   for −V ≤ x ≤ V
A-law:
y = A|x| / (1 + log A) · sgn(x)                   for 0 ≤ |x| ≤ V/A
y = V · (1 + log(A|x|/V)) / (1 + log A) · sgn(x)  for V/A ≤ |x| ≤ V
European digital telephone networks use A-law quantization (A = 87.6), North American ones
use µ-law (µ=255), both with 8-bit resolution and 8 kHz sampling frequency (64 kbit/s). [ITU-T
G.711]
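A MATLAB sketch of the µ-law compressor characteristic (input x in [−V, V] assumed given):

mu = 255; V = 1;
y = V * log(1 + mu*abs(x)/V) / log(1 + mu) .* sign(x);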
183
[Figure: µ-law (US) and A-law (Europe) compander characteristics over the signal voltage range −V . . . V.]
184
Joint Photographic Experts Group – JPEG
Working group “ISO/TC97/SC2/WG8 (Coded representation of picture and audio information)”
was set up in 1982 by the International Organization for Standardization.
Goals:
→ continuous tone gray-scale and colour images
→ recognizable images at 0.083 bit/pixel
→ useful images at 0.25 bit/pixel
→ excellent image quality at 0.75 bit/pixel
→ indistinguishable images at 2.25 bit/pixel
→ feasibility of 64 kbit/s (ISDN fax) compression with late 1980s
hardware (16 MHz Intel 80386).
→ workload equal for compression and decompression
The JPEG standard (ISO 10918) was finally published in 1994.
William B. Pennebaker, Joan L. Mitchell: JPEG still image compression standard. Van Nostrand Reinhold, New York, ISBN 0442012721, 1993.
Gregory K. Wallace: The JPEG Still Picture Compression Standard. Communications of the
ACM 34(4)30–44, April 1991, http://doi.acm.org/10.1145/103085.103089
185
186
→ Quantization: divide each DCT coefficient by the corresponding value from an 8×8 table, then round to the nearest integer:
The two standard quantization-matrix examples for luminance and chrominance are:
16 11 10 16 24 40 51 61 17 18 24 47 99 99 99 99
12 12 14 19 26 58 60 55 18 21 26 66 99 99 99 99
14 13 16 24 40 57 69 56 24 26 56 99 99 99 99 99
14 17 22 29 51 87 80 62 47 66 99 99 99 99 99 99
18 22 37 56 68 109 103 77 99 99 99 99 99 99 99 99
24 35 55 64 81 104 113 92 99 99 99 99 99 99 99 99
49 64 78 87 103 121 120 101 99 99 99 99 99 99 99 99
72 92 95 98 112 100 103 99 99 99 99 99 99 99 99 99
Zigzag scan order (horizontal frequency increases to the right, vertical frequency downwards):

 0  1  5  6 14 15 27 28
 2  4  7 13 16 26 29 42
 3  8 12 17 25 30 41 43
 9 11 18 24 31 40 44 53
10 19 23 32 39 45 52 54
20 22 33 38 46 51 55 60
21 34 37 47 50 56 59 61
35 36 48 49 57 58 62 63
After the 8×8 coefficients produced by the discrete cosine transform
have been quantized, the values are processed in the above zigzag order
by a run-length encoding step.
The idea is to group all higher-frequency coefficients together at the end of the sequence. As many
image blocks contain little high-frequency information, the bottom-right corner of the quantized
DCT matrix is often entirely zero. The zigzag scan helps the run-length coder to make best use
of this observation.
188
Huffman coding in JPEG
s value range
0 0
1 −1, 1
2 −3, −2, 2, 3
3 −7 . . . − 4, 4 . . . 7
4 −15 . . . − 8, 8 . . . 15
5 −31 . . . − 16, 16 . . . 31
6 −63 . . . − 32, 32 . . . 63
... ...
i   −(2^i − 1) . . . −2^{i−1}, 2^{i−1} . . . 2^i − 1
DCT coefficients have 11-bit resolution and would lead to huge Huffman
tables (up to 2048 code words). JPEG therefore uses a Huffman table only
to encode the magnitude category s = ⌈log2 (|v| + 1)⌉ of a DCT value v. A
sign bit plus the (s − 1)-bit binary value |v| − 2s−1 are appended to each
Huffman code word, to distinguish between the 2s different values within
magnitude category s.
When storing DCT coefficients in zigzag order, the symbols in the Huffman tree are actually
tuples (r, s), where r is the number of zero coefficients preceding the coded value (run-length).
189
In lossless mode, JPEG predicts each pixel x from up to three already-coded neighbours (a = left, b = above, c = above-left) using one of these predictors:

1: x = a
2: x = b
3: x = c
4: x = a + b − c
5: x = a + (b − c)/2
6: x = b + (a − c)/2
7: x = (a + b)/2

Predictor 1 is used for the top row, predictor 2 for the left-most column.
The predictor used for the rest of the image is chosen in a header. The
difference between the predicted and actual value is fed into either a
Huffman or arithmetic coder.
190
Advanced JPEG features
Beyond the baseline and lossless modes already discussed, JPEG provides these additional features:
→ 8 or 12 bits per pixel input resolution for DCT modes
→ 2–16 bits per pixel for lossless mode
→ progressive mode permits the transmission of more-significant
DCT bits or lower-frequency DCT coefficients first, such that
a low-quality version of the image can be displayed early during
a transmission
→ the transmission order of colour components, lines, as well as
DCT coefficients and their bits can be interleaved in many ways
→ the hierarchical mode first transmits a low-resolution image, followed by a sequence of differential layers that code the difference to the next higher resolution
Not all of these features are widely used today.
191
JPEG-2000 (JP2)
Processing steps:
194
JPEG2000 examples (DWT)
195
196
JPEG2000 examples (DWT)
197
→ adaptive quantization
→ SNR and spatially scalable coding (enables separate transmission of a moderate-quality video signal and an enhancement signal to reduce noise or improve resolution)
→ Predictive coding with motion compensation based on 16×16
macro blocks.
J. Mitchell, W. Pennebaker, Ch. Fogg, D. LeGall: MPEG video compression standard.
ISBN 0412087715, 1997. (CL library: I.4.20)
B. Haskell et al.: Digital Video: Introduction to MPEG-2. Kluwer Academic, 1997.
(CL library: I.4.27)
John Watkinson: The MPEG Handbook. Focal Press, 2001. (CL library: I.4.31)
199
Each MPEG image is split into 16×16-pixel large macroblocks. The predictor forms a linear combination of the content of one or two other blocks of the same size in a preceding (and following) reference image. The relative positions of these reference blocks are encoded along with the differences.
200
MPEG reordering of reference images
Display order of frames:
time
I B B B P B B B P B B B P
Coding order:
time
I P B B B P B B B P B B B
MPEG distinguishes between I-frames that encode an image independent of any others, P-frames that encode differences to a previous P- or I-frame, and B-frames that interpolate between the two neighbouring P- and/or I-frames. A frame has to be transmitted before the first B-frame that makes a forward reference to it. This requires the coding order to differ from the display order.
201
[Figure: encoder and decoder buffer content over time.]
MPEG can be used both with variable-bitrate (e.g., file, DVD) and fixed-bitrate (e.g., ISDN)
channels. The bitrate of the compressed data stream varies with the complexity of the input
data and the current quantization values. Buffers match the short-term variability of the encoder
bitrate with the channel bitrate. A control loop continuously adjusts the average bitrate via the
quantization values to prevent under- or overflow of the buffer.
The MPEG system layer can interleave many audio and video streams in a single data stream.
Buffers match the bitrate required by the codecs with the bitrate available in the multiplex and
encoders can dynamically redistribute bitrate among different streams.
MPEG encoders implement a 27 MHz clock counter as a timing reference and add its value as a
system clock reference (SCR) several times per second to the data stream. Decoders synchronize
with a phase-locked loop their own 27 MHz clock with the incoming SCRs.
Each compressed frame is annotated with a presentation time stamp (PTS) that determines when
its samples need to be output. Decoding timestamps specify when data needs to be available to
the decoder.
202
MPEG audio coding
Three different algorithms are specified, each increasing the processing
power required in the decoder.
Supported sampling frequencies: 32, 44.1 or 48 kHz.
Layer I
→ Waveforms are split into segments of 384 samples each (8 ms at 48 kHz).
→ Each segment is passed through an orthogonal filter bank that splits the
signal into 32 subbands, each 750 Hz wide (for 48 kHz).
This approximates the critical bands of human hearing.
→ Encoded frame contains bit allocation, scale factors and sub-band samples.
203
Layer II
Uses better encoding of scale factors and bit allocation information.
Unless there is significant change, only one out of three scale factors is transmitted. Explicit zero
code leads to odd numbers of quantization levels and wastes one codeword. Layer II combines
several quantized values into a granule that is encoded via a lookup table (e.g., 3 × 5 levels: 125
values require 7 instead of 9 bits). Layer II is used in Digital Audio Broadcasting (DAB).
Layer III
→ Modified DCT step decomposes subbands further into 18 or 6 frequencies
→ dynamic switching between MDCT with 36-samples (28 ms, 576 freq.)
and 12-samples (8 ms, 192 freq.)
enables control of pre-echos before sharp percussive sounds (Heisenberg)
→ non-uniform quantization
MPEG audio layer III is the widely used “MP3” music compression format.
204
Psychoacoustic models
MPEG audio encoders use a psychoacoustic model to estimate the spectral and temporal masking that the human ear will apply. The subband quantization levels are selected such that the quantization noise remains below the masking threshold in each subband.
The masking model is not standardized and each encoder developer can choose a different one. The steps typically involved are:
→ Fourier transform for spectral analysis
→ Group the resulting frequencies into “critical bands” within which
masking effects will not vary significantly
→ Distinguish tonal and non-tonal (noise-like) components
→ Apply masking function
→ Calculate threshold per subband
→ Calculate signal-to-mask ratio (SMR) for each subband
Masking is not linear and can be estimated accurately only if the actual sound pressure levels
reaching the ear are known. Encoder operators usually cannot know the sound pressure level
selected by the decoder user. Therefore the model must use worst-case SMRs.
205
Exercise 21 You adjust the volume of your 16-bit linearly quantizing soundcard, such that you can just about hear a 1 kHz sine wave with a peak amplitude of 200. What peak amplitude do you expect a 90 Hz sine wave will need, to appear equally loud (assuming ideal headphones)?
206
Outlook
Further topics that we have not covered in this brief introductory tour
through DSP, but for the understanding of which you should now have
a good theoretical foundation:
→ multirate systems
→ adaptive filters
→ sound effects
If you find a typo or mistake in these lecture notes, please notify [email protected].
207
. . . and perception
Count how many Fs there are in this text:
FINISHED FILES ARE THE RE-
SULT OF YEARS OF SCIENTIF-
IC STUDY COMBINED WITH THE
EXPERIENCE OF YEARS
208