DIGITAL IMAGE FUNDAMENTALS

Brightness Adaptation & Discrimination
The human visual system can perceive
approximately 10^10 different light intensity
levels.
However, at any one time we can only
discriminate between a much smaller
number – a phenomenon known as
brightness adaptation.
Similarly, the perceived intensity of a region
is related to the light intensities of the
regions surrounding it.
Brightness Adaptation & Discrimination
(cont…)

[Figure: Weber ratio plotted against intensity in millilamberts]

Perceived brightness is not a simple function of intensity: it overshoots
or undershoots at boundaries.
Brightness Adaptation & Discrimination
(cont…)
An example of Mach bands
Brightness Adaptation & Discrimination
(cont…)
An example of simultaneous contrast
Brightness Adaptation & Discrimination
(cont…)

Optical Illusions
Our visual systems play lots of interesting
tricks on us
Light And The Electromagnetic Spectrum
Light is just a particular part of the
electromagnetic spectrum that can be
sensed by the human eye
The electromagnetic spectrum is split up
according to the wavelengths of different
forms of energy
Reflected Light
The colours that we perceive are determined
by the nature of the light reflected from an
object
For example, if white light is shone onto a
green object, most wavelengths are
absorbed, while green light is reflected from
the object

[Figure: white light strikes a green object; most colours are absorbed, green light is reflected]
Digital Image
In the following slides we will consider what
is involved in capturing a digital image of a
real-world scene
– Image sensing and representation
– Sampling and quantisation
– Resolution
Image Representation
Before we discuss image acquisition, recall
that a digital image is composed of M rows
and N columns of pixels, each storing a value
Pixel values are most often grey levels in the
range 0–255 (black–white)
We will see later on that images can easily
be represented as matrices, with each pixel
addressed as f(row, col)
Image Acquisition
Images are typically generated by
illuminating a scene and absorbing the
energy reflected by the objects in that scene
– Typical notions of
illumination and
scene can be way off:
• X-rays of a skeleton
• Ultrasound of an
unborn baby
• Electro-microscopic
images of molecules
Image Sensing
Incoming energy lands on a sensor material
responsive to that type of energy and this
generates a voltage
Collections of sensors are arranged to
capture images
[Figure: a single imaging sensor, a line of image sensors, and an array of image sensors]
Image Sensing Using Sensor Strips and Rings
Image Sampling And Quantisation
A digital sensor can only measure a limited
number of samples at a discrete set of
energy levels
Quantisation is the process of converting a
continuous analogue signal into a digital
representation of this signal
Image Sampling And Quantisation
(cont…)
Digitizing the coordinate values is Sampling
Digitizing the amplitude values is
Quantization
Remember that a digital image is always
only an approximation of a real world
scene
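The two steps can be sketched in a few lines of NumPy (a minimal illustration with an assumed 1-D signal; real images are 2-D, but the idea is identical):

```python
import numpy as np

# A hypothetical "continuous" signal, approximated on a very fine grid.
fine = np.sin(np.linspace(0, np.pi, 1000))

# Sampling: digitize the coordinates by keeping every 100th point.
samples = fine[::100]

# Quantisation: digitize the amplitudes onto 4 discrete levels (2 bits).
levels = 4
quantised = np.round(samples * (levels - 1)).astype(int)

print(len(samples))                      # 10 samples
print(quantised.min(), quantised.max())  # levels range from 0 to 3
```

Both steps throw information away, which is exactly why the digital image can only approximate the scene.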
Spatial Resolution
The spatial resolution of an image is
determined by how sampling was carried out
Spatial resolution simply refers to the
smallest discernable detail in an image
– Vision specialists will
often talk about pixel
size
– Graphic designers will
talk about dots per
inch (DPI)

[Figure: a camera advertising "5.1 Megapixels"]
Spatial Resolution (cont…)

[Figure: the same image at 1024×1024, 512×512, 256×256, 128×128, 64×64, and 32×32 pixels]
Intensity Level Resolution
Intensity level resolution refers to the
number of intensity levels used to represent
the image
– The more intensity levels used, the finer the level of
detail discernable in an image
– Intensity level resolution is usually given in terms of
the number of bits used to store each intensity level
Number of Bits | Number of Intensity Levels | Examples
1              | 2                          | 0, 1
2              | 4                          | 00, 01, 10, 11
4              | 16                         | 0000, 0101, 1111
8              | 256                        | 00110011, 01010101
16             | 65,536                     | 1010101010101010
Intensity Level Resolution (cont…)

[Figure: the same image at 256 grey levels (8 bits per pixel), 128 (7 bpp), 64 (6 bpp), 32 (5 bpp), 16 (4 bpp), 8 (3 bpp), 4 (2 bpp), and 2 (1 bpp)]
Saturation & Noise
Resolution: How Much Is Enough?
The big question with resolution is always
how much is enough?
– This all depends on what is in the image and
what you would like to do with it
– Key questions include
• Does the image look aesthetically pleasing?
• Can you see what you need to see within the
image?
Resolution: How Much Is Enough?
(cont…)
The picture on the right is fine for counting
the number of cars, but not for reading the
number plate
Intensity Level Resolution (cont…)

[Figures: images with low, medium, and high detail at varying intensity resolutions]
Interpolation (cont…)
Review

Sampling: digitizing the coordinate values
Quantization: digitizing the amplitude values
Representing Digital Images

• The representation of an M×N numerical
array as

             | f(0,0)      f(0,1)      ...   f(0,N-1)   |
  f(x, y) =  | f(1,0)      f(1,1)      ...   f(1,N-1)   |
             | ...         ...         ...   ...        |
             | f(M-1,0)    f(M-1,1)    ...   f(M-1,N-1) |
Representing Digital Images

• The representation of an M×N numerical
array as

        | a_{0,0}      a_{0,1}      ...   a_{0,N-1}   |
  A  =  | a_{1,0}      a_{1,1}      ...   a_{1,N-1}   |
        | ...          ...          ...   ...         |
        | a_{M-1,0}    a_{M-1,1}    ...   a_{M-1,N-1} |
Representing Digital Images

• The representation of an M×N numerical
array in MATLAB (indices start at 1)

             | f(1,1)    f(1,2)    ...   f(1,N) |
  f(x, y) =  | f(2,1)    f(2,2)    ...   f(2,N) |
             | ...       ...       ...   ...    |
             | f(M,1)    f(M,2)    ...   f(M,N) |
Representing Digital Images

• Discrete intensity interval [0, L-1], L = 2^k
• The number b of bits required to store an M × N
digitized image:

  b = M × N × k
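For instance (values assumed purely for illustration), the formula gives the familiar 1 MB figure for a 1024×1024, 8-bit image:

```python
# b = M * N * k bits for an M x N image with k bits per pixel.
M, N, k = 1024, 1024, 8

L = 2 ** k          # number of intensity levels
b = M * N * k       # storage in bits

print(L)            # 256
print(b // 8)       # 1048576 bytes, i.e. 1 MB
```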
Representing Digital Images
Image Interpolation
• Interpolation — Process of using known data to
estimate unknown values
e.g., zooming, shrinking, rotating, and geometric correction
• Interpolation (sometimes called resampling) —
an imaging method to increase (or decrease) the number
of pixels in a digital image.
Some digital cameras use interpolation to produce a larger image than
the sensor captured or to create digital zoom
Image Interpolation:
Nearest Neighbor Interpolation

Each new location is assigned the value of its nearest
pixel in the original image:

  f1(x2, y2) = f(round(x2), round(y2)) = f(x1, y1)
  f1(x3, y3) = f(round(x3), round(y3)) = f(x1, y1)
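The rounding rule can be sketched as a small zoom routine (a NumPy sketch, not the textbook's code; back-projected coordinates falling outside the image are simply clipped):

```python
import numpy as np

def nearest_neighbor_zoom(img, factor):
    """Zoom a 2-D array: each output pixel takes the value of its
    nearest input pixel, found by rounding back-projected coordinates."""
    rows = np.round(np.arange(img.shape[0] * factor) / factor).astype(int)
    cols = np.round(np.arange(img.shape[1] * factor) / factor).astype(int)
    rows = rows.clip(0, img.shape[0] - 1)
    cols = cols.clip(0, img.shape[1] - 1)
    return img[np.ix_(rows, cols)]

img = np.array([[0, 100],
                [100, 200]])
big = nearest_neighbor_zoom(img, 2)
print(big.shape)    # (4, 4)
```

Because every output pixel copies one input pixel unchanged, the method is fast but produces the characteristic blocky artefacts of nearest-neighbor zooming.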
Image Interpolation:
Bilinear Interpolation

The value at (x, y) is a weighted average of its four
nearest neighbours:

  f2(x, y) = (1-a)(1-b) f(l, k) + a(1-b) f(l+1, k)
           + (1-a)b f(l, k+1) + ab f(l+1, k+1)

  where l = floor(x), k = floor(y), a = x - l, b = y - k.
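The formula translates almost directly into code (a sketch; valid for interior points where l+1 and k+1 stay inside the image):

```python
import numpy as np

def bilinear(img, x, y):
    """Evaluate the bilinear interpolation formula at real coordinates (x, y)."""
    l, k = int(np.floor(x)), int(np.floor(y))
    a, b = x - l, y - k
    return ((1 - a) * (1 - b) * img[l, k]
            + a * (1 - b) * img[l + 1, k]
            + (1 - a) * b * img[l, k + 1]
            + a * b * img[l + 1, k + 1])

img = np.array([[0.0, 10.0],
                [20.0, 30.0]])
print(bilinear(img, 0.5, 0.5))   # 15.0, the average of the four neighbours
```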
Image Interpolation:
Bicubic Interpolation

• The intensity value assigned to point (x, y) is obtained by
the following equation:

  f3(x, y) = Σ_{i=0}^{3} Σ_{j=0}^{3} a_{ij} x^i y^j

• The sixteen coefficients a_{ij} are determined by using the
sixteen nearest neighbors.
Examples: Interpolation
Basic Relationships Between Pixels
• Neighborhood
• Adjacency
• Connectivity
• Paths
• Regions and boundaries
Basic Relationships Between Pixels
• Neighbors of a pixel p at coordinates (x,y)
4-neighbors of p, denoted by N4(p):
(x-1, y), (x+1, y), (x,y-1), and (x, y+1).
4 diagonal neighbors of p, denoted by ND(p):
(x-1, y-1), (x+1, y+1), (x+1,y-1), and (x-1, y+1).
8 neighbors of p, denoted N8(p)
N8(p) = N4(p) U ND(p)
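These neighbourhoods can be written directly as small helper functions (a plain-Python sketch; pixels are (x, y) tuples and image borders are ignored):

```python
def N4(p):
    """4-neighbors of p: the horizontal and vertical neighbors."""
    x, y = p
    return {(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)}

def ND(p):
    """Diagonal neighbors of p."""
    x, y = p
    return {(x - 1, y - 1), (x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1)}

def N8(p):
    """8-neighbors: the union of N4(p) and ND(p)."""
    return N4(p) | ND(p)

print(sorted(N4((5, 5))))   # [(4, 5), (5, 4), (5, 6), (6, 5)]
```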
Basic Relationships Between Pixels
• Adjacency
Let V be the set of intensity values
4-adjacency: Two pixels p and q with values from V are
4-adjacent if q is in the set N4(p).
8-adjacency: Two pixels p and q with values from V are
8-adjacent if q is in the set N8(p).
Basic Relationships Between Pixels
• Adjacency
Let V be the set of intensity values
m-adjacency: Two pixels p and q with values from V are
m-adjacent if
(i) q is in the set N4(p), or
(ii) q is in the set ND(p) and the set N4(p) ∩ N4(q) has no pixels whose
values are from V.
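A sketch of the m-adjacency test (plain Python; the image is stored as a dict mapping (row, col) to intensity so off-image pixels are simply absent — the names and layout here are my own, not from the slides):

```python
def n4(p):
    x, y = p
    return {(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)}

def nd(p):
    x, y = p
    return {(x - 1, y - 1), (x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1)}

def m_adjacent(p, q, img, V):
    """True if p and q are m-adjacent given the intensity set V."""
    if img.get(p) not in V or img.get(q) not in V:
        return False
    if q in n4(p):
        return True
    # q diagonal to p, and no shared 4-neighbour carries a value from V.
    shared = {r for r in n4(p) & n4(q) if img.get(r) in V}
    return q in nd(p) and not shared

# A 3x3 example grid, as {(row, col): value}.
grid = {(1, 1): 0, (1, 2): 1, (1, 3): 1,
        (2, 1): 0, (2, 2): 2, (2, 3): 0,
        (3, 1): 0, (3, 2): 0, (3, 3): 1}
print(m_adjacent((1, 3), (2, 2), grid, {1, 2}))   # False: shared 4-neighbour (1,2) is in V
print(m_adjacent((1, 2), (2, 2), grid, {1, 2}))   # True: q is a 4-neighbour of p
```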
Basic Relationships Between Pixels
• Path
A (digital) path (or curve) from pixel p with coordinates (x0, y0) to pixel q
with coordinates (xn, yn) is a sequence of distinct pixels with coordinates

  (x0, y0), (x1, y1), …, (xn, yn)

where (xi, yi) and (xi-1, yi-1) are adjacent for 1 ≤ i ≤ n.
Here n is the length of the path.
If (x0, y0) = (xn, yn), the path is a closed path.
We can define 4-, 8-, and m-paths based on the type of adjacency used.
Examples: Adjacency and Path

V = {1, 2}

  0 1 1
  0 2 0
  0 0 1
Examples: Adjacency and Path

V = {1, 2}

  0 1 1
  0 2 0
  0 0 1
  8-adjacent
Examples: Adjacency and Path

V = {1, 2}

  0 1 1        0 1 1
  0 2 0        0 2 0
  0 0 1        0 0 1
  8-adjacent   m-adjacent
Examples: Adjacency and Path

V = {1, 2}

  (1,1) (1,2) (1,3)      0 1 1        0 1 1
  (2,1) (2,2) (2,3)      0 2 0        0 2 0
  (3,1) (3,2) (3,3)      0 0 1        0 0 1
  (pixel coordinates)    8-adjacent   m-adjacent

The 8-paths from (1,3) to (3,3):
  (i)  (1,3), (1,2), (2,2), (3,3)
  (ii) (1,3), (2,2), (3,3)

The m-path from (1,3) to (3,3):
  (1,3), (1,2), (2,2), (3,3)
Basic Relationships Between Pixels

• Connected in S
Let S represent a subset of pixels in an image. Two pixels
p with coordinates (x0, y0) and q with coordinates (xn, yn)
are said to be connected in S if there exists a path

  (x0, y0), (x1, y1), …, (xn, yn)

where, for every i with 0 ≤ i ≤ n, (xi, yi) ∈ S.
Basic Relationships Between Pixels
Let S represent a subset of pixels in an image
• For every pixel p in S, the set of pixels in S that are connected to p is
called a connected component of S.
• If S has only one connected component, then S is called a connected
set.
• We call R a region of the image if R is a connected set.
• Two regions, Ri and Rj, are said to be adjacent if their union forms a
connected set.
• Regions that are not adjacent are said to be disjoint.
Basic Relationships Between Pixels
• Boundary (or border)
The boundary of the region R is the set of pixels in the region that
have one or more neighbors that are not in R.
If R happens to be an entire image, then its boundary is defined as the
set of pixels in the first and last rows and columns of the image.
• Foreground and background
Suppose an image contains K disjoint regions, Rk, k = 1, 2, …, K. Let Ru
denote the union of all K regions, and let (Ru)c denote its complement.
All the points in Ru are called the foreground;
all the points in (Ru)c are called the background.
Question 1

• In the following arrangement of pixels, are the two
regions (of 1s) adjacent? (if 8-adjacency is used)

  1 1 1   Region 1
  1 0 1
  0 1 0
  0 0 1   Region 2
  1 1 1
  1 1 1
Question 2

• In the following arrangement of pixels, are the two
parts (of 1s) adjacent? (if 4-adjacency is used)

  1 1 1   Part 1
  1 0 1
  0 1 0
  0 0 1   Part 2
  1 1 1
  1 1 1
• In the following arrangement of pixels, the two regions
(of 1s) are disjoint (if 4-adjacency is used)

  1 1 1   Region 1
  1 0 1
  0 1 0
  0 0 1   Region 2
  1 1 1
  1 1 1
• In the following arrangement of pixels, the two regions
(of 1s) are disjoint (if 4-adjacency is used)

  1 1 1   foreground
  1 0 1
  0 1 0
  0 0 1   background
  1 1 1
  1 1 1
Question 3
• In the following arrangement of pixels, the circled
point is part of the boundary of the 1-valued pixels if 8-
adjacency is used, true or false?
0 0 0 0 0
0 1 1 0 0
0 1 1 0 0
0 1 1 1 0
0 1 1 1 0
0 0 0 0 0
Question 4
• In the following arrangement of pixels, the circled
point is part of the boundary of the 1-valued pixels if 4-
adjacency is used, true or false?
0 0 0 0 0
0 1 1 0 0
0 1 1 0 0
0 1 1 1 0
0 1 1 1 0
0 0 0 0 0
Distance Measures
• Given pixels p, q and z with coordinates (x, y), (s, t),
(u, v) respectively, the distance function D has the
following properties:
a. D(p, q) ≥ 0 [D(p, q) = 0 iff p = q]
b. D(p, q) = D(q, p)
c. D(p, z) ≤ D(p, q) + D(q, z)
Distance Measures
The following are the most common distance measures:

a. Euclidean distance:
   De(p, q) = [(x-s)^2 + (y-t)^2]^(1/2)

b. City-block distance:
   D4(p, q) = |x-s| + |y-t|

c. Chessboard distance:
   D8(p, q) = max(|x-s|, |y-t|)
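The three measures transcribe directly into code (a plain-Python sketch; p and q are (x, y) tuples):

```python
def D_e(p, q):
    """Euclidean distance."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def D_4(p, q):
    """City-block distance."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def D_8(p, q):
    """Chessboard distance."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(D_e(p, q), D_4(p, q), D_8(p, q))   # 5.0 7 4
```

Note that D4 and D8 depend only on the coordinates, not on whether a 4- or 8-path of that length actually exists between the pixels.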
Question 5
• In the following arrangement of pixels, what’s the
value of the chessboard distance between the circled
two points?
0 0 0 0 0
0 0 1 1 0
0 1 1 0 0
0 1 0 0 0
0 0 0 0 0
0 0 0 0 0
Question 6
• In the following arrangement of pixels, what’s the
value of the city-block distance between the circled
two points?
0 0 0 0 0
0 0 1 1 0
0 1 1 0 0
0 1 0 0 0
0 0 0 0 0
0 0 0 0 0
Question 7
• In the following arrangement of pixels, what’s the
value of the length of the m-path between the circled
two points?
0 0 0 0 0
0 0 1 1 0
0 1 1 0 0
0 1 0 0 0
0 0 0 0 0
0 0 0 0 0
Question 8
• In the following arrangement of pixels, what’s the
value of the length of the m-path between the circled
two points?
0 0 0 0 0
0 0 1 1 0
0 0 1 0 0
0 1 0 0 0
0 0 0 0 0
0 0 0 0 0
Introduction to Mathematical Operations in DIP

• Array vs. Matrix Operation

      | a11  a12 |        | b11  b12 |
  A = | a21  a22 |    B = | b21  b22 |

  Array product (operator .*):

           | a11*b11   a12*b12 |
  A .* B = | a21*b21   a22*b22 |

  Matrix product (operator *):

          | a11*b11 + a12*b21   a11*b12 + a12*b22 |
  A * B = | a21*b11 + a22*b21   a21*b12 + a22*b22 |
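In NumPy the same distinction appears as the `*` and `@` operators (a quick sketch with assumed values):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

array_product = A * B    # element-wise, MATLAB's A .* B
matrix_product = A @ B   # linear-algebra product, MATLAB's A * B

print(array_product)     # [[5, 12], [21, 32]]
print(matrix_product)    # [[19, 22], [43, 50]]
```

Unless stated otherwise, arithmetic between images means array (element-wise) operations.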
Introduction to Mathematical Operations in DIP

• Linear vs. Nonlinear Operation

  Let H be an operator with H[f(x, y)] = g(x, y). Then

    H[ai fi(x, y) + aj fj(x, y)]
      = H[ai fi(x, y)] + H[aj fj(x, y)]      (additivity)
      = ai H[fi(x, y)] + aj H[fj(x, y)]      (homogeneity)
      = ai gi(x, y) + aj gj(x, y)

  If this holds, H is said to be a linear operator;
  H is said to be a nonlinear operator if it does not meet the above
  qualification.
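A quick numerical check (example values assumed for illustration) shows why the sum of pixel values is a linear operator while the maximum is not:

```python
import numpy as np

# Two small example images and scalars (values chosen for illustration).
f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a1, a2 = 1, -1

# The sum-of-pixels operator is linear: both sides agree.
lhs_sum = np.sum(a1 * f1 + a2 * f2)
rhs_sum = a1 * np.sum(f1) + a2 * np.sum(f2)
print(lhs_sum, rhs_sum)      # -15 -15

# The max operator is nonlinear: the two sides differ.
lhs_max = np.max(a1 * f1 + a2 * f2)
rhs_max = a1 * np.max(f1) + a2 * np.max(f2)
print(lhs_max, rhs_max)      # -2 -4
```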
Arithmetic Operations
• Arithmetic operations between images are array
operations. The four arithmetic operations are denoted
as
s(x,y) = f(x,y) + g(x,y)
d(x,y) = f(x,y) – g(x,y)
p(x,y) = f(x,y) × g(x,y)
v(x,y) = f(x,y) ÷ g(x,y)
Example: Addition of Noisy Images for Noise Reduction
Noiseless image: f(x,y)
Noise: n(x,y) (at every pair of coordinates (x,y), the noise is uncorrelated
and has zero average value)
Corrupted image: g(x,y)
g(x,y) = f(x,y) + n(x,y)
Reducing the noise by averaging a set of noisy images,
{gi(x,y)}:

  g_bar(x, y) = (1/K) Σ_{i=1}^{K} gi(x, y)
Example: Addition of Noisy Images for Noise Reduction

  g_bar(x, y) = (1/K) Σ_{i=1}^{K} gi(x, y)

The expected value of the average is the noiseless image:

  E[g_bar(x, y)] = E[(1/K) Σ_{i=1}^{K} gi(x, y)]
                 = (1/K) Σ_{i=1}^{K} E[f(x, y) + ni(x, y)]
                 = f(x, y)

and the variance of the average shrinks with K:

  σ²_g_bar(x, y) = (1/K²) Σ_{i=1}^{K} σ²_ni(x, y) = (1/K) σ²_n(x, y)
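The 1/K variance reduction is easy to verify numerically (a NumPy sketch with an assumed constant image and Gaussian noise):

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)     # noiseless image (assumed constant here)
sigma = 20.0                     # noise standard deviation

def noisy():
    # Zero-mean, uncorrelated Gaussian noise added to f.
    return f + rng.normal(0.0, sigma, f.shape)

K = 100
g_bar = np.mean([noisy() for _ in range(K)], axis=0)

print(round(np.std(noisy() - f), 1))   # close to sigma = 20
print(round(np.std(g_bar - f), 1))     # close to sigma / sqrt(K) = 2
```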
Example: Addition of Noisy Images for Noise Reduction

► In astronomy, imaging under very low light
levels frequently causes sensor noise to render
single images virtually useless for analysis.

► In astronomical observations, the same scene is
therefore imaged repeatedly over long periods
of time, and image averaging is then used to
reduce the noise.
Weeks 1 & 2
An Example of Image Subtraction: Mask Mode Radiography
Mask h(x,y): an X-ray image of a region of a patient’s body
Live images f(x,y): X-ray images captured at TV rates after injection of
the contrast medium
Enhanced detail g(x,y)
g(x,y) = f(x,y) - h(x,y)
The procedure gives a movie showing how the contrast medium
propagates through the various arteries in the area being observed.
An Example of Image Multiplication
• Z = immultiply(X, Y)
– Multiplies each element in array X by the corresponding
element in array Y, and returns an array Z of the same size.
– Original image I, I * I, I * 0.5
Set and Logical Operations
• Let A be the elements of a gray-scale image.
The elements of A are triplets of the form (x, y, z), where x
and y are spatial coordinates and z denotes the intensity at
the point (x, y):

  A = {(x, y, z) | z = f(x, y)}

• The complement of A is denoted Ac:

  Ac = {(x, y, K - z) | (x, y, z) ∈ A}

  where K = 2^k - 1 and k is the number of intensity bits used to represent z.
Set and Logical Operations

• The union of two gray-scale images (sets) A and B is
defined as the set

  A ∪ B = {max_z(a, b) | a ∈ A, b ∈ B}
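With images stored as arrays, these set operations reduce to simple element-wise expressions (a sketch with assumed 8-bit values):

```python
import numpy as np

k = 8
K = 2 ** k - 1                   # 255 for 8-bit images

A = np.array([[10, 200],
              [120, 60]])
B = np.array([[30, 50],
              [250, 40]])

A_complement = K - A             # intensity complement (image negative)
A_union_B = np.maximum(A, B)     # grey-scale union: point-wise maximum

print(A_complement[0, 0])        # 245
print(A_union_B[1, 0])           # 250
```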