EC8093 Unit 2

The document discusses various image enhancement techniques in the spatial and frequency domains. It begins by introducing common types of image degradation and the goal of image enhancement. In the spatial domain, techniques include point processing methods like gray level transformations, mask processing using filters, and global operations like histogram equalization. Gray level transformations map pixel values, including techniques like negatives, log, power law, and piecewise linear. Histogram processing aids contrast. The frequency domain approach operates on the Fourier transform of the image.


EC8093 DIGITAL IMAGE PROCESSING

UNIT 2 IMAGE ENHANCEMENT


PART 1
CONTENTS

Image Enhancement Techniques

Spatial Domain Method

Histogram Methods

Frequency Domain Method


INTRODUCTION
• Images may suffer from the following degradations:
  ➢ Poor contrast due to poor illumination or finite sensitivity of the imaging device
  ➢ Electronic sensor noise or atmospheric disturbances leading to broadband noise
  ➢ Aliasing effects due to inadequate sampling
  ➢ Finite aperture effects or motion leading to spatial blur
INTRODUCTION

• Image enhancement is the process of making images more useful by:
  ➢ Highlighting interesting detail in images
  ➢ Making images more visually appealing
  ➢ Removing noise from images

Image Enhancement Examples
Image Enhancement Examples (cont…)
Image Enhancement Examples (cont…)
Image Enhancement Methods

Enhancement techniques fall into two classes:
• Spatial domain: operates directly on the pixels of the image
• Frequency domain: operates on the Fourier transform (FT) of the image
Chapter 2.1
SPATIAL DOMAIN
APPROACHES
SPATIAL DOMAIN APPROACHES

• Direct manipulation of the pixels in an image.
• Spatial domain processing is denoted by the expression
      g(x,y) = T[f(x,y)]
  where f(x,y) -> input image
        g(x,y) -> enhanced image
        T -> an operator on f, defined over some neighborhood of (x,y)
Image Enhancement in the Spatial
Domain
Spatial Domain Methods

Spatial domain techniques are grouped into three classes:
• Point processing
• Mask processing
• Global operation
Point Processing
The simplest range transformations are those independent of position (x,y):
      g = T(f)
The output at a point depends only on the gray level at that point.
Ex: Gray-level transformations
MASK PROCESSING
Each pixel is modified according to the values in a predefined neighborhood.
This approach is based on the use of masks (also called filters, kernels, templates, or windows).
Ex: Smoothing and sharpening spatial filters
GLOBAL OPERATION

All pixel values in the image are taken into consideration for the enhancement process.
Ex: Histogram equalization
Types of Spatial Domain Enhancement Techniques

• Gray-level (or intensity) transformation
• Histogram processing
• Spatial filtering
• Enhancement using arithmetic and logic operations
Gray Level Transformations

• Each pixel value in the original image is mapped onto a new pixel value to obtain the enhanced image.
• A gray-level transformation is represented as
      s = T(r)
  where 'r' -> pixel value before processing
        's' -> pixel value after processing
        'T' -> transformation that maps a pixel value 'r' to a pixel value 's'
• These transformations do not modify the spatial relationship between the pixels.
Ex: Contrast stretching
Types of Gray level Transformation

• Image negative
• Log transformation
• Power-law transformation
• Piecewise linear transformation
Image Negatives

The negative of an image with gray levels in the range[0,L-1] is obtained by using
the negative transformation.

The negative Transformation is given by the expression,


s=(L-1)-r
Where L=Number of gray levels in the image,
L-1= Maximum gray level value

Since the input image is an 8-bpp image, the number of gray levels is 256. Putting L = 256 in the equation gives
      s = 255 - r
so each pixel value is subtracted from 255.
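The negative transformation can be sketched with NumPy (an illustrative sketch; the function name and the tiny test image are ours, not from the notes):

```python
import numpy as np

def negative(image, L=256):
    """Negative transformation: map each gray level r to s = (L-1) - r."""
    return (L - 1) - image

# 8-bpp example: dark pixels become light and vice versa
img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
neg = negative(img)   # [[255, 191], [127, 0]]
```

Applying the transformation twice returns the original image, since T(T(r)) = r.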
Digital Negative

The lighter pixels become dark and the darker pixels become light, resulting in the image negative.
[Figure: plot of output gray level s versus input gray level r, showing the negative transformation s = (L-1) - r and the identity transformation s = r.]
LOG TRANSFORMATION
The log transformation is defined by the formula
      s = c log(1 + r)
where s and r are the pixel values of the output and input image and c is a constant.
LOG TRANSFORMATION FUNCTION
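A minimal NumPy sketch of the log transformation, with c chosen so the output spans the full gray scale (the function name and sample values are ours):

```python
import numpy as np

def log_transform(image, L=256):
    """s = c*log(1 + r), with c chosen so the output spans [0, L-1]."""
    r = image.astype(np.float64)
    c = (L - 1) / np.log(1.0 + r.max())
    return c * np.log(1.0 + r)

img = np.array([[0, 10], [100, 255]], dtype=np.uint8)
s = log_transform(img)   # dark values are boosted: 10 maps to about 110
```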
Power-Law Transformations
Power-law transformations include the nth-power and nth-root transformations. They are given by the expression
      s = c r^γ
The symbol γ is called gamma, which is why this transformation is also known as the gamma transformation.
For example, the gamma of a CRT lies between 1.8 and 2.5, which means the image displayed on a CRT appears dark.
Why power laws are popular?
• A cathode ray tube (CRT), for example, converts a video signal to light in a nonlinear way: the light intensity I is proportional to a power (γ) of the source voltage V_S.
• For a computer CRT, γ is about 2.2.
• Viewing images properly on monitors requires γ-correction.
POWER LAW TRANSFORMATION FUNCTION

[Example figures: the same image displayed with gamma = 10, gamma = 8, and gamma = 6.]
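The power-law transformation can be sketched as follows (a sketch, not the notes' own code; the normalization to [0,1] before applying the power is our choice):

```python
import numpy as np

def gamma_transform(image, gamma, c=1.0, L=256):
    """s = c * r^gamma, applied on intensities normalized to [0, 1]."""
    r = image.astype(np.float64) / (L - 1)
    s = c * np.power(r, gamma)
    return np.clip(s * (L - 1), 0, L - 1).astype(np.uint8)

img = np.array([[0, 128], [192, 255]], dtype=np.uint8)
brighter = gamma_transform(img, 1 / 2.2)   # gamma < 1 lifts mid-tones (CRT correction)
darker = gamma_transform(img, 2.2)         # gamma > 1 darkens mid-tones
```

Note that black and white (r = 0 and r = L-1) are fixed points of the transformation; only the mid-tones move.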
Piecewise linear transformation
• Principal advantage: some important transformations can be formulated only as piecewise functions.
• Principal disadvantage: their specification requires more user input than the previous transformations.
• Types: contrast stretching, gray-level slicing, and bit-plane slicing.
Contrast stretching

➢ Low-contrast images result from the following:
   a) Poor illumination
   b) Lack of dynamic range in the imaging sensor
   c) Wrong setting of the lens aperture during image acquisition
➢ Contrast stretching is a process that increases the dynamic range of the gray levels in the image being processed.
Contrast Stretching

Typical transformation used for contrast stretching: thresholding function
• If r1 = s1 and r2 = s2, the transformation is a linear function that produces no change in gray levels.
• Intermediate values of (r1, s1) and (r2, s2) produce various degrees of spread in the gray levels of the output image, thus affecting its contrast.
Contrast Stretching
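The piecewise-linear stretch through the two control points can be sketched with `np.interp` (a sketch; the function name and test values are ours):

```python
import numpy as np

def contrast_stretch(image, r1, s1, r2, s2, L=256):
    """Piecewise-linear mapping through (0,0), (r1,s1), (r2,s2), (L-1,L-1)."""
    out = np.interp(image.astype(np.float64), [0, r1, r2, L - 1], [0, s1, s2, L - 1])
    return out.astype(np.uint8)

# A low-contrast image confined to [80, 160] gets spread toward [0, 255]
img = np.array([[80, 100], [120, 160]], dtype=np.uint8)
stretched = contrast_stretch(img, r1=90, s1=20, r2=150, s2=235)
```

Setting r1 = s1 and r2 = s2 reproduces the identity mapping, as the slide states.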
Gray-level Slicing
• Highlighting a specific range of gray levels in a given image.
  Ex: Enhancing the flaws in an X-ray image.
• Equivalent to band-pass filtering. There are two approaches:
  – One approach is to highlight all gray levels in the range of interest while diminishing all the others. The background information is not preserved in this method, and it produces a binary image.
  – The second approach brightens the desired range of gray levels but also preserves the background information.
Gray level Slicing
Bit-plane slicing

• Pixels are digital numbers composed of bits. For example, the intensity of each pixel in a 256-level gray-scale image is composed of 8 bits (i.e., one byte).
• Instead of highlighting intensity-level ranges, we can highlight the contribution made to total image appearance by specific bits.
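Extracting the individual bit planes is a one-line bit operation per plane (a minimal sketch; the function name and test values are ours):

```python
import numpy as np

def bit_plane(image, k):
    """Extract bit plane k (0 = least significant, 7 = most significant) as a 0/1 image."""
    return (image >> k) & 1

img = np.array([[0b10110100, 0b00000001]], dtype=np.uint8)
planes = [bit_plane(img, k) for k in range(8)]
# The image is recovered exactly by summing the weighted planes
recon = sum(p.astype(np.uint16) << k for k, p in enumerate(planes))
```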
Piecewise linear Transformation Functions
Bit-plane Slicing: Fractal image

[Figure: the eight bit planes of the fractal image, labeled 7 (MSB) down to 0 (LSB).]
Chapter 2.2
HISTOGRAM PROCESSING
Histogram
• The histogram of an image, like other histograms, shows frequency; an image histogram shows the frequency of pixel intensity values.
• In an image histogram, the x-axis shows the gray-level intensities and the y-axis shows the frequency of these intensities.
• The gray-level histogram is a function showing, for each gray level, the number of pixels in the image that have that gray level:
      n_k = hist[k] = number of pixels with f(x,y) = k
• Normalized histogram (probability):
      p_k = n_k / N
  where N is the total number of pixels.
• Applications: X-ray of a bone of a body, computer vision.
Histogram Example
Types of image based on histogram components
Histogram Equalization

• Histogram equalization:
  – To improve the contrast of an image
  – To transform an image in such a way that the transformed image has a nearly uniform distribution of pixel values
• Transformation:
  – Assume r has been normalized to the interval [0,1], with r = 0 representing black and r = 1 representing white:
        s = T(r),   0 ≤ r ≤ 1
  – The transformation function satisfies the following conditions:
    • T(r) is single-valued and monotonically increasing in the interval 0 ≤ r ≤ 1
    • 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1
Histogram Equalization

• For Example:
Histogram Equalization

• Histogram equalization is based on a transformation of the probability density function of a random variable.
• Let p_r(r) and p_s(s) denote the probability density functions of the random variables r and s, respectively.
• If p_r(r) and T(r) are known, the probability density function p_s(s) of the transformed variable s can be obtained:
      p_s(s) = p_r(r) |dr/ds|
• Define the transformation function
      s = T(r) = ∫[0,r] p_r(w) dw
  where w is a dummy variable of integration and the right side of this equation is the cumulative distribution function of the random variable r.

Histogram Equalization

• Given the transformation function T(r) = ∫[0,r] p_r(w) dw,
      ds/dr = dT(r)/dr = d/dr ∫[0,r] p_r(w) dw = p_r(r)   (using Leibniz's rule)
• Leibniz's rule states that the derivative of an integral with respect to its upper limit is simply the integrand evaluated at that limit.
• Therefore,
      p_s(s) = p_r(r) |dr/ds| = p_r(r) · 1/p_r(r) = 1,   0 ≤ s ≤ 1
• p_s(s) is now a uniform probability density function.
• T(r) depends on p_r(r), but the resulting p_s(s) is always uniform.
Histogram Equalization

• In the discrete version:
  – The probability of occurrence of gray level r_k in an image is
        p_r(r_k) = n_k / n,   k = 0, 1, 2, ..., L-1
    where n is the total number of pixels in the image, n_k is the number of pixels that have gray level r_k, and L is the total number of possible gray levels in the image.
  – The transformation function is
        s_k = T(r_k) = Σ[j=0 to k] p_r(r_j) = Σ[j=0 to k] n_j / n,   k = 0, 1, 2, ..., L-1
  – Thus, an output image is obtained by mapping each pixel with level r_k in the input image into a corresponding pixel with level s_k.
Histogram Equalization

Example:

We are looking for this transformation!
Advantages
• Very simple computation.
• It is fully automatic, i.e., histogram equalization is based only on information that can be directly extracted from the given image.
• Histogram equalization produces an image with gray-level values that cover the entire gray scale.
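The discrete transformation s_k = (L-1) Σ n_j / n can be sketched directly (a sketch; the function name and random test image are ours):

```python
import numpy as np

def equalize(image, L=256):
    """Discrete histogram equalization: s_k = (L-1) * sum_{j<=k} n_j / n."""
    hist = np.bincount(image.ravel(), minlength=L)
    cdf = np.cumsum(hist) / image.size              # running sum of p_r(r_j)
    s = np.round((L - 1) * cdf).astype(np.uint8)    # the mapping r_k -> s_k
    return s[image]

# Low-contrast input: all pixels confined to the four levels 100..103
img = np.random.default_rng(0).integers(100, 104, size=(64, 64)).astype(np.uint8)
eq = equalize(img)   # the four occupied levels spread out over [0, 255]
```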
Histogram Matching (Histogram Specification)

• Histogram equalization does not allow interactive image enhancement and generates only one result: an approximation to a uniform histogram.
• Sometimes, though, we need to be able to specify particular histogram shapes capable of highlighting certain gray-level ranges.
PROCEDURE:
1. Obtain the histogram of the given image and equalize it:
      s_k = T(r_k) = Σ[j=0 to k] p_r(r_j) = Σ[j=0 to k] n_j / n,   k = 0, 1, ..., L-1
   where n is the total number of pixels, n_j is the number of pixels with gray level r_j, and L is the number of discrete gray levels.
2. Determine G, where p_z(z_i) is the specified (desired) histogram:
      v_k = G(z_k) = Σ[i=0 to k] p_z(z_i)
3. Find the value z_k for each value s_k, using an iterative scheme such that G(z_k) - s_k = 0.
4. For each pixel, map r_k to the corresponding s_k, and map that to the corresponding z_k value:
      r_k → s_k → z_k
Histogram matching Example
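The four-step procedure can be sketched as follows; instead of the iterative scheme, this sketch finds each z_k by a nearest-CDF lookup, which is equivalent for monotone G (the function name and the linearly ramped target histogram are ours):

```python
import numpy as np

def match_histogram(image, target_pdf, L=256):
    """For each level r_k: s_k = T(r_k); pick the z_k whose specified CDF G(z_k) is closest to s_k."""
    hist = np.bincount(image.ravel(), minlength=L)
    s = np.cumsum(hist) / image.size       # step 1: equalization CDF T(r_k) of the input
    G = np.cumsum(target_pdf)              # step 2: CDF of the specified histogram
    z = np.abs(G[None, :] - s[:, None]).argmin(axis=1)   # step 3: G(z_k) ~ s_k
    return z.astype(np.uint8)[image]       # step 4: r_k -> s_k -> z_k

rng = np.random.default_rng(1)
img = rng.integers(0, 64, size=(32, 32)).astype(np.uint8)  # dark input
target = np.linspace(1.0, 2.0, 256)
target /= target.sum()                     # specified pdf, weighted toward bright levels
out = match_histogram(img, target)
```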
LOCAL ENHANCEMENT

• The histogram processing methods discussed above are global, in the sense that pixels are modified by a transformation function based on the gray-level content of the entire image.
• However, there are cases in which it is necessary to enhance details over small areas in an image.
Procedure:
1. Define a square or rectangular neighborhood and move the centre of this area from pixel to pixel.
2. At each location, the histogram of the points in the neighborhood is computed and either a histogram equalization or a histogram specification transformation is obtained.
Contd…

3. This function is finally used to map the gray level of the pixel
centered in the neighborhood.
4. Move the centre of the neighborhood and repeat the procedure.
HISTOGRAM STATISTICS FOR IMAGE ENHANCEMENT

• Moments can be determined directly from a histogram much faster than they can from the pixels directly.
• Let r denote a discrete random variable representing discrete gray levels in the range [0, L-1], and let p(r_i) denote the normalized histogram component corresponding to the ith value of r. The nth moment of r about its mean is defined as
      μ_n(r) = Σ[i=0 to L-1] (r_i - m)^n p(r_i)
  where m is the mean value of r:
      m = Σ[i=0 to L-1] r_i p(r_i)
• For example, the second moment (also the variance of r) is
      σ²(r) = μ_2(r) = Σ[i=0 to L-1] (r_i - m)² p(r_i)
Contd….
• Two uses of the mean and variance for enhancement purposes:
  – The global mean and variance (computed over the entire image) are useful for adjusting overall contrast and intensity.
  – The mean and standard deviation of a local region are useful for correcting large-scale changes in intensity (mean) and contrast (variance) in that neighborhood.
➢ The local mean of the pixels in region S_xy can be computed as
      m_Sxy = Σ[(s,t)∈S_xy] r_(s,t) p(r_(s,t))
➢ The local gray-level variance of the pixels in region S_xy can be computed as
      σ²_Sxy = Σ[(s,t)∈S_xy] (r_(s,t) - m_Sxy)² p(r_(s,t))
Use of histogram statistics for image enhancement
Example : Enhancement based on Local statistics
Image Enhancement Using Arithmetic and
Logical Operations

• Two images of the same size can be combined using


operations of addition, subtraction, multiplication, division,
logical AND, OR, XOR and NOT. Such operations are done
on pairs of their corresponding pixels.
• Often only one of the images is a real picture while the other is
a machine generated mask. The mask often is a binary image
consisting only of pixel values 0 and 1.
Image Enhancement Using Arithmetic and Logical Operations

[Example figures: masking with logical AND and logical OR.]
Image Subtraction
Example 1:

• When subtracting two images, negative pixel values can result. So, if you want to display the result, it may be necessary to readjust the dynamic range by scaling.
Image Averaging

• When taking pictures in reduced lighting (i.e., low illumination), image noise becomes apparent.
• A noisy image g(x,y) can be defined by
      g(x,y) = f(x,y) + η(x,y)
  where f(x,y) is the original image and η(x,y) is the additive noise.
• One simple way to reduce this granular noise is to take several identical pictures and average them, thus smoothing out the randomness.
Noise Reduction by Image Averaging
Example: Adding Gaussian Noise

• Figure 3.30 (a): An image of Galaxy Pair NGC 3314.
• Figure 3.30 (b): Image corrupted by additive Gaussian noise with zero mean and a standard deviation of 64 gray levels.
• Figure 3.30 (c)-(f): Results of averaging K = 8, 16, 64, and 128 noisy images.
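The averaging effect is easy to demonstrate synthetically (a sketch with a flat stand-in image and σ = 20, values of our choosing): averaging K noisy frames reduces the noise standard deviation by a factor of √K.

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)   # stand-in for the noise-free image

# K noisy observations g_i = f + eta_i, with zero-mean Gaussian noise (sigma = 20)
K = 64
noisy = [f + 20.0 * rng.standard_normal(f.shape) for _ in range(K)]

# Averaging reduces the noise standard deviation by a factor of sqrt(K) = 8
avg = np.mean(noisy, axis=0)
```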
Chapter 2.3
BASICS OF SPATIAL FILTERING
(also called spatial averaging or mask processing)
Basics of spatial filtering
• In spatial filtering (vs. frequency-domain filtering), the output image is computed directly by simple calculations on the pixels of the input image.
• Spatial filtering can be either linear or non-linear.
• For each output pixel, some neighborhood of input pixels is used in the computation.
• In general, linear filtering of an image f of size M x N with a filter mask of size m x n is given by
      g(x,y) = Σ[s=-a to a] Σ[t=-b to b] w(s,t) f(x+s, y+t)
  where a = (m-1)/2 and b = (n-1)/2.
• This concept is called convolution; filter masks are sometimes called convolution masks or convolution kernels. The response at a point can also be written as
      R = Σ[i=1 to mn] w_i z_i
  where w_i are the mask coefficients and z_i are the corresponding image gray levels.
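The double sum above can be sketched directly; border pixels are handled here by edge replication, one of the special procedures the notes mention later (the function name and impulse test image are ours):

```python
import numpy as np

def linear_filter(image, mask):
    """g(x,y) = sum_s sum_t w(s,t) f(x+s, y+t); borders handled by edge replication."""
    m, n = mask.shape
    a, b = (m - 1) // 2, (n - 1) // 2
    p = np.pad(image.astype(np.float64), ((a, a), (b, b)), mode='edge')
    out = np.zeros(image.shape)
    for s in range(m):          # accumulate one shifted, weighted copy per coefficient
        for t in range(n):
            out += mask[s, t] * p[s:s + image.shape[0], t:t + image.shape[1]]
    return out

# 3x3 box mask (all coefficients 1/9) applied to a single impulse
box = np.full((3, 3), 1.0 / 9.0)
img = np.zeros((5, 5))
img[2, 2] = 9.0
smooth = linear_filter(img, box)   # the impulse spreads into a 3x3 block of ones
```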


Spatial filters
• Types:
  – Smoothing spatial filters: blurring and noise reduction in images
    • Linear smoothing spatial filters
    • Non-linear smoothing spatial filters
  – Sharpening spatial filters: highlighting fine details in an image
Basics of spatial filtering

• Nonlinear spatial filtering usually uses a neighborhood too, but some other mathematical operations are used. These can include conditional operations (if ..., then ...), statistical operations (sorting pixel values in the neighborhood), etc.
• Because the neighborhood includes pixels on all sides of the center pixel, some special procedure must be used along the top, bottom, left and right sides of the image so that the processing does not try to use pixels that do not exist.
Linear Smoothing Spatial filters

• Smoothing linear filters output the average of the pixels contained in the neighborhood of the filter mask.
• Averaging filters:
  – Box filter: all mask coefficients are equal, so for a 3x3 mask the response is
        R = (1/9) Σ[i=1 to 9] z_i
  – Weighted average filter: the coefficients are weighted, typically giving more importance to the center pixel.
Linear Smoothing Spatial filters

• The general implementation for filtering an M x N image with a weighted averaging filter of size m x n is given by
      g(x,y) = [ Σ[s=-a to a] Σ[t=-b to b] w(s,t) f(x+s, y+t) ] / [ Σ[s=-a to a] Σ[t=-b to b] w(s,t) ]
  where a = (m-1)/2 and b = (n-1)/2.
Non linear Smoothing Filters
• Based on ranking the pixels contained in the image area encompassed by the filter, and then replacing the value of the center pixel with the value determined by the ranking result.
• TYPES:
  – Median filters
  – Max and min filters
  – Mid-point filters
  – Alpha-trimmed filters
Applications
• Removal of random noise
• Smoothing of False contours
• Reduction of irrelevant details in an image
• Blurring an image to highlight the objects of interest.
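The median filter, the most common of these rank-based filters, can be sketched as follows (a sketch; the loop-based implementation and the salt-and-pepper test image are ours):

```python
import numpy as np

def median_filter(image, size=3):
    """Replace each pixel by the median of its size x size neighborhood (edges replicated)."""
    pad = size // 2
    p = np.pad(image, pad, mode='edge')
    out = np.empty_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.median(p[i:i + size, j:j + size])
    return out

# Salt-and-pepper impulses on a flat gray region
img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255   # salt
img[1, 3] = 0     # pepper
clean = median_filter(img)   # both impulses are removed; the flat region is untouched
```

Unlike the averaging filter, the median rejects the impulses completely instead of smearing them into their neighbors.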
Smoothing Spatial filters
Image smoothing With masks of various sizes
Smoothing Spatial filters
Another Examples
Sharpening Spatial Filters
• Highlight fine details in an image; achieved by spatial differentiation.
• Derivative filters are used for sharpening.
• TYPES:
  – First-order derivative filters (produce thicker edges in an image)
  – Second-order derivative filters (respond to fine details in the image)
Order-statistic filters

• Order-statistic filters
– Median filter: to reduce impulse noise (salt-and-
pepper noise)
Use of second Order derivatives for enhancement (or)
Laplacian Operators

• Development of the Laplacian method
  – The two-dimensional Laplacian operator for continuous functions:
        ∇²f = ∂²f/∂x² + ∂²f/∂y²
  – The Laplacian is a linear operator.
  – Discrete approximations:
        ∂²f/∂x² = f(x+1, y) + f(x-1, y) - 2f(x, y)
        ∂²f/∂y² = f(x, y+1) + f(x, y-1) - 2f(x, y)
        ∇²f = [f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1)] - 4f(x, y)
Use of second Order derivatives for enhancement (or)
Laplacian Operators
Use of second Order derivatives for enhancement (or)
Laplacian Operators

• To sharpen an image, the Laplacian of the image is subtracted


from the original image.
 f ( x, y ) −  2 f if the center coefficient of the Laplacian mask is negative.
g ( x, y ) = 
 f ( x, y ) +  f if the center coefficient of the Laplacian mask is positive.
2

→Eq.(3-7-5)
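Laplacian sharpening with the 4-neighbor discrete approximation above can be sketched as follows (a sketch; the step-edge test image and edge-replication border handling are ours):

```python
import numpy as np

def laplacian_sharpen(image):
    """g = f - lap(f) for the 4-neighbor Laplacian (center coefficient negative)."""
    f = image.astype(np.float64)
    p = np.pad(f, 1, mode='edge')
    # f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4 f(x,y)
    lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * f
    return np.clip(f - lap, 0, 255).astype(np.uint8)

# A vertical step edge: sharpening overshoots on both sides of the transition
img = np.zeros((5, 6), dtype=np.uint8)
img[:, 3:] = 100
sharp = laplacian_sharpen(img)
```

Flat regions are left unchanged (the Laplacian is zero there); the edge is emphasized by overshoot on the bright side and undershoot on the dark side.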
Use of First Order derivatives for enhancement (or)
Gradient Operators

• Development of the Gradient method
  – The gradient of the function f at coordinates (x,y) is defined as the two-dimensional column vector:
        ∇f = [Gx, Gy]^T = [∂f/∂x, ∂f/∂y]^T
  – The magnitude of this vector is given by
        |∇f| = mag(∇f) = [Gx² + Gy²]^(1/2) = [(∂f/∂x)² + (∂f/∂y)²]^(1/2)
  – A common approximation is
        |∇f| ≈ |Gx| + |Gy|
Robert’s Cross Operator
• The simplest approximations to a first-order derivative that satisfy the required conditions are
      Gx = (z9 - z5)   and   Gy = (z8 - z6)
  computed over the 3x3 region
      z1 z2 z3
      z4 z5 z6
      z7 z8 z9
• The gradient magnitude is
      |∇f| = [(z9 - z5)² + (z8 - z6)²]^(1/2) ≈ |z9 - z5| + |z8 - z6|
• Roberts cross-gradient operators:
      Gx:  -1  0      Gy:   0 -1
            0  1            1  0
Sobel’s Operator
• Masks of even size are awkward to apply; the smallest filter mask should be 3x3.
• The difference between the third and first rows of the 3x3 image region approximates the derivative in the x-direction, and the difference between the third and first columns approximates the derivative in the y-direction:
      |∇f| ≈ |(z7 + 2z8 + z9) - (z1 + 2z2 + z3)| + |(z3 + 2z6 + z9) - (z1 + 2z4 + z7)|
• Sobel masks:
      Gx:  -1 -2 -1      Gy:  -1  0  1
            0  0  0           -2  0  2
            1  2  1           -1  0  1
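The Sobel gradient magnitude (using the |Gx| + |Gy| approximation) can be sketched with shifted array slices (a sketch; the vertical-edge test image and edge-replication border handling are ours):

```python
import numpy as np

def sobel_gradient(image):
    """|grad f| ~ |Gx| + |Gy| using the 3x3 Sobel masks (edges replicated)."""
    f = image.astype(np.float64)
    p = np.pad(f, 1, mode='edge')
    # Gx: third row minus first row of each 3x3 window, center pixels weighted by 2
    gx = (p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:]) - (p[:-2, :-2] + 2 * p[:-2, 1:-1] + p[:-2, 2:])
    # Gy: third column minus first column, center pixels weighted by 2
    gy = (p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]) - (p[:-2, :-2] + 2 * p[1:-1, :-2] + p[2:, :-2])
    return np.abs(gx) + np.abs(gy)

# A vertical edge: the gradient responds only at the transition
img = np.zeros((5, 6), dtype=np.uint8)
img[:, 3:] = 50
grad = sobel_gradient(img)
```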
Prewitt’s Operator
• Masks of even size are awkward to implement; the smallest odd-sized filter mask is 3x3.
• The difference between the third and first rows of the 3x3 image region approximates the derivative in the x-direction, and the difference between the third and first columns approximates the derivative in the y-direction:
      |∇f| ≈ |(z7 + z8 + z9) - (z1 + z2 + z3)| + |(z3 + z6 + z9) - (z1 + z4 + z7)|
• Prewitt masks:
      Gx:  -1 -1 -1      Gy:  -1  0  1
            0  0  0           -1  0  1
            1  1  1           -1  0  1
Applications of gradient operators
• For automated inspection in industries
• For detection of defects
• To enhance small discontinuities
• To eliminate slowly changing background features
Use of First Order derivatives for enhancement (or)
Gradient Using Sobel Operators
Unsharp masking
• A process to sharpen images consists of subtracting a blurred version of an image from the image itself. This process, called unsharp masking, is expressed as
      f_s(x,y) = f(x,y) - f̄(x,y)
  where f_s(x,y) is the sharpened image obtained by unsharp masking and f̄(x,y) is a blurred version of f(x,y).
High-boost filtering

• A high-boost filtered image f_hb is defined at any point (x,y) as
      f_hb(x,y) = A f(x,y) - f̄(x,y),   where A ≥ 1
      f_hb(x,y) = (A - 1) f(x,y) + f(x,y) - f̄(x,y)
      f_hb(x,y) = (A - 1) f(x,y) + f_s(x,y)
• This equation is general and does not state explicitly how the sharpened image f_s is obtained.
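High-boost filtering can be sketched with a 3x3 box blur standing in for f̄ (a sketch; the blur choice, A = 2, and the test image are ours):

```python
import numpy as np

def box_blur(image):
    """3x3 box average (edge replicated), used as the blurred version f_bar."""
    p = np.pad(image.astype(np.float64), 1, mode='edge')
    m, n = image.shape
    return sum(p[i:i + m, j:j + n] for i in range(3) for j in range(3)) / 9.0

def high_boost(image, A=2.0):
    """f_hb = (A-1) f + f_s, with the unsharp mask f_s = f - f_bar."""
    f = image.astype(np.float64)
    fs = f - box_blur(image)
    return np.clip((A - 1.0) * f + fs, 0, 255).astype(np.uint8)

img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 140   # a small bright detail
boost = high_boost(img, A=2.0)   # the detail is emphasized; flat regions keep their level
```

With A = 1 this reduces to plain unsharp masking f_s; larger A keeps more of the original image in the result.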
High-boost filtering and Laplacian

• If we choose to use the Laplacian for f_s(x,y), then
      f_hb = A f(x,y) - ∇²f(x,y)   if the center coefficient is negative
      f_hb = A f(x,y) + ∇²f(x,y)   if the center coefficient is positive
• The corresponding masks:
       0  -1   0        -1  -1  -1
      -1  A+4 -1        -1  A+8 -1
       0  -1   0        -1  -1  -1
Applications of Spatial enhancement filters

• Printing industry
• Image based product inspection
• Forensics
• Microscopy
• Surveillance
EC8093 DIGITAL IMAGE PROCESSING
UNIT 2 IMAGE ENHANCEMENT
PART 2
Chapter 2.4
Image Enhancement in the
Frequency Domain
Frequency Domain Processing
• Edges and sharp transitions (e.g., noise) in an image contribute significantly to the high-frequency content of its FT.
• Low-frequency content in the FT is responsible for the general appearance of the image over smooth areas.
• Blurring (smoothing) is achieved by attenuating a range of high-frequency components of the FT.

      Input image → Forward transform → Processing → Inverse transform → Output image
INTRODUCTION TO FOURIER TRANSFORM AND THE FREQUENCY DOMAIN

• The one-dimensional Fourier transform and its inverse
  – Fourier transform (continuous case):
        F(u) = ∫[-∞,∞] f(x) e^(-j2πux) dx,   where j = √(-1)
  – Inverse Fourier transform:
        f(x) = ∫[-∞,∞] F(u) e^(j2πux) du
  – Euler's formula: e^(jθ) = cos θ + j sin θ
• The two-dimensional Fourier transform and its inverse
  – Fourier transform (continuous case):
        F(u,v) = ∫[-∞,∞] ∫[-∞,∞] f(x,y) e^(-j2π(ux+vy)) dx dy
  – Inverse Fourier transform:
        f(x,y) = ∫[-∞,∞] ∫[-∞,∞] F(u,v) e^(j2π(ux+vy)) du dv
INTRODUCTION TO FOURIER TRANSFORM AND THE FREQUENCY DOMAIN

• The one-dimensional Fourier transform and its inverse
  – Fourier transform (discrete case, DFT):
        F(u) = (1/M) Σ[x=0 to M-1] f(x) e^(-j2πux/M),   for u = 0, 1, 2, ..., M-1
  – Inverse Fourier transform:
        f(x) = Σ[u=0 to M-1] F(u) e^(j2πux/M),   for x = 0, 1, 2, ..., M-1
INTRODUCTION TO FOURIER TRANSFORM AND THE FREQUENCY DOMAIN

• The two-dimensional Fourier transform and its inverse
  – Fourier transform (discrete case, DFT):
        F(u,v) = (1/MN) Σ[x=0 to M-1] Σ[y=0 to N-1] f(x,y) e^(-j2π(ux/M + vy/N))
        for u = 0, 1, 2, ..., M-1 and v = 0, 1, 2, ..., N-1
  – Inverse Fourier transform:
        f(x,y) = Σ[u=0 to M-1] Σ[v=0 to N-1] F(u,v) e^(j2π(ux/M + vy/N))
        for x = 0, 1, 2, ..., M-1 and y = 0, 1, 2, ..., N-1
• u, v: the transform or frequency variables
• x, y: the spatial or image variables
INTRODUCTION TO FOURIER TRANSFORM AND THE FREQUENCY DOMAIN

• Some properties of the Fourier transform:
      f(x,y)(-1)^(x+y) ⇔ F(u - M/2, v - N/2)                       (shift)
      F(0,0) = (1/MN) Σ[x=0 to M-1] Σ[y=0 to N-1] f(x,y)           (average)
      F(u,v) = F*(-u,-v)                                           (conjugate symmetry)
      |F(u,v)| = |F(-u,-v)|                                        (symmetry)
THE TWO-DIMENSIONAL DFT AND ITS INVERSE

The 2D DFT F(u,v) can be obtained by
1. taking the 1D DFT of every row of the image f(x,y), giving F(u,y);
2. taking the 1D DFT of every column of F(u,y), giving F(u,v).

[Figures: (a) f(x,y), (b) F(u,y), (c) F(u,v)]
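This row-column separability is easy to verify numerically (a sketch using NumPy's FFT; note that `np.fft` places the 1/MN normalization in the inverse transform, unlike the lecture's forward-normalized definition):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal((8, 8))

F = np.fft.fft2(f)                  # 2D DFT computed directly

F_rows = np.fft.fft(f, axis=1)      # step 1: 1D DFT of every row
F_rc = np.fft.fft(F_rows, axis=0)   # step 2: 1D DFT of every column

# With NumPy's convention (no 1/MN in the forward transform),
# F[0,0] equals the sum, not the average, of f.
```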


Basics of Filtering in the Frequency Domain
Some Basic Filters and Their Functions

• Multiply all values of F(u,v) by the filter function (here, a notch filter):
      H(u,v) = 0 if (u,v) = (M/2, N/2);  1 otherwise
• All this filter does is set F(0,0) to zero (forcing the average value of the image to zero) while leaving all other frequency components of the Fourier transform untouched.
Some Basic Filters And Their Functions

[Figures: a lowpass filter and a highpass filter.]
Frequency Domain filters
• Types:
  – Smoothing (lowpass) filters: ideal LPF, Butterworth LPF, Gaussian LPF
  – Sharpening (highpass) filters: ideal HPF, Butterworth HPF, Gaussian HPF
Smoothing Frequency – Domain Filters

• The basic model for filtering in the frequency domain is
      G(u,v) = H(u,v) F(u,v)
  where F(u,v) is the Fourier transform of the image to be smoothed and H(u,v) is a filter transfer function.
• The filtered image is recovered by the inverse transform: g(x,y) = F⁻¹[G(u,v)]
• Smoothing is fundamentally a lowpass operation in the frequency domain.
• There are several standard forms of lowpass filters (LPF):
  – Ideal lowpass filter
  – Butterworth lowpass filter
  – Gaussian lowpass filter
Ideal Low Pass Filters(ILPFs)

• The simplest lowpass filter "cuts off" all high-frequency components of the Fourier transform that are at a distance greater than a specified distance D0 from the origin of the (centered) transform.
• The transfer function of an ideal lowpass filter is
      H(u,v) = 1 if D(u,v) ≤ D0;  0 if D(u,v) > D0
  where D(u,v) is the distance from the point (u,v) to the center of the frequency rectangle:
      D(u,v) = [(u - M/2)² + (v - N/2)²]^(1/2)
Ideal Low Pass Filters(ILPFs): Example figures
Ideal Low Pass Filters(ILPFs): Example

Figure 4.13: (a) A frequency-domain ILPF of radius 5. (b) The corresponding spatial filter. (c) Five impulses in the spatial domain, simulating the values of five pixels. (d) Convolution of (b) and (c) in the spatial domain.

Recall the convolution theorem: f(x,y) * h(x,y) ⇔ F(u,v) H(u,v)
Butterworth Low Pass Filters with order n

      H(u,v) = 1 / [1 + (D(u,v)/D0)^(2n)]
Butterworth Lowpass Filters (BLPFs)

[Example figures: n = 2, with D0 = 5, 15, 30, 80, and 230.]

Butterworth Lowpass Filters (BLPFs): Spatial Representation

[Figures: spatial representations for n = 1, 2, 5, and 20.]
Gaussian Low Pass Filters (GLPFs)

      H(u,v) = e^(-D²(u,v) / 2D0²)

Gaussian Lowpass Filters (GLPFs)

[Example figures: D0 = 5, 15, 30, 80, and 230.]
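The full pipeline G(u,v) = H(u,v) F(u,v) with a Gaussian lowpass H can be sketched as follows (a sketch; the function name, D0 = 10, and the noisy test image are ours):

```python
import numpy as np

def gaussian_lowpass(image, D0):
    """G(u,v) = H(u,v) F(u,v) with H = exp(-D^2(u,v) / (2 D0^2)), then inverse DFT."""
    M, N = image.shape
    F = np.fft.fftshift(np.fft.fft2(image))   # center the transform so F(0,0) is in the middle
    u = np.arange(M) - M / 2
    v = np.arange(N) - N / 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2    # D^2(u,v) measured from the center
    H = np.exp(-D2 / (2.0 * D0 ** 2))
    g = np.fft.ifft2(np.fft.ifftshift(H * F))
    return np.real(g)

rng = np.random.default_rng(0)
img = np.full((64, 64), 128.0) + 20.0 * rng.standard_normal((64, 64))
filtered = gaussian_lowpass(img, D0=10)   # noise is attenuated; the mean (DC term) is preserved
```

Since H = 1 at the center of the frequency rectangle, the image's average value passes through unchanged.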
Sharpening Frequency Domain Filters

      H_hp(u,v) = 1 - H_lp(u,v)

• Ideal highpass filter:
      H(u,v) = 0 if D(u,v) ≤ D0;  1 if D(u,v) > D0
• Butterworth highpass filter:
      H(u,v) = 1 / [1 + (D0/D(u,v))^(2n)]
• Gaussian highpass filter:
      H(u,v) = 1 - e^(-D²(u,v) / 2D0²)
High pass Filters: Spatial Representations

Ideal High pass Filters
      H(u,v) = 0 if D(u,v) ≤ D0;  1 if D(u,v) > D0

Butterworth High pass Filters
      H(u,v) = 1 / [1 + (D0/D(u,v))^(2n)]

Gaussian High pass Filters
      H(u,v) = 1 - e^(-D²(u,v) / 2D0²)
Chapter-2.5
Homomorphic Filtering

Illumination-Reflectance model
• We can view an image f(x,y) as a product of two components:
      f(x,y) = i(x,y) · r(x,y)
• i(x,y): illumination. It is determined by the illumination source.
• r(x,y): reflectance (or transmissivity). It is determined by the characteristics of the imaged objects.
Homomorphic Filtering
• In some images, the quality of the image is reduced because of non-uniform illumination.
• Homomorphic filtering can be used to perform illumination correction.
      f(x,y) = i(x,y) · r(x,y)
• The above equation cannot be used directly to operate separately on the frequency components of illumination and reflectance.
Homomorphic Filtering

      f(x,y) → ln → z(x,y) → DFT → Z(u,v) → H(u,v) → S(u,v) → IDFT → s(x,y) → exp → g(x,y)
Homomorphic Filtering

• By separating the illumination and reflectance components, the homomorphic filter can then operate on them separately.
• The illumination component of an image generally varies slowly, while the reflectance component varies abruptly.
• By attenuating the low frequencies (highpass filtering), the effects of illumination can be removed.
Homomorphic Filtering Procedure
• Step 1: Take the logarithm: z(x,y) = ln f(x,y) = ln i(x,y) + ln r(x,y). (The logarithm is needed because F{ab} ≠ F{a} · F{b}, so the product cannot be separated directly.)
• Step 2: DFT: Z(u,v) = F{z(x,y)}
• Step 3: Apply the filter function: S(u,v) = Z(u,v) H(u,v)
• Step 4: IDFT: s(x,y) = F⁻¹{S(u,v)}
• Step 5: Exponential: g(x,y) = e^(s(x,y))
Homomorphic Filtering

Homomorphic Filtering: Example 1

Homomorphic Filtering: Example 2
[Figures: original image and filtered image.]
Advantages
• Good control over the illumination and reflectance components.
• Dynamic range compression and contrast enhancement can be achieved simultaneously.
