INTRODUCTION TO DIGITAL IMAGE PROCESSING

Q1. Explain the applications of digital image processing. [R.T.U. 2017]
Ans. Applications of Digital Image Processing : Some of the major fields in which digital image processing is widely used are mentioned below:
• Image sharpening and restoration
• Medical field
• Remote sensing
• Transmission and encoding
• Machine/Robot vision
• Color processing
• Pattern recognition
• Video processing
Image Sharpening and Restoration : Image sharpening and restoration refers here to processing images captured by a modern camera to make them better, or to manipulate those images to achieve a desired result. It refers to doing what Photoshop usually does.

This includes zooming, blurring, sharpening, gray scale to color conversion and vice versa, detecting edges, image retrieval and image recognition. A sharpening sketch follows.
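As an illustration (not part of the syllabus text), a minimal sharpening sketch in Python, assuming a grayscale image held in a NumPy array; the 3×3 kernel and the helper name are our own choices:

import numpy as np
from scipy.ndimage import convolve

def sharpen(img):
    # Center weight 5, neighbours -1: boosts local contrast at edges.
    kernel = np.array([[ 0, -1,  0],
                       [-1,  5, -1],
                       [ 0, -1,  0]], dtype=float)
    out = convolve(img.astype(float), kernel, mode="nearest")
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.tile(np.linspace(0, 255, 64), (64, 1)).astype(np.uint8)  # toy gradient
print(sharpen(img).shape)  # (64, 64)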
Medical Field : The common applications of DIP in the medical field include gamma-ray imaging, PET scan, X-ray imaging, medical CT and UV imaging.

Remote Sensing : In the field of remote sensing, the area of the earth is scanned by a satellite or from very high ground and then analyzed to obtain information about it. One particular application of digital image processing
in the field of remote sensing is to detect the infrastructure damage caused by an earthquake. Since the affected area can be very wide, it is not possible to examine it with the human eye in order to estimate the damage; doing so manually would be a hectic and time-consuming procedure. So a solution to this is found in digital image processing: an image of the affected area is captured from above and then analyzed to detect the various types of damage done by the earthquake.
The key steps included in the analysis are (a minimal edge-extraction sketch follows the list):
• The extraction of edges
• Analysis and enhancement of various types of edges
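A minimal sketch of the edge-extraction step, using Sobel gradients; the array contents and function name are illustrative assumptions:

import numpy as np
from scipy.ndimage import sobel

def edge_magnitude(img):
    gx = sobel(img.astype(float), axis=1)  # horizontal gradient
    gy = sobel(img.astype(float), axis=0)  # vertical gradient
    return np.hypot(gx, gy)                # gradient magnitude per pixel

img = np.zeros((32, 32))
img[:, 16:] = 255.0                        # a vertical step edge
print(edge_magnitude(img).max() > 0)       # True: the edge is detected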
Transmission and Encoding : The very first image ever transmitted over a wire was sent from London to New York via a submarine cable. The picture took three hours to reach from one place to the other.
Now just imagine: today we are able to see live video feeds, or live CCTV footage, from one continent to another with a delay of just seconds. It means that a lot of work has been done in this field too. This field does not only focus on transmission, but also on encoding. Many different formats have been developed for high or low bandwidth to encode photos and then stream them over the internet.
Machine/Robot Vision : Apart from the many challenges that a robot faces today, one of the biggest is still to increase the vision of the robot: to make the robot able to see things, identify them, identify hurdles, etc. Much work has been contributed by this field, and a complete other field, computer vision, has been introduced to work on it.
Hurdle Detection : Hurdle detection is one of the common tasks done through image processing, by identifying different types of objects in the image and then calculating the distance between the robot and the hurdles.
Line Follower Robot : Most robots today work by following a line and thus are called line follower robots. This helps a robot to move on its path and perform some tasks; it, too, is achieved through image processing.
Color Processing : Color processing includes processing of colored images and the different color spaces that are used, for example the RGB color model, YCbCr and HSV. It also involves studying the transmission, storage and encoding of these color images. A conversion sketch follows.
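For illustration, a minimal RGB-to-YCbCr sketch using the standard BT.601 full-range coefficients (as used by JPEG); the function name is our own:

import numpy as np

def rgb_to_ycbcr(rgb):
    r, g, b = rgb.astype(float)
    y  =       0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return np.array([y, cb, cr])

print(rgb_to_ycbcr(np.array([255, 0, 0])))  # pure red -> high Cr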
Pattern Recognition : Pattern recognition involves study from image processing and from various other fields, including machine learning (a branch of artificial intelligence). In pattern recognition, image processing is used for identifying the objects in an image, and machine learning is then used to train the system for changes in pattern.
Video Processing : A video is nothing but the very fast movement of pictures. The quality of the video depends on the number of frames/pictures per second and the quality of each frame being used. Video processing involves noise reduction, detail enhancement, motion detection, frame rate conversion, aspect ratio conversion, color space conversion, etc.
Q2. What is Digital Image Processing? Give fundamental steps in DIP. Explain each block. [R.T.U. 2010]
OR
Define the image. Explain the steps of digital image processing with suitable diagram. [R.T.U. 2017]
OR
Give fundamental steps in Digital Image Processing. Explain each block. [R.T.U. 2013]
Ans. An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial (plane) co-ordinates, and the amplitude of f at any pair of co-ordinates (x, y) is called the intensity or gray level of the image at that point. When x, y and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of digital image processing refers to processing digital images by means of a digital computer. A digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels and pixels. Pixel is the term most widely used to denote the elements of a digital image.
Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic (EM) spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma rays to radio waves. They can operate on images generated by sources that humans are not accustomed to associating with images. These include ultrasound, electron microscopy and computer-generated images. Thus digital image processing encompasses a wide and varied field of applications.
There are no clear-cut boundaries in the continuum from image processing at one end to computer vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid- and high-level processes. Low-level processes involve primitive operations such as image preprocessing to reduce noise, contrast enhancement and image sharpening. A low-level process is characterized by the fact that both its inputs and outputs are images. Mid-level processing on images involves tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects. A mid-level process is characterized by the fact that its inputs generally are images, but its outputs are attributes extracted from those images (e.g., edges, contours and the identity of individual objects). Finally, higher-level processing involves "making sense" of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with vision.
Fundamental Steps in Digital Image Processing
(i) Image acquisition could be as simple as being given an image that is already in digital form. Generally, the image acquisition stage involves preprocessing such as scaling.
(ii) Image enhancement is among the simplest and most appealing areas of digital image processing. The idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image. A familiar example of enhancement is when we increase the contrast of an image because "it looks better" (a contrast-stretching sketch appears after the figure).
Fig. : Fundamental steps in digital image processing
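A minimal sketch of the contrast enhancement just mentioned: linear contrast stretching of a grayscale NumPy array to the full [0, 255] range (an illustrative helper, not a prescribed algorithm):

import numpy as np

def stretch_contrast(img):
    lo, hi = img.min(), img.max()
    if hi == lo:                    # flat image: nothing to stretch
        return img.copy()
    out = (img.astype(float) - lo) * 255.0 / (hi - lo)
    return out.astype(np.uint8)

dull = np.random.randint(100, 150, (8, 8)).astype(np.uint8)  # low contrast
bright = stretch_contrast(dull)
print(bright.min(), bright.max())   # 0 255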
(iii) Image restoration is an area that also deals with improving the appearance of an image. However, unlike enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation. Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a "good" enhancement result.
(iv) Color image processing is an area that has
been gaining in importance because of the significant
increase in the use of digital images over the Internet.
(v) Wavelets are the foundation for representing images in various degrees of resolution.
(vi) Compression, as the name implies, deals with techniques for reducing the storage required to save an image or the bandwidth required to transmit it. Although storage technology has improved significantly over the past decade, the same cannot be said for transmission capacity. This is true particularly in uses of the Internet, which are characterized by significant pictorial content. Image compression is familiar (perhaps inadvertently) to most users of computers in the form of image file extensions, such as the .jpg file extension used in the JPEG (Joint Photographic Experts Group) image compression standard.
(vii) Morphological processing deals with tools for extracting image components that are useful in the representation and description of shape.
(viii) Segmentation procedures partition an image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing. A rugged segmentation procedure brings the process a long way toward successful solution of imaging problems that require objects to be identified individually. On the other hand, weak or erratic segmentation algorithms almost always guarantee eventual failure. In general, the more accurate the segmentation, the more likely recognition is to succeed. A minimal thresholding sketch follows.
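As one simple instance of segmentation, a minimal global-thresholding sketch (array contents and the threshold are illustrative):

import numpy as np

def threshold_segment(img, t):
    return (img > t).astype(np.uint8)     # 1 = object, 0 = background

img = np.zeros((10, 10), dtype=np.uint8)
img[3:7, 3:7] = 200                       # a bright square "object"
print(threshold_segment(img, 128).sum())  # 16 pixels labeled as object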
(ix) Representation and description almost always follow the output of a segmentation stage, which usually is raw pixel data, constituting either the boundary of a region (i.e., the set of pixels separating one image region from another) or all the points in the region itself. In either case, converting the data to a form suitable for computer processing is necessary. The first decision that must be made is whether the data should be represented as a boundary or as a complete region. Boundary representation is appropriate when the focus is on external shape characteristics, such as corners and inflections. Regional representation is appropriate when the focus is on internal properties, such as texture or skeletal shape. In some applications, these representations complement each other. Choosing a representation is only part of the solution for transforming raw data into a form suitable for subsequent computer processing. A method must also be specified for describing the data so that features of interest are highlighted. Description, also called feature selection, deals with extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another.
(x) Recognition is the process that assigns a label (e.g., "vehicle") to an object based on its descriptors. We conclude our coverage of digital image processing with the development of methods for recognition of individual objects.
Knowledge about a problem domain is coded into an image processing system in the form of a knowledge base. This knowledge may be as simple as detailing regions of an image where the information of interest is known to be located, thus limiting the search that has to be conducted in seeking that information. The knowledge base also can be quite complex, such as an inter-related list of all major possible defects in a materials inspection problem, or an image database containing high-resolution satellite images of a region in connection with change-detection applications. In addition to guiding the operation of each processing module, the knowledge base also controls the interaction between modules. This distinction is made in the figure by the use of double-headed arrows between the processing modules and the knowledge base, as opposed to single-headed arrows linking the processing modules.
It is important to keep in mind that viewing the results of image processing can take place at the output of any stage in the figure. We also note that not all image processing applications require the complexity of interactions implied by the figure. In fact, not even all those modules are needed in some cases. For example, image enhancement for human visual interpretation seldom requires use of any of the other stages in the figure. In general, however, as the complexity of an image processing task increases, so does the number of processes required to solve the problem.
Q3. (a) Explain image sensing and acquisition.
(b) Explain color vision model with example. [R.T.U. 2017]
Ans.(a) Image Sensing and Acquisition : There are three principal sensor arrangements (each produces an electrical output proportional to light intensity): (i) single imaging sensor, (ii) line sensor, (iii) array sensor.
Image Acquisition using a Single Sensor : The most common sensor of this type is the photodiode, which is constructed of silicon materials and whose output voltage waveform is proportional to light. The use of a filter in front of a sensor improves selectivity. For example, a green (pass) filter in front of a light sensor favours light in the green band of the color spectrum. As a consequence, the sensor output will be stronger for green light than for other bands in the visible spectrum.
In order to generate a 2-D image using a single sensor, there have to be relative displacements in both the x- and y-directions between the sensor and the area to be imaged. An arrangement used in high-precision scanning mounts a film negative onto a drum whose mechanical rotation provides displacement in one dimension. The single sensor is mounted on a lead screw that provides motion in the perpendicular direction. Since mechanical motion can be controlled with high precision, this method is an inexpensive (but slow) way to obtain high-resolution images.

Image Acquisition using Sensor Strips : An in-line arrangement of sensors in the form of a sensor strip is used much more frequently than a single sensor; this is the type of arrangement used in most flatbed scanners. Sensing devices with 4000 or more in-line sensors are possible. In-line sensors are used routinely in airborne imaging applications, in which the imaging system is mounted on an aircraft that flies at a constant altitude and speed over the geographical area to be imaged.
One-dimensional imaging sensor strips that respond to various bands of the electromagnetic spectrum are mounted perpendicular to the direction of flight. The imaging strip gives one line of an image at a time, and the motion of the strip completes the other dimension of a two-dimensional image. Sensor strips mounted in a ring configuration are used in medical and industrial imaging to obtain cross-sectional ("slice") images of 3-D objects. A rotating X-ray source provides illumination, and the portion of the sensors opposite the source collects the X-ray energy that passes through the object (the sensors obviously have to be sensitive to X-ray energy). This is the basis for medical and industrial computerized axial tomography (CAT) imaging.
Image Acquisition using Sensor Arrays : This type of arrangement is found in digital cameras. A typical sensor for these cameras is a CCD array, which can be manufactured with a broad range of sensing properties and can be packaged in rugged arrays of 4000 × 4000 elements or more. CCD sensors are used widely in digital cameras and other light-sensing instruments. The response of each sensor is proportional to the integral of the light energy projected onto the surface of the sensor, a property that is used in astronomical and other applications requiring low-noise images.
The first function performed by the imaging system is to collect the incoming energy and focus it onto an image plane. If the illumination is light, the front end of the imaging system is a lens, which projects the viewed scene onto the lens focal plane. The sensor array, which is coincident with the focal plane, produces outputs proportional to the integral of the light received at each sensor.
Ans.(b) Color Vision Model : A color vision model (also known as a color model, color space or color system) is the specification of colors in some standard and accepted way. It is a specification of a co-ordinate system and a subspace within that system where each color is represented by a single point.
The classification of color models can be done as:
(i) Hardware-oriented models
(ii) Color description-oriented models
The hardware-oriented models most commonly used are the RGB (Red, Green, Blue) model for color monitors and a broad class of color video cameras, and the CMY (Cyan, Magenta, Yellow) and CMYK (Cyan, Magenta, Yellow and Black) models for color printing. The most common color description-oriented model is the HSI (Hue, Saturation and Intensity) model, which describes and interprets different properties of colors. The HSI model also helps in decoupling the color and gray-scale information of an image.
The RGB model is the most widely used color model in hardware applications. In this model, each color in an image appears as a combination of the primary color components Red, Green and Blue. The model is based on the Cartesian co-ordinate system.

Fig. : RGB Model
The above figure shows the subspace of the RGB model as a cube. The primary color values are positioned at three corners of the cube, and the secondary color components are at the other three corners. Black is at the origin and white is at the corner farthest from black; the line joining the black and white corners is known as the gray-scale. The different colors in this model are points on or inside the cube, and the value of a particular point is determined by its vector distance from the origin (black). The general assumption is that all the color values are normalized, so the cube shown in the above figure is a unit cube, which signifies that all the color values fall in the range [0, 1]. With 8 bits per component, an RGB image is 24-bit. This is known as full-color image depth. A small sketch follows.
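A small sketch of the normalized RGB cube and the 24-bit pixel depth (illustrative, using NumPy):

import numpy as np

white = np.array([1.0, 1.0, 1.0])   # corner farthest from black in the unit cube
gray = 0.5 * white                  # a point on the gray-scale line R = G = B
print(gray[0] == gray[1] == gray[2])  # True: equal components mean gray

bits_per_channel = 8
print(3 * bits_per_channel)         # 24: full-color image depth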
Q4. What is image quantization? Explain the scalar and image quantization in detail. [R.T.U. 2016]
Ans. Image Quantization : Quantization maps a continuous variable u into a discrete variable u*, which takes values from a finite set {r_k, k = 1, ..., L} of reconstruction levels. Let {t_k, k = 1, ..., L + 1} be a set of increasing transition (decision) levels; if u lies in the interval [t_k, t_(k+1)), then it is mapped to r_k, the k-th reconstruction level.

Fig. 1 : A quantizer
Zero-memory quantizers are used in image coding techniques such as pulse code modulation (PCM), differential PCM, transform coding, and so on. Note that the quantizer mapping is irreversible; that is, for a given quantizer output, the input value cannot be determined uniquely. Hence, a quantizer introduces distortion, which any reasonable design method must attempt to minimize. There are several quantizer designs available that offer various trade-offs between simplicity and performance.
Scalar Quantization : In scalar quantization, the quantization output is the result of division of the input data by a quantization parameter, with rounding to the nearest integer. If X is an input sample and Q is a quantization parameter, then the quantized output is

X_q = round(X / Q)

There are different types of scalar quantization techniques: uniform quantization, non-uniform quantization and adaptive quantization. A minimal sketch of the basic formula follows.
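A minimal sketch of the formula X_q = round(X / Q), with dequantization to show the distortion; the function names are our own:

import numpy as np

def quantize(x, q):
    return np.round(x / q)

def dequantize(xq, q):
    return xq * q

x = np.array([3.0, 7.4, 12.9])
xq = quantize(x, 5.0)            # [1. 1. 3.]
print(dequantize(xq, 5.0))       # [ 5.  5. 15.] -- note the distortion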
Uniform Quantization : Consider X_max to be the maximum value from an input source, and suppose the input values are uniformly distributed in the range [-X_max, X_max]. For an N-level uniform scalar quantizer, this range is divided into N equal intervals, so the step size of the uniform quantizer is

Δ = 2 X_max / N

The characteristic of an 8-level uniform scalar quantizer is shown in Fig. 2. The horizontal axis represents the input
and the vertical axis represents the corresponding value after quantization and inverse quantization. Hence any value in between (2Δ, 3Δ), say, as an example, will be approximated to 2.5Δ.
Uniform scalar quantization is very simple and straightforward to implement. It is designed based on the assumption that the input source is uniformly distributed. But often the probability distribution of the source symbols is not uniform in nature, and then uniform scalar quantization results in poor reconstructed quality. As a result, there is a necessity to design non-uniform quantizers for these types of sources (a small code sketch follows Fig. 2).
Fig. 2 : Uniform scalar quantizer with step size Δ
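A minimal sketch of a uniform quantizer with step size Δ (written d in code), reproducing the behaviour of Fig. 2 where any value in (2Δ, 3Δ) is approximated to 2.5Δ:

import numpy as np

def uniform_quantize(x, d):
    return (np.floor(x / d) + 0.5) * d   # midpoint of the input interval

d = 1.0
print(uniform_quantize(np.array([2.1, 2.9]), d))  # [2.5 2.5]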
Non-uniform Quantization : In order to minimize the average distortion in the reconstructed image because of the quantization, we can lightly quantize the transform coefficients or prediction error values in the region of high importance and heavily quantize the corresponding coefficients in a less important region of the image. One approach is to make the quantization steps smaller for the samples that have more concentration in the curve of the probability distribution of the samples. Typically, the distribution of the prediction error values is more concentrated at the origin of the curve compared to the distribution of the original samples. As a result, we can use smaller quantization steps near the origin compared to those for the other prediction error values. Here the input range [X_1, X_N] in the quantizer is non-uniformly divided so that the relation between input and output from the quantizer can match any desired non-linear characteristic. The characteristic of a general non-uniform quantizer is shown in Fig. 3 as an example, where the quantization decision points (x_1, x_2, ..., x_N) and the output levels (y_1, y_2, ..., y_N) are fixed, and they can be chosen to minimize some function of the quantization error when the probability distribution of the input is known.
Adaptive Quantization : When the statistics of the source symbols change considerably in the process, a fixed and predefined uniform or non-uniform quantizer fails to yield good results. In this situation, the quantizers need to adapt to the changes of the source statistics. There are two classes of adaptive quantizers: forward adaptive quantizers and backward adaptive quantizers. In a forward adaptive quantizer, the quantizer extracts the quantization step size from the input. In this approach the source data is divided into blocks; each block is independently analyzed, and the quantization step size is determined based on its statistical distribution. In a backward adaptive quantizer, the quantization step size is determined based on the previously reconstructed output signals from the quantizer. Both the forward and backward adaptive quantizers have their own advantages and disadvantages. A forward-adaptive sketch follows the figure.
Fig. 3 : Characteristic of non-uniform scalar quantizer
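A minimal sketch of a forward adaptive quantizer as described above: the step size is derived per block from the block's own statistics (the block size, level count and max-magnitude rule are illustrative assumptions):

import numpy as np

def forward_adaptive_quantize(x, block, levels):
    codes, steps = [], []
    for i in range(0, len(x), block):
        b = x[i:i + block]
        step = 2 * np.abs(b).max() / levels  # step size from this block's stats
        if step == 0:
            step = 1.0                       # guard for an all-zero block
        codes.append(np.round(b / step))
        steps.append(step)                   # side info: steps must be sent too
    return codes, steps

rng = np.random.default_rng(0)
x = np.concatenate([rng.uniform(-1, 1, 8), rng.uniform(-10, 10, 8)])
codes, steps = forward_adaptive_quantize(x, block=8, levels=16)
print(steps)  # smaller step for the low-amplitude first block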
Q5. Explain the concept of image representation and differentiate the image compression and representation. [R.T.U. 2016]
OR
Explain image representation and modeling. [R.T.U. 2011]
Ans. Image Representation and Modeling :
In image representation one is concerned with characterization of the quantity that each picture element (also called pixel or pel) represents. An image could represent luminances of objects in a scene (such as pictures taken by an ordinary camera), the absorption characteristics of body tissue (X-ray imaging), the radar cross-section of a target (radar imaging), the temperature profile of a region (infrared imaging), or the gravitational field in an area (geophysical imaging). In general, any two-dimensional function that bears information can be considered an image. Image models give a logical or quantitative description of the properties of this function. Figure 1 lists several image representation and modeling problems.
An important consideration in image representation is the fidelity or intelligibility criteria for measuring the quality of an image or the performance of a processing technique. Specification of such measures requires models of perception of contrast, spatial frequencies, color and so on. Knowledge of a fidelity criterion helps in designing the image sensor, because it tells us the variables that should be measured most accurately.
The fundamental requirement of digital processing is that images be sampled and quantized. The sampling rate (number of pixels per unit area) has to be large enough to preserve the useful information in an image. It is determined by the bandwidth of the image. For example, the bandwidth of a raster-scanned common television signal is about 4 MHz. From the sampling theorem, this requires a minimum sampling rate of 8 MHz. At 30 frames/s, this means each frame should contain approximately 266,000 pixels. Thus, for a 512-line raster, each image frame contains approximately 512 × 512 pixels. Image quantization is the analog-to-digital conversion of a sampled image to a finite number of gray levels. The arithmetic is checked in the short sketch below.
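A quick check of the arithmetic above (values taken from the text):

bandwidth_hz = 4e6
nyquist_rate = 2 * bandwidth_hz        # sampling theorem: 8 MHz minimum
pixels_per_frame = nyquist_rate / 30   # ~266,667 samples per frame at 30 frames/s
print(nyquist_rate, round(pixels_per_frame), 512 * 512)  # 512 x 512 = 262,144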
Fig. 1 : Image representation and modeling
A classical method of signal representation is by an orthogonal series expansion, such as the Fourier series. For images, an analogous representation is possible via two-dimensional orthogonal functions called basis images. For sampled images, these basis images can be generated by unitary matrices called image transforms. Any given image can be expressed as a weighted sum of the basis images. A stochastic model, characterized by a difference equation and forced by white noise or some other random field with known power spectrum density, is a useful approach for representing the ensemble.

Fig. 2 : Image representation by orthogonal basis image series
Fig. 3 shows three canonical forms of stochastic models where an image pixel is characterized in terms of its neighboring pixels. If the image were scanned top to bottom and then left to right, the model of Fig. 3(a) would be called a causal model. This is because the pixel A is characterized by pixels that lie in the "past." Extending this idea, the model of Fig. 3(b) is a non-causal model because the neighbors of A lie in the past as well as the "future" in both directions. In Fig. 3(c), we have a semi-causal model because the neighbors of A are in the past in the j-direction and are in the past as well as the future in the i-direction.

Fig. 3 : Three canonical forms of stochastic models ((a) causal model, (b) non-causal model, (c) semi-causal model)
Such models are useful in developing algorithms that have different hardware realizations. For example, causal models can realize recursive filters, which require small memory while yielding an Infinite Impulse Response (IIR). On the other hand, non-causal models can be used to design fast transform-based Finite Impulse Response (FIR) filters. Semi-causal models can yield two-dimensional algorithms which are recursive in one dimension and non-recursive in the other. Some of these stochastic models can be thought of as generalizations of one-dimensional random processes represented by Auto-Regressive (AR) and Auto-Regressive Moving Average (ARMA) models.
Difference between Image Compression and Representation :
Image Compression : Compression, as the name implies, deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it. Although storage technology has improved significantly over the past decade, the same cannot be said for transmission capacity. This is true particularly in uses of the Internet, which are characterized by significant pictorial content. Image compression is familiar (perhaps inadvertently) to most users of computers in the form of image file extensions, such as the .jpg file extension used in the JPEG (Joint Photographic Experts Group) image compression standard.
Image Representation : Representation and description almost always follow the output of a segmentation stage, which usually is raw pixel data, constituting either the boundary of a region (i.e., the set of pixels separating one image region from another) or all the points in the region itself. In either case, converting the data to a form suitable for computer processing is necessary. The first decision that must be made is whether the data should be represented as a boundary or as a complete region. Boundary representation is appropriate when the focus is on external shape characteristics, such as corners and inflections. Regional representation is appropriate when the focus is on internal properties, such as texture or skeletal shape. In some applications, these representations complement each other. Choosing a representation is only part of the solution for transforming raw data into a form suitable for subsequent computer processing. A method must also be specified for describing the data so that features of interest are highlighted. Description, also called feature selection, deals with extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another.
—————
Q6. Write short note on representing digital images. [R.T.U. 2012]
OR
What do you mean by Sampling and Quantization in Digital Image Processing? How are digital images represented? [R.T.U. 2013]
Ans. Let f(s, t) represent a continuous image function of two continuous variables, s and t. We convert this function into a digital image by sampling and quantization. Suppose that we sample the continuous image into a 2-D array, f(x, y), containing M rows and N columns, where (x, y) are discrete co-ordinates. For notational clarity and convenience, we use integer values for these discrete co-ordinates: x = 0, 1, 2, ..., M − 1 and y = 0, 1, 2, ..., N − 1. Thus, for example, the value of the digital image at the origin is f(0, 0), and the next co-ordinate value along the first row is f(0, 1). Here, the notation (0, 1) is used to signify the second sample along the first row. It does not mean that these are the values of the physical co-ordinates when the image was sampled. In general, the value of the image at any co-ordinates (x, y) is denoted f(x, y), where x and y are integers. The section of the real plane spanned by the co-ordinates of an image is called the spatial domain, with x and y being referred to as spatial variables or spatial co-ordinates.
As Fig. 1 shows, there are three basic ways to represent f(x, y). Figure 1(a) is a plot of the function, with two axes determining spatial location and the third axis being the values of f (intensities) as a function of the two spatial variables x and y. Although we can infer the structure of the image in this example by looking at the plot, complex images generally are too detailed and difficult to interpret from such plots. This representation is useful when working with gray-scale sets whose elements are expressed as triplets of the form (x, y, z), where x and y are spatial co-ordinates and z is the value of f at co-ordinates (x, y).

Fig. 1 : (a) Image plotted as a surface, (b) Image displayed as a visual intensity array, (c) Image shown as a 2-D numerical array (0, 0.5 and 1 represent black, gray and white, respectively)
The representation in Fig. 1(b) is much more common. It shows f(x, y) as it would appear on a monitor or photograph. Here, the intensity of each point is proportional to the value of f at that point. In this figure, there are only three equally spaced intensity values. If the intensity is normalized to the interval [0, 1], then each point in the image has the value 0, 0.5 or 1. A monitor or printer simply converts these three values to black, gray, or white, respectively, as Fig. 1(b) shows. The third representation is simply to display the numerical values of f(x, y) as an array (matrix). In this example, f is of size 600 × 600 elements, or 360,000 numbers. Clearly, printing the complete array would be cumbersome and convey little information. When developing algorithms, however, this representation is quite useful when only parts of the image are printed and analyzed as numerical values. Fig. 1(c) conveys this concept graphically.
We conclude that the representations in Fig. 1(b) and 1(c) are the most useful. Image displays allow us to view results at a glance. Numerical arrays are used for processing and algorithm development. In equation form, we write the representation of an M × N numerical array as

f(x, y) = [ f(0,0)      f(0,1)      ...  f(0,N−1)
            f(1,0)      f(1,1)      ...  f(1,N−1)
            ...         ...              ...
            f(M−1,0)    f(M−1,1)    ...  f(M−1,N−1) ]        ...(1)
Both sides of this equation are equivalent ways of expressing a digital image quantitatively. The right side is a matrix of real numbers. Each element of this matrix is called an image element, picture element, pixel, or pel. The terms image and pixel are used throughout the book to denote a digital image and its elements.

In some discussions it is advantageous to use a more traditional matrix notation to denote a digital image and its elements:
A = [ a(0,0)      a(0,1)      ...  a(0,N−1)
      a(1,0)      a(1,1)      ...  a(1,N−1)
      ...         ...              ...
      a(M−1,0)    a(M−1,1)    ...  a(M−1,N−1) ]        ...(2)
Clearly, a(i, j) = f(x = i, y = j) = f(i, j), so Eqs. (1) and (2) are identical matrices. We can even represent an image as a vector, v. For example, a column vector of size MN × 1 is formed by letting the first M elements of v be the first column of A, the next M elements be the second column, and so on. Alternatively, we can use the rows instead of the columns of A to form such a vector. Either representation is valid, as long as we are consistent. A small sketch follows.
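A small sketch of Eqs. (1)-(2) and the column-stacked vector v, using a tiny NumPy array as the digital image:

import numpy as np

A = np.array([[0, 128],
              [64, 255]], dtype=np.uint8)   # a tiny 2 x 2 digital image

# Column vector v: the first M elements are the first column of A, and so on.
v = A.flatten(order="F").reshape(-1, 1)     # Fortran order stacks columns
print(v.ravel())                            # [  0  64 128 255]
print(A[0, 1] == v[2, 0])                   # True: a(0,1) is the third entry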
To create a digital image, we need to convert the continuous sensed data into digital form. This involves two processes: sampling and quantization.
• Digitizing the co-ordinate values is called sampling.
• Digitizing the amplitude values is called quantization.
Basic Concepts in Sampling and Quantization
The basic idea behind sampling and quantization is illustrated in Fig. 2. Fig. 2(a) shows a continuous image, f(x, y), that we want to convert to digital form. An image may be continuous with respect to the x- and y-co-ordinates and also in amplitude. To convert it to digital form, we have to sample the function in both co-ordinates and in amplitude.
Fig. 2 : Generating a digital image. (a) Continuous image, (b) A scan line from A to B in the continuous image, used to illustrate the concepts of sampling and quantization, (c) Sampling and quantization, (d) Digital scan line
The one-dimensional function shown in Fig. 2(b) is a plot of amplitude (gray level) values of the continuous image along the line segment AB in Fig. 2(a). The random variations are due to image noise. To sample this function, we take equally spaced samples along line AB, as shown in Fig. 2(c). The location of each sample is given by a vertical tick mark in the bottom part of the figure. The samples are shown as small white squares superimposed on the function. The set of these discrete locations gives the sampled function. However, the values of the samples still span (vertically) a continuous range of gray-level values. In order to form a digital function, the gray-level values also must be converted (quantized) into discrete quantities. The right side of Fig. 2(c) shows the gray-level scale divided into eight discrete levels, ranging from black to white. The vertical tick marks indicate the specific value assigned to each of the eight gray levels. The continuous gray levels are quantized simply by assigning one of the eight discrete gray levels to each sample. The digital samples resulting from both sampling and quantization are shown in Fig. 2(d). Starting at the top of the image and carrying out this procedure line by line produces a two-dimensional digital image. A small sketch of this scan-line procedure follows.
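A minimal sketch of the scan-line procedure of Fig. 2(b)-(d): sample a continuous function at equally spaced points, then quantize each sample to one of eight gray levels (the test function is an illustrative assumption):

import numpy as np

def scan_line(t):
    return 0.5 + 0.5 * np.sin(2 * np.pi * t)   # "continuous" profile along A-B

t = np.linspace(0, 1, 16)                      # 16 equally spaced samples
samples = scan_line(t)                         # still continuous in amplitude

levels = 8
digital = np.round(samples * (levels - 1)).astype(int)  # gray levels 0..7
print(digital)                                 # one digital scan line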
Sampling in the manner just described assumes that we have a continuous image in both co-ordinate directions as well as in amplitude. In practice, the method of sampling is determined by the sensor arrangement used to generate the image. When an image is generated by a single sensing element combined with mechanical motion, sampling is accomplished by selecting the number of individual mechanical increments at which we activate the sensor to collect data. Mechanical motion can be made very exact so, in principle, there is almost no limit on how finely we can sample an image; practical limits are established by imperfections in the optics used to focus on the sensor an illumination spot that is inconsistent with the fine resolution achievable with mechanical displacements.
When a sensing strip is used for image acquisition, the number of sensors in the strip establishes the sampling limitations in one image direction. Mechanical motion in the other direction can be controlled more accurately, but it makes little sense to try to achieve sampling density in one direction that exceeds the sampling limits established by the number of sensors in the other. Quantization of the sensor output completes the process of generating a digital image.
Fig. 3 : (a) Continuous image projected onto a sensor array, (b) Result of image sampling and quantization
When a sensing array is used for image acquisition, there is no motion, and the number of sensors in the array establishes the limits of sampling in both directions. Quantization of the sensor outputs is as before. Figure 3 illustrates this concept. Fig. 3(a) shows a continuous image projected onto the plane of an array sensor. Fig. 3(b) shows the image after sampling and quantization. Clearly, the quality of the digital image is determined to a large degree by the number of samples and discrete gray levels used in sampling and quantization.
Q7. What do you mean by Aliasing in Digital Image Processing? Explain Moiré pattern. [R.T.U. 2013]
Ans. Aliasing and Moiré Patterns :
Functions whose area under the curve is finite can be represented in terms of sines and cosines of various frequencies. The sine/cosine component with the highest frequency determines the highest "frequency content" of the function. Suppose that this highest frequency is finite and that the function is of unlimited duration (these functions are called band-limited functions). Then the Shannon sampling theorem tells us that, if the function is sampled at a rate equal to or greater than twice its highest frequency, it is possible to recover completely the original function from its samples. If the function is undersampled, then a phenomenon called aliasing corrupts the sampled image; the corruption takes the form of additional frequency components being introduced into the sampled function, called aliased frequencies. Note that the sampling rate in images is the number of samples taken (in both spatial directions) per unit distance. A small sketch follows.
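A minimal numeric sketch of aliasing: a 9 Hz sine sampled at only 10 samples/s (below its 18 Hz Nyquist rate) is indistinguishable from a 1 Hz sine at those sample instants:

import numpy as np

fs = 10.0                          # sampling rate, samples per second
n = np.arange(20)
t = n / fs
high = np.sin(2 * np.pi * 9 * t)   # undersampled 9 Hz signal
alias = np.sin(2 * np.pi * 1 * t)  # its 1 Hz alias
print(np.allclose(high, -alias))   # True: identical samples up to sign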
As it turns out, except for a special case discussed below, it is impossible to satisfy the sampling theorem in practice. We can only work with sampled data that are finite in duration. We can model the process of converting a function of unlimited duration into a function of finite duration simply by multiplying the unlimited function by a "gating function" that is valued 1 for some interval and 0 elsewhere. Unfortunately, this function itself has frequency components that extend to infinity.
Fig. : Illustration of the Moiré Pattern Effect
Thus, the very act of limiting the duration of a band-limited function causes it to cease being band-limited, which causes it to violate the key condition of the sampling theorem. The principal approach for reducing the aliasing effects on an image is to reduce its high-frequency components by blurring the image prior to sampling. However, aliasing is always present in a sampled image. The effect of aliased frequencies can be seen under the right conditions in the form of so-called Moiré patterns.
"There is one special case of significant importance
in'which a function of infinite duration can be sainpled
‘overa finite interval without violating the sampling theorem,
‘When a function is periodic, it may be sampled at a rate
equal to or exceeding twice its highest frequency and itis
possible to recover the function from its samples provided
that the sampling captures exactly an iiteger number of
periods of the function "This special case allows us to
illustrate vividly the Moire effect. Fig. shows two identical
periodic patterns of equally spaced vertical bars, rotated
in opposite directions and then superimposed on each other
‘by multiplying the two images. A Moire pattern, caused
by a breakup of the periodicity, is seen in fig. as a 2-D
sinusoidal (aliased) waveform (which looks like a
corrugated tin roof) running ina vertical direction. A similar
pattern can appear when images are digitized (e.g.
scanned) from a printed page, which consists of periodic
ink dots. :
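A minimal sketch that generates a Moiré pattern as described: two identical bar patterns rotated in opposite directions and superimposed by multiplication (the angle and bar frequency are illustrative):

import numpy as np

n = 256
y, x = np.mgrid[0:n, 0:n].astype(float)
theta = np.deg2rad(5)                    # small rotation angle

def bars(a):
    # Binary vertical-bar pattern rotated by angle a.
    return 0.5 + 0.5 * np.sign(np.sin(0.5 * (x * np.cos(a) + y * np.sin(a))))

moire = bars(theta) * bars(-theta)       # superimpose by multiplying
print(moire.shape)                       # (256, 256); displaying it reveals fringes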
Q8. Define sampling theorem and write a short note on image quantization. [R.T.U. 2011]
Ans. Sampling Theorem :
A band-limited image f(x, y) sampled uniformly on a rectangular grid with spacings Δx, Δy can be recovered without error from the sample values f(mΔx, nΔy) provided the sampling rate is greater than the Nyquist rate, that is,

1/Δx > 2ξ_x0,   1/Δy > 2ξ_y0

where ξ_x0 and ξ_y0 are the bandwidths of the image in the x- and y-frequency directions. Moreover, the reconstructed image is given by the interpolation formula

f(x, y) = Σ_m Σ_n f(mΔx, nΔy) sinc((x − mΔx)/Δx) sinc((y − nΔy)/Δy)