Opening:
In the opening of a gray-scale image, we remove small light details, while leaving the overall gray levels and larger bright features relatively undisturbed:

f ∘ b = (f ⊖ b) ⊕ b

The structuring element is rolled under the surface of f. All the peaks that are narrow with respect to the diameter of the structuring element will be reduced in amplitude and sharpness. The initial erosion removes the small details, but it also darkens the image. The subsequent dilation again increases the overall intensity of the image without reintroducing the details totally removed by erosion.

Opening a gray-scale picture is describable as pushing object B under the scan-line graph while traversing the graph according to the curvature of B.
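As a rough sketch of this behavior (assuming SciPy's ndimage module; the image and element sizes are arbitrary choices), opening is erosion followed by dilation:

    import numpy as np
    from scipy import ndimage

    # Synthetic gray-scale image: dark background, one small and one large bright feature
    f = np.full((64, 64), 50, dtype=np.uint8)
    f[10:13, 10:13] = 255   # small bright detail: removed by opening
    f[30:55, 30:55] = 200   # large bright feature: survives mostly intact

    size = (7, 7)  # flat structuring element b (7x7 neighborhood)
    eroded = ndimage.grey_erosion(f, size=size)        # removes small bright peaks, darkens f
    opened = ndimage.grey_dilation(eroded, size=size)  # restores intensity; details stay removed

    # Same result via the library's one-step opening
    assert np.array_equal(opened, ndimage.grey_opening(f, size=size))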
Closing:
In the closing of a gray-scale image, we remove small dark details, while leaving the overall gray levels and larger dark features relatively undisturbed:

f • b = (f ⊕ b) ⊖ b

The structuring element is rolled on top of the surface of f. Peaks essentially are left in their original form (assuming that their separation at the narrowest points exceeds the diameter of the structuring element). The initial dilation removes the dark details and brightens the image. The subsequent erosion darkens the image without reintroducing the details totally removed by dilation.

Closing a gray-scale picture is describable as pushing object B on top of the scan-line graph while traversing the graph according to the curvature of B. The peaks usually remain in their original form.
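A matching sketch for closing, under the same assumptions, with the order of the two operations reversed:

    import numpy as np
    from scipy import ndimage

    # Synthetic gray-scale image: bright background with small dark details
    f = np.full((64, 64), 200, dtype=np.uint8)
    f[10:13, 10:13] = 0    # small dark detail: removed by closing
    f[30:55, 30:55] = 60   # large dark feature: survives mostly intact

    size = (7, 7)
    dilated = ndimage.grey_dilation(f, size=size)      # removes small dark details, brightens f
    closed = ndimage.grey_erosion(dilated, size=size)  # darkens again without restoring details

    assert np.array_equal(closed, ndimage.grey_closing(f, size=size))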
Basic Morphological Algorithms
Boundary Extraction:
The boundary of a set A is obtained by first eroding A by a structuring element B and then taking the set difference of A and its erosion. The resultant image, obtained by subtracting the eroded image from the original image, contains the boundary of the objects. The thickness of the boundary depends on the size of the structuring element. The boundary β(A) of a set A is

β(A) = A − (A ⊖ B)
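A minimal binary sketch of this formula (assuming SciPy; the 3x3 element is an arbitrary choice):

    import numpy as np
    from scipy import ndimage

    # Set A: a filled 16x16 square
    A = np.zeros((32, 32), dtype=bool)
    A[8:24, 8:24] = True

    B = np.ones((3, 3), dtype=bool)                  # structuring element B
    eroded = ndimage.binary_erosion(A, structure=B)  # A eroded by B
    boundary = A & ~eroded                           # beta(A) = A - (A eroded by B)

    print(boundary.sum())  # 60: a one-pixel-thick ring around the square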
FIGURE 9.13 (a) Set A. (b) Structuring element B. (c) A eroded by B. (d) Boundary, given by the set difference between A and its erosion.
FIGURE 9.14 (a) A simple binary image, with 1s represented in white. (b) Result of Eq. (9.5-1) with the structuring element in Fig. 9.13(b).

Region Filling or Hole Filling:
A hole may be defined as a background region surrounded by a connected border of foreground pixels. This algorithm is based on a set of dilations, complementation, and intersections. Let p be a point inside the boundary; it is filled with the value 1. Then iterate:

X_k = (X_{k−1} ⊕ B) ∩ A^c,  k = 1, 2, 3, …

> The process stops when X_k = X_{k−1}.
> The set X_k contains all the filled holes.
> The result given by the union of A and X_k is a set containing the filled set and the boundary.
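A minimal sketch of this iteration (assuming SciPy and a 4-connected structuring element B; the ring and seed point are arbitrary choices):

    import numpy as np
    from scipy import ndimage

    # Set A: a square ring of foreground pixels enclosing a hole
    A = np.zeros((16, 16), dtype=bool)
    A[4:12, 4:12] = True
    A[6:10, 6:10] = False   # the hole

    B = ndimage.generate_binary_structure(2, 1)  # 4-connected cross

    X = np.zeros_like(A)
    X[7, 7] = True  # point p inside the boundary
    while True:
        X_next = ndimage.binary_dilation(X, structure=B) & ~A  # X_k = (X_{k-1} dilated by B) ∩ A^c
        if np.array_equal(X_next, X):  # stop when X_k = X_{k-1}
            break
        X = X_next

    filled = A | X  # union of A and X_k: the filled set plus the boundary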
FIGURE 9.15 Region filling. (a) Set A. (b) Complement of A. (c) Structuring element B. (d) Initial point inside the boundary. (e)-(h) Various steps of Eq. (9.5-2). (i) Final result [union of (a) and (h)].

Hit-or-Miss Transform:
The hit-or-miss transform is a basic tool for shape detection. It is a general binary morphological operation that can be used to look for particular patterns of foreground and background pixels in an image.

Concept: To detect a shape:
> Hit the object
> Miss the background

Let the origin of each shape be located at its center of gravity.

> If we want to find the location of a shape X in a (larger) image A, let X be enclosed by a small window, say W.
> The local background of X with respect to W is defined as the set difference (W − X).
> Applying the erosion operator of A by X gives us the set of locations of the origin of X such that X is completely contained in A.
> It may also be viewed geometrically as the set of all locations of the origin of X at which X found a match (hit) in A.
> Apply the erosion operator on the complement of A by the local background set (W − X).
> Notice that the set of locations for which X exactly fits inside A is the intersection of these two last operations.
> If B denotes the set composed of X and its background, B = (B1, B2); B1 = X, B2 = (W − X).
> The match (or set of matches) of B in A is denoted

A ⊛ B = (A ⊖ B1) ∩ (A^c ⊖ B2)

B1: object related, B2: background related.

The reason for using this kind of structuring element B = (B1, B2) is that, by definition, two or more objects are distinct only if they form disjoint (disconnected) sets.

In some applications, we may be interested in detecting certain patterns (combinations) of 1's and 0's rather than in detecting individual objects. In this case a background is not required, and the hit-or-miss transform reduces to simple erosion. This simplified pattern detection scheme is used in some of the algorithms for identifying characters within a text.
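A minimal sketch using SciPy's binary_hit_or_miss, where structure1 plays the role of B1 (hit) and structure2 the role of B2 (miss); the shapes below are arbitrary choices:

    import numpy as np
    from scipy import ndimage

    # Image A containing two shapes; we search for the isolated 3x3 square X
    A = np.zeros((14, 14), dtype=bool)
    A[2:5, 2:5] = True     # 3x3 square: the shape X we want to hit
    A[8:12, 8:12] = True   # 4x4 square: must not match

    B1 = np.ones((3, 3), dtype=bool)   # B1 = X (foreground pattern)
    B2 = np.ones((5, 5), dtype=bool)   # B2 = W - X (background ring of window W)
    B2[1:4, 1:4] = False

    # A (*) B = (A eroded by B1) ∩ (A^c eroded by B2)
    match = ndimage.binary_hit_or_miss(A, structure1=B1, structure2=B2)
    print(np.argwhere(match))  # [[3 3]]: the origin of X where it exactly fits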
FIGURE 9.12 (a) Set A. (b) A window, W, and the local background of X with respect to W, (W − X). (c) Complement of A. (d) Erosion of A by X. (e) Erosion of A^c by (W − X). (f) Intersection of (d) and (e), showing the location of the origin of X, as desired.
The structuring elements used for hit-or-miss transforms are an extension of the ones used with dilation, erosion, etc. They can contain both foreground and background pixels, rather than just foreground pixels, i.e. both ones and zeros. The structuring element is superimposed over each pixel in the input image, and if an exact match is found between the foreground and background pixels in the structuring element and the image, the input pixel lying below the origin of the structuring element is set to the foreground pixel value. If it does not match, the input pixel is set to the background pixel value.
4.3. Pseudo Color Image Processing
Pseudo color (also called false color) image processing consists of assigning colors to gray values based on a specified criterion. The term pseudo or false color is used to differentiate the process of assigning colors to monochrome images from the processes associated with true color images. The process of gray level to color transformation is known as pseudo color image processing. The two techniques used for pseudo color image processing are:

> Intensity Slicing
> Gray Level to Color Transformation

4.3.1. Intensity Slicing:
The technique of intensity (sometimes called density) slicing and color coding is one of the simplest examples of pseudo color image processing. If an image is interpreted as a 3-D function (intensity versus spatial coordinates), the method can be viewed as one of placing planes parallel to the coordinate plane of the image; each plane then "slices" the function in the area of intersection. The following figure shows an example of using a plane at f(x, y) = l_i to slice the image function into two levels.

Fig. Geometric interpretation of the intensity slicing technique

If a different color is assigned to each side of the plane shown in the above figure, any pixel whose gray level is above the plane will be coded with one color and any pixel below the plane will be coded with the other. Levels that lie on the plane itself may be arbitrarily assigned one of the two colors. The result is a two-color image whose relative appearance can be controlled by moving the slicing plane up and down the gray-level axis.

In general, the technique may be summarized as follows. Let [0, L − 1] represent the gray scale, let level l_0 represent black [f(x, y) = 0], and let level l_{L−1} represent white [f(x, y) = L − 1]. Suppose that P planes perpendicular to the intensity axis are defined at levels l_1, l_2, …, l_P. Then, assuming that 0 < P < L − 1, the P planes partition the gray scale into P + 1 intervals, V_1, V_2, …, V_{P+1}. Gray-level to color assignments are made according to the relation

f(x, y) = c_k   if f(x, y) ∈ V_k

where c_k is the color associated with the k-th intensity interval V_k, defined by the partitioning planes at l = k − 1 and l = k. An alternative representation defines the same mapping according to the mapping function shown in the following figure. Any input gray level is assigned one of two colors, depending on whether it is above or below the value of l_i. When more levels are used, the mapping function takes on a staircase form.

Fig. An alternative representation of the intensity slicing technique
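A minimal sketch of two-level intensity slicing with a single plane at level l_i (the two colors are arbitrary choices):

    import numpy as np

    def intensity_slice(gray, level, below=(0, 0, 255), above=(255, 255, 0)):
        """Two-color slicing: one plane at f(x, y) = level."""
        rgb = np.empty(gray.shape + (3,), dtype=np.uint8)
        rgb[gray <= level] = below   # pixels on or below the plane
        rgb[gray > level] = above    # pixels above the plane
        return rgb

    # Slice an 8-bit horizontal gradient at the middle of the gray scale
    gray = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
    sliced = intensity_slice(gray, level=127)

With P > 1 slicing planes, np.digitize over the plane levels plus a lookup table of P + 1 colors gives the staircase mapping described above.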
4.3.2. Gray Level to Color Transformation:
This approach is to perform three independent transformations on the gray level of
any input pixel. The three results are then fed separately into the red, green, and blue
channels of a color television monitor. This method produces a composite image whose color
content is modulated by the nature of the transformation functions. These are transformations
on the gray-level values of an image and are not functions of position. In intensity slicing,
piecewise linear functions of the gray levels are used to generate colors. On the other hand,
this method can be based on smooth, nonlinear functions, which, as might be expected, gives
the technique considerable flexibility. Combining the outputs of the three transformations produces the composite image.
Fig. Functional block diagram for pseudo color image processing: the input f(x, y) is fed into the red, green, and blue transformations, producing f_R(x, y), f_G(x, y), and f_B(x, y).
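As a sketch of this block diagram (the sinusoidal transformations are arbitrary choices, not prescribed by the text):

    import numpy as np

    def pseudocolor(gray):
        """Feed gray levels through three independent R, G, B transformations."""
        t = gray.astype(np.float64) / 255.0                # normalize to [0, 1]
        r = np.abs(np.sin(2 * np.pi * t))                  # red transformation
        g = np.abs(np.sin(2 * np.pi * t + np.pi / 3))      # green transformation
        b = np.abs(np.sin(2 * np.pi * t + 2 * np.pi / 3))  # blue transformation
        return (np.stack([r, g, b], axis=-1) * 255).astype(np.uint8)

    gray = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
    color = pseudocolor(gray)   # color content modulated by the transformations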
Morphological Smoothing:
> Perform an opening followed by a closing.
> The net result of these two operations is to remove or attenuate both bright and dark artifacts and noise.

Morphological Gradient:
> Dilation and erosion are used to compute the morphological gradient of an image, denoted g:

g = (f ⊕ b) − (f ⊖ b)

> It is used to highlight sharp gray-level transitions in the input image.
> Gradients obtained using symmetrical structuring elements tend to depend less on edge directionality.
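A combined sketch of both operations (assuming SciPy; the sizes and noise levels are arbitrary):

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)
    f = np.full((64, 64), 100, dtype=np.int32)
    f[24:40, 24:40] = 200                                  # object with sharp edges
    noisy = np.clip(f + rng.integers(-30, 31, f.shape), 0, 255)

    size = (3, 3)
    # Morphological smoothing: opening followed by closing
    smooth = ndimage.grey_closing(ndimage.grey_opening(noisy, size=size), size=size)

    # Morphological gradient: g = (f dilated by b) - (f eroded by b); large only near edges
    g = ndimage.grey_dilation(f, size=size) - ndimage.grey_erosion(f, size=size)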
4.2.1. The RGB Color Model
In the RGB model, each color appears in its primary spectral components of red, green, and blue. This model is based on a Cartesian coordinate system. The color subspace of interest is the cube shown in the following figure, in which RGB values are at three corners; cyan, magenta, and yellow are at three other corners; black is at the origin; and white is at the corner farthest from the origin. In this model, the gray scale (points of equal RGB values) extends from black to white along the line joining these two points. The different colors in this model are points on or inside the cube, and are defined by vectors extending from the origin. For convenience, the assumption is that all color values have been normalized so that the cube shown in the figure is the unit cube. That is, all values of R, G, and B are assumed to be in the range [0, 1].

Fig. Schematic of the RGB color cube

Images represented in the RGB color model consist of three component images, one for each primary color. When fed into an RGB monitor, these three images combine on the phosphor screen to produce a composite color image.

Fig. Generating the RGB image of the cross-sectional color plane

The number of bits used to represent each pixel in RGB space is called the pixel depth. Consider an RGB image in which each of the red, green, and blue images is an 8-bit image. Under these conditions each RGB color pixel [that is, a triplet of values (R, G, B)] is said to have a depth of 24 bits (3 image planes times the number of bits per plane). The term full-color image is used often to denote a 24-bit RGB color image. The total number of colors in a 24-bit RGB image is (2^8)^3 = 16,777,216.

4.2.2. The CMY and CMYK Color Models
Cyan, magenta, and yellow are the secondary colors of light or, alternatively, the
primary colors of pigments. For example, when a surface coated with cyan pigment is
illuminated with white light, no red light is reflected from the surface. That is, cyan subtracts
red light from reflected white light, which itself is composed of equal amounts of red, green,
and blue light. Most devices that deposit colored pigments on paper, such as color printers
and copiers, require CMY data input or perform an RGB to CMY conversion internally. This
conversion is performed using
    [C]   [1]   [R]
    [M] = [1] − [G]
    [Y]   [1]   [B]
Where, again, the assumption is that all color values have been normalized to the
range [0, 1]. The above equation demonstrates that light reflected from a surface coated with
pure cyan does not contain red (that is, C = 1 − R in the equation). Similarly, pure magenta does not reflect green, and pure yellow does not reflect blue. So, the RGB values can be obtained easily from a set of CMY values by subtracting the individual CMY values from 1.
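A minimal sketch of the conversion and its inverse on normalized values:

    import numpy as np

    def rgb_to_cmy(rgb):
        """[C, M, Y] = [1, 1, 1] - [R, G, B], with values in [0, 1]."""
        return 1.0 - np.asarray(rgb, dtype=np.float64)

    def cmy_to_rgb(cmy):
        """Inverse: subtract the individual CMY values from 1."""
        return 1.0 - np.asarray(cmy, dtype=np.float64)

    print(rgb_to_cmy([1.0, 0.0, 0.0]))  # pure red -> [0. 1. 1.]: cyan is absent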
Equal amounts of the pigment primaries, cyan, magenta, and yellow should produce black. In
practice, combining these colors for printing produces a muddy-looking black. So, in order to
produce true black, a fourth color, black, is added, giving rise to the CMYK color model.

4.2.4. Conversion from RGB Color Model to HSI Color Model
Given an image in RGB color format, the H component of each RGB pixel is
obtained using the equation,
    H = θ          if B ≤ G
    H = 360° − θ   if B > G

    θ = cos⁻¹ { ½[(R − G) + (R − B)] / [(R − G)² + (R − B)(G − B)]^(1/2) }

    S = 1 − 3 min(R, G, B) / (R + G + B)

    I = (R + G + B) / 3
It is assumed that the RGB values have been normalized to the range [0, 1] and that angle θ is measured with respect to the red axis of the HSI space. The S and I values are then in [0, 1], and the H value can be divided by 360° to be in the same range.
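A minimal sketch of these equations for one normalized RGB triple:

    import math

    def rgb_to_hsi(r, g, b):
        """Convert RGB in [0, 1] to HSI with all components in [0, 1]."""
        num = 0.5 * ((r - g) + (r - b))
        den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
        if den > 1e-12:
            theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        else:
            theta = 0.0   # hue is undefined for grays; pick 0 by convention
        h = theta if b <= g else 360.0 - theta
        s = 1.0 - 3.0 * min(r, g, b) / (r + g + b) if r + g + b > 0 else 0.0
        i = (r + g + b) / 3.0
        return h / 360.0, s, i   # H divided by 360 degrees to land in [0, 1]

    print(rgb_to_hsi(1.0, 0.0, 0.0))  # pure red -> (0.0, 1.0, 0.333...)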
4.2.5. Conversion from HSI Color Model to RGB Color Model
Given values of HSI in the interval [0, 1], one can find the corresponding RGB values in the same range. The applicable equations depend on the value of H. There are three sectors of interest, corresponding to the 120° intervals in the separation of primaries.
RG sector (0° ≤ H < 120°): …
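The sector equations are cut off in the source; the sketch below follows the standard textbook sector formulas (RG sector shown explicitly, with GB and BR obtained by rotating H), under the same normalization assumptions:

    import math

    def hsi_to_rgb(h, s, i):
        """Convert HSI (each in [0, 1]) to RGB in [0, 1] via the three 120-degree sectors."""
        H = h * 360.0
        sector = int(H // 120.0) if H < 360.0 else 2   # 0: RG, 1: GB, 2: BR
        H -= 120.0 * sector                            # rotate into the RG-sector form
        x = i * (1.0 - s)                              # e.g. B = I(1 - S) in the RG sector
        y = i * (1.0 + s * math.cos(math.radians(H)) / math.cos(math.radians(60.0 - H)))
        z = 3.0 * i - (x + y)
        if sector == 0:
            return y, z, x   # RG sector: R = y, G = z, B = x
        if sector == 1:
            return x, y, z   # GB sector: R = x, G = y, B = z
        return z, x, y       # BR sector: R = z, G = x, B = y

    print(hsi_to_rgb(0.0, 1.0, 1 / 3))  # pure red -> approximately (1.0, 0.0, 0.0)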
Let R represent the entire image region.
> Segmentation is a process that partitions R into subregions R_1, R_2, …, R_n, such that:

(a) ∪_{i=1}^{n} R_i = R
(b) R_i is a connected region, i = 1, 2, …, n
(c) R_i ∩ R_j = ∅ for all i and j, i ≠ j
(d) P(R_i) = TRUE for i = 1, 2, …, n, and
(e) P(R_i ∪ R_j) = FALSE for any adjacent regions R_i and R_j

where P(R_i) is a logical predicate defined over the points in set R_i.
For example: P(R_i) = TRUE if all pixels in R_i have the same gray level.
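A minimal sketch checking conditions (a) and (d) for a candidate label image, with the example predicate "all pixels share one gray level" (the helper names are illustrative):

    import numpy as np

    def predicate(pixels):
        """P(R_i) = TRUE if all pixels in R_i have the same gray level."""
        return pixels.min() == pixels.max()

    def is_valid_partition(gray, labels):
        """Check (a): labels cover every pixel, and (d): P holds for each region."""
        if labels.shape != gray.shape:        # (a): the regions must cover all of R
            return False
        return all(predicate(gray[labels == k]) for k in np.unique(labels))

    gray = np.array([[1, 1, 2],
                     [1, 2, 2]])
    labels = np.array([[0, 0, 1],
                       [0, 1, 1]])
    print(is_valid_partition(gray, labels))   # True: each region is uniform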
FIGURE 10.40 (a) Image. (b) Segmented defective welds. (Courtesy of X-TEK Systems.)
Fig. 10.41 shows the histogram of Fig. 10.40(a). It is difficult to segment the defects by thresholding methods. (Applying region growing methods is better in this case.)

Figure 10.41 Histogram of Figure 10.40(a).

Overview
Smoothing in image processing is a technique used to reduce noise and fine details in an image by applying a low-pass
filter. This filter works by replacing each pixel value with an average value of its neighboring pixels. Smoothing can help to
improve the visual quality of an image and make it easier to analyze by reducing the impact of small variations in pixel
values. However, too much smoothing can result in the loss of important information, so it's important to choose an
appropriate level of smoothing based on the specific requirements of the application.
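A minimal sketch of the neighborhood-averaging idea (assuming SciPy's uniform_filter as the low-pass filter):

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)
    img = np.full((64, 64), 120.0) + rng.normal(0.0, 25.0, (64, 64))  # noisy flat image

    # 3x3 mean filter: each pixel becomes the average of its neighborhood
    smoothed = ndimage.uniform_filter(img, size=3)

    print(round(img.std(), 1), round(smoothed.std(), 1))  # noise std drops after smoothing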
Introduction
Smoothing in image processing refers to the process of reducing noise or other unwanted artifacts in an image while
preserving important features and structures. The goal of smoothing is to create a visually appealing image that is easy to
interpret and analyze. Smoothing techniques use various algorithms, such as filters or convolutions, to remove noise or
other distortions in the image. Effective smoothing requires striking a balance between removing unwanted artifacts and
preserving important image details, and is an essential step in many image processing applications, including image
segmentation, object recognition, and computer vision.