
A Fast Iris Localization using Inversion Transform
and Restricted Circular Hough Transform

Saiyed Umer
Electronics and Communication Sciences Unit
Indian Statistical Institute
203, B.T. Road, Kolkata, India
Email: [email protected]

Bibhas Chandra Dhara
Department of Information Technology
Jadavpur University
Kolkata, India
Email: [email protected]

Abstract—This paper presents a fast segmentation of the iris portion from an eye image in an iris recognition system. In iris segmentation, we have to find the inner boundary (between the pupil and the iris) and the outer boundary (between the sclera and the iris). To find the inner boundary, a restricted circular Hough transform based method is applied. To locate the outer boundary, the image first passes through an inversion transform and then the restricted circular Hough transform method is used. Both the restricted circular Hough transform and the inversion transform reduce the search space of the circular Hough transform. The proposed method is fast and gives highly accurate results. Its performance is tested on three standard iris databases: MMU1, CASIA-Iris V3 and IITD.

Keywords—Iris localization; Circular Hough Transform; Inversion Transform

Fig. 1. Examples of iris localization.

I. INTRODUCTION

In the current world of information technology, security is a vital and essential issue in our daily life. A security system confirms the identity of an individual or recognizes a person based on their biometric characteristics. The term biometric refers to a person's physiological (e.g., fingerprint, palm print, ear, face, iris) or behavioral (e.g., gait, keystroke, signature, speech, voice) characteristics. Generally, the physiological characteristics are stable and related to the shape of the body parts, whereas behavioral biometrics are related to human behavioral habits, which are relatively less stable. Among these various biometric traits, the iris is one of the most useful due to its high reliability for person identification [20]. A human iris is an annular region between the pupil (black portion) and the sclera (white portion) of an eye, with a complex regular structure and abundant texture information. This texture information of the iris is unique to each individual [19] as compared with other biometrics (such as face, fingerprints, voice-prints, etc.). The iris is more reliable and stable due to its unalterable characteristics.

To develop an iris recognition system there are some basic steps, such as localization, normalization, feature extraction, and iris matching. In the localization step, the task of the iris recognition system is to detect two boundaries (inner and outer). The detection of the inner boundary finds the boundary between the pupil and iris regions, whereas the outer boundary lies between the iris and sclera regions. Fig. 1 shows some examples of iris localization. The result of iris localization defines the image which is further used in the normalization, feature extraction, and matching steps. The matching process is related to the accuracy obtained by the recognition system.

Daugman [7], [8] used the integro-differential operator (IDO) to localize the iris. The integro-differential operator searches for circular contours by varying the radius and the center position of the circular contour. It works with raw derivative information and does not need thresholding like the Hough transform. However, the integro-differential operator cannot give good results where there is noise in the eye image, such as from reflections and low intensity variation between the iris and sclera regions. Shamsi et al. [25] used a circular integro-differential operator and square shrinking approaches for iris localization. Abduljalil [22] obtained a rough position of the pupil center using a circular Gabor filter and then used the integro-differential operator for the iris boundaries and a live-wire technique for eyelid detection. Cui [5] used the wavelet transform for pupil segmentation and the integro-differential operator for iris segmentation.

Wildes [30] used the circular Hough transform (CHT) for iris localization. Iris localization using the circular Hough transform requires thresholding for the edge detection process, and this may result in critical edge points. Sundaram et al. [27] proposed a fast method for iris localization based on the circular Hough transform. Dey [9] proposed scaling and color level transforms followed by thresholding for pupil boundary detection, and dilation, thresholding and vertical edge detection for iris boundary detection. A gaze estimation method based on projective geometry approaches and elliptic curve fitting to find the centers and radii of the pupil and iris boundaries was proposed by Mohammadi [21]. Kumar et al. [28] proposed an iris segmentation framework for near-infrared or visible-illumination iris images based on classification of pixel dependencies between iris and non-iris regions. Zhaofeng [13] proposed AdaBoost-based localization of the pupil and Hooke's law to find the centers and radii of the pupil and iris.
978-1-4799-7458-0/15/$31.00 © 2015 IEEE
Lili Pan [18] proposed thresholding and curve fitting methods to segment both the pupil and iris boundaries. Approaches such as reflection localization and filling, and localization of the iris and eyelid boundaries, were proposed by Sankowski [24]. Attarchi [3] used thresholding with a morphological operator to extract the pupil, filtering and edge detection methods for the iris, and then applied a complex inversion map to find the centers and radii of both boundaries. Anisotropic diffusion followed by a Laplacian pyramid and morphological operations was used for non-ideal iris segmentation by Hong-Lin [29]. Bonney [4] used the significant bit of every pixel in the image and standard deviations of the image intensity to localize the iris pattern. Peihua Li et al. [17] localize the iris under non-ideal image conditions using a random sample consensus method and an image registration method based on the Lucas-Kanade algorithm.

This paper presents a fast iris localization method in which a Restricted Circular Hough Transform (RCHT) and an Inversion Transform (IT) are used to reduce the search space of the CHT method. The remainder of this paper is organized as follows. Section II discusses some traditional methods for iris localization of an eye image. Section III explains the proposed iris localization method using the RCHT and IT methods. Experimental results and discussions are reported in Section IV. Section V presents the conclusion of this paper.

II. TRADITIONAL METHODS FOR IRIS LOCALIZATION

The purpose of iris localization is to segment the iris portion from a given eye image. To perform this task, traditional methods such as the integro-differential operator [6], [7], [8], the circular Hough transform [14], [23], [31], or their variants are widely used. To position the proposed system relative to the state of the art, in this section we discuss only the IDO and CHT methods.

A. Integro-differential Operator

The integro-differential operator (IDO) was first proposed by John Daugman in 1993. It is used to locate the circular boundaries of both the pupil and iris regions of an eye image. Mathematically, it is defined as:

max_{(r, x0, y0)} | G_σ(r) * (∂/∂r) ∮_{(r, x0, y0)} f(x, y)/(2πr) ds |    (1)

where f(x, y) is the eye image and ds is an element of the circular contour with radius r and center coordinates (x0, y0). The symbol * denotes convolution and G_σ(r) is a Gaussian smoothing function at scale σ.

The integro-differential operator acts as a circular edge detector, finding the circular path by varying the radius and center position. It iteratively searches for the maximum contour-integral derivative with increasing radius at progressively finer scales, using the parameter space of center coordinates and radius, i.e., (x0, y0, r), to define the path of contour integration. Since it works directly on raw derivative information of the image, it is sensitive to noise and reflections within the eye image. The method is also time consuming for localization compared to the CHT method.

B. Circular Hough Transform

The Hough transform (HT) was first introduced by Hough [14] to locate particle tracks in bubble chamber imagery. Rosenfeld [23] and Duda and Hart [10] investigated further applications of the Hough transform. It detects parametric curves using a voting process that maps image edge points into an appropriately defined parameter space [26], [11]. To detect parametric curves within an image, the transform needs some pre-processing applied to the image to identify feature points. These feature points carry salient information associated with the boundary points of the required shape within the image. The coordinates of each feature point identify loci during the 'voting' process, in which each feature point votes for a set of points in the parameter space. The parameter space is quantized into bins, and the voting process accumulates the votes into the bins to form a discrete approximation of the loci identified by the feature points. This quantized parameter space is termed the 'accumulator'.

HT algorithms can be used to determine the parameters of simple geometric objects, such as lines, circles and ellipses, present in an image. The circular Hough transform (CHT) is designed to find a circle characterized by its center (α, β) and radius r in the parameter space. Mathematically, the equation of a circle is r^2 = (x − α)^2 + (y − β)^2, where (x, y) are the points on the circle in the image. The parametric form of this circle is x = α + r cos(θ) and y = β + r sin(θ).

In the CHT method, for each edge point (xi, yi) a circle, say C, with radius r is drawn with (xi, yi) as its center. For an arbitrary point p = (xc, yc) on C, the circle centered at (xc, yc) with radius r must pass through (xi, yi). To find the desired circle, a majority voting technique (i.e., the Hough transform) is applied: for each point on C, the corresponding accumulator value is increased by one. For n edge points (xi, yi), i = 1, 2, ..., n, the HT is defined as

H(xc, yc, r) = Σ_{i=1}^{n} h(xi, yi, xc, yc, r)    (2)

where

h(xi, yi, xc, yc, r) = 1 if g(xi, yi, xc, yc, r) = 0, and 0 otherwise

with

C : g(xi, yi, xc, yc, r) = (xi − xc)^2 + (yi − yc)^2 − r^2    (3)

The parameter triple (xc, yc, r) that maximizes H is common to the largest number of edge points and is a reasonable choice to represent the circular contour.
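The voting scheme of Eqs. (2)-(3) can be sketched as follows (a minimal illustration, not the authors' implementation; the angular sampling density and the candidate radius range are assumptions): each edge point votes for every candidate center lying at distance r from it, and the accumulator peak gives the circle parameters.

```python
import numpy as np

def circular_hough(edge_points, shape, radii):
    """Accumulate votes H(xc, yc, r) as in Eqs. (2)-(3): an edge point
    (xi, yi) votes for every center (xc, yc) at distance r from it,
    i.e. every (xc, yc) with g(xi, yi, xc, yc, r) = 0."""
    h, w = shape
    acc = np.zeros((len(radii), h, w), dtype=np.int32)
    thetas = np.linspace(0.0, 2 * np.pi, 100, endpoint=False)
    for k, r in enumerate(radii):
        for (xi, yi) in edge_points:
            # Candidate centers lie on a circle of radius r around (xi, yi).
            xc = np.round(xi + r * np.cos(thetas)).astype(int)
            yc = np.round(yi + r * np.sin(thetas)).astype(int)
            ok = (xc >= 0) & (xc < w) & (yc >= 0) & (yc < h)
            np.add.at(acc[k], (yc[ok], xc[ok]), 1)  # unbuffered voting
    # The triple (xc, yc, r) with the most votes represents the contour.
    k, yc, xc = np.unravel_index(np.argmax(acc), acc.shape)
    return xc, yc, radii[k]
```

Note that the accumulator is three-dimensional, which is exactly the search-space cost that the RCHT and IT steps of Section III are designed to reduce.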
III. PROPOSED LOCALIZATION METHOD

In CHT, the triplet (xc, yc, r) of the desired contour is the one at which the accumulator h(xi, yi, xc, yc, r), defined with respect to the circle C of Eq. (3), is maximal. Finding the location of h with maximum value is time consuming; the major drawback of the CHT method is this search complexity. To reduce the search complexity of the CHT method, we have adopted the concepts of the Restricted Circular Hough Transform (RCHT) and the Inversion Transform (IT).

A. Restricted Circular Hough Transform

In the RCHT method, to reduce the search time we consider only some selective search points and then iterate the process. We start with an initial position (cen) together with some other positions distributed symmetrically around cen at a distance d, say. With respect to these points, CHT is applied to find the best position, cen_best. If cen_best is the current center position, then d is decreased by one; otherwise the process is repeated with the new center cen_best and the same distance d. The process terminates when d becomes zero, and the corresponding center and radius give the desired circular contour.

In this experiment, for simplicity, we have considered a square pattern of search centers, as shown in Fig. 2, with d = 3. The initial search positions are labeled '1' and the search center is cen. Suppose CHT gives c1 as the best position; then for the next iteration three new positions, labeled '2', are obtained at d = 3. Now suppose the best position is c2, so three new positions, labeled '3', are obtained at distance d = 3. Again suppose c2 is the best position given by CHT; then eight new positions at d = 2 (labeled '4') are used. If CHT again gives the same point (c2) as the best position, eight new search positions (labeled '5') at d = 1 are used for further execution. Since the same point c2 is the best point a fourth time, for the next iteration d becomes zero and c2 is taken as the center.

Fig. 2. Illustration of the proposed RCHT method.

The algorithmic sketch of the RCHT method is given below.

Algorithm: RCHT method

Input: Binary image BW, neighbor distance d.
Output: Center (α, β) and radius r.

1. Compute the centroid (α, β) of BW and set cen = (α, β).
2. Consider the nine search positions {(α + i·d, β + j·d) : i, j ∈ {−1, 0, 1}}.
   2.1. Apply CHT with respect to the above set of positions and find the best position cen_best and radius r.
   2.2. If cen == cen_best then
        2.2.a. d = d − 1.
        2.2.b. If d == 0 then return cen, r.
        2.2.c. Go to Step 2.
   2.3. cen = cen_best.
   2.4. Go to Step 2.
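The RCHT search loop can be sketched as follows (a hedged illustration, not the paper's MATLAB implementation; here `score` is a hypothetical stand-in for evaluating the CHT accumulator at a candidate center and returning its best vote count and the corresponding radius):

```python
def rcht(score, cen, d):
    """Coarse-to-fine center search: evaluate the 3x3 grid of positions
    spaced d apart around cen; if the best-scoring position is the current
    one, shrink the spacing by 1, otherwise jump to the best position.
    Stop when d reaches 0 and return the final center and radius."""
    while d > 0:
        cands = [(cen[0] + i * d, cen[1] + j * d)
                 for i in (-1, 0, 1) for j in (-1, 0, 1)]
        best = max(cands, key=lambda c: score(c)[0])
        if best == cen:
            d -= 1          # converged at this spacing: refine the step
        else:
            cen = best      # move toward the higher-vote position
    votes, r = score(cen)
    return cen, r
```

Only a handful of candidate centers are scored per iteration, instead of every cell of the full accumulator, which is the source of the speed-up reported in Table I.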
From the above algorithm, and also from Fig. 2, it is clear that to find the final result we consider only some positions (determined dynamically) as possible centers of the circular boundary; hence it is a Restricted Circular Hough Transform.

B. Inversion Transform

Euclidean geometry provides a powerful tool in inversion, whose strength lies in the ability to convert statements about circles into statements about lines, often reducing the difficult to the trivial. One such example is inversion in a circle, which has received a variety of treatments from a very analytical standpoint [12]. Informally, an inversion in a circle is a transformation of the plane that flips the circle inside-out (i.e., points outside the circle get mapped inside the circle, and points inside the circle get mapped outside) [15]. To explain the concept of inversion in a circle, we demonstrate an example in Fig. 3.

Fig. 3. An example of inversion in a circle.

Let C be a circle in the plane with center at point O(x0, y0). Consider a point Q(x, y) (≠ O) in the plane of C. We would like to construct the image Q′(x′, y′) of Q under the inversion transform with respect to C. Let a tangent through Q touch the circle at point A, and let AQ′ be the perpendicular from A onto OQ. The inversion transform maps the point Q to Q′ and conversely. Using the similar-triangle property we get

OQ / OA = OA / OQ′    (4)

Taking OA = r, we have OQ · OQ′ = r^2. The general equation for the inverse of the point Q(x, y) relative to the inversion circle C with inversion center O(x0, y0) and inversion radius r is given by

x′ = x0 + r^2 (x − x0) / ((x − x0)^2 + (y − y0)^2),
y′ = y0 + r^2 (y − y0) / ((x − x0)^2 + (y − y0)^2)    (5)

Under the inversion transform, a point outside the inversion circle is mapped inside the circle, and vice versa.
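Eq. (5) is straightforward to implement; the sketch below (an illustration with a hypothetical function name) maps a point Q to its inverse Q′ and satisfies OQ · OQ′ = r^2:

```python
def invert_point(x, y, x0, y0, r):
    """Inversion of Q = (x, y) in the circle with center O = (x0, y0)
    and radius r, per Eq. (5): Q' lies on the ray OQ with OQ * OQ' = r^2."""
    d2 = (x - x0) ** 2 + (y - y0) ** 2  # squared distance OQ^2
    xp = x0 + r ** 2 * (x - x0) / d2
    yp = y0 + r ** 2 * (y - y0) / d2
    return xp, yp
```

Because the map is an involution (applying it twice returns the original point), a circle detected in the inverted image can be mapped back to the original image by the same transform, as Step 9 of the outer boundary algorithm does.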
Fig. 5. The schematic diagram of the proposed method.

Fig. 4 (a) and (c) show synthetic images, whereas (b) and (d) show their respective transformed images, with the background considered black.

Fig. 4. Examples of inversion transformation.

C. Iris Localization

The proposed iris localization method consists of two steps: i) identification of the inner boundary between the pupil and iris regions, and ii) localization of the outer boundary between the iris and sclera regions of an eye. The schematic diagram of the proposed method is shown in Fig. 5. First, the inner boundary is detected using the RCHT method. With the help of this detected inner boundary, a reference circle is chosen between the iris and sclera regions of the eye. Then, IT is applied with respect to the reference circle, and the RCHT method is used to detect the outer boundary of that eye. The following subsections describe the processes of finding the inner and outer boundaries.

1) Inner Boundary Detection: To detect the pupil boundary (i.e., the boundary between the iris and the pupil), we analyze the histogram of the original eye image I. The pupil is the darkest region of the eyeball, so in the gray-level histogram of the eyeball image this region corresponds to the peak at the lowest gray level. Let us call this the first peak of the gray-level histogram. The intensity values in the vicinity of this first peak, or mode, represent the pupil region. The valley point greater than but nearest to this mode of the histogram is taken as the threshold t, and with respect to t the original image is transformed into a binary image B. To identify the circular boundary of the pupil area, we apply the Canny edge detection method to the image B and then clean it by removing small components using a morphological area filter, giving another image B′. Then the RCHT-based technique is applied to B′ to get the center and radius of the inner boundary of I. The step-by-step results of the inner boundary detection are shown in Fig. 6 (a)-(d).

2) Outer Boundary Detection: For outer boundary detection, we first note the intensity variation both horizontally and vertically through the center of the inner boundary. For this purpose, a smoothed iris image (to reduce the effect of noise) is used. We note the distance from the inner boundary center to the maximum variation, r_var (Fig. 6(e)). Let r_in be the radius of the inner boundary, and take a reference circle with radius r_ref (r_in < r_ref < r_var), as shown in Fig. 6(e). We apply the inversion transform with respect to the reference circle C with radius r_ref, with the center of the inner boundary as the center of inversion. The inversion transform is applied only to pixels outside C, which effectively maps those points inside it (Fig. 6(f)); this step reduces the search space. The transformed image is then smoothed (Fig. 6(g)) and binarized (Fig. 6(h)). The inner and outer circular edge points of the binarized image, which are due to the inner and outer boundaries of the transformed image, are ignored (Fig. 6(i)), and then RCHT is applied (Fig. 6(j)). Finally, the inversion transform is applied (to Fig. 6(j)) to obtain the circular contour (Fig. 6(k)). The algorithmic sketch is as follows.

Step 1. Smooth the original image I to obtain I′.
Step 2. Note the maximum intensity variation in I′ and find its distance r_var from the inner boundary center.
Step 3. Take a reference circle C with radius r_ref (r_in < r_ref < r_var).
Step 4. Apply the inversion transform to points lying outside C to obtain I1.
Step 5. Smooth I1 iteratively to get I2.
Step 6. Binarize I2 to obtain B1.
Step 7. Remove small connected components together with the inner and outer edge boundaries from B1, giving B2.
Step 8. Apply RCHT to B2 to get the circular contour.
Step 9. Apply the inversion transform to the circular contour (from Step 8) to detect the outer boundary of I.
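The histogram-based threshold selection used for inner boundary detection above can be sketched as follows (a simplified illustration, not the authors' code; the histogram smoothing width and the exact peak/valley tests are assumptions):

```python
import numpy as np

def pupil_threshold(gray):
    """Find the first peak of the gray-level histogram (the dark pupil
    mode), then take the nearest valley above it as the threshold t."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    hist = np.convolve(hist, np.ones(5) / 5, mode="same")  # light smoothing
    # First peak: first bin at least as high as its left neighbor and
    # strictly higher than its right neighbor.
    peak = next(i for i in range(1, 255)
                if hist[i] >= hist[i - 1] and hist[i] > hist[i + 1])
    # Nearest valley above the peak: first bin no higher than its left
    # neighbor and strictly lower than its right neighbor.
    t = next(i for i in range(peak + 1, 255)
             if hist[i] <= hist[i - 1] and hist[i] < hist[i + 1])
    return t
```

A real eye image would need guards against noisy spikes and missing valleys (the `next` calls raise StopIteration if no peak or valley is found); the sketch only shows the peak-then-valley rule described in the text.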
Fig. 6. The steps for pupil and iris boundary detection: (a) An eye image I, (b) Binary image B′ (after removing small components), (c) Circular contour obtained by applying RCHT, (d) Inner boundary of I, (e) Intensity variation profile (with r_in, r_ref, r_var) obtained from the smoothed version of (a), (f) I1 obtained by applying IT, (g) I2 obtained from (f) by smoothing, (h) B1 (edge detection of (g)), (i) B2 (after removing components from (h)), (j) Circular contour obtained by applying RCHT on (i), (k) Inversion transform of (j), (l) Outer boundary of I.

IV. EXPERIMENTAL RESULTS AND DISCUSSIONS

A. Databases used

In this experiment we selected three iris databases, namely MMU1 [1], IITD [16] and CASIA-Iris V3 [2], and performed iris pre-processing using the proposed localization method. The MMU1 database has 45 subjects, each with 10 iris images (5 left and 5 right). The MMU1 images were captured using an LG IrisAccess camera. The images of this database contain severe obstructions such as eyelids/eyelashes, specular reflection, nonlinear deformation, low contrast, and illumination changes. The IIT Delhi iris database consists of iris images collected from the students and staff at IIT Delhi, New Delhi, India. JIRIS, JPC1000, and digital CMOS cameras were used to capture the images of this database. The IITD iris database contains 224 subjects, each with 10 iris images. The images of the CASIA-Iris V3 database were captured using different imaging setups. The quality of the images also varies, from high-quality images with extremely clear iris textural details to images with nonlinear deformation due to variations in visible illumination. It contains 2,639 images from 249 subjects.

TABLE I. SEGMENTATION ACCURACY FOR IRIS DATABASES

              |       IITD        |       MMU1        |  CASIA-Iris-V3
  Methods     | Acc. (%)  Time (s)| Acc. (%)  Time (s)| Acc. (%)  Time (s)
  IDO [7][8]  |  92.27     62.33  |  97.78     19.18  |  92.27     14.88
  CHT [30]    |  96.98      0.98  |  97.99      0.77  |  95.87      0.98
  RCHT        |  98.48      0.86  |  98.22      0.72  |  95.65      0.73
  IT+RCHT     |  99.10      0.56  |  99.11      0.32  |  99.07      0.48

B. Results and Discussions

We have implemented the iris localization system in MATLAB on Fedora 14.0 with an Intel Core i3 3.20 GHz processor. The performance of the proposed iris segmentation is reported for the three iris databases. Under the same experimental setup, we have compared the performance of the proposed method with the IDO and CHT methods; MATLAB code for the IDO and CHT methods is available (http://www.mathworks.com). To measure performance we calculate accuracy (by visual inspection) as a percentage and the time consumed in seconds.

Table I shows the performance of our proposed iris localization approach (IT+RCHT) against the existing IDO and CHT methods on the IITD, MMU1 and CASIA-Iris-V3 iris databases. It is clear that the RCHT method performs better than the IDO and CHT methods in terms of both accuracy and speed. Applying the inversion transform (IT) followed by the RCHT method (IT+RCHT) further improves the performance to a great extent compared to the RCHT method alone.

Fig. 7 shows the results of iris localization on some challenging images of the IITD, MMU1 and CASIA-Iris-V3 iris databases using the IDO, CHT, RCHT and IT+RCHT methods.

V. CONCLUSION

In this paper, we propose an accurate and fast algorithm that is helpful in automatic iris recognition systems. The proposed method improves both the segmentation accuracy and the speed of iris localization in real-life scenarios. The method is tested on three publicly available iris data sets, namely IITD, MMU1 and CASIA-Iris-V3, and compared with the state of the art in iris localization, showing satisfactory results for our proposed method.

REFERENCES

[1] Multimedia University iris database [Online]. Available: http://pesona.mmu.edu.my/ ccteo/.
[2] CASIA iris image databases, service team, CAS Institute of Automation, http://biometrics.idealtest.org/ (2009).
[3] Sepehr Attarchi, Karim Faez, and Amin Asghari, A fast and accurate iris recognition method using the complex inversion map and 2DPCA, ICIS, IEEE, 2008, pp. 179-184.
[4] Bradford Bonney, Robert Ives, Delores Etter, and Yingzi Du, Iris pattern extraction using bit planes and standard deviations, SSC, vol. 1, IEEE, 2004, pp. 582-586.
[5] Jiali Cui, Yunhong Wang, and Tan, A fast and robust iris localization method based on texture segmentation, Defense and Security, International Society for Optics and Photonics, 2004, pp. 401-408.
[6] John Daugman, Statistical richness of visual phase information: update on recognizing persons by iris patterns, IJCV 45 (2001), no. 1, 25-38.
[7] John Daugman, The importance of being random: statistical principles of iris recognition, Pattern Recognition 36 (2003), no. 2, 279-291.
[8] John G. Daugman, High confidence visual recognition of persons by a test of statistical independence, IEEE Transactions on PAMI 15 (1993), no. 11, 1148-1161.
[9] Somnath Dey and Debasis Samanta, A novel approach to iris localization for iris biometric processing, IJB 3 (2008), 180-191.
[10] Richard O. Duda and Peter E. Hart, Use of the Hough transformation to detect lines and curves in pictures, Communications of the ACM 15 (1972), no. 1, 11-15.
[11] Marco Ferretti and Maria Grazia Albanesi, Architectures for the Hough transform: A survey, MVA, 1996, pp. 542-551.
[12] Erin M. Handberg, On the compass: A fresh look at the classics, 2006.
[13] Zhaofeng He, Tieniu Tan, and Zhenan Sun, Iris localization via pulling and pushing, ICPR, vol. 4, IEEE, 2006, pp. 366-369.
[14] Paul V. C. Hough, Method and means for recognizing complex patterns, 1962, US Patent 3,069,654.
[15] Hiroshi Imai, Masao Iri, and Kazuo Murota, Voronoi diagram in the Laguerre geometry and its applications, SIAM 14 (1985), no. 1, 93-105.
[16] Ajay Kumar and Arun Passi, Comparison and combination of iris matchers for reliable personal authentication, PR 43 (2010), no. 3, 1016-1026.
[17] Peihua Li and Hongwei Ma, Iris recognition in non-ideal imaging conditions, Pattern Recognition Letters 33 (2012), no. 8, 1012-1018.
[18] Pan Lili and Xie Mei, The algorithm of iris image preprocessing, AIAT, IEEE, 2005, pp. 134-138.
[19] Li Ma, Yunhong Wang, and Tieniu Tan, Iris recognition using circular symmetric filters, PR, vol. 2, IEEE, 2002, pp. 414-417.
[20] Kazuyuki Miyazawa and Ito, An efficient iris recognition algorithm using phase-based image matching, ICIP, vol. 2, IEEE, 2005, pp. II-49.
[21] Mohammad Reza Mohammadi and Abolghasem Raie, Selection of unique gaze direction based on pupil position, IET-CV 7 (2013), no. 4, 238-245.
[22] Abduljalil Radman, Kasmiran Jumari, and Nasharuddin Zainal, Fast and reliable iris segmentation algorithm, IET-IP 7 (2013), no. 1, 42-49.
[23] Azriel Rosenfeld, Picture processing by computer, ACM Computing Surveys (CSUR) 1 (1969), no. 3, 147-176.
[24] Wojciech Sankowski, Kamil Grabowski, Małgorzata Napieralska, Mariusz Zubert, and Andrzej Napieralski, Reliable algorithm for iris segmentation in eye image, IVC 28 (2010), no. 2, 231-237.
[25] Mahboubeh Shamsi, Puteh Bt Saad, Subariah Bt Ibrahim, and Abdolreza Rasouli Kenari, Fast algorithm for iris localization using Daugman circular integro differential operator, SOCPAR, IEEE, 2009, pp. 393-398.
[26] Marcin Smereka and Ignacy Dule, Circular object detection using a modified Hough transform, IJAMCS 18 (2008), no. 1, 85-91.
[27] R. Meenakshi Sundaram, Bibhas Chandra Dhara, and Bhabatosh Chanda, A fast method for iris localization, EAIT, IEEE, 2011, pp. 89-92.
[28] Chun-Wei Tan and Ajay Kumar, Unified framework for automated iris segmentation using distantly acquired face images, IEEE Transactions on IP 21 (2012), no. 9, 4068-4079.
[29] Hong-Lin Wan, Zhi-Cheng Li, Jian-Ping Qiao, and Bao-Sheng Li, Non-ideal iris segmentation using anisotropic diffusion, IET-IP 7 (2013), no. 2, 111-120.
[30] Richard P. Wildes, Iris recognition: an emerging biometric technology, Proceedings of the IEEE 85 (1997), no. 9, 1348-1363.
[31] Richard P. Wildes, Jane C. Asmuth, Gilbert L. Green, Steven C. Hsu, Raymond J. Kolczynski, J. R. Matey, and Sterling E. McBride, A system for automated iris recognition, ACV'94, IEEE, 1994, pp. 121-128.

Fig. 7. Comparison results of iris localization using the IDO, CHT, RCHT and IT+RCHT methods on (a) the IITD database, (b) the MMU1 database, and (c) the CASIA-Iris V3 database.