1 Introduction
Automatic human eye tracking and localization is an active research topic in computer science. Many human-computer interface devices rely on eye localization technology, and because of their everyday use, Human-Computer Interface (HCI) applications are among the most interesting. Eye tracking and detection focus on two priority areas: eye localization and gaze estimation. Three aspects relate to the first one.
The first aspect is to detect the presence of an eye, the second is to localize the eye position in the image, and the third is to track the previously located eyes [1]. On computers, smartphones, and other new vision devices, gaze can offer a comfortable way of steering by gaze direction (e.g., robotic control) [2]. Eyes are thus becoming an interesting communication channel between participants. Other control systems and security applications gain robustness by adding eye detection and eye centre localization: precise eye localization permits the detection of suspicious behaviour in stadiums, airports, ports, theatres and other sites, and person identification keeps improving thanks to advances in image processing.
Other commercial sectors also benefit from gaze tracking; for example, analysing the gaze directions of potential customers can help to choose the most relevant product placement. Eye movement is increasingly used in the medical field to obtain earlier and more precise diagnoses, for instance of pathologies related to brain state. All these recent applications demonstrate the importance of eye localization, and of eye centre localization in particular.
Different eye localization methods are summarized in [3]; they can be divided into two categories according to the acquisition source. The first category comprises images acquired under infrared (IR) illumination, which yields a corneal reflection that is very helpful for locating the eye centre. The second category accounts for light variations across a set of images taken from a video stream. This type is closer to real scenarios; however, the extreme variability of eye appearance in a real environment makes eye centre detection a very complex challenge, and several different approaches address this acquisition type. Many difficulties have to be handled, as the human eye is a very complex organ: its appearance varies from one person to another, it undergoes intrinsic changes (eyes partially or totally open or closed), wearing glasses affects the eye image, lighting modulates pupil dilation, and the eye does not have a simple known shape. All these characteristics call for a robust application that can meet the majority of these requirements. In their paper, Pavlović et al. [4] compared two imagery types, visible and infrared light, for face identification with an original image size of 320×240 pixels, and proposed an optimal ratio between cell and image size for computing Histogram of Oriented Gradients (HOG) features. Comparing against the original-size facial images, they concluded that images downsized by a scale factor between 0.1 and 1 offer better recognition than the original size, most precisely for a scale factor of 0.2.
Precise eye centre location methods differ in computation, efficiency and, above all, degree of precision. The precision degree is typically defined by the relative error measure proposed by Jesorsky et al. [5], which is nowadays considered the major evaluation metric. Yang et al. [6] proposed a novel Gabor-Eye-based method that makes full use of the special grey-level distribution in the eye-and-brow region and self-adaptively selects a proper Gabor kernel to convolve with the face image. Their method is robust against changes in illumination, expression and pose.
Hamouz et al. [7] developed a new method to localize faces for person identification using ten facial feature points obtained through Gabor filters, yielding superior performance compared with the reference methods. Kim et al. [8] used multiscale Gabor feature vectors at eye coordinates, which allowed efficient eye localization. In [9] and [10], the authors introduced isophote curvatures to infer the centre of (semi-)circular patterns together with a novel centre-voting mechanism, whereas in [11] the use of isophote curvatures in a fatigue detection process provided a detection rate exceeding 85%. The eye's isophote curvature combined with a shape regression model is used in [12] to obtain robust eye centre localization. In [13], isophote curvature is combined with the quasi-continuous responses of a modified cascade classifier framework using appearance-based features to study its performance in real scenarios. Timm and Barth [14] proposed to localize eye centres using image gradients, with an objective function based on periocular geometry that peaks at the centre of a circular object.
Efficient SVM, PCA, neural network, and Fisher linear discriminant classifiers are widely used to locate eyes in [15 – 17] and [5], respectively.
Ahmad et al. [18] adopted a faster R-CNN deep learning model and AlexNet as the detection core for face, eye and eye-openness detection; the localization step combines techniques built around a rectangular-intensity-gradient approach. Levinshtein et al. [19] created a hand-crafted data set to train a cascade of regression forests; the obtained results are equivalent to automatically trained systems and are refined by robust circle fitting followed by a circle matching process, where the most relevant candidate is assumed to be the iris and its centre is declared the eye centre. Štruc and co-authors [20] introduced a new technique to extract facial landmarks based on Principal directions of Synthetic Exact Filters (PSEFs) and then applied this method to eye localization; the results were in accordance with the Haar filter method proposed in [21]. Hsu and Chung [22] developed a new method to locate the eye centre under different situations with large yaw head rotation (between −67.5° and +67.5°).
The fusion and combination of multiple unsupervised techniques is widely used for iris localization. Xiao et al. [23] fused facial landmarks, active contours (snakuscule), circle fitting, and simple binary connected components. Skodras and Fakotakis [24] used colour information to build an eye map, and a cumulative radial symmetry transform is applied both to the eye map and to the original eye image. In [25], Soltany et al. proposed grey projection and the Circular Hough Transform (CHT) to locate the pupil in natural-light eye images.
Leo et al. [26] proposed a new method to locate the eyes, and in particular the pupils, via a two-step procedure in which a differential analysis of image intensities is combined with self-similarity coefficients; the joint results give the estimated eye centre position.
Zhang et al. [27] used isophote-based global centre voting and gradient-based pupil estimation in a modular, unsupervised approach. The papers presented in [23 – 27] demonstrate the efficiency of unsupervised techniques for eye centre localization.
From the above literature, the majority of prior work achieves eye centre localization in a known environment using a learning process, which consumes much time and is inappropriate for real-life scenarios. We chose instead to design an algorithm that aims at real-time execution without any prior knowledge about the environment. In this paper, we propose a new hybrid method that achieves robust eye centre localization; the main contribution of this research is the combination of several techniques based on the mean of gradients combined with the Circular Hough Transform (CHT) and Maximally Stable Extremal Regions (MSER).
The primary contributions of this research are as follows:
(i) the proposal of a fast approach that combines several methods;
(ii) the use of simple techniques to create a primary sub-window;
(iii) the application of the MSER technique to eye localization systems.
Furthermore, we evaluate robustness by using the very challenging BioID database.
2.2.2 Binarization
Based on raw thresholding, we assign the value one (white) to a pixel of the image if its intensity is lower than the threshold; otherwise, it is set to zero (black). This process is performed on every pixel of the image, and the result is an image containing only two levels (values 0 or 1).
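As an illustration, a minimal Python/NumPy sketch of this inverted thresholding (the threshold value here is an assumption for illustration, not the value used in our experiments):

    import numpy as np

    def binarize(gray, threshold=60):
        # Pixels darker than the threshold become 1 (white);
        # all others become 0 (black), as described above.
        return (gray < threshold).astype(np.uint8)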
2.2.3 Closing
This mathematical morphology operation is applied to remove all undesired blobs around the eye region [27].
The closing of I by a structuring element B (here a 3×3-pixel block) is denoted ◦ and is obtained by the dilation of I by B, followed by the erosion of the resulting structure by B:
I ◦ B = (I ⊕ B) ⊖ B.
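A short OpenCV sketch of this operation, with B the 3×3 structuring element defined above:

    import cv2
    import numpy as np

    B = np.ones((3, 3), np.uint8)  # 3x3 structuring element

    def close_blobs(binary):
        # Dilation by B followed by erosion by B, i.e. the closing of
        # the binary image; equivalent to
        # cv2.morphologyEx(binary, cv2.MORPH_CLOSE, B).
        return cv2.erode(cv2.dilate(binary, B), B)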
3 CHT Detector
We use a Circular Hough Transform (CHT) based algorithm to find the eye circle in the image. This approach is chosen for its robustness under different illumination and noise conditions, which is in line with our requirements. Since the circle parameters can be transferred directly to the parameter space, a circle is simpler to represent than a line [29]. For the CHT calculation, a separate circle filter can be used for each circle radius to be detected. This forms the familiar three-dimensional parameter space usually associated with the CHT, where two dimensions represent the position of the circle centre (a, b) and the third its radius r:
r^2 = (x - a)^2 + (y - b)^2 . (7)
In parametric form, a circle is expressed by (8) and (9):
X = a + r cos θ , (8)
Y = b + r sin θ . (9)
Therefore (from Fig. 5), in the (a, b) space a circle with a radius r and a centre (x, y) is represented as follows:
a = x - r cos θ , (10)
b = y - r sin θ , (11)
where (a, b) are the parameter-space coordinates of the centre (x, y) and the angle θ is expressed in radians.
Fig. 5 – A circle of radius r with centre (x, y) in image space and its trace in the (a, b) parameter space.
We must deal with the accumulator cell problem (a 3-D matrix is instantiated for the accumulator), in addition to the 3-D parameter space itself. The classical CHT algorithm is computationally very intensive and takes considerable time: with the conventional Hough Transform, votes for several radii must be stored in a 3-D array, which increases both storage needs and processing time. For these reasons, the implementation of the algorithm is modified to be efficient and fast.
3.1 Circle Hough Transform Modification
Many different approaches can be taken for the CHT implementation. In this
study, we modified the algorithm in order to lower the computation time. There
are three important steps that are common to all approaches: accumulator array
computation, centre estimation, and radius estimation.
We used the following choices in the CHT implementation:
Instead of one accumulator array per radius, a single 2-D accumulator covers all radii. This strategy lowers the overall computational time, especially when working across a wide radius range; the pre-treatment step further reduces the computation here.
Edge pixels are used, because the number of candidate pixels substantially influences both overall memory needs and performance. The gradient magnitude of the input image is thresholded to ensure that only pixels with a high gradient are counted, reducing their number.
Edge orientation information restricts the number of cells available to candidate pixels, which improves performance. To this end, voting is only allowed over a short interval along the gradient direction. The radius range defined by rmin and rmax determines the width of the voting interval between the points cmin and cmax (Fig. 6), as sketched in the code below.
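The following Python/NumPy sketch reflects our reading of this modified voting scheme; the gradient threshold value and the two-sided vote are assumptions for illustration, not the exact implementation:

    import numpy as np

    def modified_cht_center(gray, r_min=1, r_max=30, grad_thresh=40.0):
        # Gradient components and magnitude of the grey-level image.
        gy, gx = np.gradient(gray.astype(float))
        mag = np.hypot(gx, gy)
        # Single 2-D accumulator shared by all radii in [r_min, r_max].
        acc = np.zeros_like(mag)
        # Only strong edge pixels are allowed to vote (assumed threshold).
        ys, xs = np.nonzero(mag > grad_thresh)
        for y, x in zip(ys, xs):
            ux, uy = gx[y, x] / mag[y, x], gy[y, x] / mag[y, x]
            for r in range(r_min, r_max + 1):
                # Vote only along the gradient direction (both senses,
                # since the circle may be darker or brighter than its
                # surroundings).
                for s in (1, -1):
                    a = int(round(x + s * r * ux))
                    b = int(round(y + s * r * uy))
                    if 0 <= b < acc.shape[0] and 0 <= a < acc.shape[1]:
                        acc[b, a] += 1
        # The accumulator maximum is the estimated circle centre (a, b).
        cy, cx = np.unravel_index(np.argmax(acc), acc.shape)
        return cx, cy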
The modified Hough Transform algorithm is used to determine the iris circle over a range of radius values, narrowed down to [1, 30] through assessment experiments. Iris contour matching is completed by the modified Circular Hough Transform technique (Fig. 7).
Eyelids, eyelashes, closed or open eyes, and even persons wearing glasses are handled correctly; see the examples in Fig. 8. On the other hand, false edges disturb the CHT detector, and the method can diverge and produce false detections, with no localization or an accurate localization of only one eye (Fig. 9).
Fig. 8 – Accurate localization by the modified CHT: (a) true localization of both eyes; (b) true localization of both eyes in the presence of glasses.
Fig. 9 – Examples of inaccurate localization by the modified CHT algorithm: (a) no localization of the right eye and false localization of the left eye; (b) no localization of either eye.
These flaws can be avoided by using a second detector that operates in parallel with the CHT detector, which enhances the rate of accurate localization. The CHT method is limited by incomplete, insufficient or partial boundary detection: partially or completely closed or occluded eyes are not localized correctly.
We apply the MSER algorithm to the regions Reye and Leye (computed in the pre-processing step) according to the previously discovered minima; the regions in each blob that correspond to those minima are considered MSERs, subject to additional requirements that define the allowable minimum and maximum MSER region sizes. These requirements are designated by:
– region area range;
– threshold delta;
– max area variation (typical values range from 0.1 to 1.0);
– ROI = Reye and Leye.
To choose the best region representing the real eye, we introduce the pseudo-code of an MSER selection algorithm, where n is the number of ellipses found and Max-Variation is defined by the nature of the iris. The algorithm decides which of the multiple MSER features is the most relevant to be the real eye region; the centre of each eye is the centre of the ellipse obtained by MSER localization. A sketch of this detection and selection step is given below.
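The following Python/OpenCV sketch illustrates one plausible form of this step; the parameter values and the scoring rule (preferring dark, nearly circular regions) are our assumptions for illustration, not the exact criterion of the pseudo-code:

    import cv2
    import numpy as np

    def best_eye_mser(eye_roi):
        # MSER detector on the grey-level eye ROI; positional arguments
        # are delta, min_area, max_area and max_variation (assumed values).
        mser = cv2.MSER_create(5, 30, 2000, 0.25)
        regions, _ = mser.detectRegions(eye_roi)
        best_centre, best_score = None, -np.inf
        for pts in regions:                      # pts: (N, 2) array of (x, y)
            if len(pts) < 5:                     # fitEllipse needs >= 5 points
                continue
            (cx, cy), (w, h), angle = cv2.fitEllipse(pts)
            circularity = min(w, h) / max(w, h)  # 1.0 for a perfect circle
            darkness = 255.0 - eye_roi[pts[:, 1], pts[:, 0]].mean()
            score = circularity * darkness       # favour dark, round blobs
            if score > best_score:
                best_centre, best_score = (cx, cy), score
        return best_centre                       # eye centre in ROI coordinates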
The best-fit ellipses given by (14) ultimately approximate the positions of the discovered MSERs, resulting in a single region that represents the eye region by an ellipse and its centre by the ellipse centroid.
The ellipse equation is given by:
a(x - x_0)^2 + 2b(x - x_0)(y - y_0) + c(y - y_0)^2 = 1 , (14)
where (x_0, y_0) is the region's centre of mass, computed from the moments of order 0 and 1, and (a, b, c) are coefficients satisfying the ellipse condition a c - b^2 > 0, which are calculated from the central moments of order 2. The computational expense is limited (mostly the conversion of region moments into ellipse parameters), since the moments of any order for all regions of a segmented image may be precomputed in a single scan of the image.
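As an illustration, a compact sketch of this moment-based fit (the factor 4 comes from matching the second-order moments of a uniformly filled ellipse; the function name is ours):

    import numpy as np

    def region_ellipse(points):
        # points: (N, 2) array of the (x, y) pixel coordinates of one region.
        pts = np.asarray(points, dtype=float)
        # Moments of order 0 and 1 give the centre of mass (x0, y0).
        x0, y0 = pts.mean(axis=0)
        d = pts - (x0, y0)
        # Central moments of order 2 (assumes a non-degenerate region).
        cov = d.T @ d / len(pts)
        # The ellipse with the same moments has [[a, b], [b, c]] = (4 cov)^-1,
        # so a c - b^2 > 0 holds for any true 2-D region.
        A = np.linalg.inv(4.0 * cov)
        a, b, c = A[0, 0], A[0, 1], A[1, 1]
        return (x0, y0), (a, b, c)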
4.3 Discussion
The obtained results demonstrate the applicability of MSER to eye detection on faces under different conditions, including glasses and partially or completely closed eyes (see Fig. 11). Our approach is based on connectivity within the region and its stability, and it performs relatively accurately and precisely in the localization of eye centres. Once MSER regions are localized, they can be tracked over time from frame to frame by the MSER algorithm, which is suitable for real-time applications, as in [32].
6 Evaluation
The BioID database [33] is chosen to test our algorithm for two major reasons: it is one of the most used datasets in our domain, which enables comparison with recent state-of-the-art work, and it is rich in information, encompassing differences in subjects, pose, environment and even illumination conditions. The database consists of 1521 grey-level images of 23 different subjects. The image quality and size (384×286) are approximately those of a low-resolution webcam. The left and right eye centres are annotated and provided together with the images. Our approach starts from the face and eye region positions provided by the Viola-Jones detection algorithm; the eye centres are then estimated on the two regions obtained through the steps explained in Section 2, where the gradient mean, MSERs and CHT are combined to compute accurate eye centres. We evaluate the normalized error, which reports the error of the worse of the two eye estimations.
This measure was introduced by Jesorsky et al. [5] and is defined as:
e = (1/d) max(e_l, e_r) , (15)
where e_l and e_r are the Euclidean distances between the estimated and the true left and right eye centres, and d is the distance between the true eye centres.
Two other metrics, e_Average and e_Best, are also widely used to demonstrate the accuracy of proposed algorithms. These normalized errors indicate how close the found location is to the real eye centre within regions of decreasing size: the eye region (e ≤ 0.25), the iris region (e ≤ 0.10) and the pupil (e ≤ 0.05). We also provide e_Best and e_Average in order to observe the smallest error as well as an averaged error:
e_Average = (e_l + e_r) / (2d) , (16)
e_Best = (1/d) min(e_l, e_r) . (17)
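For concreteness, a small Python sketch of (15) – (17) (function and argument names are ours):

    import numpy as np

    def normalized_errors(est_left, est_right, gt_left, gt_right):
        # Euclidean distances between estimated and true eye centres.
        e_l = np.linalg.norm(np.subtract(est_left, gt_left))
        e_r = np.linalg.norm(np.subtract(est_right, gt_right))
        # Inter-ocular distance d between the true centres.
        d = np.linalg.norm(np.subtract(gt_left, gt_right))
        e = max(e_l, e_r) / d                  # worst-eye error, (15)
        e_average = (e_l + e_r) / (2.0 * d)    # averaged error, (16)
        e_best = min(e_l, e_r) / d             # best-eye error, (17)
        return e, e_average, e_best

A localization then counts as correct at the pupil level when e ≤ 0.05, at the iris level when e ≤ 0.10, and at the eye-region level when e ≤ 0.25.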
7 Results
The evaluation of the proposed algorithm yields a precise localization of the eye pupil, as shown in Fig. 13. Table 1 contains the obtained results and verifies the method's effectiveness. Note that our algorithm consists of four levels:
– the mean of the gradients provides a starting step for the subsequent treatments; it reduces the ROI and yields an accuracy of 64.65% for e ≤ 0.25 (eye region);
– MSERs are effective in dealing with eyes that are partially or even completely closed and provide 89.08% for e ≤ 0.25;
– the CHT approach is powerful in different scenarios because of the nature of the eye (when the eye shape in the image approximates a circle, i.e., open eyes), with an effectiveness of 91.78% for e ≤ 0.25;
– the combined approach produces the best accuracy of 98.85% for e ≤ 0.25, and we obtain 82.53% for e ≤ 0.05 (pupil localization).
Table 1
Accuracy vs. normalized error for the applied methods.
Methods | e ≤ 0.05 | e ≤ 0.10 | e ≤ 0.25
Mean of gradients | 20.37% | 58.62% | 64.65%
MSER | 45.21% | 64.44% | 89.08%
CHT | 58.62% | 72.66% | 91.78%
Mean of gradients + MSERs + CHT | 82.53% | 89.81% | 98.85%
The accuracy curve of the obtained results for different normalized errors e (the relative error measure proposed by Jesorsky et al. [5]) is shown in Fig. 13 (red line): the y-axis reports the percentage of database images on which the pupils were localized with error parameter e computed by (15), and the x-axis reports the corresponding value of e (error less than the normalized error). The same figure also reports the pupil localization performance obtained on the same database with the two other metrics, e_Average (black line, computed by (16)) and e_Best (blue line, computed by (17)). For e ≤ 0.05, 0.10 and 0.25, the proposed method achieves accuracies of 82.53%, 89.81% and 98.85%, respectively, on the BioID database.
Table 2
Comparison of the accuracy for eye center localization on the BioID dataset.
The average processing time is calculated over the BioID database; the size of each image is 384×286 pixels.
Table 3 compares the average processing times of several algorithms that enable near-real-time performance, taking both hardware and software implementations into account. The proposed method delivers a lower processing time than the other reported results. The MSER eye centre localization process alone takes 0.04 s. The processing time becomes shorter when MSER is applied to track the eye centre between successive images rather than to detect it in each image separately, and this has enabled us to improve our results.
Table 3
Processing times of the most common algorithms for eye localization.
Methods | Implementation | Average processing time
Proposed method | Matlab (R2015a), tested on an Asus computer with 8 GB RAM and a 1.8 GHz Intel i7-4500U CPU | 0.085 s
Vater and Puente León (2016) [13] | Prototype Matlab implementation on a standard CPU at 3.3 GHz | 0.1 s
Chen and Liu (2015) [15] | Pentium 3 with 3.0 GHz | 0.12 s
Levinshtein et al. (2018) [19] | Modern laptop with a Xeon 2.8 GHz CPU | 0.04 s (not including face detection)
Xiao et al. (2018) [23] | Laptop with an Intel Core i7-7500U CPU running at 2.7 GHz and 8 GB RAM | 0.0257 s
Leo et al. (2014) [26] | Matlab (R2012a), tested on a Sony VAIO PGG_71213w | 0.07 s
Ahmed et al. (2022) [32] | Matlab (R2017b), tested on a Dell computer with 16 GB RAM and a 3.30 GHz Intel Core i5 CPU | 0.0657 s
9 Conclusion
An eye localization method based on an unsupervised system composed of multiple combined techniques has been proposed, employing the combined CHT and MSER algorithms to achieve robust and fast eye localization. The objectives of robustness, simplicity, and inexpensive imaging devices were met by the proposed algorithm. The modified CHT method yields good results on images taken under complicated conditions (pose, luminance, etc.). The MSER approach improves the eye centre estimation by resolving partially or totally closed eyes thanks to the blob-connecting technique. In particular, our algorithm achieves an accuracy of 82.53% with an error below the normalized error threshold of 0.05.
10 References
[1] D. W. Hansen, Q. Ji: In the Eye of the Beholder: A Survey of Models for Eyes and Gaze, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 32, No. 3, March 2010, pp. 478 – 500.
[2] L. Itti, C. Koch: Computational Modelling of Visual Attention, Nature Reviews Neuroscience, Vol. 2, No. 3, March 2001, pp. 194 – 203.
[3] F. Song, X. Tan, S. Chen, Z.-H. Zhou: A Literature Survey on Robust and Efficient Eye Localization in Real-Life Scenarios, Pattern Recognition, Vol. 46, No. 12, December 2013, pp. 3157 – 3173.
[4] M. Pavlović, B. Stojanović, R. Petrović, S. Puzović, S. Stanković: Optimal HOG Cell to Image Ratio for Robust Multi-Sensor Face Recognition Systems, Serbian Journal of Electrical Engineering, Vol. 16, No. 3, October 2019, pp. 387 – 403.
[5] O. Jesorsky, K. J. Kirchberg, R. W. Frischholz: Robust Face Detection Using the Hausdorff Distance, Proceedings of the International Conference on Audio- and Video-Based Biometric Person Authentication, Halmstad, Sweden, June 2001, pp. 90 – 95.
[6] P. Yang, B. Du, S. Shan, W. Gao: A Novel Pupil Localization Method Based on GaborEye Model and Radial Symmetry Operator, Proceedings of the International Conference on Image Processing (ICIP), Singapore, Singapore, October 2004, pp. 67 – 70.
[7] M. Hamouz, J. Kittler, J.-K. Kamarainen, P. Paalanen, H. Kälviäinen, J. Matas: Feature-Based Affine-Invariant Localization of Faces, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 27, No. 9, September 2005, pp. 1490 – 1495.
[8] S. Kim, S.-T. Chung, S. Jung, D. Oh, J. Kim, S. Cho: Multi-Scale Gabor Feature Based Eye Localization, World Academy of Science, Vol. 21, 2007, pp. 483 – 487.
[9] R. Valenti, T. Gevers: Accurate Eye Center Location and Tracking Using Isophote Curvature, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, USA, June 2008, pp. 1 – 8.
[10] R. Valenti, T. Gevers: Accurate Eye Center Location Through Invariant Isocentric Patterns, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 34, No. 9, September 2012, pp. 1785 – 1798.
[11] Z. Deng, R. Jing, L. Jiao, L. Liu: Fatigue Detection Based on Isophote Curve, Proceedings of the International Conference on Computer and Computational Sciences (ICCCS), Greater Noida, India, January 2015, pp. 146 – 150.
[12] C. Wei, Z. Pang, D. Chen: Combining Shape Regression Model and Isophotes Curvature Information for Eye Center Localization, Proceedings of the 7th International Conference on Biomedical Engineering and Informatics, Dalian, China, October 2014, pp. 156 – 160.
[13] S. Vater, F. Puente León: Combining Isophote and Cascade Classifier Information for Precise Pupil Localization, Proceedings of the IEEE International Conference on Image Processing (ICIP), Phoenix, USA, September 2016, pp. 589 – 593.
[14] F. Timm, E. Barth: Accurate Eye Centre Localisation by Means of Gradients, Proceedings of the International Conference on Computer Vision Theory and Applications, Vilamoura, Portugal, March 2011, pp. 125 – 130.
[15] S. Chen, C. Liu: Eye Detection Using Discriminatory Haar Features and a New Efficient SVM, Image and Vision Computing, Vol. 33, January 2015, pp. 68 – 77.
[16] B. Kroon, A. Hanjalic, S. M. P. Maas: Eye Localization for Face Matching: Is It Always Useful and Under What Conditions?, Proceedings of the International Conference on Content-Based Image and Video Retrieval, Niagara Falls, Canada, July 2008, pp. 379 – 388.
[17] D. E. Benrachou, F. N. Dos Santos, B. Boulebtateche, S. Bensaoula: Automatic Eye Localization; Multi-Block LBP vs. Pyramidal LBP Three-Levels Image Decomposition for Eye Visual Appearance Description, Proceedings of the 7th Iberian Conference on Pattern Recognition and Image Analysis (IbPRIA), Santiago de Compostela, Spain, June 2015, pp. 718 – 726.
[18] N. Ahmad, K. Singh Yadav, M. Ahmed, R. H. Laskar, A. Hossain: An Integrated Approach for Eye Centre Localization Using Deep Networks and Rectangular-Intensity-Gradient Technique, Journal of King Saud University – Computer and Information Sciences, Vol. 34, No. 9, October 2022, pp. 7153 – 7167.
[19] A. Levinshtein, E. Phung, P. Aarabi: Hybrid Eye Center Localization Using Cascaded Regression and Hand-Crafted Model Fitting, Image and Vision Computing, Vol. 71, March 2018, pp. 17 – 24.
[20] V. Štruc, J. Žganec Gros, N. Pavešić: Advanced Correlation Filters for Facial Landmark Localization, Elektrotehniški vestnik, Vol. 79, No. 4, 2012, pp. 209 – 212.
[21] P. Viola, M. Jones: Rapid Object Detection Using a Boosted Cascade of Simple Features, Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Kauai, USA, December 2001, pp. I-I.
[22] W.-Y. Hsu, C.-J. Chung: A Novel Eye Center Localization Method for Multiview Faces, Pattern Recognition, Vol. 119, November 2021, p. 108078.
[23] F. Xiao, K. Huang, Y. Qiu, H. Shen: Accurate Iris Center Localization Method Using Facial Landmark, Snakuscule, Circle Fitting and Binary Connected Component, Multimedia Tools and Applications, Vol. 77, No. 19, October 2018, pp. 25333 – 25353.
[24] E. Skodras, N. Fakotakis: An Accurate Eye Center Localization Method for Low Resolution Color Imagery, Proceedings of the IEEE 24th International Conference on Tools with Artificial Intelligence, Athens, Greece, November 2012, pp. 994 – 997.
[25] M. Soltany, S. T. Zadeh, H.-R. Pourreza: Fast and Accurate Pupil Positioning Algorithm Using Circular Hough Transform and Gray Projection, Proceedings of the International Conference on Computer Communication and Management, Singapore, Singapore, January 2011, pp. 556 – 561.
[26] M. Leo, D. Cazzato, T. De Marco, C. Distante: Unsupervised Eye Pupil Localisation through Differential Geometry and Local Self-Similarity Matching, PLOS ONE, Vol. 9, No. 8, August 2014, p. e102829.