
Face Recognition Based on Facial Landmark Detection
Conference Paper · December 2017
DOI: 10.1109/BMEiCON.2017.8229173


The 2017 Biomedical Engineering International Conference (BMEiCON-2017)

Face Recognition Based on Facial Landmark Detection

Aniwat Juhong and C. Pintavirooj
Department of Biomedical Engineering, Faculty of Engineering,
King Mongkut's Institute of Technology Ladkrabang, Bangkok, Thailand
[email protected], [email protected]

Abstract— This paper presents a novel technique for face recognition based on facial landmarks extracted automatically. Our landmarks are those associated with the eyes, mouth and nose. With the extracted landmarks, area triplets and the associated geometric invariances are formed. We opt to use the areas of, and the angles confined within, the triangles as the invariance. To bypass perspective constraints, we capture the face image with a long focal length and at a farther distance, so that an orthogonal projection and a Euclidean transformation can be assumed. As area is a relative invariant under the Euclidean transformation, absolute ratios between consecutive triangle areas are applied. Our proposed algorithm successfully identifies persons and could be a promising technique for facial recognition.

Index Terms— Facial landmarks, Haar cascade, face recognition.

I. INTRODUCTION

Face recognition is a technology that measures and matches unique facial characteristics for the purpose of personal identification, with applications such as airport security, patient identification, access control and so forth. Traditional methods sometimes miss a match because of variations resulting from human pose, hairstyle, acne, weight change, etc. Hence, face recognition based on facial geometric landmarks was developed and seems promising for solving these problems. Recently there have been many attempts to develop face recognition based on geometry. Panagiotis B. Perakis et al. [1] developed a novel method for 3D landmark detection, in which a 3D Facial Landmark Model (FLM) was proposed. Although it was claimed to achieve highly accurate results, the 3D technique requires high-specification hardware, long processing times, and high cost.

This paper develops a 2D technique to identify a person using facial geometric landmarks. The technique extracts landmarks associated with facial anatomical landmarks, including the nose, eyes and mouth. A geometric invariance is then constructed from the area triplets formed by the landmarks. This paper is organized as follows. Section II reviews geometric invariance. Section III describes the landmark extraction process. Section IV explains how the features are extracted. Results and conclusions are provided in Sections V and VI respectively.

II. ABSOLUTE GEOMETRIC INVARIANCE

Under a rigid transformation, triangle side lengths and areas are absolute invariants [2,3]. Under similarity and affine transformations, however, the triangle side lengths and angles are no longer preserved. The area of corresponding triangles nevertheless becomes a relative invariant: the two corresponding areas are related through the determinant of the linear transformation matrix T of the affine map (T, b), where b is the translation vector. If the area patches of the sequence of triangles on the template are [A(1), ..., A(n)], then the corresponding area patches Aa(k) of the sequence of triangles on the query under an affine transformation are related to those of the template according to the following relative invariant:

    Aa(k) = det(T) A(k),   k = 1, 2, ..., n    (1)

where det(T) = a11 a22 - a12 a21 is the determinant of the affine transformation matrix T. As the linear transformation matrix is unknown, absolute affine invariants are constructed from the relative area invariants by taking the ratio of two triangle areas, which cancels the dependence on the determinant of the affine transformation matrix. Taking the ratio of consecutive elements in the sequence yields the sets of absolute invariants in (2) and (3).
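The relative invariant (1) and the consecutive-ratio invariants (2) and (3) can be sketched numerically. The snippet below is our illustration, not the authors' code; the landmark triplets and the affine map are hypothetical.

```python
# Sketch of Eqs. (1)-(3): triangle areas scale by det(T) under an affine
# map, so ratios of consecutive areas are absolute invariants.
import numpy as np

def tri_area(p0, p1, p2):
    """Area of a triangle from its 2D vertices via the cross product."""
    v, w = p1 - p0, p2 - p0
    return 0.5 * abs(v[0] * w[1] - v[1] * w[0])

def consecutive_ratios(areas):
    """Absolute invariants I(k) = A(k) / A((k+1) mod n)."""
    n = len(areas)
    return [areas[k] / areas[(k + 1) % n] for k in range(n)]

# Hypothetical landmark triplets (template) and an arbitrary affine map (T, b).
tris = [np.array([[0., 0.], [4., 0.], [0., 3.]]),
        np.array([[1., 1.], [5., 2.], [2., 6.]]),
        np.array([[0., 2.], [3., 5.], [6., 1.]])]
T = np.array([[1.2, 0.3], [-0.1, 0.9]])
b = np.array([5.0, -2.0])

A  = [tri_area(*t) for t in tris]              # template areas
Aa = [tri_area(*(t @ T.T + b)) for t in tris]  # query areas after the map

# Each area is scaled by |det(T)|, so the consecutive-ratio invariants
# of the template and the query coincide.
assert all(np.isclose(a2, abs(np.linalg.det(T)) * a1) for a1, a2 in zip(A, Aa))
assert np.allclose(consecutive_ratios(A), consecutive_ratios(Aa))
```

Note that the ratio invariants are independent of T but not of the triangle ordering, which is why the sequence must be kept in a fixed order on both template and query.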

978-1-5386-0882-1/17/$31.00 ©2017 IEEE


    I(k) = A(k) / A((k+1) mod n),   k = 1, 2, ..., n    (2)

and

    Ia(k) = Aa(k) / Aa((k+1) mod n),   k = 1, 2, ..., n    (3)

In the case of noise-free measurement, the absolute invariant of the query equals that of the template, i.e. Ia(k) = I(k). In the presence of noise and occlusion, each I(k) will have a counterpart Ia(j), and that counterpart is easily determined through a circular shift involving n comparisons, where n is the number of invariants. To allow for noise and small deviations from an affine map, we permit a small percentage error between corresponding invariants, accepting only small differences between the area patches before declaring them as matching. This may reduce the length of the matched triangle sequence; the lower the error percentage, the stricter the matching. Experimentally, an error percentage of 5% was applied. We adopt a run-length method to decide on the correspondence between the two ordered sets of triangles. For every starting point in the sequence, the run-length method computes the length of the run of consecutive invariants satisfying the criterion

    %e^2 = 100 (Ia(j) - I(i))^2 / I(i)^2  ≤  ε    (4)

We declare the match on the longest string (M) of triangles that yields the minimum averaged error.

III. LANDMARK EXTRACTION

In order to apply the geometric invariance described in Section II, we extract facial landmarks from the 2D face image, shown as black dots in Fig. 1. The landmarks are those associated with anatomical landmarks including the eyes, nose and mouth. With the extracted landmarks, six area triplets are formed, denoted A1-A6 (also shown in Fig. 1). To find the facial landmarks, we first use the Haar cascade algorithm to detect the face ROI, as shown in Fig. 2(a), and then determine the eye, mouth and nose ROIs with the Haar cascade algorithm, as shown in Fig. 2(b)-(c). To find the landmarks associated with the eye, we convert the eye ROI image to a binary image using a thresholding algorithm. The result, shown in Fig. 3(b), mainly comprises the binarized regions associated with the eye and eyebrow.

To exclude the eyebrow region, we apply horizontal projection; the projection data are then used to separate the eyebrow region from the eye region. To detect the eye-related landmarks, vertical projection is applied. With the vertical projection data, the outermost pixels can be identified and the associated eye landmarks determined (shown as yellow dots in Fig. 3(c)).

Figure 1. Areas and internal angles of facial triangles
Figure 2. Defined ROIs of facial components by the Haar cascade algorithm
Figure 3. Projection technique to find landmarks of the eye
Figure 4. Projection data of Fig. 3(b)
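The projection steps above can be sketched as follows. This is our illustration, not the authors' implementation: in practice the eye ROI would come from a Haar cascade detector (e.g. OpenCV's CascadeClassifier), so a small synthetic binary ROI stands in here to keep the sketch self-contained.

```python
# Sketch of the horizontal/vertical projection step on a binary eye ROI.
import numpy as np

def split_eyebrow(binary):
    """Separate eyebrow from eye using the horizontal projection (row sums):
    cut at the middle of the empty gap between the two blobs."""
    rows = binary.sum(axis=1)
    occupied = np.flatnonzero(rows)
    gap = [r for r in range(occupied[0], occupied[-1]) if rows[r] == 0]
    cut = gap[len(gap) // 2]
    return binary[cut:, :]          # keep the lower blob (the eye)

def eye_corners(binary):
    """Outermost eye pixels from the vertical projection (column sums)."""
    cols = np.flatnonzero(binary.sum(axis=0))
    return cols[0], cols[-1]        # left and right corner columns

# Synthetic binary eye ROI: an eyebrow blob above an eye blob.
roi = np.zeros((12, 20), dtype=np.uint8)
roi[1:3, 3:17] = 1                  # eyebrow (upper region)
roi[7:10, 5:15] = 1                 # eye (lower region)

eye = split_eyebrow(roi)
left, right = eye_corners(eye)
print(left, right)                  # outermost eye columns
```

On a real thresholded ROI the row sums are rarely exactly zero between the blobs, so the cut would be placed at the minimum of the horizontal projection rather than at a strict gap.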
To detect the nose-related landmark, we convert the nose ROI to a binary image and apply vertical projection. The outermost pixel can be identified and the associated nose landmark determined (shown as a yellow dot in Fig. 5(c)). A similar algorithm is applied to detect the mouth-related landmark, shown in Fig. 6(c).

The angle is computed using the cross product, A × B = |A| |B| sin(θ):

    θ = arcsin( |A × B| / (|A| |B|) )    (5)

The areas are computed via the cross product:

    Area of triangle = (1/2) |A × B|    (6)

where the vectors A and B are defined in Figure 7.

Figure 5. Projection technique to find landmarks of the nose
Figure 6. Projection technique to find landmarks of the mouth
Figure 7. Vectors of the facial landmarks
Figure 8. Sample facial triangle results
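The two cross-product formulas above can be sketched as follows (our illustration, not the paper's code):

```python
# Internal angle via arcsin of the normalized cross product, and triangle
# area as half the cross-product magnitude, for 2D edge vectors A and B
# taken from a shared vertex.
import math

def cross(a, b):
    """z-component of the 2D cross product A x B."""
    return a[0] * b[1] - a[1] * b[0]

def norm(a):
    return math.hypot(a[0], a[1])

def angle(a, b):
    """theta = arcsin(|A x B| / (|A| |B|)), in radians."""
    return math.asin(abs(cross(a, b)) / (norm(a) * norm(b)))

def tri_area(a, b):
    """Area = (1/2) |A x B|."""
    return 0.5 * abs(cross(a, b))

# Edge vectors of a 3-4-5 right triangle from the right-angle vertex.
A, B = (4.0, 0.0), (0.0, 3.0)
print(angle(A, B))     # perpendicular edges give pi/2
print(tri_area(A, B))
```

Note that arcsin returns at most 90 degrees, so an obtuse internal angle would be folded onto its supplement; atan2 of the cross and dot products would distinguish the two cases.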

IV. FEATURE EXTRACTION

With the extracted landmarks, six area triplets and 18 angles are formed as features, i.e. two feature vectors are defined as

    Fa = [A1, ..., A6]    (7)

    Fθ = [θ1, ..., θ18]    (8)

where Fa and Fθ are the feature vectors associated with area and angle respectively, Ax is the x-th area and θx is the x-th angle.

To identify a person, we compare the Fa and Fθ of the reference with those of the query. For Fa, we compute the absolute differences of the features and estimate the average. For Fθ, we compute the absolute differences of the features and estimate the average of the 4 smallest errors. The average error over both Fa and Fθ is then used as the criterion for person identification.

V. EXPERIMENTS AND RESULTS

We test our algorithm using face images captured with a Logitech HD C270 webcam. The subjects sit in front of the camera at a distance of 100 cm. The camera distortion is corrected using parameters extracted from camera calibration [4]. We perform two experiments: intra-subject and inter-subject. The intra-subject experiment verifies the robustness of the proposed technique with facial images captured under different geometric transformations. The inter-subject experiment applies our technique to face recognition. Figure 9 shows the results of the intra-subject experiment, in which the subject performs head transformations including (a) 10-degree tilt, (b) -10-degree tilt, (c) 10-degree pan, (d) -10-degree pan, (e) 10-degree roll and (f) -10-degree roll. The average feature errors are shown in Table 1. To test face recognition, six subjects are tested; for each person, face images are collected twice, once for reference and once for query. The query set is then tested against the reference set. The reference set is shown in Fig. 8. The results are shown in Table 2. Note that the diagonal elements yield the minimum average error, as they correspond to same-person testing.
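The identification criterion described above can be sketched as follows; the gallery values and the noise model are hypothetical stand-ins, not the paper's data.

```python
# Sketch of the identification score: mean absolute area-feature error,
# plus the mean of the 4 smallest absolute angle-feature errors, averaged
# into one score; the reference with the lowest score is the match.
import numpy as np

def match_score(ref_areas, ref_angles, qry_areas, qry_angles):
    area_err = np.mean(np.abs(np.asarray(ref_areas) - np.asarray(qry_areas)))
    ang_diff = np.sort(np.abs(np.asarray(ref_angles) - np.asarray(qry_angles)))
    angle_err = np.mean(ang_diff[:4])      # average of the 4 smallest errors
    return (area_err + angle_err) / 2.0

# Hypothetical gallery of two subjects (6 areas, 18 angles each).
rng = np.random.default_rng(0)
s1_areas, s1_angles = rng.uniform(1, 5, 6), rng.uniform(0, 1.5, 18)
s2_areas, s2_angles = rng.uniform(1, 5, 6), rng.uniform(0, 1.5, 18)

# A noisy re-capture of subject 1 should score lowest against subject 1,
# mirroring the diagonal minima reported in Table 2.
q_areas  = s1_areas  + rng.normal(0, 0.05, 6)
q_angles = s1_angles + rng.normal(0, 0.02, 18)
same  = match_score(s1_areas, s1_angles, q_areas, q_angles)
other = match_score(s2_areas, s2_angles, q_areas, q_angles)
assert same < other
```

Keeping only the 4 smallest angle errors makes the angle term tolerant of a few badly localized landmarks, at the cost of ignoring most of the 18 angles.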
Table 1: intra-subject experiment. Upper row: average error of the area feature vector; lower row: average error of the angle feature vector.

                  Fig 9(a)  Fig 9(b)  Fig 9(c)  Fig 9(d)  Fig 9(e)  Fig 9(f)
Straight face     0.3868    1.3347    0.5757    1.2927    0.9015    0.9111
(reference)       0.0684    0.1029    0.0902    0.0551    0.0949    0.0822

Table 2: inter-subject experiment. In each cell, the upper value is the average error of the area feature vector and the lower value is the average error of the angle feature vector.

                   Subj 1    Subj 2    Subj 3    Subj 4    Subj 5    Subj 6
                   Face 1    Face 1    Face 1    Face 1    Face 1    Face 1
Subject 1 Face 2   0.3868    1.3882    2.9298    0.4792    1.7987    3.4948
                   0.0684    0.2515    0.2674    0.1543    0.2250    0.2375
Subject 2 Face 2   1.0126    0.6632    0.8190    1.0009    0.9934    1.5033
                   0.2487    0.0361    0.1300    0.1859    0.0598    0.4740
Subject 3 Face 2   1.6895    0.6661    0.7066    0.9477    0.6825    1.0585
                   0.2281    0.1871    0.0738    0.1502    0.1749    0.4369
Subject 4 Face 2   1.2216    0.5829    1.6945    0.2936    0.5943    4.3963
                   0.1751    0.1082    0.1117    0.0770    0.1194    0.3733
Subject 5 Face 2   1.0409    0.7613    1.0728    0.8358    0.6320    2.3387
                   0.3091    0.8066    0.1314    0.1379    0.0476    0.4818
Subject 6 Face 2   3.3944    1.8266    1.9435    5.1168    1.9970    0.7340
                   0.2638    0.4756    0.4065    0.3055    0.4927    0.0570

VI. CONCLUSION AND DISCUSSION

Face recognition based on geometric invariance is proposed in this paper. Facial landmarks associated with anatomical landmarks were extracted automatically. A series of area triplets was constructed and sorted in a fixed order. Geometric invariances, including area ratios and angles, were then used in the feature vector, and a feature-vector comparison was used for facial recognition. The results of person identification using the proposed technique are promising.

ACKNOWLEDGMENT

This paper was supported by the Biomedical Engineering Program, Faculty of Engineering, King Mongkut's Institute of Technology Ladkrabang (KMITL).

REFERENCES

[1] P. B. Perakis, "Landmark Detection for Unconstrained Face Recognition," National and Kapodistrian University of Athens.
[2] W. Lampa, C. Pintavirooj and F. S. Cohen, "Fingerprint Alignment Using a Minutiae-Based Method Combined with Affine Invariants," Biomedical Engineering International Conference (BMEiCON 2010), Kyoto, Japan, August 27-28, 2010.
[3] C. Pintavirooj, F. S. Cohen and W. Lampa, "Fingerprint Verification and Identification Based on Local Geometric Invariants Constructed from Minutiae Points and Augmented with Global Directional Filterbank Features," IEICE Transactions on Information and Systems, Vol. E97-D, No. 6, pp. 1599-1613, Jun. 2014.
[4] K. Hemtiwakorn, N. Srisuk and C. Pintavirooj, "Indirect X-Ray Detector Panel Using Multiple Cameras," ISCIT 2008.

Figure 9. Facial landmark images (a)-(f) of the same person at different positions
