JOURNAL OF CRITICAL REVIEWS
ISSN- 2394-5125 VOL 7, ISSUE 19, 2020
A SURVEY ON NON-VERBAL CUES LIE
DETECTORS: METHODS, RECENT
DEVELOPMENTS, AND FUTURE SCOPE
G. Krishna Vasudeva Rao1, Prof. Prasad Reddy P.V.G.D.2, Prof. P. Srinivasa Rao3
1 Department of CS&SE, Andhra University, Visakhapatnam
2 Sr. Professor, Department of CS&SE, Andhra University, Visakhapatnam
3 Professor, Department of CS&SE, Andhra University, Visakhapatnam
Received: 14 April 2020 Revised and Accepted: 8 August 2020
ABSTRACT: People lie for various reasons. Some lies cause only minor, ignorable problems, but others can have serious consequences; hence lies must be detected before their adverse effects are felt. Lie detection has become very significant in recent years, and its applications are useful in many areas, notably the legal and clinical fields. Humans find it very difficult to detect lies, so there is a great need for automating the process of lie detection. In this paper, we classify lie detectors based on the cues drawn from human conversations. We present a study of different lie detectors based on non-verbal cues, which include vocal cues and visual cues. The accuracies of lie detectors developed from facial expressions, eye blinks, speech, multi-modal features, hand movements and head movements are compared.
KEYWORDS: Lie detector – Non-verbal Cues - Visual Cues – Feature Extraction – Classification
I. INTRODUCTION
To lie is to speak falsely or utter an untruth knowingly, with intent to deceive. Lie detection refers to the cognitive process of detecting deception by considering verbal or non-verbal cues. It is very useful in areas such as security, counter-terrorism and interpersonal relationships, since lies that remain undetected may lead to serious problems. Several contemporary methods have been used to detect lies; some of them are discussed below.
Contemporary Methods of Lie Detection:
Rice method:
The relation between lying and stress was observed by the ancient Chinese with a simple experiment. A suspected person was given a handful of rice to chew while being questioned. After the interview was completed, the state of the rice was examined. If the rice remained wet, the person was considered truthful; if it became dry, the person was considered to be lying. This conclusion rests on the assumption that when a person feels stressed, saliva production falls, causing the rice to become dry.
Pulse:
The pulse is another factor that has been associated with lying. Deception was detected by monitoring the blood flow of the suspect during interrogation, with the pulse as the sensed quantity. It was observed that every time the suspect lied, the person's pulse increased.
Fear:
In 1875, the Italian physiologist Angelo Mosso began research to identify the relation between fear and lying. In his studies, Mosso found that, along with the pulse, other factors such as respiratory rate and blood pressure could be affected by lying. Detecting lies from respiratory rate, blood pressure and pulse is referred to as polygraph technology. Mosso constructed a device called the plethysmograph, which measures changes in breathing and blood pressure. If a significant change in breathing or blood pressure occurred while the suspect was being questioned, it implied that the suspect was afraid; and since lying was connected with stress and fear, it was concluded that the suspect was lying.
Electro dermal response:
In 1879, the French therapist Dr. Marie Gabriel Romain Vigouroux began research to identify a new factor. Vigouroux found that when a person lies, the electrical resistance of the skin changes to a measurable extent. This factor was named the "electrodermal response".
Galvanic Deflections:
Boris Sidis, a psychology professor at Harvard University, conducted several experiments to identify the relation between the emotional state of the human mind and electrical changes in the human skin, in a study named "A study of galvanic deflections due to Psycho-Physiological Phenomena". Sidis showed that significant galvanic changes occur in the human skin when a suspect is lying.
Blood Pressure and pulse:
In 1895, the Italian criminologist Cesare Lombroso performed experiments to measure changes in the blood pressure and pulse of a suspect under questioning. Lombroso observed sudden changes in the suspect's blood pressure for certain questions, from which he concluded that the suspect was lying. The device he used for measuring blood-pressure changes was called the hydrosphygmograph.
Inspiration/Expiration Ratio:
In 1914, the Italian researcher Vittorio Benussi focused on the inspiration and expiration of a person being interrogated. He constructed a device called the pneumograph, which recorded breathing measurements, i.e., the inspiration/expiration ratio. Through several experiments he found that a suspect's inspiration/expiration ratio changes during interrogation for certain questions: when the length of inspiration is divided by the length of expiration, the ratio tends to grow when the suspect gives false statements.
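Benussi's measure is simple arithmetic over the breathing record. A minimal sketch of the inspiration/expiration ratio, using hypothetical pneumograph timings (the numbers below are illustrative, not from Benussi's experiments):

```python
def ie_ratio(inspiration_s: float, expiration_s: float) -> float:
    """Benussi's inspiration/expiration ratio for one breath cycle."""
    return inspiration_s / expiration_s

# Hypothetical timings (seconds) before and during a deceptive answer.
baseline = [ie_ratio(1.8, 2.4), ie_ratio(1.9, 2.5)]    # ratios below 1
during_lie = [ie_ratio(2.6, 2.0), ie_ratio(2.8, 2.1)]  # ratios above 1

# In Benussi's account the ratio tends to grow under deception.
grew = sum(during_lie) / len(during_lie) > sum(baseline) / len(baseline)
```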
Polygraph:
In 1921, a medical student named John Augustus Larson built a polygraph device that measures more than one physiological change in the human body at a time. Larson also developed the R/I (relevant/irrelevant) questioning technique. The method was simple: while the suspect was being questioned, Larson would mix questions relevant to the main subject of the interrogation with questions totally irrelevant to it, and observe the physiological changes when irrelevant questions were asked after some relevant ones. This polygraph device helped many investigators make progress in their criminal cases.
Drawbacks of Polygraph Lie Detectors:
The polygraph "lie detector" fundamentally monitors physiological changes (blood pressure, pulse) as a suspect is being questioned. If the suspect appears anxious for any reason, whether guilty or not, the outcome can be skewed. The physiological signals a polygraph monitors can be easily influenced by the interviewee or misinterpreted by the interviewer. Such skewed outcomes often lead to false readings: a lie may be read as truth, or a truthful statement as a lie. This ambiguity over polygraphs is the basis for the development of computerized lie detectors, which take human conversations as input and classify them as truth or lie.
FIG 1. ABSTRACT VIEW OF A LIE DETECTOR
Classification of Lie detectors:
Lie detectors can be classified based on the cues obtained from human conversations, as follows.
FIGURE 2. CLASSIFICATION OF CUES
Vocal Cues include,
1) Speech hesitations
2) Speech errors
3) Pitch of voice
4) Speech rate
5) Pause durations
6) Frequency of pauses
Visual Cues include,
1) Facial Expressions
2) Hand and finger movements
3) Leg and foot movements
4) Head movements
5) Eye blinking
The rest of the paper is organized as follows: Section II reviews related work by various researchers, Section III presents our findings from that work, and Section IV concludes the paper with suggestions for future research.
II. RELATED WORK
Facial Expressions
The thoughts and emotions that dart through the human mind are naturally exhibited on the face in the form of facial expressions. The motion of one or more muscles beneath the skin is known as a facial expression, and the motion of each individual muscle is referred to as an Action Unit (AU). There are 46 important Action Units that can be identified in a human face. Paul Ekman developed a framework for analysing these Action Units known as the Facial Action Coding System (FACS). There are six universal emotions, viz., happiness, sadness, anger, fear, disgust and surprise, and each can be represented as a combination of Action Units. Lie detectors based on facial expressions try to establish a relation between emotions and lying.
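As an illustration of how emotions map to Action Units, a toy lookup using commonly cited AU prototypes. The exact AU combinations vary across the FACS literature; these sets are illustrative, not any particular author's coding:

```python
# Illustrative emotion prototypes as sets of FACS Action Unit numbers.
# AU1 = inner brow raiser, AU4 = brow lowerer, AU6 = cheek raiser,
# AU12 = lip corner puller, AU15 = lip corner depressor, AU9 = nose wrinkler.
EMOTION_PROTOTYPES = {
    "happiness": {6, 12},
    "sadness": {1, 4, 15},
    "disgust": {9, 15},
    "surprise": {1, 2, 5, 26},
}

def classify_aus(active_aus: set) -> str:
    """Return the emotion whose AU prototype best overlaps the active AUs."""
    def jaccard(proto):
        return len(active_aus & proto) / len(active_aus | proto)
    return max(EMOTION_PROTOTYPES, key=lambda e: jaccard(EMOTION_PROTOTYPES[e]))

print(classify_aus({6, 12}))  # happiness
```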
A lie-detecting system using facial micro-expressions was developed by Owayjan et al. [1]. The system operates in four stages. Colour conversion and filtering techniques are applied in the first two stages. In the third stage, features of the facial structure are identified by applying geometric-based dynamic templates to each frame. In the fourth stage, facial micro-expressions are detected from the extracted features; these micro-expressions determine whether the subject is lying.
Yap, Moi Hoon et al. [2] developed a lie-detecting system based on visual cues of facial behaviour. They conducted one-on-one interviews with 12 participants, who were given two topics and asked to answer some questions honestly and others deceitfully. The participants' facial expressions were captured while they answered, and it was observed that micro-expressions and lip wipes were the two facial behaviours exhibited most frequently while lying.
Da Rocha Gracioso et al. [3] proposed a model called WeBSER (Web-Based System for Emotion Recognition) to recognise the emotional state of the user. In this model, reading points on the face are first identified, and emotions are then classified based on the movement of these points. The Viola-Jones method was used for face detection and for segmenting facial landmarks such as the eyes, nose and mouth. The model achieved an accuracy of 76.6% for exact emotions and 84.4% for indicating uncomfortable states, which might indicate that the person is lying.
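Viola-Jones detection rests on the integral image, which lets rectangular Haar-like features be evaluated in constant time. A minimal numpy sketch of that core step (not the full cascade used in [3]):

```python
import numpy as np

def integral_image(img: np.ndarray) -> np.ndarray:
    """Padded cumulative sum: ii[y, x] is the sum of all pixels
    above and to the left of (y, x), exclusive."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, y, x, h, w):
    """Sum of the h x w rectangle with top-left corner (y, x), in O(1)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

# A two-rectangle Haar-like feature: bright top half minus dark bottom half.
img = np.zeros((8, 8), dtype=np.int64)
img[:4, :] = 255  # top half bright
ii = integral_image(img)
feature = rect_sum(ii, 0, 0, 4, 8) - rect_sum(ii, 4, 0, 4, 8)
print(feature)  # 255 * 32 = 8160
```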
Li, Xiaobai et al. [4] used feature-difference contrast to spot micro-expressions in videos. Facial points are tracked using the Kanade-Lucas-Tomasi algorithm, and features are extracted using the Histogram of Oriented Optical Flow (HOOF) descriptor. The SMIC and CASME II databases were used, and the model achieved an accuracy of 92.98% on CASME II for micro-expression recognition.
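A HOOF-style descriptor bins optical-flow vectors by orientation, weighted by magnitude, and normalizes the histogram. A simplified numpy sketch (the published descriptor also makes bins symmetric about the vertical axis, which is omitted here):

```python
import numpy as np

def hoof(flow_x: np.ndarray, flow_y: np.ndarray, n_bins: int = 8) -> np.ndarray:
    """Histogram of flow orientations, magnitude-weighted, L1-normalized."""
    angles = np.arctan2(flow_y, flow_x)                  # in [-pi, pi)
    mags = np.hypot(flow_x, flow_y)
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins.ravel(), weights=mags.ravel(), minlength=n_bins)
    total = hist.sum()
    return hist / total if total > 0 else hist

# Flow field moving uniformly to the right: all mass lands in one bin.
fx = np.ones((4, 4))
fy = np.zeros((4, 4))
print(hoof(fx, fy))
```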
Zhang et al. [5] developed a platform for automatic micro-expression recognition, using a Gabor wavelet filter for expression feature extraction. Images of the human face are projected into a low-dimensional space using Principal Component Analysis (PCA), and a Support Vector Machine (SVM) is used for expression classification. On the CASME II database, a 100% recognition rate was achieved with a linear kernel function.
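The dimensionality-reduction step in [5], mapping face vectors to a low-dimensional space before SVM classification, can be sketched with plain numpy. This is a generic PCA on synthetic data, not the authors' implementation; the Gabor filtering and SVM stages are omitted:

```python
import numpy as np

def pca_project(X: np.ndarray, k: int):
    """Project rows of X (n_samples x n_features) onto the top-k principal axes."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data: right singular vectors are the principal axes.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                      # k x n_features
    return Xc @ components.T, components, mean

rng = np.random.default_rng(0)
# 100 fake "face vectors" whose variance lies mostly along one direction.
base = rng.normal(size=(100, 1)) @ rng.normal(size=(1, 64))
X = base + 0.01 * rng.normal(size=(100, 64))
Z, components, mean = pca_project(X, k=2)
print(Z.shape)  # (100, 2): each face reduced to 2 coefficients
```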
Xu, Feng et al. [6] used a Facial Dynamics Map to characterize the movements of a micro-expression. Pixel-level movements are measured using optical-flow estimation. The method was evaluated on the SMIC, SMIC2, CASME I and CASME II datasets, achieving an accuracy of 75.66% for identification and 71.43% for classification.
Su, Lin et al. [7] developed an automated computer-vision solution for detecting deception in high-stakes situations based on four facial cues: eye blinks, eyebrow motion, wrinkle occurrence and mouth motion. These cues are integrated into a single facial behaviour pattern vector for classifying truth and lies. The primary facial landmarks (left eye, right eye and nose base) are located using the PittPatt SDK, and a threshold value for eye blinks in each video clip is obtained with valley-emphasis thresholding; eye blinks are then detected by treating the task as an anomaly-detection problem. The model achieved an accuracy of 76.92% in discriminating liars from truth-tellers.
Pfister, Tomas et al. [8] proposed combining a Temporal Interpolation Model with Multiple Kernel Learning (MKL) to recognize facial micro-expressions. Features are extracted by applying spatiotemporal local texture descriptors (SLTD) to the video, and classification is performed with Support Vector Machines and MKL. The model achieved an accuracy of 76.2% in discriminating truth and lies.
Eye blinks
Extensive research on eye blinks suggests that when liars experience cognitive demand, their lies are associated with a decrease in eye blinks, directly followed by an increase in eye blinks once the demand has ceased after the lie is told. Many researchers have used eye blinks for lie detection in various ways.
Birender Singh et al. [9] developed a lie-detecting system using image processing. In this system, a threshold value for the subject's blink rate is measured initially. The Haar cascade algorithm, implemented in MATLAB, is used to detect eye blinks, and skin-detection algorithms based on histogram projections determine the state of the eye (open or closed). The blink rate in the target period is then measured and compared with the initial threshold; they found that the suspect's blink rate decreased during the target period and increased suddenly afterwards.
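The blink-rate comparison in [9] can be sketched as a rate check against a per-subject baseline, given a sequence of per-frame eye states. The Haar-cascade and skin-detection stages that would produce those states are assumed, and the frame labels below are made up:

```python
def count_blinks(eye_states):
    """Count blinks as open->closed transitions in a per-frame state sequence."""
    blinks = 0
    prev = "open"
    for state in eye_states:
        if prev == "open" and state == "closed":
            blinks += 1
        prev = state
    return blinks

def blink_rate(eye_states, fps=30):
    """Blinks per minute for a sequence of frames at the given frame rate."""
    return count_blinks(eye_states) * 60 * fps / len(eye_states)

# Hypothetical baseline vs. target-period frame labels (156 frames each).
baseline = ["open"] * 50 + ["closed"] * 3 + ["open"] * 50 + ["closed"] * 3 + ["open"] * 50
target = ["open"] * 150 + ["closed"] * 3 + ["open"] * 3
suppressed = blink_rate(target) < blink_rate(baseline)
```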
Immanuel, Joshua et al. [10] conducted experiments based on the hypothesis that EEG signals extracted from the human brain can be used for lie detection. Ten eye blinks from 10 different suspects were recorded and the blink patterns with respect to lies were observed, with the EEGLAB toolbox in MATLAB used for classifying truth and lies. The experiments showed that the blink rate of the suspect decreased during lying, with an accuracy of 95.12%.
Mitre Hernandez et al. [11] conducted experiments classifying the responses of suspects as spontaneous lies, planned lies and truths. They observed that, as assessed by blinks, a suspect carries a greater cognitive load when telling the truth than when telling planned lies, for which blinks can be manipulated and controlled; similarly, a suspect requires a higher cognitive load for spontaneous lies than for planned lies.
Dhanush, T. et al. [12] developed a computer-vision application to analyze the blink rates of a suspect, conducting experiments to determine whether the blink rate increases or decreases during lying. They observed that the blink rate of the culprit tended to increase drastically for certain crime-related questions, while innocent persons tended to have a lower blink rate.
Dibeklioglu, H. et al. [13] showed that eyelid movements can be used to distinguish spontaneous from posed smiles. A Piecewise Bezier Volume Deformation (PBVD) tracker is used to track the face, and continuous hidden Markov models (CHMM) model the movement features of the eyelids. Smiles are classified using KNN and Naive Bayes classifiers. The BBC SMILE dataset and the Cohn-Kanade AU-Coded Facial Expression Database were used for the experiments; classification accuracy reached 91% for posed smiles and 80% for spontaneous smiles.
Speech
Nasri, Hanen et al. [14] conducted experiments to determine which acoustic parameters can be used to classify truth or lies from the speech signal. They developed a model for automatic speech processing based on Mel-Frequency Cepstral Coefficients (MFCC), using the Voicebox toolbox in MATLAB to extract MFCCs and pitch from the speech signal. A new database named ReLiDDB was built, and a Support Vector Machine classifier was used to separate truth from lies. The model achieved an accuracy of 88.23% in classifying lies and 84.52% in classifying truth.
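Pitch, one of the acoustic parameters extracted in [14], can be estimated from a speech frame by finding the autocorrelation peak within a plausible lag range. A self-contained numpy sketch (the MFCC pipeline and the Voicebox tooling are not reproduced here):

```python
import numpy as np

def estimate_pitch(frame: np.ndarray, sr: int, fmin=50, fmax=400) -> float:
    """Estimate fundamental frequency (Hz) via the autocorrelation peak."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)   # lags for plausible pitch range
    lag = lo + int(np.argmax(corr[lo:hi]))
    return sr / lag

sr = 8000
t = np.arange(sr // 4) / sr                    # 0.25 s frame
voiced = np.sin(2 * np.pi * 220 * t)           # synthetic 220 Hz "voiced" tone
print(round(estimate_pitch(voiced, sr)))       # close to 220
```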
Srivastava, Nidhi et al. [15] found a correlation between voice, stress and lying. Their model was developed using artificial neural networks and was able to detect lies with an accuracy of 71%.
Multi-Modal
Krishnamurthy, Gangeshwar et al. [16] proposed a multi-modal neural model for deception detection, combining features from different modalities: video, audio, text and micro-expressions. 3D Convolutional Neural Networks (3D-CNN) extract visual features from the video, a Convolutional Neural Network (CNN) extracts textual features, and the OpenSMILE tool extracts audio features. The model achieved an accuracy of 96.14%.
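Feature-level fusion of this kind amounts to concatenating the per-modality vectors before classification. A toy numpy sketch (the dimension sizes are made up, and the CNN/OpenSMILE extractors themselves are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical per-modality feature vectors for one video clip.
visual = rng.normal(size=300)   # e.g. 3D-CNN output
audio = rng.normal(size=128)    # e.g. OpenSMILE acoustic statistics
text = rng.normal(size=100)     # e.g. text-CNN output
micro = rng.normal(size=20)     # micro-expression indicators

# Feature-level (early) fusion: one joint vector fed to the classifier.
fused = np.concatenate([visual, audio, text, micro])
print(fused.shape)  # (548,)
```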
Hartwig, Maria et al. [17] proposed measuring the predictability of deception by the multiple correlation coefficient, examining the degree to which deception can be predicted from multiple behavioural cues. Their findings show that lies can be detected with nearly 70% accuracy.
Hand and Head Movement
Vrij, Aldert et al. [18] conducted experiments and found that hand and head movements can be controlled consciously. Thus, individuals who have control over their behaviour may show fewer hand and head movements during deception than during truth telling.
Noje, Dora-Ionut et al. [19] conducted several experiments to identify the correlation between head movement and lie detection. Frame-by-frame analysis of the video stream was used to detect head movement and head position. Their experiments concluded that head movement and lie detection can be correlated only when additional information, such as voice, gaze and words, is provided as input along with the head movements.
Methods for the automated tracking of hand and finger movements in interview situations are described by Dente, Enrica, et al. [20]. Hand and finger movements are described by coupling a complex wavelet decomposition with a posterior probability map, and a hand-tracking algorithm based on blob feature extraction was designed. It was observed that changes in hand velocity were not significant, but that a subject's hand and finger movements may change during responses to high-stakes questions compared with low-stakes questions. It was also observed that the hands-together state dominates the hands-separated state during high-stakes answers.
Thermal imaging and FMRI
The amount of heat radiated from the human face depends on the stress level in the human mind, and these stress levels can be detected by a technology called thermal imaging.
Rajoub et al. [21] tested whether thermal variations in the periorbital region can be used for detecting deception. They conducted experiments on 25 participants, extracting 492 thermal responses, which were classified using a K-Nearest Neighbour classifier. The within-person approach to training the classifier (86.88%) showed better results than the between-person approach (58.62%).
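The within-person vs. between-person distinction in [21] concerns how the train/test split is formed, not the classifier itself. A minimal 1-nearest-neighbour sketch on synthetic data (not the authors' thermal features):

```python
import numpy as np

def knn1_predict(train_X, train_y, test_X):
    """Classify each test row by the label of its nearest training row."""
    preds = []
    for x in test_X:
        dists = np.linalg.norm(train_X - x, axis=1)
        preds.append(train_y[int(np.argmin(dists))])
    return np.array(preds)

rng = np.random.default_rng(0)
# Synthetic "thermal responses": class 0 clustered near -1, class 1 near +1.
X = np.vstack([rng.normal(-1, 0.3, (20, 4)), rng.normal(1, 0.3, (20, 4))])
y = np.array([0] * 20 + [1] * 20)
# Within-person style split: train and test drawn from the same distribution.
pred = knn1_predict(X[::2], y[::2], X[1::2])
accuracy = (pred == y[1::2]).mean()
print(accuracy)
```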
Kozel, F. Andrew et al. [22] conducted several experiments to show that specific regions of the brain are stimulated when a suspect attempts to deceive. The interviews included several types of questions, such as control questions, neutral questions and questions related to the objects used in the experiment. Statistical parametric mapping software was used to analyse the fMRI data; the analysis methods were developed on a Model Building Group and then applied to an independent Model Testing Group. This model achieved an accuracy of 90%.
III. FINDINGS OF LITERATURE SURVEY:
In this section, we list the findings from the previous section.
Findings:
1) The widespread process of lie detection using facial expressions involves feature extraction followed by classification. Researchers use different methods for feature extraction, but most use SVM for classification.
2) Most lie detectors based on facial micro-expressions use the CASME databases.
3) Very few standard datasets are available for conducting experiments on lie detection; hence, many researchers have created their own datasets.
4) In lie-detection systems based on eye blinks, a single eye-blink threshold cannot be applied to all subjects, because the blink rate of each person may vary.
5) The accuracy of lie detection is low when only hand and head movements are used; accuracy can be increased when additional input, such as voice, is provided along with them.
6) Functional Magnetic Resonance Imaging (fMRI) and thermal imaging require expensive apparatus for lie detection.
The accuracy of the lie detectors developed by various researchers is compared in the following table.
Comparison of Lie Detectors:
TABLE 1. COMPARISON TABLE OF LIE DETECTORS
Lie detection based on | Developed by | Implementation | Accuracy
Facial Expressions | Owayjan et al. | NI LabVIEW | 85%
Facial Expressions | da Rocha Gracioso et al. | WeBSER model (Web-Based System for Emotion Recognition) | 84.4%
Facial Expressions | Li, Xiaobai et al. | Feature Difference Contrast method | 92.98%
Facial Expressions | Zhang et al. | SVM | 100% recognition rate
Facial Expressions | Xu, Feng et al. | Facial Dynamics Map | 71.43%
Facial Expressions | Su, Lin et al. | Facial behaviour pattern vector | 76.92%
Facial Expressions | Pfister et al. | Temporal Interpolation Model with Multiple Kernel Learning | 76.2%
Eye blinks | Birender Singh et al. | MATLAB using Haar cascade algorithm | 81%
Eye blinks | Dibeklioglu, H. et al. | Distance-based and angular features | 91%
Eye blinks | Joshua Immanuel et al. | Blinker algorithm using EEG | 95.12%
Speech | Nasri, Hanen et al. | SVM classifier | 88.23%
Speech | Srivastava, Nidhi et al. | Artificial Neural Networks | 71%
Multi-modal | Krishnamurthy et al. | Convolutional Neural Networks and OpenSMILE tool | 96.14%
Multi-modal | Hartwig, Maria et al. | Multiple correlation coefficient | 70%
Thermal and fMRI | Rajoub, Bashar A. et al. | KNN classifier | 87%
Thermal and fMRI | Kozel, F. et al. | Model Building Group and Model Testing Group | 90%
IV. CONCLUSION AND FUTURE SCOPE:
In this paper, the purpose of lie detectors, their history and the need for automating lie detection have been discussed. We presented a classification of lie detectors based on cues from human conversation and reviewed detectors built on various cues: facial expressions, eye blinks, speech, hand and head movements, multi-modal features, thermal imaging and functional magnetic resonance imaging. The accuracies of these lie detectors were compared, and it was observed that SVM is the classifier most widely used by researchers.
Earlier, the efficiency of lie detection depended on the skill of the trainer and interviewer; now trainers are being replaced by the algorithms used in the development of lie detectors. There is great scope for applying machine learning algorithms to lie detection, irrespective of the cues used.
V. REFERENCES:
[1]. Owayjan, Michel, Ahmad Kashour, Nancy Al Haddad, Mohamad Fadel, and Ghinwa Al Souki. "The
design and development of a lie detection system using facial micro-expressions." In 2012 2nd
international conference on advances in computational tools for engineering applications (ACTEA), pp.
33-38. IEEE, 2012.
[2]. Yap, Moi Hoon, Bashar Rajoub, Hassan Ugail, and Reyer Zwiggelaar. "Visual cues of facial behaviour in
deception detection." In 2011 IEEE International Conference on Computer Applications and Industrial
Electronics (ICCAIE), pp. 294-299. IEEE, 2011.
[3]. da Rocha Gracioso, Ana Carolina Nicolosi, Claudia Cristina Botero Suárez, Clécio Bachini, and
Francisco Javier Ramírez Fernández. "Emotion recognition system using Open Web Platform." In 2013
47th International Carnahan Conference on Security Technology (ICCST), pp. 1-5. IEEE, 2013.
[4]. Li, Xiaobai, Xiaopeng Hong, Antti Moilanen, Xiaohua Huang, Tomas Pfister, Guoying Zhao, and Matti
Pietikäinen. "Towards reading hidden emotions: A comparative study of spontaneous micro-expression
spotting and recognition methods." IEEE Transactions on Affective Computing 9, no. 4 (2018): 563-577.
[5]. Zhang, Peng, Xianye Ben, Rui Yan, Chen Wu, and Chang Guo. "Micro-expression recognition
system." Optik-International Journal for Light and Electron Optics 127, no. 3 (2016): 1395-1400.
[6]. Xu, Feng, Junping Zhang, and James Z. Wang. "Microexpression identification and categorization using
a facial dynamics map." IEEE Transactions on Affective Computing 8, no. 2 (2017): 254-267.
[7]. Su, Lin, and Martin D. Levine. "High-stakes deception detection based on facial expressions." In 2014
22nd International Conference on Pattern Recognition, pp. 2519-2524. IEEE, 2014.
[8]. Pfister, Tomas, Xiaobai Li, Guoying Zhao, and Matti Pietikäinen. "Recognising spontaneous facial
micro-expressions." In 2011 international conference on computer vision, pp. 1449-1456. IEEE, 2011.
[9]. Singh, Birender, Pooshkar Rajiv, and Mahesh Chandra. "Lie detection using image processing." In 2015
International Conference on Advanced Computing and Communication Systems, pp. 1-5. IEEE, 2015.
[10]. Immanuel, Joshua, Ajay Joshua, and S. Thomas George. "A Study on Using Blink Parameters from EEG
Data for Lie Detection." In 2018 International Conference on Computer Communication and Informatics
(ICCCI), pp. 1-5. IEEE, 2018.
[11]. Mitre Hernandez, Hugo, Jorge Sanchez‐Rodriguez, Ramon Zatarain‐Cabada, and Lucia Barron‐Estrada.
"Assessing cognitive load using oculometrics to identify deceit during interviews." Applied Cognitive
Psychology 33, no. 2 (2019): 312-321.
[12]. Dhanush, T., T. Sree Sharmila, and J. Sofia Jennifer. "Determining Response Credibility by Blink
Count." In 2018 International Conference on Recent Trends in Advance Computing (ICRTAC), pp. 143-
148. IEEE, 2018.
[13]. Dibeklioglu, H., R. Valenti, A. A. Salah, and T. Gevers. "Eyes do not lie: Spontaneous versus posed smiles." In Proceedings of the ACM Multimedia 2010 International Conference, October 25-29, 2010.
[14]. Nasri, Hanen, Wael Ouarda, and Adel M. Alimi. "ReLiDSS: Novel lie detection system from speech
signal." In 2016 IEEE/ACS 13th International Conference of Computer Systems and Applications
(AICCSA), pp. 1-8. IEEE, 2016.
[15]. Srivastava, Nidhi, and Sipi Dubey. "Lie detection system using artificial neural network." (2014).
[16]. Krishnamurthy, Gangeshwar, Navonil Majumder, Soujanya Poria, and Erik Cambria. "A deep learning
approach for multimodal deception detection." arXiv preprint arXiv:1803.00344 (2018).
[17]. Hartwig, Maria, and Charles F. Bond Jr. "Lie detection from multiple cues: A meta-analysis." Applied Cognitive Psychology 28, no. 5 (2014): 661-676.
[18]. Vrij, Aldert, Lucy Akehurst, and Paul Morris. "Individual differences in hand movements during
deception." Journal of Nonverbal Behavior 21, no. 2 (1997): 87-102.
[19]. Noje, Dora-Ionut, and Raul Malutan. "Head movement analysis in lie detection." In 2015 Conference
Grid, Cloud & High Performance Computing in Science (ROLCG), pp. 1-4. IEEE, 2015.
[20]. Dente, Enrica, Anil Anthony Bharath, Jeffrey Ng, Aldert Vrij, Samantha Mann, and Anthony Bull.
"Tracking hand and finger movements for behaviour analysis." Pattern Recognition Letters 27, no. 15
(2006): 1797-1808.
[21]. Rajoub, Bashar A., and Reyer Zwiggelaar. "Thermal facial analysis for deception detection." IEEE
transactions on information forensics and security 9, no. 6 (2014): 1015-1023.
[22]. Kozel, F. Andrew, Kevin A. Johnson, Qiwen Mu, Emily L. Grenesko, Steven J. Laken, and Mark S.
George. "Detecting deception using functional magnetic resonance imaging." Biological psychiatry 58,
no. 8 (2005): 605-613.