Lasri 2019
Imane Lasri, Mohammed V University of Rabat
Abstract— Nowadays, deep learning techniques have achieved great success in various fields, including computer vision. In particular, a convolutional neural network (CNN) model can be trained to analyze images and identify facial emotion. In this paper, we build a system that recognizes students' emotions from their faces. Our system consists of three phases: face detection using Haar cascades, normalization, and emotion recognition using a CNN trained on the FER 2013 database with seven types of expressions. The obtained results show that facial emotion recognition is feasible in education; consequently, it can help teachers adapt their presentation to the students' emotions.

Keywords— Student facial expression, Emotion recognition, Convolutional neural networks (CNN), Deep learning, Intelligent classroom management system

I. INTRODUCTION

The face is the most expressive and communicative part of a human being [1]. It is able to transmit many emotions without a word being said. Facial expression recognition identifies emotion from a face image; it is a manifestation of a person's activity and personality. In the 20th century, the American psychologists Ekman and Friesen [2] defined six basic emotions (anger, fear, disgust, sadness, surprise and happiness), which are the same across cultures.

Facial expression recognition has attracted much attention in recent years due to its impact in clinical practice, sociable robotics and education. According to diverse research, emotion plays an important role in education. Currently, teachers use exams, questionnaires and observations as sources of feedback, but these classical methods often have low efficiency. Using students' facial expressions, teachers can adjust their strategy and instructional materials to help foster student learning.

The purpose of this article is to bring emotion recognition into education by building an automatic system that analyzes students' facial expressions using a convolutional neural network (CNN), a deep learning algorithm widely used in image classification. A CNN applies multiple stages of image processing to extract feature representations. Our system includes three phases: face detection, normalization, and recognition of one of seven emotions: neutral, anger, fear, sadness, happiness, surprise and disgust.

The rest of this paper is structured as follows: Section 2 reviews the related work. Section 3 describes the proposed system. The implementation details are presented in Section 4, followed by the experimental results and discussion in Section 5. In the last section we conclude the paper and outline future extensions of our work.

II. RELATED WORK

Many researchers are interested in improving the learning environment with face emotion recognition (FER). Tang et al. [3] proposed a system that analyzes students' facial expressions in order to evaluate classroom teaching effectiveness. The system is composed of five phases: data acquisition, face detection, face recognition, facial expression recognition and post-processing. The approach uses K-nearest neighbors (KNN) for classification and the Uniform Local Gabor Binary Pattern Histogram Sequence (ULGBPHS) for pattern analysis. Savva et al. [4] proposed a web application that analyzes the emotions of students participating in active face-to-face classroom instruction. The application uses webcams installed in classrooms to collect live recordings, to which machine learning algorithms are then applied.

In [5], Whitehill et al. proposed an approach that recognizes engagement from students' facial expressions. The approach uses Gabor features and an SVM classifier to identify engagement as students interacted with cognitive skills training software, with labels obtained from videos annotated by human judges. The authors in [6] used computer vision and machine learning techniques to identify the affect of students in a school computer laboratory, where the students were interacting with an educational game designed to teach fundamental concepts of classical mechanics.

In [7], the authors proposed a system that identifies and monitors students' emotions and gives real-time feedback in order to improve the e-learning environment and content delivery. The system uses movement patterns of the eyes and head to deduce relevant information about students' mood in an e-learning environment. Ayvaz et al. [8] developed a Facial Emotion Recognition System (FERS) that recognizes the emotional states and motivation of students in videoconference-type e-learning. The system uses four machine learning algorithms (SVM, KNN, Random Forest, and Classification & Regression Trees) and the best accuracy rates
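The middle step of the three-phase pipeline described in the introduction (Haar-cascade detection, normalization, CNN classification) can be sketched in a few lines. The sketch below assumes the detector has already returned a face box (x, y, w, h), as OpenCV's detectMultiScale does; the 48x48 target size matches FER 2013, and the nearest-neighbour resize is a dependency-free stand-in for what cv2.resize would do:

```python
# Sketch of the normalization phase: crop the detected face box from a
# grayscale image, resize it to the 48x48 format FER 2013 uses, and scale
# pixel values to [0, 1]. Nearest-neighbour resampling is used here only
# to keep the sketch dependency-free.

def normalize_face(gray, box, size=48):
    x, y, w, h = box
    # Crop the face region (gray is a list of pixel rows).
    face = [row[x:x + w] for row in gray[y:y + h]]
    # Nearest-neighbour resize to size x size.
    resized = [
        [face[i * h // size][j * w // size] for j in range(size)]
        for i in range(size)
    ]
    # Scale 8-bit intensities to [0, 1] for the CNN input.
    return [[p / 255.0 for p in row] for row in resized]

# Toy 4x4 "image" with a 2x2 face box at (1, 1).
img = [[0, 0, 0, 0],
       [0, 255, 128, 0],
       [0, 64, 255, 0],
       [0, 0, 0, 0]]
patch = normalize_face(img, (1, 1, 2, 2), size=2)
print(patch)  # [[1.0, 0.50...], [0.25..., 1.0]]
```

In the real system the box would come from a call such as cv2.CascadeClassifier(...).detectMultiScale, and the resulting 48x48 array would be fed to the CNN.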
TABLE II. THE NUMBER OF IMAGES FOR EACH EMOTION OF THE FER 2013 DATABASE

Emotion label    Emotion    Number of images
0                Angry      4593

TABLE I. CNN CONFIGURATION

Fig. 6. ReLU function.

Fig. Original and transformed face images.
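The body of Table I did not survive extraction, but the conclusion states that the model has 4 convolutional layers, 4 max-pooling layers and 2 fully connected layers on 48x48 FER 2013 inputs. A quick shape walk-through under assumed settings (3x3 "same"-padded convolutions, 2x2 pooling, illustrative filter counts) shows what the fully connected head would receive:

```python
# Shape walk-through for a 4-conv / 4-max-pool / 2-FC model on 48x48
# inputs. The filter counts (32, 64, 128, 256) and "same" padding are
# assumptions for illustration; the paper's Table I is not available.

def shape_after(n_stages, size=48):
    # Each stage: 3x3 conv with "same" padding (spatial size unchanged),
    # then 2x2 max pooling (spatial size halved).
    for _ in range(n_stages):
        size //= 2
    return size

side = shape_after(4)          # 48 -> 24 -> 12 -> 6 -> 3
filters = [32, 64, 128, 256]   # assumed per-stage filter counts
flattened = side * side * filters[-1]
print(side, flattened)         # 3 2304
```

The flattened vector then passes through the two fully connected layers, the last of which has 7 softmax units, one per expression class.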
To train our CNN model, we split the database into 80% training data and 20% test data, then compiled the model using the stochastic gradient descent (SGD) optimizer. At each epoch, Keras checks whether the model performed better than the models of the previous epochs; if so, the new best model weights are saved to a file. This allows us to load the weights directly, without retraining, if we want to use the model in another situation.

Fig. 13. Classification report of the proposed method on the FER 2013 database.

Fig. 11. Students' facial emotion recognition results.
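The best-weights checkpointing described above is what Keras provides through its ModelCheckpoint callback with save_best_only=True; the logic reduces to one comparison per epoch. The class below is an illustrative re-implementation of that logic, not the Keras API:

```python
# Minimal sketch of "save best weights" checkpointing, mirroring what
# Keras' ModelCheckpoint(save_best_only=True) does each epoch.

class BestModelCheckpoint:
    def __init__(self):
        self.best_val_acc = float("-inf")
        self.saved_weights = None

    def on_epoch_end(self, epoch, val_acc, weights):
        # Keep the weights only when this epoch beats every previous one.
        if val_acc > self.best_val_acc:
            self.best_val_acc = val_acc
            self.saved_weights = weights  # Keras: model.save_weights(path)

# Simulated validation accuracies over four epochs (toy values).
ckpt = BestModelCheckpoint()
for epoch, (acc, w) in enumerate([(0.55, "w0"), (0.63, "w1"),
                                  (0.61, "w2"), (0.70, "w3")]):
    ckpt.on_epoch_end(epoch, acc, w)

print(ckpt.best_val_acc, ckpt.saved_weights)  # 0.7 w3
```

Epoch 2's dip (0.61 after 0.63) is ignored, so the file on disk always holds the best model seen so far and can be reloaded without retraining.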
VI. CONCLUSION AND FUTURE WORK

In this paper, we presented a convolutional neural network model for students' facial expression recognition. The proposed model includes 4 convolutional layers, 4 max pooling layers and 2 fully connected layers. The system detects faces in students' input images using a Haar-like detector and classifies them into seven facial expressions: surprise, fear, disgust, sad, happy, angry and neutral. The proposed model achieved an accuracy rate of 70% on the FER 2013 database. Our facial expression recognition system can help teachers gauge students' comprehension of their presentation. In future work, we will focus on applying a convolutional neural network model to 3D images of students' faces in order to extract their emotions.

ACKNOWLEDGMENT

We wish to thank the 9 students for their participation in the experiment.

REFERENCES

[1] R. G. Harper, A. N. Wiens, and J. D. Matarazzo, Nonverbal Communication: The State of the Art. New York: Wiley, 1978.
[2] P. Ekman and W. V. Friesen, "Constants across cultures in the face and emotion," Journal of Personality and Social Psychology, vol. 17, no. 2, pp. 124–129, 1971.
[3] C. Tang, P. Xu, Z. Luo, G. Zhao, and T. Zou, "Automatic Facial Expression Analysis of Students in Teaching Environments," in Biometric Recognition, vol. 9428, J. Yang, J. Yang, Z. Sun, S. Shan, W. Zheng, and J. Feng, Eds. Cham: Springer International Publishing, 2015, pp. 439–447.
[4] A. Savva, V. Stylianou, K. Kyriacou, and F. Domenach, "Recognizing student facial expressions: A web application," in 2018 IEEE Global Engineering Education Conference (EDUCON), Tenerife, 2018, pp. 1459–1462.
[5] J. Whitehill, Z. Serpell, Y.-C. Lin, A. Foster, and J. R. Movellan, "The Faces of Engagement: Automatic Recognition of Student Engagement from Facial Expressions," IEEE Transactions on Affective Computing, vol. 5, no. 1, pp. 86–98, Jan. 2014.
[6] N. Bosch, S. D'Mello, R. Baker, J. Ocumpaugh, V. Shute, M. Ventura, L. Wang, and W. Zhao, "Automatic Detection of Learning-Centered Affective States in the Wild," in Proceedings of the 20th International Conference on Intelligent User Interfaces (IUI '15), Atlanta, GA, USA, 2015, pp. 379–388.
[7] L. B. Krithika and G. G. Lakshmi Priya, "Student Emotion Recognition System (SERS) for e-learning Improvement Based on Learner Concentration Metric," Procedia Computer Science, vol. 85, pp. 767–776, 2016.
[8] U. Ayvaz, H. Gürüler, and M. O. Devrim, "Use of Facial Emotion Recognition in E-learning Systems," Information Technologies and Learning Tools, vol. 60, no. 4, p. 95, Sept. 2017.
[9] Y. Kim, T. Soyata, and R. F. Behnagh, "Towards Emotionally Aware AI Smart Classroom: Current Issues and Directions for Engineering and Education," IEEE Access, vol. 6, pp. 5308–5331, 2018.
[10] D. Yang, A. Alsadoon, P. W. C. Prasad, A. K. Singh, and A. Elchouemi, "An Emotion Recognition Model Based on Facial Recognition in Virtual Learning Environment," Procedia Computer Science, vol. 125, pp. 2–10, 2018.
[11] C.-K. Chiou and J. C. R. Tseng, "An intelligent classroom management system based on wireless sensor networks," in 2015 8th International Conference on Ubi-Media Computing (UMEDIA), Colombo, Sri Lanka, 2015, pp. 44–48.
[12] I. J. Goodfellow et al., "Challenges in Representation Learning: A report on three machine learning contests," arXiv:1307.0414 [cs, stat], July 2013.
[13] A. Fathallah, L. Abdi, and A. Douik, "Facial Expression Recognition via Deep Learning," in 2017 IEEE/ACS 14th International Conference on Computer Systems and Applications (AICCSA), Hammamet, 2017, pp. 745–750.
[14] P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features," in Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), Kauai, HI, USA, 2001, vol. 1, pp. I-511–I-518.
[15] Y. Freund and R. E. Schapire, "A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting," Journal of Computer and System Sciences, vol. 55, no. 1, pp. 119–139, Aug. 1997.
[16] OpenCV. opencv.org.
[17] Keras. keras.io.
[18] TensorFlow. tensorflow.org.
[19] aionlinecourse.com/tutorial/machine-learning/convolution-neural-network. Accessed 20 June 2019.
[20] S. Albawi, T. A. Mohammed, and S. Al-Zawi, "Understanding of a convolutional neural network," in 2017 International Conference on Engineering and Technology (ICET), Antalya, 2017, pp. 1–6.
[21] ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/. Accessed 05 July 2019.