
Engagement Detection in E-Learning

ABSTRACT - With the goal of completely changing the paradigm of the teaching and learning process from traditional classroom instruction to online learning platforms, e-learning is rapidly moving towards personalised learning. This change is predicated on the notion that accurate emotion recognition in e-learning platforms enables students' learning experiences to be tailored to their preferences. E-learners express their sentiment in three modes: text, image, and audio. Online students engage in a range of learning activities, such as writing, reading, watching tutorial videos, taking online tests, and attending online conferences. They exhibit a range of engagement levels while taking part in these educational activities, including neutral, confusion, delight, boredom, and frustration. It is critical for online educators to accurately and effectively assess the engagement level of their online learners in order to offer individualised pedagogical support through interventions. Various methods from the fields of machine learning and computer vision are used to detect engagement. We investigate the viability of utilising machine learning on data from eye trackers and camera sensors to measure learner engagement and categorise the degree of engagement. After watching videos of the students and their screens, we categorise the gathered data as Engaged or Not Engaged. Perceptual user features (e.g., body posture, facial points, and gaze) are extracted from the collected data, and feature selection and classification techniques are used to create classifiers that can determine whether or not a student is engaged. After identifying the difficulties in detecting engagement, we investigate the datasets and performance metrics that are currently available and offer suggestions for how the technology for detecting engagement in online learning can be improved in the future.

1. Introduction

The usage of computer-based technology is growing in numerous directions due to its easy availability and effectiveness. Technological advancements like smartphones, laptops, and other intelligent devices help us use online learning facilities, termed e-learning. E-learning platforms have become a significant tool for knowledge sharing and understanding for almost every student, especially after the pandemic. E-learning has numerous advantages: it is eco-friendly as it saves paper, and it reduces the cost of travel and time. Students can attend classes from their own places even if they are not feeling well. During a pandemic situation like COVID-19, parents and students were worried about the futures of their children under lockdown, but e-learning solved this problem, and every student was able to keep learning through their mobile phones and laptops.

On the other hand, the physical education system has benefits like interaction between teacher and students, assessment of students' understanding, and hands-on sessions. In traditional classroom teaching, teachers evaluate their students' learning effect, that is, the level of understanding and comprehension, mainly by observing students' behavior. The behavioral aspects may include body language, eye gaze, facial expressions, and emotions exhibited through vocal feedback. By analysing all these attributes, a tutor in a physical classroom can give personalized feedback to a particular student. But in online learning, analyzing these attributes is a challenge in the absence of a physical tutor. Detection of a student's emotional state is crucial to personalizing their learning in an automated learning platform. Multiple researchers have proposed the use of natural language processing, hand gesture recognition, eye gaze estimation, facial emotion recognition, and body language detection to estimate learners' learning effects and provide a measure that supports a more effective learning experience. It is becoming essential for e-learning platforms to be able to educate their learners according to their personalized features.

Due to the enormous development in the machine learning sphere, LMSs can now classify learners according to their different personalized features, like learning style, attentiveness, cognitive ability, and specialized requirements. Learners belonging to different classes are recommended different learning objects, learning contents, tutors, etc., after discovering the most appropriate ones for them by applying different machine learning algorithms. Recognizing a learner's emotional state (fear, anger, depression, joy, confusion, confidence, etc.) is also very important for personalizing their learning experience. Recommending appropriate learning content according to their emotional state could enhance their learning quality and, as a result, improve the overall quality of the learning platform.

Researchers have developed several frameworks and models, predominantly based on artificial intelligence, to recognize the emotional state of a person from textual, audio, image, and video content. In this work, we have reviewed different research works on recognizing the emotion of an e-learner by analyzing their video or image.

The purpose of this survey is to encourage further study into ways to improve the learning drive of an e-learner. Since a person's motivation is closely related to their emotion, we have surveyed work on recognizing the emotions of a person from video, mostly in the context of e-learning. This will open new avenues to enhance personalized learning in Intelligent Tutoring Systems, to attract and sustain a greater number of students in online learning. Using the above exposition, we justify the novelty of this work.

This paper is organized in the following sections. In section 2 we conduct a brief literature review of the topic. Section 3 presents a comparative study of the research works discussed in section 2. In section 4 we briefly discuss the future direction of this research topic.
2. Brief Survey
As we explored the previous works, we observed that the frameworks developed for emotional recognition of a person fall largely into two categories: deep learning-based and conventional machine learning-based. Accordingly, we have divided our review into two categories. In section 2.1 we review the works built around deep learning-based algorithms, and in section 2.2 we review the works in which deep learning algorithms hardly intervene.

Fig. 1 Categories for the emotion analysis
2.1 Deep Learning Based
All the deep learning-based works are reviewed in this
section.
The nonverbal behavior of forty-four undergraduate students was observed using a USB front-facing camera while participants completed an on-screen multiple-choice question-and-answer test [1]. An ANN is fed the information on their question-answer scores and behavioral patterns to classify their comprehension states almost instantly. The authors plan future work to increase the classifier's accuracy. Furthermore, this investigation might be extended to examine the relationship between behavior and the type of question asked. In the future, this technique might also be used to analyze various other kinds of behaviors and mental states.
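As a rough illustration of this pattern, the sketch below trains a small feed-forward network on placeholder behavioural features. The feature layout, data, and layer sizes are our assumptions for illustration, not the setup used in [1].

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Placeholder data: one row per question attempt, with assumed features
# [gaze_shift_rate, head_movement, response_time_s, answer_score]
X = rng.random((200, 4))
y = rng.integers(0, 2, 200)  # 1 = comprehending, 0 = not comprehending

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))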
In [2] a concept of similarity is introduced, in general to preserve the actual data with its features and to group them by creating a degree of similarity among pairs, which is achieved through fuzzification. The authors propose, as future work, to develop theoretical methods that generate rules and select features based on the similarity relation.
Zakka et al. conducted a study comparing traditional classrooms and e-learning, with the aim of improving the e-learning environment to match the traditional one [3]. A framework to detect the motivation level of learners is incorporated, which senses emotion and sends feedback to the teacher, and further develops a response mechanism to distinguish expressions and attention.
The authors created a smart computer system (EAC-Net) that can better recognize facial action units (AUs) [4]. Unlike other methods, it doesn't need perfectly aligned faces and understands facial features more effectively. It showed big improvements in accuracy on a dataset. This system is useful because it works well even if faces are at different angles or partly covered, making it more versatile for real-world situations.
In [5] the authors emphasized multi-modal emotion recognition over single-modal approaches. They used AffectNet as their FER model. The use of predicted emotions extends beyond understanding student behavior to include visual summarization of classroom films and classification of group-level emotions in videos.

Based on learners' behavior and biological data, several attempts have been made to ascertain e-learners' level of focus, including a neuro-fuzzy inference system that tracks the position of the eye's iris to find the concentration level; SVM is also used in it to determine the concentration level [6]. This proposed model has a timing overhead for the preprocessing of data, which in the future could be eliminated by fully automating the process so that it can also be implemented in real time.
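To make the SVM stage concrete, here is a minimal sketch that maps assumed iris-position features to a concentration level, loosely following the classifier described in [6]; the feature names and data are synthetic placeholders, not the authors' pipeline.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Assumed eye-tracking features per time window:
# [iris_x_offset, iris_y_offset, fixation_duration_s, blink_rate]
X = rng.random((300, 4))
y = rng.integers(0, 3, 300)  # 0 = low, 1 = medium, 2 = high concentration

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X, y)
print(model.predict(X[:5]))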
Fig. 2 Block diagram of the framework proposed in [6]

Pise et al. employed a two-phase method to categorize 3D face expression images extracted from video, quantifying the optical flow intensity with the help of a Hidden Markov model and Naïve Bayes [7]. Using video rather than just static images, even slight changes can be measured.

Using temporal appearance and facial landmark points, facial gestures are extracted and then integrated to best recognize the expression [8]. A CNN is used for object detection and feature extraction in this model. In the future, the framework could be extended to run on GPU-supported machines for a better training time.
Researchers utilize multimodal learning analytics in online education to comprehend the feelings and engagement of students [9]. They use information from posture, gestures, and emotions to estimate students' level of engagement in the classroom. Computer vision techniques examine lecture footage and detect feelings such as joy or indifference. Some systems even use head and eye motions to categorize different levels of engagement. By taking into account the feelings and participation of the students, these methods seek to improve online instruction.

Thiruthuvanathan et al. discussed the challenges in detecting e-learners' engagement level from their facial expression recognition. They propose to extend their model to detect group-level engagement in the future [10].
In [11] the author tries to remove the challenges and weaknesses of the widely used blended learning model. Additionally, the gamification concept has been applied. The FER and gamification systems were developed using an object-oriented approach with the Unified Modeling Language (UML). The methods used are ANN, CNN, and a JavaScript library with the open-source TensorFlow.js. Testing has two stages: the facial expression recognition system and the gamification application.

In [12] the authors used a video dataset called Children's Spontaneous Facial Expressions (LIRIS-CSE) and proposed a system that uses Convolutional Neural Network (CNN)-based models, such as VGG19, VGG16, and ResNet50, for feature extraction, and a Support Vector Machine (SVM) and Decision Tree (DT) for classification. The system automatically recognizes children's expressions. Several experimental configurations, such as an 80-20% split, K-fold cross-validation (K-Fold CV), and leave-one-out cross-validation (LOOCV), are used to assess the system for both image- and video-based categorization.
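The pretrained-CNN-features-plus-classical-classifier pattern in [12] can be sketched as follows; the VGG16 backbone and SVM mirror the paper's components, but the data below are random placeholders rather than LIRIS-CSE samples.

import numpy as np
from sklearn.svm import SVC
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input

# Pretrained backbone used purely as a feature extractor
backbone = VGG16(weights="imagenet", include_top=False, pooling="avg")

images = np.random.rand(8, 224, 224, 3) * 255  # placeholder face crops
labels = np.array([0, 1, 0, 1, 2, 2, 0, 1])    # placeholder expression labels

features = backbone.predict(preprocess_input(images.astype("float32")))
clf = SVC(kernel="linear").fit(features, labels)  # shape (8, 512) -> classes
print(clf.predict(features[:2]))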
Keerthana et al. proposed a hybrid model for identifying student facial emotions by monitoring eye gaze and head movements to analyze student engagement levels [13]. A Haar cascade model and binary patterns are used to detect the head movements, and CNN models are used for FER. Using the OpenCV object detection framework, the authors determined the status of the student: distracted or engaged.
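A minimal sketch of this detect-then-classify loop is shown below: OpenCV's stock Haar cascade finds the face, and the cropped 48x48 grayscale patch is what a FER CNN (for example, one trained on FER2013) would consume. Treating "no frontal face" as distraction is our simplification of the idea, not the authors' exact rule.

import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def engagement_status(frame):
    """Very rough proxy: a detected frontal face counts as engaged."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return "distracted", None           # looking away or absent
    x, y, w, h = faces[0]
    face48 = cv2.resize(gray[y:y+h, x:x+w], (48, 48))  # input for a FER CNN
    return "engaged", face48

# Usage: status, face = engagement_status(cv2.imread("student_frame.jpg"))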
In [14] the proposed framework calculates a concentration index for students. A CNN model built with Keras is used with the FER2013 dataset for emotion recognition. Additionally, MATLAB is used to create Mamdani fuzzy rule sets and to implement membership functions utilizing the principles of fuzzy logic. The framework mainly involves three major steps: face detection, feature extraction, and feature classification.
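The paper performs the final step with Mamdani fuzzy inference in MATLAB; as a rough Python stand-in, the sketch below collapses the emotion probabilities a FER2013-style CNN would emit into a crisp concentration index via assumed weights, which are illustrative rather than the paper's rule base.

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]
# Assumed contribution of each emotion to concentration, in [0, 1]
WEIGHTS = {"neutral": 0.9, "happy": 0.6, "surprise": 0.6,
           "sad": 0.3, "fear": 0.3, "angry": 0.25, "disgust": 0.2}

def concentration_index(probs):
    """probs: dict mapping emotion -> probability (summing to 1)."""
    return sum(WEIGHTS[e] * probs.get(e, 0.0) for e in EMOTIONS)

print(concentration_index({"neutral": 0.7, "happy": 0.2, "sad": 0.1}))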
Dewan et al. explore various methods for learners' engagement detection and classify them into three main categories: automatic, semi-automatic, and manual [15]. Each category is further divided into audio, video, and text, depending on the type of data used for detection. The authors reviewed the automatic methods for engagement detection in more detail, as they found them more effective than other methods in the case of online learning platforms. They further discussed the challenges of all of these methods and future directions for how they could be used in more advanced ways.

In [16] a framework using computer vision to detect student engagement is proposed. Facial expressions, body language, and other cues are used to determine whether a student is paying attention and engaged with the material. The proposed model uses SVM (Support Vector Machines), Random Forest, neural networks, CNN (Convolutional Neural Networks), LSTM (Long Short-Term Memory), InceptionV3, and VGG16 for object recognition in video scenes to analyze students' facial expressions, body postures, and hand gestures. The data set considered for evaluating the model is very small, consisting of only 45 students. The number of features considered to detect engagement is also very small; it could be enhanced in the future for more accurate detection, with an increased number of labels like neutral, low engaged, and highly engaged instead of only the engaged or not engaged labels.

Ozdamli et al. review various algorithms and models for facial recognition in education [17]. For face detection and recognition, the paper mentions software like MATLAB and Python, utilizing techniques like PCA and 3WPCA-MD. Classification is done with diverse algorithms like SVM, Bayesian classifiers, or neural networks. When it comes to recognizing emotions from facial expressions, the paper discusses static models, Action Unit (AU) based models, and the Facial Action Coding System (FACS). Deep learning frameworks like CNNs are also increasingly used. To detect cheating in online exams, models like Multi-Class Markov Chain Latent Dirichlet Allocation and supervised dynamic Bayesian models are employed. Various datasets, like GI4E for gaze tracking and FEI for general face recognition tasks, are used in this work.

The authors investigate methods for evaluating teaching quality and student engagement in classrooms [18]. Traditional approaches, including tests and observations, are criticized for their limitations in providing comprehensive and real-time data. Attention shifts towards assessing student engagement, defined across behavior, cognition, emotion, and social interaction dimensions. Technological advancements, particularly in computer vision, offer non-invasive means to detect engagement through facial expression analysis and gaze tracking. Studies demonstrate high accuracy in predicting engagement levels using deep learning models. Overall, the review highlights a growing interest in leveraging technology to enhance classroom evaluation and improve teaching practices.

Proposed models define structures for representing learning behaviors in classrooms, enabling effective feature extraction [19]. Evaluation metrics demonstrate the efficacy of machine learning models in accurately detecting and categorizing student behaviors. More data sources could be added to the model in the future, and advances in the computer vision field, specifically in human pose estimation and Graph Neural Networks, could also be incorporated to increase the efficacy.

The authors of [20] use Multimodal Emotion Recognition in Multiparty Conversations (MERMC), which focuses more on audio and text while giving less weight to visual information. It incorporates a two-stage framework: facial expression-aware multimodal multi-task learning and a multimodal facial expression-aware emotion recognition model, which help in extracting faces and improving emotion recognition. The authors plan to leverage multimodal fusion mechanisms to improve the performance of this task in the future.
2.2 Conventional ML-based
Communication requires the use of facial expressions, which differ between individuals and cultures. Effective teaching requires a knowledge of students' emotions, especially with the growth of online learning brought on by COVID-19. Understanding facial expressions allows educators to modify their approaches and add interest to their lessons. While negative emotions might cause disengagement, positive emotions support academic progress.

The authors conducted a thorough review of emotion classification for facial emotion recognition [21]. It elaborates an analysis of the emotion classifiers and datasets used in FER. Different approaches considered by researchers for preprocessing and feature extraction are discussed, and the authors highlight the strengths and limitations of each approach. Their study revealed that deep learning is the most commonly used approach for FER in the academic arena, whereas the most used dataset and emotion classifier are DAiSEE and SVM, respectively.

One line of work distinguishes emotions like frustration, happiness, boredom, and confusion through emotional gestures. Sensors are used in this research, and measures of the degree of engagement are divided into three categories: single-sensor, multiple-sensor, and sensor-free methods.

The primary objective of the research in [23] is to explore reliable facial-information models that can describe how people interact in a learning environment. Different approaches for automated recognition of student engagement levels are studied. The authors state that engagement recognition will be more effective and varied in long-term learning situations rather than in short-term studies of current scenarios.

In [24] the authors highlight the growing interest in facial expression recognition and eye tracking for assessing and enhancing student engagement in digital learning environments. Various methods, including deep learning models and facial action coding systems, have been explored to measure concentration levels and emotional states. These approaches aim to provide real-time feedback to instructors, allowing for personalized adjustments to content delivery. Despite challenges such as dropout rates in virtual classrooms, ongoing research underscores the importance of understanding and addressing student engagement for effective digital education.

Dukic et al. discover connections between emotions, activities, and gender in online learning [25]. The authors analyzed two different perspectives: (1) classroom experiment-related and (2) FER data-related. To gather feedback on active teaching strategies, students' emotions are tracked as they complete programming assignments. The methodology in this research focused primarily on the activity portion; however, the variance due to age difference is not taken into consideration in this study. The authors plan to use more data to experiment with this model in the future, with better camera positioning and sticker conditions on the behavior of the participants.

The authors have designed a framework to recognize emotions and have organized it into three parts: the first is the face tracker, the second is the facial motion tracking optical flow algorithm, and the third is the recognition engine.

Fig. 3 Research design block diagram

This method proposed using a channel attention network with depthwise separable convolution to enhance the linear bottleneck structure. When tested on the FER2013 dataset, it outperforms other methods significantly, mainly because it pays more attention to extracting features, resulting in better accuracy in recognizing emotions [26].
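The building block described in [26] combines a depthwise-separable convolution with channel attention; a squeeze-and-excitation-style PyTorch sketch is given below, with layer sizes chosen for illustration rather than taken from the paper.

import torch
import torch.nn as nn

class DSConvChannelAttention(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        # Depthwise (per-channel) then pointwise (1x1) convolution
        self.depthwise = nn.Conv2d(channels, channels, 3, padding=1,
                                   groups=channels, bias=False)
        self.pointwise = nn.Conv2d(channels, channels, 1, bias=False)
        # Channel attention: squeeze to 1x1, excite back to channel weights
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())

    def forward(self, x):
        y = self.pointwise(self.depthwise(x))
        return y * self.attn(y)  # re-weight informative channels

x = torch.randn(1, 32, 48, 48)  # FER2013-sized feature maps, 32 channels
print(DSConvChannelAttention(32)(x).shape)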

Fig. 4 The proposed TLF-ResNet pipeline for facial expression recognition with a face detector

Zhang et al. developed an algorithm that can accurately detect students' engagement in online learning environments [22], using supervised learning on adaptive weighted local gray code patterns.

In the research work done by Gong et al. [30], high-definition video of classroom teaching is recorded using a camera at the front of the classroom. The faces of every student in the classroom are located and cropped from the sampled frame images using the AdaBoost algorithm, and the images are pre-processed to produce an expression area of 64 by 64 pixels. After PCA dimensionality reduction, Gabor and ULBPHS feature fusion is integrated with the KNN classification algorithm for expression classification. Finally, the assessment and results of the students' emotional learning are obtained.
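Condensed to its classical core, the pipeline of [30] is flattened 64x64 expression regions, PCA dimensionality reduction, and KNN classification; the sketch below follows that chain on random placeholder data and omits the Gabor and ULBPHS feature-fusion step.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
X = rng.random((120, 64 * 64))  # placeholder 64x64 expression crops, flattened
y = rng.integers(0, 5, 120)     # placeholder expression labels

model = make_pipeline(PCA(n_components=50),
                      KNeighborsClassifier(n_neighbors=5))
model.fit(X, y)
print(model.predict(X[:3]))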
In [27] the authors conducted a systematic review of existing frameworks for Facial Emotion Recognition (FER) and how they are used in classifying academic emotions, mainly in the context of online learning. The authors observed that low illumination, lack of frontal pose, and small datasets are some of the major hindrances to FER in e-learning. They suggested that long-term monitoring of facial emotions through wearable sensors, continuous video recording, and the exclusion of potential human biases could produce better accuracy for FER in online learning.

Alkabany et al. proposed a methodology that assesses students' degree of engagement in both traditional classroom settings and online learning environments [28]. The suggested framework records the user's video and follows their face as it moves through the frames. It can be used for tracking the development of e-learners with different levels of learning impairments and for analyzing the impact of nerve palsy on social interactions and facial expressions. Different features, like facial fiducial points, head pose, eye gaze, and learning features, are extracted from the video of the user's face based on the Facial Action Coding System (FACS), which decomposes facial expressions in terms of the fundamental actions of individual muscles or groups of muscles (i.e., action units). The student's behavioral engagement (i.e., willingness to participate in the learning process) and emotional engagement (i.e., attitude toward learning) are then measured using these decoded action units (AUs).
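To illustrate how decoded AUs might feed such scores, the sketch below combines a few action-unit intensities into behavioral and emotional engagement values; the AU choices and weights are our assumptions, not the calibration used in [28].

def engagement_scores(au_intensity):
    """au_intensity: dict such as {"AU4": 0.8, "AU12": 0.3}, values in [0, 1]."""
    g = au_intensity.get
    # Assumed mapping: brow lowerer (AU4) and lid tightener (AU7) accompany
    # concentration; lip-corner puller (AU12) signals positive affect and
    # jaw drop (AU26) may indicate yawning.
    behavioral = 0.5 * g("AU4", 0.0) + 0.5 * g("AU7", 0.0)
    emotional = g("AU12", 0.0) - g("AU26", 0.0)
    return behavioral, emotional

print(engagement_scores({"AU4": 0.6, "AU7": 0.4, "AU12": 0.3, "AU26": 0.1}))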
Emotional changes of 67 students during a lecture on information technology are studied in [29]. The software is developed using the Microsoft Emotion Recognition API and the C# programming language to categorize the feelings of students into disgust, sadness, happiness, fear, contempt, anger, and surprise. The significance of the correlation of the students' emotions with their departments, gender, lecture hours, the location of the computer in the classroom, lecture type, and session information is studied. Finally, the association between students' emotional changes and their achievements is analyzed to examine how the emotional recognition of students could contribute to increasing the overall quality of education.
3. Comparative Study

In this section, through Table 1, we summarize the findings of this work. It is a concise tabular depiction of the brief survey in section 2. Table 1 has four columns of data: the work's goal is succinctly stated in the first column; the algorithm or method employed is indicated in the second column; the work's limitations are shown in the third column; and a summary of the work's future directions is included in the fourth column. At a single glance, this table represents the significance of this whole work.

4. Future Work

We investigated potential approaches for engagement detection in a learning environment. While computer vision-based approaches show promise in engagement detection, they are not without limitations. For computer vision-based methods, automatically collecting and analysing behavioural data in naturalistic scenarios remains a challenge. For instance, it is difficult for current algorithms to analyse facial occlusions and head motion. Data loss results from these algorithms' inability to extract features from certain video segments in such scenarios. Due to segmentation error, extracting robust features from the region of interest presents another difficulty.

While facial expression analysis has received a lot of attention, these efforts involve other difficulties besides technical ones. There are currently very few online datasets that can be utilised for engagement detection in the online learning context. Nevertheless, the significance of such datasets has been acknowledged, and an increasing number of researchers are focusing on producing them and making them accessible to the general public. Researchers face three main challenges when creating datasets for facial expression-based engagement detection. First, numerous studies have indicated that it is difficult to pinpoint the relationship between a given set of facial expressions and specific learning activities (such as reading, writing, taking part in online meetings, and watching online video tutorials). Second, the number of affective states or engagement levels (or types) that works best for identifying an online learner, when precise engagement discrimination is required, is not sufficiently understood. Third, the frequency at which affective states ought to be reported in an input video presents another potential hazard: how often the decision about engagement should be made (frame by frame, for a brief segment of a video, or for the entire video clip) is not made clear enough, and in the case of a brief segment, it is unclear what the appropriate duration of a video clip is to designate a single level.

It is also not clear what the exact standard should be for determining which emotions a learner is actually experiencing when labelling training data. Which judges should label the data: those with training, or the students themselves? Despite the fact that trained judges had the highest inter-rater reliability, this could just be an artefact of the training. Furthermore, it is unclear what environmental restrictions must be taken into account when filming videos in order to detect engagement in the given context of online learning. Numerous studies also stressed the significance of conducting additional research to determine the exact relationship between engagement detection results and task performance.

By tackling the aforementioned issues, we can further the study and creation of automatic engagement detection in a computerised learning environment, which will improve student engagement and learning effectiveness.
Table 1. Comparative study of the surveyed works. Each entry lists the work's objective, the algorithm or method employed, its limitations, and its future scope; for entries 9-18, the corresponding reference is given in brackets.

1. Objective: Non-verbal behavioural pattern classification of e-learners. Algorithm: feed-forward MLP with MSE. Limitation: question type, gender, and demographic variables are not considered for classification. Scope: NVB modelling and behaviour labelling can be improved using deep learning techniques.

2. Objective: Detecting learning affect in an e-learning platform using facial emotion expression. Algorithm: OpenCV, Haar cascades, CNN. Limitation: not stated. Scope: fuzzy rough theoretical methods for rule generation and feature selection would be developed.

3. Objective: Estimating student learning affect using facial emotions. Algorithm: CNN, FER2013 dataset. Limitation: using two facial-emotion predictions for learning affect detection is more ideal than using one or three predictions. Scope: include the use of multi-modal pattern analysis, such as body expression, eye gaze, and head movements, to achieve a more accurate result.

4. Objective: EAC-Net: deep nets with enhancing and cropping for facial action unit detection. Algorithm: CNN; BP4D and DISFA AU datasets. Limitation: automatic generation of attention maps. Scope: finding more responsive areas for the enhancing and cropping nets rather than manually locating the positions, as at present.

5. Objective: Classifying emotions and engagement in online learning based on a single facial expression recognition neural network. Algorithm: CNN, AffectNet. Limitation: the quality of the engagement prediction engine needs improvement. Scope: 1) predict arousal and valence in addition to facial expressions; 2) face clustering.

6. Objective: A methodology to predict e-learners' concentration. Algorithm: recurrent neural networks, long short-term memory. Limitation: 1) applied only to well-structured process models; 2) applied in a controlled environment. Scope: 1) automation of the process; 2) testing in a real environment; 3) expanding the model to CNNs and TCNs.

7. Objective: Implementation of a deep-learning-based facial image analysis model to estimate the learning effect and to reflect the level of student engagement. Algorithm: temporal relational network, MLP, FER. Limitation: deep neural networks struggle to learn long-term temporal dependencies between frames. Scope: combine the TRN technique with other modalities, such as audio, and with other datasets.

8. Objective: Proposes a method for accurate facial expression recognition using a lightweight deep learning model. Algorithm: stacked sparse auto-encoder (SSAE). Limitation: limited to training on CPU-based machines, which is why training took a longer time. Scope: use a framework that supports GPUs, which will improve the training time.

9 [21]. Objective: A comprehensive overview of the research on FER in online learning. Algorithm: SVM, CNN, KNN, DNN, and LSTM. Limitation: 1) high memory and sophisticated computing requirements; 2) low illumination and a lack of frontal pose. Scope: explore approaches for long-term monitoring of facial emotions, such as wearable sensors or continuous video recording.

10 [22]. Objective: An algorithm that can accurately detect students' engagement in online learning environments. Algorithm: Adaptive Weighted Local Gray Code Patterns (LGCP). Limitation: many one-screen learning pages in which students need not scroll to update content. Scope: dealing with short learning content pages.

11 [23]. Objective: Develop approaches for the automatic recognition of student engagement from facial expressions. Algorithm: CNNs. Limitation: limitations of short-term laboratory studies. Scope: focusing on long-term learning situations.

12 [24]. Objective: To investigate student engagement levels in the context of online learning through the analysis of facial behaviour. Algorithm: LSTM. Limitation: limited data set and computational power. Scope: merging the information currently provided by the system with other information.

13 [25]. Objective: To develop deep learning models for real-time facial expression recognition (FER) in the context of active teaching. Algorithm: CNNs. Limitation: the selection is limited only in the aspect of the age range. Scope: collect more data for learning the models, and re-conduct the classroom experiment with better camera positioning.

14 [9]. Objective: Analyze online lecture videos and detect students' engagement levels and emotions. Algorithm: CNNs. Limitation: camera orientation, obstructions and lighting conditions, engagement detection from profile pictures. Scope: they aim to conduct a user study in the future and resolve the limitations.

15 [26]. Objective: The development of a robust and accurate system for classifying human facial emotions. Algorithm: ResNet18, triplet loss function. Limitation: overfitting, network complexity, linear bottleneck structure, complex facial structures. Scope: multimodal approaches that combine image- and video-based methodologies.

16 [27]. Objective: A systematic literature review on the use of Facial Expression Recognition (FER) systems in the classification of academic emotions. Algorithm: SVM. Limitation: low illumination, lack of frontal pose, small number of dataset samples. Scope: 1) long-term monitoring of facial emotions; 2) privacy, consent, and potential biases.

17 [10]. Objective: Detecting engagement levels and emotions of online learners using facial expression recognition. Algorithm: CNNs. Limitation: emotion recognition accuracy; ethical and privacy concerns. Scope: group engagement detection and evaluation of the valence and arousal of the group.

18 [11]. Objective: An autonomous monitoring system using facial expression recognition and gamification methods to support the learning process with a blended learning model. Algorithm: UML, ANN, CNN. Limitation: facial expression recognition accuracy; generalization to different contexts. Scope: 1) developing an autonomous monitoring system; 2) implementation using cloud computing; 3) testing facial expression recognition and gamification.
References

1. Mike Holmes, Annabel Latham, Keeley Crockett, and James D. O'Shea, "Near real-time comprehension classification with artificial neural networks: decoding e-Learner non-verbal behaviour".
2. Sukrit Bhattacharya, Vaibhav Shaw, Pawan Kumar Singh, Ram Sarkar, and Debotosh Bhattacharjee, "SV-NET: A Deep Learning Approach to Video Based Human Activity Recognition".
3. Benisemeni Esther Zakka and Hina Vadapalli, "Detecting Learning Affect in E-Learning Platform Using Facial Emotion Expression".
4. Wei Li, Farnaz Abtahi, Zhigang Zhu, and Lijun Yin, "EAC-Net: Deep Nets with Enhancing and Cropping for Facial Action Unit Detection".
5. Andrey V. Savchenko, Lyudmila V. Savchenko, and Ilya Makarov, "Classifying Emotions and Engagement in Online Learning Based on a Single Facial Expression Recognition Neural Network".
6. Young-Sang Jeong and Nam-Wook Cho, "Evaluation of e-learners' concentration using recurrent neural networks".
7. Anil Pise, Hima Vadapalli, and Ian Sanders, "Facial emotion recognition using temporal relational network: an application to E-Learning".
8. Mubashir Ahmad, Saira, Omar Alfandi, Asad Masood Khattak, Syed Furqan Qadri, Iftikhar Ahmed Saeed, Salabat Khan, Bashir Hayat, and Arshad Ahmad, "Facial expression recognition using lightweight deep learning modeling".
9. Jennifer Xin-Ying Lek and Jason Teo, "Academic Emotion Classification Using FER: A Systematic Review".
10. Zhaoli Zhang, Zhenhua Li, Hai Liu, Taihe Cao, and Sannyuya Liu, "Data-driven Online Learning Engagement and Mouse Behavior Recognition Technology".
11. Jacob Whitehill, Zewelanji Serpell, Yi-Ching Lin, Aysha Foster, and Javier R. Movellan, "The Faces of Engagement: Automatic Recognition of Student Engagement from Facial Expressions".
12. Unqua Laraib, Arslan Shaukat, Rizwan Ahmed Khan, Zartasha Mustansar, Muhammad Usman Akram, and Umer Asgher, "Recognition of Children's Facial Expressions Using Deep Learned Features".
13. David Dukic and Ana Sovic Krzic, "Real-Time Facial Expression Recognition Using Deep Learning with Application in the Active Classroom Environment".
14. Mohammad Nehal Hasnine, Huyen T. T. Bui, Thuy Thi Thu Tran, Ho Tran Nguyen, Gokhan Akcapinar, and Hiroshi Ueda, "Students' emotion extraction and visualization for engagement detection in online learning".
15. Irfan Haider, Hyung-Jeong Yang, Guee-Sang Lee, Soo-Hyung Kim, and Wataru Sato, "Robust Human Face Emotion Classification Using Triplet-Loss-Based Deep CNN Features and SVM".
16. Jeniffer Xin-Ying Lek and Jason Teo, "Academic Emotion Classification Using FER: A Systematic Review".
17. Michael Moses Thiruthuvanathan, Balachandran Krishnan, and Madhavi Rangaswamy, "Engagement Detection Through Facial Emotional Recognition Using a Shallow Residual Convolutional Neural Networks".
18. Indra Kurniawan and Yeffry Handoko Putra, "Autonomous Monitoring with Facial Expression Recognition and Gamification to Support Blended Learning Model".
19. Islam Alkabany, Asem Ali, Amal Farag, Ian Bennett, Mohamad Ghanoum, and Aly Farag, "Measuring Student Engagement Level Using Facial Information".
20. Guary Tonguc and Betul Ozaydin Ozkara, "Automatic Recognition of Student Emotions from Facial Expressions during a Lecture".
21. Bing Gong and Jing Wei, "Quantitative Analysis of Facial Expression Recognition in Classroom Teaching Based on FACS and KNN Classification Algorithm".
22. Unqua Laraib, Arslan Shaukat, Rizwan Ahmed Khan, Zartasha Mustansar, Muhammad Usman Akram, and Umer Asgher, "Recognition of Children's Facial Expressions Using Deep Learned Features".
23. K. Keerthana, D. Pradeep, and B. Vanathi, "Learner's Engagement Analysis for E-Learning Platform".
24. Ati Jain, Hare Ram Sah, and Harsha Atre, "Student's Emotion Recognition through Facial Expressions during E-Learning using Fuzzy Logic and CNN classification".
25. M. Ali Akber Dewan, Mahbub Murshed, and Fuhua Lin, "Engagement detection in online learning: a review".
26. Sana Ikram, Haseeb Ahmad, Nasir Mahmood, C. M. Nadeem Faisal, Qaisar Abbas, Imran Qureshi, and Ayyaz Hussain, "Recognition of Student Engagement State in a Classroom Environment Using Deep and Efficient Transfer Learning Algorithm".
27. Fezile Ozdamli, Aayat Alijarrah, Damla Karagozlu, and Mustafa Ababneh, "Facial Recognition System to Detect Student Emotions and Cheating in Distance Learning".
28. Yi Chen, Jin Zhou, Qiating Gao, Jing Gao, and Wei Zhang, "MDNN: Predicting Student Engagement via Gaze Direction and Facial Expression in Collaborative Learning".
29. Nha Tran, Hung Nguyen, Hien Luong, Minh Nguyen, Khiet Luong, and Huy Tran, "Recognition of Student Behaviour through Actions in the Classroom".
30. Benyoussef Abdellaoui, Aniss Moumen, Younes Elbouzekri El Idrissi, and Ahmed Remaida, "Face Detection to Recognize Students Emotion and Their Engagement: A Systematic Review".
