
Special Issue - 2021 International Journal of Engineering Research & Technology (IJERT)

ISSN: 2278-0181
NCREIS - 2021 Conference Proceedings

Intelligent Student Feedback System for Online Education

Hari Krishnan
Dept. of Computer Science and Engineering
Christ College of Engineering, Irinjalakuda, Thrissur, India

Liya K.V
Dept. of Computer Science and Engineering
Christ College of Engineering, Irinjalakuda, Thrissur, India

Lazar Tony
Dept. of Computer Science and Engineering
Christ College of Engineering, Irinjalakuda, Thrissur, India

Nova Mary Thomas
Dept. of Computer Science and Engineering
Christ College of Engineering, Irinjalakuda, Thrissur, India

Remya K Sasi
Dept. of Computer Science and Engineering
Christ College of Engineering, Irinjalakuda, Thrissur, India

Abstract—Nowadays, deep learning techniques are gaining great success in various fields, including computer vision. Indeed, a convolutional neural network (CNN) model can be trained to analyze images and identify facial emotion. Our project aims to create a system that recognizes students' emotions from their faces. Our system consists of four phases: face detection using MTCNN, normalization, emotion recognition using a CNN trained on the FER 2013 database, and calculation of a concentration metric from seven types of expressions. The obtained results show that facial emotion recognition is feasible in education; consequently, it can help teachers modify their presentation according to the students' emotions.

Keywords—Student facial expression; Emotion recognition; Convolutional neural networks (CNN); Deep learning; Intelligent student feedback system.

I. INTRODUCTION
Learning is an exciting adventure in which both the teacher and the students participate. The participation of the student is essential to improving the quality of education, and it is very important for teachers to receive live feedback from students so that they can adjust their pedagogy and proceed with their classes effectively. In a conventional scenario, a teacher takes this input from the facial and body expressions of their students, but this is not possible in an online scenario.

The face is the most expressive and communicative part of a person's being. Facial expression recognition identifies emotion from a face image; it is a manifestation of the activity and personality of a person. According to diverse research, emotion plays an important role in education. The establishment of a learning-emotion recognition model to guide online education can improve not only the quality of teaching but also the real-time nature of information transmission, which is of great significance for the construction of learner-oriented teaching and the creation of personalized learning. Moreover, the general learning mood of learners also reflects the teaching quality of the instructors. Learning the emotions of learners in online education has therefore become an important indicator for assessing the teaching quality of instructors.

With the development of computer vision in recent years, the accuracy of facial expression recognition based on face detection has continuously improved, and it has become easy to observe the students' reaction to a particular topic being taught by the instructor.

The purpose of our project is to implement emotion recognition in education by realizing an automatic system that analyzes students' facial expressions using Facial Emotion Recognition (FER), a deep learning approach widely used for facial emotion detection and classification. It is a Convolutional Neural Network model that performs multi-stage image processing to extract feature representations. Our system includes four phases: face detection, normalization, emotion recognition and calculation of a concentration metric. There are seven emotions under consideration: neutral, anger, fear, sadness, happiness, surprise and disgust.

II. EXISTING WORKS
The majority of student feedback systems use FER. FER systems can be classified into two main categories:

Volume 9, Issue 13 Published by, www.ijert.org 143



emotion prediction from extracted facial features, and facial emotion recognition directly from facial images. Currently, CNN is the most widely used method for FER, followed by SVM, FNN, HMM, binary classifiers and other forms of neural networks. CNN offers flexibility of modification and opens opportunities for further researchers to develop new recognition methods from CNN variants.

III. PROPOSED WORK
We implement a proof of concept, the "Intelligent student feedback system", consisting of two interfaces: a student interface and a faculty interface. The student interface handles content delivery, emotion recognition and the calculation of the concentration metric. The faculty interface enables users to upload content, integrates the individual metric of each student and, most importantly, provides user-friendly data visualization.

IV. TECHNOLOGY
The project is implemented in Python 3.8.9. Higher versions cannot be used because the current version of the TensorFlow framework does not support them, and versions below 3.5 cannot be used because the PySide2 module does not support them. The GUI for the project is implemented using PySide2. Frameworks such as OpenCV, TensorFlow and Keras are used to handle the web camera and the neural networks. SQLite3 is used as the database.

A. Python
Python is an interpreted, high-level, general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant indentation. Its language constructs as well as its object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects. Python is dynamically typed and garbage-collected. It supports multiple programming paradigms, including structured (particularly procedural), object-oriented and functional programming. Python is often described as a "batteries included" language due to its comprehensive standard library. All Python releases are open source.

B. PySide2
PySide2 is the official Python module from the Qt for Python project, which provides access to the complete Qt 5.12+ framework. Qt for Python is available under LGPLv3/GPLv2 and commercial licenses. Qt is a cross-platform application development framework for desktop, embedded and mobile. Supported platforms include Linux, OS X, Windows, VxWorks, QNX, Android, iOS, BlackBerry, Sailfish OS and others. Qt is not a programming language on its own; it is a framework written in C++. A preprocessor, the MOC (Meta-Object Compiler), is used to extend the C++ language with features like signals and slots. Before the compilation step, the MOC parses the source files written in Qt-extended C++ and generates standard-compliant C++ sources from them. Thus the framework itself, and applications and libraries using it, can be compiled by any standard-compliant C++ compiler such as Clang, GCC, ICC, MinGW and MSVC. Qt is available under various licenses: The Qt Company sells commercial licenses, but Qt is also available as free software under several versions of the GPL and the LGPL. The latest version of PySide, PySide6 for Qt 6, was not used because several features essential to this project are not available in it.

C. OpenCV-Python
OpenCV (Open Source Computer Vision Library, http://opencv.org) is an open-source library that includes several hundred computer vision algorithms. It is a C++ API. OpenCV 4.5.0 and higher versions are licensed under the Apache 2 License; OpenCV 4.4.0 and lower versions, including OpenCV 3.x, 2.x and 1.x, are licensed under the 3-clause BSD license. OpenCV-Python is the Python API for OpenCV, combining the best qualities of the OpenCV C++ API and the Python language. Compared to languages like C/C++, Python is slower. That said, Python can be easily extended with C/C++, which allows us to write computationally intensive code in C/C++ and create Python wrappers that can be used as Python modules. This gives us two advantages: first, the code is as fast as the original C/C++ code (since it is the actual C++ code working in the background) and second, it is easier to code in Python than in C/C++. OpenCV-Python is a Python wrapper around the original OpenCV C++ implementation. OpenCV-Python makes use of NumPy, a highly optimized library for numerical operations with a MATLAB-style syntax. All OpenCV array structures are converted to and from NumPy arrays. This also makes it easier to integrate with other libraries that use NumPy, such as SciPy and Matplotlib. The opencv-python package is available under the MIT license.

D. TensorFlow
TensorFlow is an end-to-end open-source platform for machine learning. Its flexible architecture allows easy deployment of computation across a variety of platforms (CPUs, GPUs, TPUs), and from desktops to clusters of servers to mobile and edge devices. Originally developed by researchers and engineers from the Google Brain team within Google's AI organization, it comes with strong support for machine learning and deep learning, and its flexible numerical computation core is used across many other scientific domains. TensorFlow provides stable Python and C++ APIs, as well as a non-guaranteed backward-compatible API for other languages. TensorFlow is cross-platform: it runs on nearly everything, including GPUs and CPUs (mobile and embedded platforms among them) and even tensor processing units (TPUs), which are specialized hardware for tensor math. The TensorFlow distributed execution engine abstracts away the many supported devices and provides a high-performance core implemented in C++ for the TensorFlow platform. On top of that sit the Python and C++ frontends. The Layers API provides a simpler interface for commonly used layers in deep learning models. On top of that sit


higher-level APIs, including Keras (more on the Keras.io site) and the Estimator API, which makes training and evaluating distributed models easier. It was released under the Apache License 2.0.

E. Keras
Keras is an open-source software library that provides a Python interface for artificial neural networks. Keras acts as an interface for the TensorFlow library. It provides essential abstractions and building blocks for developing and shipping machine learning solutions with high iteration velocity. Keras contains numerous implementations of commonly used neural network building blocks such as layers, objectives, activation functions and optimizers, along with a host of tools that make working with image and text data easier, simplifying the code needed for deep neural networks. In addition to standard neural networks, Keras has support for convolutional and recurrent neural networks. It supports other common utility layers like dropout, batch normalization and pooling. Keras allows users to productize deep models on smartphones (iOS and Android), on the web, or on the Java Virtual Machine. It also allows distributed training of deep-learning models on clusters of graphics processing units (GPUs) and tensor processing units (TPUs).

F. SQLite
SQLite is a relational database management system (RDBMS) contained in a C library that implements a self-contained, serverless, zero-configuration, transactional SQL database engine. SQLite is an embedded SQL database engine. Unlike most other SQL databases, SQLite does not have a separate server process; SQLite reads and writes directly to ordinary disk files. A complete SQL database with multiple tables, indices, triggers and views is contained in a single disk file. The database file format is cross-platform: one can freely copy a database between 32-bit and 64-bit systems or between big-endian and little-endian architectures. These features make SQLite a popular choice as an application file format. SQLite database files are a recommended storage format by the US Library of Congress. SQLite3 can be integrated with Python using the sqlite3 module, written by Gerhard Häring, which provides an SQL interface compliant with the DB-API 2.0 specification described by PEP 249.

V. IMPLEMENTATION
The project consists of two graphical interfaces (a student interface and a dashboard), a database, and two neural networks: MTCNN and a custom Keras model. The student interface allows a student to log in and attend the lectures. While the lecture is being delivered, the webcam captures images of the student, from which the student's face is recognized and cropped with the help of the MTCNN model. This face is given as input to the custom Keras model, which predicts probability scores for seven emotions, namely sad, happy, disgust, surprise, fear, neutral and anger. A weighted average of these values is obtained by multiplying the corresponding confidence scores with the predefined weight for that emotion.

Fig. 1. Flowchart

These predefined weights signify the relation of each emotion to the level of concentration. The scores are stored in the database. At the end of the lecture a set of predefined questions pops up on the screen, and the student responses are also stored in the database. The dashboard visualizes the corresponding analysis of each lecture. For a given lecture, the whole lecture is divided into segments of 2 seconds. The average scores of every student for these segments are taken, and these values are in turn averaged to get a single score for every segment.

A. Graphical Interfaces
1) Student Interface: The student interface consists of a QFrame for login purposes, a QListView for the video list, a QVideoWidget for displaying the lecture, a QLabel for the camera preview, two QPushButtons for play and pause, and a QListWidget for displaying the frame details. The login frame has two QLineEdit objects and two QPushButtons. Only after successful login are the other widgets activated. On clicking an item in the video list, the corresponding lecture is displayed on the video widget and the live preview of the camera is displayed on the QLabel.

Fig. 2. Student interface

2) Dashboard: The dashboard is a graphical interface for faculty which provides at-a-glance views of student understanding levels relevant to each video in a user-friendly manner.
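The weighted-average and segment-averaging steps described above can be sketched as follows. Note that the emotion weights and the frame rate here are illustrative placeholders (the paper does not publish its weight table or capture rate), not the values used by the authors.

```python
import numpy as np

# Hypothetical weights relating each emotion to concentration;
# the paper keeps its actual weight table unspecified.
EMOTION_WEIGHTS = {
    "neutral": 0.9, "happy": 0.7, "surprise": 0.6,
    "sad": 0.3, "fear": 0.3, "disgust": 0.2, "anger": 0.25,
}
EMOTIONS = list(EMOTION_WEIGHTS)

def concentration_score(probs):
    """Weighted average of the seven softmax scores for one frame."""
    weights = np.array([EMOTION_WEIGHTS[e] for e in EMOTIONS])
    probs = np.asarray(probs, dtype=float)
    return float(np.dot(probs, weights) / probs.sum())

def segment_scores(frame_scores, fps=15, seconds=2):
    """Average per-frame concentration scores over fixed 2-second segments."""
    size = fps * seconds
    return [float(np.mean(frame_scores[i:i + size]))
            for i in range(0, len(frame_scores), size)]
```

Per-student scores produced this way would then be stored per segment and averaged across students for the dashboard, as the text describes.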


Fig. 3. Faculty dashboard

3) Database: The database consists of five tables:

a) Student: to store the details of students
b) Video: to store the details of lectures
c) Question: to store the questions for each lecture
d) Frame data: to store the emotion scores of students with respect to the frames of a video lecture
e) Answer: to store the student responses to each question

Fig. 4. Database

B. Neural Networks
4) MTCNN: Facial detection is a technique used by computer algorithms to detect a person's face in images. Accordingly, the objective of facial detection is to extract different features of human faces from images. Even though there are many face detection classifiers, we have used MTCNN. MTCNN (Multi-task Cascaded Convolutional Neural Networks) is an algorithm consisting of three stages, which detects the bounding boxes of faces in an image along with their 5-point face landmarks.

Fig. 5. MTCNN

a) Stage 1: The Proposal Network (P-Net). This first stage is a fully convolutional network (FCN). The difference between a CNN and an FCN is that a fully convolutional network does not use a dense layer as part of the architecture. This Proposal Network is used to obtain candidate windows and their bounding box regression vectors. Bounding box regression is a popular technique to predict the localization of boxes when the goal is detecting an object of some predefined class, in this case faces. After obtaining the bounding box vectors, some refinement is done to combine overlapping regions. The final output of this stage is all candidate windows after refinement, which downsizes the volume of candidates.

b) Stage 2: The Refine Network (R-Net). All candidates from the P-Net are fed into the Refine Network. Notice that this network is a CNN, not an FCN like the one before, since there is a dense layer at the last stage of the network architecture. The R-Net further reduces the number of candidates, performs calibration with bounding box regression and employs non-maximum suppression (NMS) to merge overlapping candidates. The R-Net outputs whether the input is a face or not, a 4-element vector which is the bounding box for the face, and a 10-element vector for facial landmark localization.

c) Stage 3: The Output Network (O-Net). This stage is similar to the R-Net, but the Output Network aims to describe the face in more detail and outputs the positions of the five facial landmarks for the eyes, nose and mouth. The detector returns a list of JSON objects. Each JSON object contains three main keys: 'box', 'confidence' and 'keypoints'. The bounding box is formatted as [x, y, width, height] under the key 'box'. The confidence is the probability that a bounding box matches a face. The keypoints are formatted as a JSON object with the keys 'left_eye', 'right_eye', 'nose', 'mouth_left' and 'mouth_right'; each keypoint is identified by a pixel position (x, y).

We tested four algorithms (MTCNN, Dlib, OpenCV DNN, OpenCV Haar) on the same video and compared them. After the analysis we observed that MTCNN gave the greatest number of correct face detections; this superior accuracy was the reason for selecting MTCNN.
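The detector output described above can be consumed as plain Python dictionaries. The sketch below selects and crops the highest-confidence face from a frame, assuming detections in the ['box', 'confidence', 'keypoints'] format just described; the detection dict and the 0.9 threshold here are illustrative, not values taken from the paper.

```python
import numpy as np

def crop_best_face(frame, detections, min_confidence=0.9):
    """Crop the highest-confidence face from an H x W x 3 frame.

    `detections` is a list of dicts in the detector's format:
    'box' as [x, y, width, height] plus a float 'confidence'.
    Returns None when no detection clears the threshold.
    """
    candidates = [d for d in detections if d["confidence"] >= min_confidence]
    if not candidates:
        return None
    best = max(candidates, key=lambda d: d["confidence"])
    x, y, w, h = best["box"]
    # Clamp the origin in case the box extends past the frame border.
    x, y = max(x, 0), max(y, 0)
    return frame[y:y + h, x:x + w]
```

The returned crop (after normalization) is what would be fed to the emotion-recognition network.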


5) Custom Keras Model: We use a Keras model for facial emotion recognition. The faces from the MTCNN model are used as input to this network. The model consists of several 2D convolutional layers with the ReLU activation function and max pooling. Batch normalization is used to stabilize the learning process and dramatically reduce the number of training epochs required to train the deep network. The softmax function is used as the last activation function of the network to normalize the output into a probability distribution over the seven predicted output classes. This network gives a set of predicted confidence scores for the seven emotion classes.
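A minimal model in the spirit of this description might look as follows. The layer counts, filter sizes and 48x48 grayscale input (the FER 2013 image size) are illustrative assumptions; the paper does not list the exact architecture.

```python
from tensorflow.keras import layers, models

def build_emotion_model(input_shape=(48, 48, 1), num_classes=7):
    """Stacked Conv2D/ReLU blocks with batch normalization and max
    pooling, ending in a softmax head over the seven emotion classes."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

The softmax output of such a network is the vector of seven confidence scores that feeds the concentration metric.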
VI. EXPERIMENTAL RESULT
The project was evaluated by 12 different volunteers; 7 out of the 12 affirmed the predictions of our system. The major factor that accounted for the inaccuracy of the system for the other students was their different baseline emotions: the effect of a given emotion on the level of concentration differed from student to student. A student with a generally sad face always received a low concentration score no matter how concentrated he was. To overcome this error, we can compare the questionnaire scores with the predicted scores and then make slight changes to the emotion weights for that particular student. If there is a significant difference between these two scores, we adjust the weights of the emotions to produce accurate results. The alterations in weights are done separately for each individual student.

Fig. 6. Student interface: neutral

Fig. 7. Student interface: happy

VII. ACKNOWLEDGMENT
This project was realized as part of the B.Tech. project. The authors acknowledge the support of Dr. Remya K. Sasi, HOD, Department of Computer Science and Engineering, Christ College of Engineering.

VIII. CONCLUSION
Emotions and the acquisition of knowledge are inextricably intertwined. The establishment of a learning-emotion recognition model to guide online education can improve the quality of teaching and leads to the construction of learner-oriented teaching that creates personalized learning. We have modelled a system with wide scope for carrying the classroom feedback system over to virtual classrooms. Though this is not a foolproof solution, we hope that this project will serve as a pioneer and guideline for future development in this field.
