Intelligent Student Feedback System for
ISSN: 2278-0181
NCREIS - 2021 Conference Proceedings
Remya K Sasi
Dept. of Computer Science and Engineering
Christ College of Engineering,
Irinjalakuda, Thrissur, India
Abstract—Nowadays, deep learning techniques are achieving great success in various fields, including computer vision. Indeed, a convolutional neural network (CNN) model can be trained to analyze images and identify facial emotion. Our project aims to create a system that recognizes students' emotions from their faces. Our system consists of four phases: face detection using MTCNN, normalization, emotion recognition using a CNN trained on the FER-2013 database, and calculation of a concentration metric over seven types of expressions. The obtained results show that facial emotion recognition is feasible in education; consequently, it can help teachers modify their presentation according to the students' emotions.

Keywords—Student facial expression; Emotion recognition; Convolutional neural networks (CNN); Deep learning; Intelligent student feedback system.

I. INTRODUCTION

Learning is an exciting adventure in which both the teacher and the students participate. The participation of the student is essential to improving the quality of education, and it is very important for teachers to receive live feedback from students so that they can adjust their pedagogy and conduct their classes effectively. In a conventional scenario, a teacher takes this input from the facial and body expressions of their students, but this is not possible in an online scenario.

The face is the most expressive and communicative part of a person's being. Facial expression recognition identifies emotion from a face image; expression is a manifestation of the activity and personality of a person. According to diverse research, emotion plays an important role in education. The establishment of a learning-emotion recognition model to guide online education can improve not only the quality of teaching but also the real-time nature of information transmission, which is of great significance for the construction of learner-oriented teaching and the creation of personalized learning. Moreover, the general learning mood of learners also reflects the teaching quality of the instructors. Learning the emotions of learners in online education has therefore become an important indicator for assessing the teaching quality of instructors.

With the development of computer vision in recent years, the accuracy of facial expression recognition based on face detection has continuously improved, and it has become easy to observe the students' reaction to a particular topic being taught by the instructor.

The purpose of our project is to implement emotion recognition in education by realizing an automatic system that analyzes students' facial expressions using Facial Emotion Recognition (FER), a deep learning approach widely used in facial emotion detection and classification. It is a convolutional neural network model that performs multi-stage image processing to extract feature representations. Our system includes four phases: face detection, normalization, emotion recognition and calculation of a concentration metric. There are seven emotions under consideration: neutral, anger, fear, sadness, happiness, surprise and disgust.

II. EXISTING WORKS

The majority of student feedback systems use FER. FER systems can be classified into two main categories:
(1) emotion prediction from extracted facial features, and (2) facial emotion recognition directly from the facial images. Currently, CNN is the most widely used method for FER, followed by SVM, FNN, HMM, binary classifiers and other forms of neural networks. CNN offers flexibility of modification and opens opportunities for further researchers to develop new recognition methods through CNN modification.

III. PROPOSED WORK

The goal is to implement a proof of concept "Intelligent student feedback system" consisting of two interfaces: a student interface and a faculty interface. The student interface deals with content delivery, emotion recognition and the calculation of the concentration metric. The faculty interface enables users to upload content, integrates the individual metric of each student and, most importantly, provides user-friendly data visualization.

IV. TECHNOLOGY

The project is implemented in Python 3.8.9. Versions above 3.8.9 cannot be used because the current version of the TensorFlow framework does not support higher versions of Python, and versions below 3.5 cannot be used because the PySide2 module does not support them. The GUI for the project is implemented using PySide2. Frameworks such as OpenCV, TensorFlow and Keras are used to handle the web camera and the neural networks. SQLite3 is used as the database.

A. Python

Python is an interpreted, high-level, general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant indentation. Its language constructs as well as its object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects. Python is dynamically typed and garbage-collected. It supports multiple programming paradigms, including structured (particularly procedural), object-oriented and functional programming. Python is often described as a "batteries included" language due to its comprehensive standard library. All Python releases are open source.

B. PySide2

PySide2 is the official Python module from the Qt for Python project, which provides access to the complete Qt 5.12+ framework. Qt for Python is available under LGPLv3/GPLv2 and commercial licenses. Qt is a cross-platform application development framework for desktop, embedded and mobile. Supported platforms include Linux, OS X, Windows, VxWorks, QNX, Android, iOS, BlackBerry, Sailfish OS and others. Qt is not a programming language on its own; it is a framework written in C++. A preprocessor, the MOC (Meta-Object Compiler), is used to extend the C++ language with features like signals and slots. Before the compilation step, the MOC parses the source files written in Qt-extended C++ and generates standard-compliant C++ sources from them. Thus the framework itself, and applications and libraries using it, can be compiled by any standard-compliant C++ compiler such as Clang, GCC, ICC, MinGW and MSVC. Qt is available under various licenses: The Qt Company sells commercial licenses, but Qt is also available as free software under several versions of the GPL and the LGPL. The latest version of PySide, PySide6 for Qt6, was not used because several features essential for this project are not available in it.

C. OpenCV-Python

OpenCV (Open Source Computer Vision Library: http://opencv.org) is an open-source library that includes several hundred computer vision algorithms. It is a C++ API. OpenCV 4.5.0 and higher versions are licensed under the Apache 2 License. OpenCV 4.4.0 and lower versions, including OpenCV 3.x, OpenCV 2.x and OpenCV 1.x, are licensed under the 3-clause BSD license. OpenCV-Python is the Python API for OpenCV, combining the best qualities of the OpenCV C++ API and the Python language. Compared to languages like C/C++, Python is slower. That said, Python can be easily extended with C/C++, which allows us to write computationally intensive code in C/C++ and create Python wrappers that can be used as Python modules. This gives us two advantages: first, the code is as fast as the original C/C++ code (since it is the actual C++ code working in the background), and second, it is easier to code in Python than in C/C++. OpenCV-Python is a Python wrapper around the original OpenCV C++ implementation. OpenCV-Python makes use of NumPy, which is a highly optimized library for numerical operations with a MATLAB-style syntax. All the OpenCV array structures are converted to and from NumPy arrays. This also makes it easier to integrate with other libraries that use NumPy, such as SciPy and Matplotlib. The opencv-python package is available under the MIT license.

D. TensorFlow

TensorFlow is an end-to-end open-source platform for machine learning. Its flexible architecture allows easy deployment of computation across a variety of platforms (CPUs, GPUs, TPUs), and from desktops to clusters of servers to mobile and edge devices. Originally developed by researchers and engineers from the Google Brain team within Google's AI organization, it comes with strong support for machine learning and deep learning, and its flexible numerical computation core is used across many other scientific domains. TensorFlow provides stable Python and C++ APIs, as well as non-guaranteed backward-compatible APIs for other languages. TensorFlow is cross-platform: it runs on nearly everything, including GPUs and CPUs (including mobile and embedded platforms) and even tensor processing units (TPUs), which are specialized hardware for tensor math. The TensorFlow distributed execution engine abstracts away the many supported devices and provides a high-performance core implemented in C++ for the TensorFlow platform. On top of that sit the Python and C++ frontends. The Layers API provides a simpler interface for commonly used layers in deep learning models. On top of that sit
higher-level APIs, including Keras (more on the Keras.io site) and the Estimator API, which make training and evaluating distributed models easier. TensorFlow was released under the Apache License 2.0.

E. Keras

Keras is an open-source software library that provides a Python interface for artificial neural networks. Keras acts as an interface for the TensorFlow library. It provides essential abstractions and building blocks for developing and shipping machine learning solutions with high iteration velocity. Keras contains numerous implementations of commonly used neural network building blocks such as layers, objectives, activation functions and optimizers, along with a host of tools that make working with image and text data easier and simplify the coding of deep neural networks. In addition to standard neural networks, Keras supports convolutional and recurrent neural networks. It supports other common utility layers such as dropout, batch normalization and pooling. Keras allows users to productize deep models on smartphones (iOS and Android), on the web, or on the Java Virtual Machine. It also allows distributed training of deep-learning models on clusters of graphics processing units (GPUs) and tensor processing units (TPUs).

F. SQLite

SQLite is a relational database management system (RDBMS) contained in a C library that implements a self-contained, serverless, zero-configuration, transactional SQL database engine. SQLite is an embedded SQL database engine. Unlike most other SQL databases, SQLite does not have a separate server process; it reads and writes directly to ordinary disk files. A complete SQL database with multiple tables, indices, triggers and views is contained in a single disk file. The database file format is cross-platform: one can freely copy a database between 32-bit and 64-bit systems or between big-endian and little-endian architectures. These features make SQLite a popular choice as an application file format. SQLite database files are a recommended storage format by the US Library of Congress. SQLite3 can be integrated with Python using the sqlite3 module, which was written by Gerhard Häring. It provides an SQL interface compliant with the DB-API 2.0 specification described by PEP 249.

V. IMPLEMENTATION

The project consists of two graphical interfaces (a student interface and a dashboard), a database and two neural networks: MTCNN and a custom Keras model. The student interface allows a student to log in and attend the lectures. While the lecture is being delivered, the webcam captures images of the student, from which the face of the student is detected and cropped with the help of the MTCNN model. This face is given as input to the custom Keras model, which predicts probability scores for seven emotions, namely sad, happy, disgust, surprise, fear, neutral and anger. A weighted average of these values is obtained by multiplying the corresponding confidence scores with the predefined weight for that emotion.

Fig. 1. Flowchart

These predefined weights signify the relation of each emotion to the level of concentration. The scores are stored in the database. At the end of the lecture a set of predefined questions pops up on the screen, and the student responses are also stored in the database. The dashboard visualizes the corresponding analysis of each lecture. A given lecture is divided into segments of 2 seconds; the average score of every student for each segment is taken, and these values are in turn averaged to get a single score for every segment.

A. Graphical Interfaces

1) Student Interface: The student interface consists of a QFrame for login purposes, a QListView for the video list, a QVideoWidget for displaying the lecture, a QLabel for the camera preview, two QPushButtons for play and pause, and a QListWidget for displaying the frame details. The login frame has two QLineEdit objects and two QPushButtons. Only after successful login will the other widgets be activated. On clicking an item in the video list, the corresponding lecture is displayed on the video widget and the live preview of the camera is displayed on the QLabel.

Fig. 2. Student interface

2) Dashboard: It is a graphical interface for faculty which provides at-a-glance views of student understanding levels relevant to each video in a user-friendly manner.

Fig. 3. Faculty dashboard
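The scoring and averaging steps described in Section V can be sketched in Python with the stdlib sqlite3 module. This is a minimal sketch, not the authors' implementation: the emotion weights below are illustrative assumptions (the text does not give the predefined values), and the `scores` table schema is hypothetical.

```python
import sqlite3

# The seven emotions predicted by the custom Keras model (Section V).
EMOTIONS = ["sad", "happy", "disgust", "surprise", "fear", "neutral", "anger"]

# Illustrative weights relating each emotion to concentration; the
# actual predefined weights used by the system are not given in the text.
WEIGHTS = {"sad": 0.3, "happy": 0.7, "disgust": 0.2, "surprise": 0.6,
           "fear": 0.4, "neutral": 0.9, "anger": 0.3}

def concentration_score(probs):
    """Weighted average of the model's emotion confidence scores."""
    return sum(probs[e] * WEIGHTS[e] for e in EMOTIONS) / sum(probs.values())

def init_db(conn):
    # Hypothetical schema: one row per captured frame.
    conn.execute("CREATE TABLE IF NOT EXISTS scores ("
                 "lecture_id INTEGER, student_id INTEGER, t REAL, score REAL)")

def store_score(conn, lecture_id, student_id, t, score):
    conn.execute("INSERT INTO scores VALUES (?, ?, ?, ?)",
                 (lecture_id, student_id, t, score))

def segment_averages(conn, lecture_id, segment_len=2.0):
    """Per 2-second segment: average each student's scores, then
    average those per-student means into a single segment score."""
    rows = conn.execute(
        "SELECT seg, AVG(m) FROM ("
        "  SELECT CAST(t / ? AS INTEGER) AS seg, student_id,"
        "         AVG(score) AS m"
        "  FROM scores WHERE lecture_id = ?"
        "  GROUP BY seg, student_id)"
        " GROUP BY seg ORDER BY seg",
        (segment_len, lecture_id)).fetchall()
    return dict(rows)
```

The `CAST(t / ? AS INTEGER)` truncation maps each frame timestamp to its 2-second segment index, and the nested `GROUP BY` performs the two-stage averaging (per student, then across students) that the dashboard visualizes.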