
DEEP LEARNING-BASED STUDENT ENGAGEMENT

AND ACTIVITY ANALYSIS IN VIRTUAL CLASSROOMS


FOR HIGHER EDUCATION

A PROJECT REPORT

Submitted by

MUTHAMIZH KUMARAN L (411721104031)


MANIKANDAN S (411721104027)

in partial fulfilment for the award of the degree


Of
BACHELOR OF ENGINEERING
IN
COMPUTER SCIENCE AND ENGINEERING

PRINCE SHRI VENKATESHWARA PADMAVATHY ENGINEERING


COLLEGE [AN AUTONOMOUS INSTITUTION], CHENNAI-127

ANNA UNIVERSITY: CHENNAI 600025


MAY 2025
BONAFIDE CERTIFICATE

Certified that this project report "DEEP LEARNING-BASED STUDENT
ENGAGEMENT AND ACTIVITY ANALYSIS IN VIRTUAL CLASSROOMS
FOR HIGHER EDUCATION" is the bonafide work of "MUTHAMIZH
KUMARAN L (411721104031) and MANIKANDAN S (411721104027)", who
carried out the project work under my supervision.

SIGNATURE SIGNATURE
Dr. [Link], [Link]., Ph.D. Dr. [Link], [Link]., Ph.D.
HEAD OF THE DEPARTMENT PROJECT COORDINATOR
PROFESSOR PROFESSOR
Department of CSE Department of CSE
Prince Shri Venkateshwara Prince Shri Venkateshwara
Padmavathy College,Ponmar Padmavathy College,Ponmar
Chennai-600 127 Chennai-600 127

Submitted for the project Viva-voce Examination held on

INTERNAL EXAMINER EXTERNAL EXAMINER

ACKNOWLEDGEMENT
First and foremost, we wish to express our sincere thanks to our
FOUNDER AND CHAIRMAN Dr. K. VASUDEVAN, M.A., [Link].,
Ph.D., and VICE CHAIRMAN Dr. V. VISHNU KARTHIK, MBBS.,
M.D., for their endeavor in educating us in their premier institution.

We wish to express our appreciation and gratefulness to our DEAN,


Dr. V. MAHALAKSHMI, M.E., Ph.D., for her encouragement and
sincere guidance.

We are highly indebted to our PRINCIPAL, Dr. G. INDIRA, M.E.,


Ph.D., for her valuable guidance which has promoted our efforts.

We also wish to convey our thanks and regards to our HEAD OF


THE DEPARTMENT and PROJECT COORDINATOR,
[Link], M. TECH, Ph.D., for her support and for providing us
ample time to complete our project.

We would like to express our sincere gratitude to our PROJECT


SUPERVISOR, Mrs. UMA MAHESHWARI, Department of Computer
Science and Engineering, for her guidance and support throughout our
project.

We wish to convey our sincere thanks to all the teaching and non-
teaching staff of the Department of Computer Science and Engineering,
without whose coordination this venture would not have been a success.

ABSTRACT

This project introduces a machine learning-driven recommender system to elevate

students' e-learning through video content. Developed in MATLAB, the system

features a user-friendly GUI displaying topic keywords that trigger corresponding

video playback. During the video, facial emotion recognition assesses learner

engagement. Detection of negative emotions (indicating difficulty or disinterest)

prompts the system to automatically recommend a more basic video using a dynamic

model. This adaptive approach personalizes learning by tailoring content to the

learner's comprehension and emotional state in real-time. By merging video-based

education with emotion analysis and customized suggestions, the system aims to

improve knowledge retention and foster a more engaging, student-focused online

learning experience, highlighting machine learning's potential in creating intelligent

educational tools.

TABLE OF CONTENTS

CHAPTER NO. TITLE PG NO
1 INTRODUCTION 1
1.1 Introduction 1
1.2 Problem Definition 3
1.3 Objective 5
1.4 Motivation 6
2 LITERATURE SURVEY 9
3 ANALYSIS 15
3.1 Existing System 15
3.2 Proposed System 17
3.3 Requirement Specification 19
3.3.1 List of components 19
3.3.2 Software 20
3.3.3 Hardware 20
3.4 Purpose 21
3.5 Scope 22
4 SYSTEM ARCHITECTURE 24
4.1 Overview 24
4.2 System Components 26
4.3 Integration of components 31
5 DESIGN 33
5.1 UML Diagrams 33
5.1.1 Usecase Diagram 34

5.1.2 Sequence Diagram 36
5.1.3 Activity Diagram 38
5.2 System Design 41
5.3 Constraints 43
5.3.1 Constraint Analysis 43
5.3.2 Constraints in design 44
5.3.3 Constraints in implementation 45
5.4 Functional requirements 45
5.5 Non-Functional requirements 46
5.5.1 Performance requirements 46
5.5.2 Safety requirements 47
6 TESTING 48
6.1 Types of testing 48
6.1.1 Unit Testing 48
6.1.2 Integration Testing 50
6.1.3 Functional Testing 52
6.1.4 System Testing 55
7 CONCLUSION & FUTURE ENHANCEMENTS 58
7.1 Conclusion 58
7.2 Future Enhancements 59
APPENDICES 63
A1: OUTPUT & SCREENSHOTS 63
REFERENCES 66

LIST OF FIGURES

FIGURE TITLE PAGE NO


4.1 System Architecture 17
5.1 Usecase Diagram 22
5.2 Sequence Diagram 23
5.3 Activity Diagram 25

CHAPTER 1

INTRODUCTION

1.1 INTRODUCTION

The landscape of education is undergoing a profound transformation, largely

driven by the pervasive influence of digital technologies. E-learning, once considered

a supplementary tool, has evolved into a mainstream mode of instruction, offering

unprecedented flexibility, accessibility, and scalability. This shift has been

accelerated by factors such as increasing internet penetration, advancements in

multimedia technologies, and a growing demand for personalized learning

experiences. Within this dynamic environment, video-based learning has emerged as

a particularly powerful medium, capable of conveying complex information in an

engaging and visually rich manner. The ability of videos to demonstrate concepts,

illustrate processes, and connect with learners on a more personal level has made

them a cornerstone of modern e-learning platforms.

However, the widespread adoption of video-based e-learning also presents certain

challenges. One significant hurdle is maintaining learner engagement and ensuring

effective knowledge acquisition. Unlike traditional classroom settings where

instructors can gauge student understanding through visual cues, verbal interactions,

and real-time feedback, online video consumption can often be a passive experience.

Learners may struggle to remain focused, comprehend the material, or identify when

they are falling behind. This can lead to decreased motivation, frustration, and

ultimately, a less effective learning outcome. The "one-size-fits-all" approach

inherent in many static video resources often fails to cater to the diverse learning

paces and comprehension levels of individual students.

Furthermore, the emotional state of a learner plays a crucial role in their ability to

process and retain information. Feelings of confusion, frustration, or disinterest can

significantly impede the learning process. Recognizing these emotional cues and

adapting the learning experience accordingly holds immense potential for enhancing

educational effectiveness. Traditional e-learning systems often lack the sophistication

to understand and respond to these nuanced emotional states, leading to a disconnect

between the learning content and the learner's immediate needs.

The advent of machine learning and artificial intelligence offers promising avenues

to address these challenges. By leveraging techniques such as computer vision,

natural language processing, and predictive modeling, it becomes possible to create

more intelligent and responsive e-learning systems. These systems can analyze

various aspects of the learning process, including user interactions, learning patterns,

and even physiological responses, to gain a deeper understanding of the learner's

engagement and comprehension. This insight can then be used to personalize the

learning experience, providing tailored content, timely support, and adaptive

pathways that cater to individual needs.

This project aims to contribute to this evolving landscape by developing a machine

learning-based recommender system specifically designed to enhance students' e-

learning experiences with video content. By integrating a user-friendly interface,


real-time facial emotion recognition, and a dynamic recommendation model, this

system seeks to create a more engaging, personalized, and effective learning

environment. The core idea is to move beyond static video delivery towards an

adaptive system that responds to the learner's comprehension level and emotional

state, ultimately fostering deeper understanding and improved knowledge retention.

This initiative underscores the transformative potential of machine learning in

shaping the future of education, paving the way for more student-centric and

impactful learning experiences.

1.2 PROBLEM DEFINITION

The current paradigm of video-based e-learning often suffers from a lack of


personalization and real-time responsiveness to individual learner needs. While video
offers a rich and engaging medium for content delivery, several key problems hinder
its effectiveness in fostering deep learning and sustained engagement:
Firstly, the passive nature of video consumption can lead to decreased attention
spans and a lack of active processing of the information. Learners may watch videos
without fully engaging with the content, resulting in superficial understanding and
poor retention. Traditional systems offer limited mechanisms to encourage active
participation or assess real-time comprehension during video playback.
Secondly, the "one-size-fits-all" approach fails to acknowledge the diverse learning
paces and prior knowledge of individual students. A video that is appropriately
challenging for one student might be too basic or too advanced for another, leading to
boredom or frustration. The absence of adaptive mechanisms to adjust the content
based on individual understanding limits the effectiveness and inclusivity of these
resources.

Thirdly, the lack of real-time feedback and support can leave learners feeling
isolated and unsupported when they encounter difficulties. In a traditional classroom,
instructors can observe student cues and provide immediate clarification or
alternative explanations. This crucial element of real-time interaction is often missing
in asynchronous video-based learning environments.
Fourthly, the emotional state of the learner is largely ignored in current e-learning
systems. Feelings of confusion, frustration, boredom, or even anxiety can
significantly impact learning outcomes. Without the ability to detect and respond to
these emotional cues, systems cannot adapt the learning experience to mitigate
negative emotions and promote a more positive and conducive learning environment.
Specifically focusing on video-based learning, the following challenges are
prominent:
 Difficulty in gauging comprehension during video playback: Current
systems lack the ability to assess whether a student is truly understanding the
concepts being presented in real-time.
 Lack of personalized content adjustment: If a student struggles with a
particular concept presented in a video, the system typically does not offer
immediate alternative explanations or simpler foundational material.
 Absence of mechanisms to re-engage disengaged learners: If a student loses
interest or becomes distracted during a video, the system does not proactively
intervene to re-capture their attention or adjust the content to be more relevant.
 Inability to cater to diverse learning styles and paces: Students learn at
different speeds and may benefit from different levels of detail or different
presentation styles. Static video content cannot inherently accommodate these
variations.
 Limited feedback mechanisms for instructors on student engagement:
Instructors often lack insights into how students are interacting with and
responding to their video content, making it difficult to identify areas for
improvement or provide targeted support.
These problems highlight a significant gap in current video-based e-learning systems.
The need for more intelligent, adaptive, and emotionally aware systems that can
personalize the learning experience and provide real-time support is paramount to
unlocking the full potential of video as an educational tool.

1.3 OBJECTIVE

The primary objective of this project is to design, develop, and evaluate


a machine learning-based recommender system that enhances students' e-learning
experiences by providing personalized and adaptive video content. This
overarching objective can be further broken down into the following specific
aims:
1. To develop a user-friendly Graphical User Interface (GUI): This interface will
enable students to easily access and select educational topics through
associated keywords. It will also serve as the platform for video playback and
interaction with the recommender system. The GUI should be intuitive and
facilitate a seamless learning experience.
2. To integrate video playback functionality: The system will be capable of
playing educational videos relevant to the selected keywords within the GUI.
This will provide a central platform for accessing the learning content.
3. To implement real-time facial emotion recognition: The system will
incorporate a machine learning model capable of analyzing a student's facial
expressions during video playback to infer their emotional state, specifically
focusing on indicators of engagement, confusion, and disinterest. This analysis
will be performed in real-time or near real-time to enable timely interventions.
4. To develop a dynamic recommendation model: This model will utilize the
insights gained from facial emotion recognition to adapt the learning
experience. If the system detects negative emotional cues indicative of
difficulty or disengagement midway through a video, the model will

automatically recommend a lower-level video that covers foundational
concepts related to the current topic.
5. To ensure seamless integration of the components: The GUI, video playback,
emotion recognition module, and recommendation model will be seamlessly
integrated to create a cohesive and functional e-learning system. Data flow
between these components will be efficient and reliable.
6. To evaluate the effectiveness of the system: The project will include an
evaluation phase to assess the impact of the recommender system on student
engagement and perceived understanding. This evaluation may involve user
studies, feedback collection, and potentially the analysis of learning outcomes.
7. To demonstrate the potential of machine learning in creating intelligent
educational tools: The project aims to showcase how machine learning
techniques, particularly facial emotion recognition and dynamic
recommendation, can be applied to create more responsive and student-centric
e-learning environments.
Ultimately, the objective is to create a proof-of-concept system that demonstrates
the feasibility and potential benefits of using real-time emotion analysis to
personalize video-based learning, paving the way for more adaptive and effective
e-learning platforms.

1.4 MOTIVATION

The motivation behind this project stems from a confluence of factors related to

the evolving landscape of education and the potential of artificial intelligence to

address existing challenges:

Firstly, the increasing prominence of online learning necessitates innovative

solutions to enhance its effectiveness. As more students turn to e-learning for

flexibility and accessibility, it becomes crucial to address the limitations of

traditional online learning models and create more engaging and personalized

experiences. This project seeks to contribute to this evolution by leveraging

machine learning to create a more adaptive learning environment.

Secondly, the limitations of static video content in catering to diverse learner

needs are a significant concern. While video offers a rich medium for

instruction, its inherent lack of adaptability can lead to disengagement and

ineffective learning for students with varying levels of prior knowledge and

learning paces. The motivation here is to develop a system that can overcome this

limitation by dynamically adjusting the content based on individual

comprehension.

Thirdly, the recognition of the crucial role of emotions in the learning process

provides a strong impetus for this research. Cognitive science and educational

psychology have consistently highlighted the impact of emotions on attention,

motivation, and memory. By incorporating real-time emotion analysis, this

project aims to create a system that is more attuned to the learner's emotional

state and can respond in a way that fosters a more positive and effective learning

experience.

Fourthly, advancements in machine learning, particularly in computer vision

and recommendation systems, offer powerful tools to address these

challenges. The availability of sophisticated algorithms and computational

resources makes it feasible to develop systems that can analyze facial expressions
and provide personalized recommendations in real-time. This project seeks to

harness these advancements for the benefit of education.

Fifthly, the potential to improve knowledge retention and learning outcomes

through personalized and engaging experiences is a key driving force. By

tailoring the content to the learner's comprehension level and emotional state, this

system aims to promote deeper understanding and more effective knowledge

acquisition compared to traditional, static video resources.

Finally, the desire to explore and demonstrate the transformative potential of

machine learning in the educational domain fuels this project. By creating a

working prototype, this research aims to inspire further innovation and

development of intelligent educational tools that can revolutionize the way

students learn and interact with educational content. The success of this project

could pave the way for more widespread adoption of AI-powered personalization

in e-learning, ultimately leading to more effective and equitable educational

opportunities for all learners.

In essence, the motivation behind this project is driven by the need to create more

effective, engaging, and personalized video-based e-learning experiences by

leveraging the power of machine learning to understand and respond to individual

learner needs and emotional states. The goal is to move beyond static content

delivery towards a dynamic and adaptive learning paradigm that truly caters to

the unique journey of each student.


CHAPTER 2

LITERATURE SURVEY

Existing research highlights the growing use of video in e-learning and the challenges
of maintaining learner engagement and catering to diverse needs. Studies in adaptive
learning systems emphasize the benefits of personalized content delivery based on
learner performance and interactions. Emotion recognition techniques, particularly
facial expression analysis, have shown promise in gauging learner states like
confusion and frustration. Recommendation systems have been successfully applied
in various domains to provide tailored content. However, the integration of real-time
facial emotion recognition specifically to dynamically adjust video content within an
e-learning environment, particularly by recommending lower-level foundational
videos upon detecting negative emotional cues, remains a relatively less explored
area. This project builds upon the foundations of adaptive learning, emotion
recognition, and recommender systems, aiming to contribute a novel approach to
enhance video-based e-learning by directly responding to learners' emotional states
with targeted content adjustments.

1. PROJECT TITLE

THE EFFECTIVENESS OF E-LEARNING SERVICE QUALITY IN


INFLUENCING E-LEARNING STUDENT SATISFACTION AND LOYALTY AT
TELKOM UNIVERSITY

Author: M. E. Saputri, F. N. Utami and D. Sari

Year: 2022

The blended learning method delivered through the LMS at Telkom University is


relatively new. This marks a big change in the learning process, especially during the
Covid pandemic, which required online learning. Online learning or e-learning raises
pros and cons at Telkom University. This study uses a quantitative method and
causality with SEM PLS as the analysis technique. Sampling techniques used are
purposive sampling with a sample of 334 active Telkom University students spread
across all majors starting from the Class of 2015 - 2020. The results highlight how
the quality of e-learning services impacts the quality of the e-learning system, the
quality of teaching and e-learning materials, the quality of staff and e-learning
support staff, and more. The satisfaction of e-learning users at Telkom University is
significantly impacted by e-learning service quality, at 85.9%. Among Telkom
University students, e-learning user satisfaction has a strong impact on e-learning user
loyalty, at 80%. The use of the e-learning system determines the satisfaction and
loyalty of Telkom University students because the form of teaching is carried out
through LMS so that students experience easy access to materials and learn
independently.

2. PROJECT TITLE

APPFLX: PROVIDING PRIVACY-PRESERVING CROSS-SILO


FEDERATED LEARNING AS A SERVICE

Author: Z. Li et al.

Year: 2023

Cross-silo privacy-preserving federated learning (PPFL) is a powerful tool to


collaboratively train robust and generalized machine learning (ML) models without
sharing sensitive (e.g., healthcare or financial) local data. To ease and accelerate the
adoption of PPFL, we introduce APPFLx, a ready-to-use platform that provides
privacy-preserving cross-silo federated learning as a service. APPFLx employs
Globus authentication to allow users to easily and securely invite trustworthy
collaborators for PPFL, implements several synchronous and asynchronous FL
algorithms, streamlines the FL experiment launch process, and enables tracking and
visualizing the life cycle of FL experiments, allowing domain experts and ML
practitioners to easily orchestrate and evaluate cross-silo FL under one platform.

3. PROJECT TITLE

THE SUCCESS OF E-LEARNING IMPLEMENTATION FOR


ENGINEERING COURSES: CASE STUDY - EIT

Author: A. Siddhpura, M. Siddhpura, A. Evangelista and I. V

Year: 2021

Globally, the ease of internet access has had a positive effect on the education
sector. Many higher education institutions are leveraging this opportunity and are
strongly investing in e-Learning systems offering courses in various disciplines.
Engineering courses are challenging on their own, but to teach and engage e-learners
in first-year cohorts requires creativity and innovation from the teaching teams and
providers. This paper aims to evaluate the success and learning quality by considering
the implementations of new technologies available, as well as the students'
knowledge gain and professional qualification via online courses. The paper
evaluates the success of e-Learning for various engineering units at the Engineering
Institute of Technology (EIT) by analyzing core units offered in EIT Bachelor of
Engineering programs both online and on-campus. Grades from the past two years of the
online and on-campus cohorts were used. The main findings indicate stronger
commitment from the online students as compared to the on-campus students.
Furthermore, it was observed that over time with greater experience, lecturers were
able to engage and motivate students more effectively.

4. PROJECT TITLE

AGILE FRAMEWORK FOR THE ELABORATION OF E-LEARNING


MATERIALS

Author: G. Tabunshchyk and P. Arras

Year: 2023

Digital transformation in all industrial and societal spheres causes a shift in


the required skills of people, educators, academics and researchers. To deliver these
skills and competences, new methods of delivering content have been explored.
Digital transformation in education and content delivery often means a shift to digital
content. This paper summarizes an approach for the development of digital content
for different digital learning modules and deliverables, based on the collaboration
concepts of the Open Communities of Practice. With a defined methodological
approach and a vision on instructional design, development of learning content and
digital media for digitalized education is structured. This is especially important in
international educational projects where developers and teachers are distributed over
the consortium partners.

5. PROJECT TITLE

EXTRACTING THE MAIN ASPECTS OF E-LEARNING READINESS


ASSESSMENT FOR IRAQI UNIVERSITIES
Author: Y. K. Al-Rikabi and G. Ali Montazer

Year: 2023

In the post-corona era, Iraqi universities implemented blended learning in 2021


and 2022. However, given the uncertainty regarding the pandemic’s potential
recurrence or the country’s exposure to another crisis of a similar nature affecting the
educational process, it is necessary to assess Iraqi universities' genuine potential to
adopt the e-learning system as an alternative educational system. The purpose of this
study is to extract the main aspects including the dimensions and factors to assess the
e-learning readiness level of Iraqi universities. To do this, the Fuzzy Delphi
Method (FDM) was applied to extract the vital aspects for assessing e-learning
readiness, and finally, 3 dimensions and 13 factors have been extracted from Iraqi
experts in e-learning and educational systems. The results of this paper showed that
"Infrastructure" has an essential effect and is more important than the other dimensions,
and the "Technological" factor, with a weight of 0.751, has a greater effect on e-learning
readiness than the remaining factors.

CHAPTER 3
ANALYSIS

This section delves into a comprehensive analysis of the existing landscape of


e-learning systems, outlines the architecture and functionality of the proposed
intelligent video-based learning system, and details the specific requirements for its
development.

3.1 EXISTING SYSTEM


Current e-learning platforms offer a wide array of features, including
video lectures, interactive quizzes, discussion forums, and downloadable
resources. Video content, in particular, has become a cornerstone of online
education due to its ability to convey complex information visually and
engagingly. Platforms like Coursera, edX, Khan Academy, and university
learning management systems (LMS) such as Moodle and Blackboard host vast
libraries of educational videos covering diverse subjects.

However, despite their widespread adoption, existing systems often


exhibit several limitations concerning personalization and real-time adaptation,
particularly in the context of video-based learning:

 Static Content Delivery: The majority of video resources are presented


in a linear and static manner. All learners, regardless of their prior
knowledge, learning pace, or current comprehension level, are exposed to
the same content in the same sequence. This "one-size-fits-all" approach
can lead to disengagement for advanced learners and frustration for those
struggling with foundational concepts.

 Limited Real-time Feedback Mechanisms: While some platforms


incorporate quizzes or interactive elements after video segments, real-time
feedback on a learner's understanding during video consumption is

generally absent. Learners may not realize they are struggling until they
attempt a subsequent assessment, by which point misconceptions may
have already taken root.

 Lack of Proactive Intervention: If a learner is exhibiting signs of


confusion or disinterest while watching a video (e.g., repeatedly pausing,
skipping, or spending minimal time on key sections), current systems
typically do not proactively intervene to offer assistance or alternative
resources.

 Absence of Emotional Intelligence: Existing systems largely ignore the


emotional state of the learner. Feelings of frustration, boredom, or
confusion can significantly hinder the learning process, but current
platforms lack the ability to detect these emotions and adapt the learning
experience accordingly.

 Rudimentary Recommendation Systems: While some platforms offer


recommendations for related courses or videos based on a learner's past
activity or declared interests, these recommendations are often static and
do not take into account the learner's real-time engagement or
comprehension during a specific learning session.

 Limited Integration of Affective Computing: The field of affective


computing, which focuses on the design and development of systems that
can recognize and respond to human emotions, has not been widely
integrated into mainstream e-learning platforms, particularly in the
context of video-based learning.

In summary, while current e-learning systems provide valuable access to


educational resources, they often lack the intelligence and responsiveness
needed to truly personalize the learning experience, especially during passive
video consumption. The absence of real-time feedback, proactive intervention,
and emotional awareness limits their ability to cater to the diverse needs of
individual learners and maximize learning outcomes.

3.2 PROPOSED SYSTEM


The proposed system aims to address the limitations of existing video-based e-
learning platforms by integrating machine learning techniques to provide a
more personalized and adaptive learning experience. The core components of
the proposed system and their interactions are as follows:

1. User Interface (GUI): A user-friendly GUI will serve as the central point of
interaction for the student. It will display a list of keywords associated with
various educational topics. Upon selecting a keyword, the system will retrieve
and play the corresponding video. The GUI will also incorporate a video player
with standard controls (play, pause, rewind, etc.) and a dedicated area for
displaying recommended videos.

2. Video Database: A database will store the educational video content along with
associated metadata, including keywords, topic level (e.g., introductory,
intermediate, advanced), and potentially transcripts or summaries. This
database will be organized to facilitate efficient retrieval of videos based on
user selection and the recommendations generated by the system.

3. Facial Emotion Recognition Module: This module will utilize a trained


machine learning model to analyze the student's facial expressions captured by
a webcam during video playback. The model will be trained to recognize key
emotional states relevant to learning, such as engagement (e.g., attentive gaze,
slight smile), confusion (e.g., furrowed brows, squinted eyes), and disinterest
(e.g., looking away, yawning). The output of this module will be a continuous
stream of emotion probabilities or classifications.

4. Engagement and Comprehension Assessment: Based on the output of the facial


emotion recognition module, this component will assess the learner's level of
engagement and potential comprehension. For instance, sustained expressions
of confusion or disinterest over a certain period will trigger a flag indicating a
potential learning difficulty. Conversely, consistent expressions of engagement
will suggest the learner is following the content effectively.

5. Dynamic Recommendation Model: This is the core adaptive component of the


system. It will operate based on the real-time assessment of the learner's
engagement and comprehension. If the system detects a significant and
sustained indication of difficulty (e.g., prolonged confusion or disinterest), the
recommendation model will automatically retrieve and suggest a lower-level
video that covers foundational concepts related to the current topic. The
selection of the lower-level video will be based on the metadata associated
with the videos in the database, ensuring relevance to the current learning
context.

6. System Controller: This module will act as the central orchestrator, managing
the flow of information between the different components. It will receive user
input from the GUI, initiate video playback, activate the emotion recognition
module, process the emotion data to assess engagement and comprehension,
invoke the recommendation model when necessary, and update the GUI with
recommended videos.

Workflow of the Proposed System:

1. The student interacts with the GUI and selects a topic keyword.

2. The system retrieves and plays the corresponding video.

3. During video playback, the webcam captures the student's facial expressions.

4. The Facial Emotion Recognition Module analyzes the facial expressions in


real-time.

5. The Engagement and Comprehension Assessment module interprets the


emotion data to gauge the student's learning state.

6. If the assessment indicates significant difficulty or disengagement, the


Dynamic Recommendation Model selects a relevant lower-level video from
the database.

7. The System Controller updates the GUI to display the recommended video,
potentially pausing the current video or offering the student the option to
switch to the recommended content.

8. The student can then choose to continue watching the original video or switch
to the recommended foundational video.

This proposed system introduces a proactive and personalized approach to


video-based e-learning by leveraging real-time emotional feedback to adapt the
learning content to the individual needs of the student.
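The workflow above can be summarised as a single control loop. The following MATLAB sketch shows one possible shape for that loop; the helper functions (lookupVideo, playVideo, isPlaying, classifyEmotion, recommendLowerLevelVideo, showRecommendation), the example keyword and the streak threshold are illustrative assumptions rather than the final implementation, and webcam access assumes the MATLAB Support Package for USB Webcams.

% Sketch of the proposed workflow (helper functions are hypothetical).
keyword = 'Linear Regression';             % topic chosen through the GUI
video   = lookupVideo(keyword);            % query the video database
player  = playVideo(video.File);           % start playback in the GUI
cam     = webcam;                          % open the default webcam

negativeStreak = 0;                        % consecutive "negative" frames
while isPlaying(player)
    frame   = snapshot(cam);               % grab one webcam frame
    emotion = classifyEmotion(frame);      % 'engaged' | 'confused' | 'disinterested'

    if any(strcmp(emotion, {'confused', 'disinterested'}))
        negativeStreak = negativeStreak + 1;
    else
        negativeStreak = 0;
    end

    if negativeStreak > 150                % sustained difficulty (roughly 10 s at 15 fps)
        easier = recommendLowerLevelVideo(video);
        showRecommendation(player, easier);
        negativeStreak = 0;
    end
end
clear cam                                  % release the webcam

In the actual system this loop would run inside the System Controller described in Chapter 4, with each helper replaced by the corresponding module.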

3.3 REQUIREMENT SPECIFICATION

This section outlines the specific requirements for the development of the proposed
intelligent video-based learning system. These requirements are categorized into
functional and non-functional aspects.

3.3.1 LIST OF COMPONENTS


The key components of the proposed system are:

1. Graphical User Interface (GUI): For user interaction, video playback, and
displaying recommendations.

2. Video Player: To play educational video content.

3. Webcam Interface: To capture the student's facial expressions.

4. Facial Emotion Recognition Module: A machine learning model for real-time


emotion analysis.

5. Engagement and Comprehension Assessment Logic: Rules or algorithms to


interpret emotion data.

6. Dynamic Recommendation Model: An algorithm to select appropriate lower-


level videos.

7. Video Database: To store and manage educational video content and metadata.
8. System Controller: To manage the interaction and data flow between
components.

3.3.2 SOFTWARE

The software requirements for the development of the system include:

 Operating System: Windows, macOS, or Linux (for development and potential


deployment).

 Programming Language: MATLAB (as indicated in the abstract) will be the


primary language for development, leveraging its toolboxes for GUI, image
processing, and machine learning.

 GUI Development Environment: MATLAB App Designer or GUIDE for


creating the user interface.

 Machine Learning Libraries: MATLAB's Statistics and Machine Learning


Toolbox for training and implementing the facial emotion recognition model
and the recommendation logic.

 Image Processing Libraries: MATLAB's Image Processing Toolbox for


handling video frames and facial feature extraction (if required by the emotion
recognition model).

 Database Management System: MATLAB's capabilities for data storage and


retrieval, or potentially an external database system if scalability becomes a
concern.

 Webcam Driver Interface: Libraries or functions within MATLAB to access


and process video streams from a webcam.
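As a practical aid during installation, a short start-up check such as the sketch below (not part of the formal requirement list) can confirm that the products listed above and a webcam are visible to MATLAB; the product names in the cell array are assumptions to be matched against the installed release.

% Verify that the required toolboxes and a webcam are available.
required  = {'Statistics and Machine Learning Toolbox', ...
             'Image Processing Toolbox'};
products  = ver;                            % installed MathWorks products
installed = {products.Name};
for k = 1:numel(required)
    if ~any(strcmp(required{k}, installed))
        warning('Missing required product: %s', required{k});
    end
end

% webcamlist is provided by the MATLAB Support Package for USB Webcams.
if isempty(webcamlist)
    warning('No webcam detected; emotion recognition cannot run.');
end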

3.3.3 HARDWARE
The minimum hardware requirements for the system are:

 Computer: A standard desktop or laptop computer with sufficient processing


power to run the MATLAB environment and the machine learning model in
real-time.

 Webcam: An integrated or external webcam capable of capturing clear video of


the student's face during the learning session.

 Sufficient RAM: To handle the video processing and machine learning


computations efficiently.

 Storage: To store the video database and the system software.

 Display: To show the GUI and play the video content.

3.4 PURPOSE

The primary purpose of the proposed system is to enhance the effectiveness and
engagement of video-based e-learning by providing a personalized and adaptive
learning experience. Specifically, the system aims to:
 Improve learner engagement: By proactively responding to signs of disinterest
and offering alternative content.
 Enhance comprehension: By identifying moments of potential difficulty and
providing access to foundational materials.
 Cater to individual learning needs: By dynamically adjusting the learning path
based on real-time feedback.
 Provide a more supportive learning environment: By implicitly recognizing
and responding to the learner's emotional state.
 Demonstrate the application of machine learning in education: By showcasing
the potential of facial emotion recognition and dynamic recommendation in
creating intelligent learning tools.

Ultimately, the purpose is to create a more student-centric and effective video-
based learning experience that can contribute to improved knowledge retention
and a more positive attitude towards online learning.

3.5 SCOPE

The scope of this project focuses on the development and initial evaluation of a
prototype intelligent video-based learning system with the following key aspects
within its boundaries:
 Facial Emotion Recognition: The system will focus on recognizing a limited
set of key emotions relevant to learning engagement and comprehension, such
as engagement, confusion, and disinterest, based on facial expressions. More
complex emotional states or physiological signals will be outside the scope of
this initial prototype.
 Dynamic Recommendation: The recommendation model will primarily focus
on suggesting lower-level foundational videos related to the current topic when
signs of difficulty are detected. Other forms of personalized recommendations,
such as alternative explanations at the same level or advanced content for
highly engaged learners, will be considered for future expansion.
 Video Content: The system will be designed to work with a pre-existing
database of educational videos. The creation of new video content is outside
the scope of this project. The videos will be assumed to be segmented by topic
and tagged with appropriate metadata (keywords, level).
 GUI Functionality: The GUI will provide essential functionalities for video
selection, playback, and displaying recommendations. Advanced features such
as user profiles, learning progress tracking, or social interaction will not be
included in the initial prototype.
 Evaluation: The evaluation of the system's effectiveness will likely involve
user studies with a limited number of participants and a focus on subjective
feedback and observed changes in engagement. Comprehensive quantitative
analysis of learning outcomes over an extended period may be considered for
future work.
 Development Platform: The primary development platform will be
MATLAB, leveraging its built-in tools and libraries. Deployment to other
platforms or languages is outside the scope of this initial project.
The project will serve as a proof-of-concept demonstrating the feasibility and
potential benefits of integrating real-time facial emotion recognition and dynamic
recommendation in video-based e-learning. While acknowledging the broader
possibilities of personalized learning, the scope will be focused on these core
functionalities within the chosen development environment.

CHAPTER 4

SYSTEM ARCHITECTURE

4.1 OVERVIEW
This section provides a detailed overview of the proposed intelligent video-based e-
learning system's architecture. It outlines the key components, their functionalities,
and how they interact to deliver a personalized and adaptive learning experience.

Figure 4.1 System Architecture

The architecture of the proposed system is designed to be modular and event-driven,


allowing for seamless integration of various functionalities. At a high level, the
system operates by first presenting educational video content to the student through a
user-friendly interface. Simultaneously, it monitors the student's facial expressions

using a webcam and a machine learning-based emotion recognition module. The
detected emotional cues are then processed to assess the student's engagement and
potential comprehension. Based on this assessment, a dynamic recommendation
model decides whether to intervene and suggest alternative learning materials,
specifically lower-level foundational videos, to address any identified difficulties or
disengagement. A central system controller orchestrates the interaction between these
modules, ensuring a cohesive and responsive learning environment.
The core philosophy behind this architecture is to move away from a static, one-size-
fits-all approach to video-based learning towards a dynamic system that adapts in
real-time to the individual learner's emotional and cognitive state. By integrating
affective computing principles with adaptive learning strategies, the system aims to
create a more engaging, supportive, and ultimately more effective learning
experience.
The system architecture can be visualized as a layered model, with the user interface
forming the presentation layer, the core logic encompassing the emotion recognition,
assessment, and recommendation modules forming the application layer, and the
video database constituting the data layer. The system controller acts as the
intermediary, facilitating communication and data flow between these layers.
Key Architectural Principles:
 Modularity: The system is designed with distinct, independent modules, each
responsible for a specific functionality. This promotes maintainability,
scalability, and ease of future enhancements.
 Real-time Processing: The emotion recognition and assessment modules
operate in real-time or near real-time to provide timely feedback and
adaptation.
 Data-Driven Adaptation: The recommendation model relies on the data
derived from the emotion recognition module and the metadata associated with
the video content to make informed decisions about content adjustments.

 User-Centric Design: The GUI is designed to be intuitive and user-friendly,
ensuring a seamless and positive learning experience. The subsequent sections
will delve deeper into the individual components of this architecture and their
integration.

4.2 SYSTEM COMPONENTS

The proposed system comprises several key components that work in concert to
achieve its objectives. Each component and its functionality are described in
detail below:
1. Graphical User Interface (GUI):
o Functionality: The GUI serves as the primary interface for the student
to interact with the system. It provides the following functionalities:
 Topic Selection: Displays a list of keywords representing
different educational topics, allowing the student to choose the
area they wish to learn about.
 Video Playback: Integrates a video player capable of streaming
educational videos corresponding to the selected topic. It includes
standard playback controls (play, pause, rewind, volume
adjustment, etc.).
 Recommendation Display: Provides a dedicated area to display
recommended videos, along with a brief description or title, when
the system detects a need for alternative content.
 User Feedback Mechanism (Optional): May include options for
the student to provide explicit feedback on the relevance or
helpfulness of the recommendations.
o Technology: Developed using MATLAB's App Designer or GUIDE,
providing a visual environment for creating interactive elements.
2. Video Player:

o Functionality: Responsible for playing the selected educational videos.
It needs to be compatible with common video formats and provide a
smooth playback experience.
o Integration: Embedded within the GUI, allowing seamless transition
between topic selection, video viewing, and accessing recommendations.

3. Webcam Interface:
o Functionality: Provides the necessary interface to access the video
stream from the student's webcam. It captures frames of the student's
face in real-time during video playback.
o Technology: Utilizes MATLAB's image acquisition toolbox or relevant
libraries to interact with the webcam driver.
4. Facial Emotion Recognition Module:
o Functionality: This is the core intelligence component responsible for
analyzing the captured facial expressions and inferring the student's
emotional state. It will be trained to recognize key emotions relevant to
learning, such as:
 Engagement: Indicated by attentiveness, focused gaze, and
potentially subtle positive expressions.
 Confusion: Characterized by furrowed brows, squinted eyes, and
a look of uncertainty.
 Disinterest: Displayed through averted gaze, yawning, or a
generally inattentive demeanor.

o Technology: Implemented using a trained machine learning model


within MATLAB's Statistics and Machine Learning Toolbox or Deep
Learning Toolbox. This could involve:
 Feature Extraction: Algorithms to extract relevant facial features
from the webcam feed (e.g., distances between key points, texture
analysis of facial regions).
 Classification Model: A machine learning classifier (e.g.,
Convolutional Neural Network (CNN), Support Vector Machine
(SVM), Random Forest) trained on a labeled dataset of facial
expressions associated with the target emotions.
o Output: The module will output a continuous stream of emotion
probabilities or classifications for each analyzed frame.
5. Engagement and Comprehension Assessment Logic:
o Functionality: This component processes the output from the Facial
Emotion Recognition Module over time to assess the student's overall
engagement and potential comprehension level. It will employ rules or
algorithms to:
 Temporal Analysis: Analyze the sequence and duration of
different emotional states. For example, a sustained period of
"confusion" or "disinterest" will be considered a stronger indicator
of difficulty than a brief fleeting expression.
 Thresholding: Define thresholds for the probabilities of different
emotions to trigger an alert. For instance, if the probability of
"confusion" exceeds a certain threshold for a defined duration, it
signals a potential comprehension issue.
 Weighted Averaging (Optional): Assign different weights to
different emotions or the duration of their occurrence to arrive at a
more nuanced assessment.
o Output: This module will generate a signal indicating the student's
current learning state (e.g., "engaged," "potentially confused,"
"disinterested"). A combined implementation sketch of components 4 and 5
is given at the end of this section.
6. Dynamic Recommendation Model:
o Functionality: This module is responsible for selecting and suggesting
alternative learning materials when the Engagement and Comprehension
Assessment Logic indicates a need for intervention. Its primary function
in this initial implementation is to recommend lower-level foundational
videos.
o Logic:
 Triggering Condition: Activated when the assessment module
signals a significant and sustained state of confusion or disinterest.
 Video Selection: It will query the Video Database based on the
metadata associated with the currently playing video (e.g., topic,
keywords). It will then filter for videos that are tagged as being at
a lower educational level and cover related foundational concepts.
The selection algorithm might prioritize videos that have a strong
overlap in keywords with the current video but are explicitly
marked as introductory or foundational.
 Ranking (Optional): If multiple lower-level videos are available,
the model might employ a simple ranking mechanism based on
factors like relevance score (keyword matching), popularity (if
usage data is available), or instructor ratings (if included in the
metadata).
o Output: A list of recommended lower-level video titles and potentially
brief descriptions.
7. Video Database:
o Functionality: This component serves as the repository for all
educational video content and associated metadata.
o Structure: It will contain:
 Video Files: The actual video files in a suitable format (e.g.,
MP4).
 Metadata: Structured information associated with each video,
including:
 Title: A descriptive title for the video.
 Keywords: A list of relevant keywords for topic
identification.
 Topic Level: An indicator of the video's difficulty level
(e.g., Introductory, Basic, Intermediate, Advanced).
 Related Concepts: Links to other videos covering related
topics or prerequisites.
 Summary/Abstract: A brief overview of the video content.
o Technology: Can be implemented using MATLAB's data storage
capabilities (e.g., MAT-files, tables) for a prototype or a more robust
database management system (e.g., SQLite, MySQL) for a more scalable
application.
8. System Controller:
o Functionality: This central module orchestrates the interaction between
all other components. It manages the flow of data and control signals
within the system.
o Responsibilities:
 Receives user input from the GUI (topic selection).
 Retrieves the corresponding video from the Video Database and
instructs the Video Player to play it.
 Activates the Webcam Interface to start capturing video frames.
 Feeds the captured frames to the Facial Emotion Recognition
Module.
 Receives the emotion data from the Emotion Recognition Module
and passes it to the Engagement and Comprehension Assessment
Logic.
 Receives the assessment of the learner's state.
 If the assessment indicates difficulty, it triggers the Dynamic
Recommendation Model.
 Receives the list of recommended videos from the
Recommendation Model and instructs the GUI to display them.
 Handles user interaction with the recommendations (e.g., selecting
a recommended video).
 Manages the transition between videos if the user chooses a
recommendation.
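To make components 4 and 5 more concrete, the following sketch combines face detection, per-frame emotion classification and a simple temporal threshold in MATLAB. The trained network emotionNet, its 48-by-48 grayscale input size, the class names, the number of frames and the threshold values are assumptions made for illustration; the Viola-Jones detector comes from the Computer Vision Toolbox and the camera interface from the webcam support package.

% Sketch of components 4 and 5: emotion recognition plus temporal assessment.
% emotionNet: a CNN trained on labelled facial-expression images (assumed available).
faceDetector = vision.CascadeObjectDetector();   % Viola-Jones face detector
cam          = webcam;
numFrames    = 300;                              % frames analysed in this example
window       = [];                               % rolling record of "negative" flags
state        = "engaged";

for t = 1:numFrames
    frame = snapshot(cam);
    bbox  = step(faceDetector, frame);           % [x y w h] per detected face
    if isempty(bbox), continue; end              % no face visible in this frame

    face  = imcrop(frame, bbox(1, :));           % use the first detected face
    face  = imresize(rgb2gray(face), [48 48]);   % match the assumed network input
    label = classify(emotionNet, face);          % engaged / confused / disinterested

    % Component 5: thresholding over roughly the last 150 analysed frames.
    isNegative = ismember(char(label), {'confused', 'disinterested'});
    window     = [window(max(1, end-148):end), isNegative];
    if numel(window) >= 150 && mean(window) > 0.7
        state = "potentially confused";          % sustained negative emotion
    else
        state = "engaged";
    end
    % 'state' would be forwarded to the System Controller at this point.
end
clear cam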

4.3 INTEGRATION OF COMPONENTS

The seamless integration of the aforementioned components is crucial for the


system to function effectively. The interaction and data flow between these modules
can be described as follows:

1. Initialization: When the system starts, the GUI is loaded, and the System
Controller initializes the necessary modules, including the Webcam Interface
and potentially loading the trained Emotion Recognition Model.

2. Topic Selection and Video Playback: The student selects a topic through the
GUI. The GUI communicates this selection to the System Controller. The
Controller queries the Video Database for the corresponding video and
instructs the Video Player (embedded in the GUI) to begin playback.
Simultaneously, the Controller activates the Webcam Interface to start
capturing the student's facial expressions.

3. Real-time Emotion Analysis: As the video plays, the Webcam Interface


continuously captures frames of the student's face and sends them to the Facial
Emotion Recognition Module. This module processes each frame and outputs
the probabilities or classifications of the recognized emotions (e.g., probability
of "confusion" being 0.8, probability of "engagement" being 0.6). This emotion
data is then transmitted to the Engagement and Comprehension Assessment
Logic.
4. Engagement and Comprehension Assessment: The Assessment Logic receives
the stream of emotion data over time. It applies its rules and algorithms (e.g.,
monitoring sustained high probabilities of negative emotions) to determine the
student's current learning state. If a significant and persistent indication of
difficulty or disengagement is detected, it sends a trigger signal to the System
Controller.

5. Dynamic Recommendation: Upon receiving the trigger signal, the System


Controller activates the Dynamic Recommendation Model. The
Recommendation Model accesses the Video Database and, based on the
metadata of the currently playing video, selects a set of relevant lower-level
foundational videos. This list of recommendations is then passed back to the
System Controller.

6. Displaying Recommendations: The System Controller instructs the GUI to


display the received recommendations to the student. This might involve
pausing the current video or presenting the recommendations in a non-intrusive
manner. The student can then choose to continue with the original video or
select one of the recommended alternatives.

7. Handling Recommendation Selection: If the student selects a recommended


video, the GUI informs the System Controller. The Controller then stops the
playback of the current video, retrieves the selected video from the Video
Database, and instructs the Video Player to play the new video. The emotion
monitoring process continues during the playback of the recommended video.

8. Continuous Monitoring: Throughout the video playback (both the initial video
and any recommended videos), the system continuously monitors the student's
facial expressions, assesses their engagement and comprehension, and triggers
recommendations as needed.

This intricate interplay between the different components ensures that the
learning experience is not static but dynamically adapts to the learner's real-time

emotional responses, aiming to provide the right level of support and foundational
knowledge when it is most needed. The modular design allows for future
enhancements and refinements of individual components without significantly
impacting the overall system architecture.
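As a final illustration, the recommendation step in the flow above (step 5), together with the video metadata it queries (components 6 and 7 of Section 4.2), can be prototyped with an ordinary MATLAB table. The column names, example rows and matching rule below are illustrative assumptions; a full implementation would read this metadata from the actual Video Database.

% Sketch of the dynamic recommendation step over a metadata table.
videos = table( ...
    ["Loops: the basics"; "Loops in depth"; "Vectorised code"], ...
    ["loops"; "loops"; "performance"], ...
    ["Introductory"; "Intermediate"; "Advanced"], ...
    ["loops_intro.mp4"; "loops_deep.mp4"; "vectorised.mp4"], ...
    'VariableNames', {'Title', 'Keyword', 'Level', 'File'});

current = videos(2, :);                    % metadata of the video being watched

% Recommend introductory videos that share the current video's keyword.
mask        = videos.Keyword == current.Keyword & videos.Level == "Introductory";
recommended = videos(mask, :);

if ~isempty(recommended)
    fprintf('Recommended: %s (%s)\n', recommended.Title(1), recommended.File(1));
end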

CHAPTER 5

DESIGN

This section outlines the design of the intelligent video-based e-learning system,
providing a visual representation through UML diagrams and detailing the system
design considerations, constraints, and requirements.

5.1 UML DIAGRAMS

This subsection presents UML diagrams to illustrate the system's functionality


and interactions.


5.1.1 USECASE DIAGRAM

The Use Case Diagram depicts the interactions between the primary actor (the
Student) and the system's functionalities.

Figure 5.1.1 Usecase Diagram

Description:

The Use Case Diagram shows that the primary actor, the Student, can interact with
the system in several ways:

 Select Topic: The student initiates a learning session by selecting a topic of


interest. This use case can be extended by the Browse Topics use case,
allowing the student to explore available learning areas.

 Play Video: Once a topic is selected, the corresponding video is played. This
use case includes functionalities like Pause Video and Rewind Video, which
are part of the standard video playback experience.

 View Recommended Video: The system may present the student with
recommended lower-level videos based on the real-time analysis of their
engagement. The student can choose to view these recommendations.

 Provide Feedback on Recommendation (Optional): The student might have the


option to provide feedback on the helpfulness or relevance of the
recommended videos, which could be used to improve the recommendation

model in the future.

5.1.2 SEQUENCE DIAGRAM

The Sequence Diagram illustrates the interactions between the system's


components during a typical learning session where a recommendation is
triggered.

Figure 5.2 Sequence Diagram


Description:
The Sequence Diagram outlines the following steps:
1. The Student selects a topic through the GUI.
2. The GUI sends a request to the System Controller for the corresponding
video.
3. The System Controller queries the Video Database to retrieve the video path
and its metadata.
4. The Video Database returns the requested information to the System
Controller.
5. The System Controller instructs the VideoPlayer to play the video and tells
the WebcamInterface to start capturing the student's facial expressions.
6. A loop begins for each video frame captured by the WebcamInterface.
7. The WebcamInterface sends the captured frame to the EmotionRecognizer.
8. The EmotionRecognizer analyzes the facial expressions in the frame and
sends the emotion analysis to the EngagementAssessor.
9. The EngagementAssessor evaluates the emotion data and sends an assessment
of the student's engagement and comprehension to the System Controller.
10. An alternative path is taken if the assessment indicates difficulty or disengagement:
o The System Controller asks the Recommender for a video
recommendation, providing the metadata of the current video.
o The Recommender queries the Video Database to find lower-level
videos related to the current topic.
o The Video Database returns a list of suitable recommended videos to
the Recommender.
o The Recommender sends the list of recommended videos to the System
Controller.
o The System Controller instructs the GUI to display these
recommendations to the Student.
o The Student may then select a recommended video through the GUI.
o The GUI informs the System Controller about the selection.
o The System Controller retrieves the path of the selected recommended
video from the Video Database.
o The Video Database provides the path to the System Controller.
o The System Controller instructs the VideoPlayer to play the
recommended video.
11. The loop of emotion analysis continues during the playback of any video.
Finally, the System Controller instructs the WebcamInterface to stop capturing
when the learning session ends or the video finishes.

5.1.3 ACTIVITY DIAGRAM

The Activity Diagram illustrates the flow of activities within the system during
a video learning session, including the emotion analysis and recommendation
process.

Figure 5.3 Activity Diagram

Description:
The Activity Diagram shows the parallel processes occurring within the system:
1. The process starts when the Student selects a topic.
2. The system then forks into two concurrent activities: playing the video and
monitoring the student's engagement.
3. The System plays the video.
4. Simultaneously, another fork begins for the engagement monitoring:
o The Webcam captures facial expressions.
o The Emotion recognizer analyzes these expressions.
o The Engagement assessor evaluates the level of engagement based on
the recognized emotions.
o A decision point checks if the engagement is low.
 If yes, the system triggers the recommendation process. The
Recommender suggests a lower-level video, and the system
displays this recommendation to the student. Another decision
point checks if the student selects the recommendation.
 If the Student selects the recommendation, the system plays
the recommended video.
 If the Student does not select the recommendation, the
system continues playing the original video.
 If no (engagement is not low), the system continues to monitor the
student's engagement.
5. The parallel processes continue until the learning session ends or the video
finishes, at which point the activity stops.

5.2 SYSTEM DESIGN


The system is designed using a modular architecture, as highlighted in the UML diagrams. Each module is responsible for a specific aspect of the system's functionality, allowing for independent development and easier maintenance.

 Presentation Layer (GUI): Provides the user interface for interaction. It handles
user input (topic selection, recommendation selection) and displays output
(video playback, recommendations).

 Application Logic Layer (System Controller, Emotion Recognizer, Engagement Assessor, Recommender): Contains the core logic of the system. The System Controller orchestrates the flow, the Emotion Recognizer analyzes facial expressions, the Engagement Assessor interprets the emotional data, and the Recommender selects appropriate alternative content.

 Data Layer (Video Database): Manages the storage and retrieval of educational
video content and associated metadata.

 Hardware Interface Layer (Webcam Interface, Video Player): Provides the necessary interfaces to interact with hardware components like the webcam and the video playback capabilities of the operating system.

Data Flow:

1. User selects a topic via the GUI.

2. GUI sends the topic to the System Controller.

3. System Controller retrieves the video path and metadata from the Video
Database.

4. System Controller instructs the Video Player to play the video and the Webcam
Interface to start capturing.

5. Webcam Interface sends video frames to the Emotion Recognizer.

6. Emotion Recognizer analyzes frames and sends emotion data to the Engagement Assessor.

7. Engagement Assessor evaluates the data and sends an engagement level to the
System Controller.

8. If engagement is low, the System Controller triggers the Recommender.

9. Recommender queries the Video Database for lower-level videos based on the
current video's metadata.

10. Video Database returns a list of recommendations to the Recommender.

11. Recommender sends the recommendations to the System Controller.

12. System Controller instructs the GUI to display the recommendations.

13. Student may select a recommendation, which is communicated back to the System Controller, leading to the playback of the new video (a code-level sketch of this loop follows).
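The per-frame portion of this flow (steps 5 through 13) can be pictured as a simple polling loop inside the System Controller. The following is a minimal sketch only, assuming hypothetical helper functions (videoIsPlaying, captureFrame, classifyEmotion, assessEngagement, getCurrentVideoMetadata, recommendVideos, displayRecommendations) and an assumed engagement threshold; none of these are MATLAB toolbox functions or finalized parts of the implementation.

Matlab
% Conceptual sketch of the System Controller's per-frame loop (steps 5-13).
% All functions below are hypothetical placeholders, not toolbox calls.
engagementThreshold = 0.4;          % assumed cut-off for "low" engagement
emotionHistory = strings(0, 1);     % rolling record of per-frame emotion labels

while videoIsPlaying()                              % playback still in progress
    frame   = captureFrame();                       % step 5: frame from webcam
    emotion = classifyEmotion(frame);               % step 6: emotion label
    emotionHistory(end + 1, 1) = emotion;

    engagement = assessEngagement(emotionHistory);  % step 7: engagement level
    if engagement < engagementThreshold             % step 8: low engagement
        currentMeta = getCurrentVideoMetadata();
        recs = recommendVideos(currentMeta);        % steps 9-11: query database
        displayRecommendations(recs);               % step 12: show in the GUI
        % Step 13 (playing a selected recommendation) would be handled by a
        % GUI callback rather than inside this loop.
    end
end

In the actual system, this loop would be driven by a timer synchronized with the webcam frame rate rather than a blocking while-loop, but the ordering of the calls mirrors the data flow listed above.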

5.3 CONSTRAINTS

Constraints are limitations or restrictions that influence the design and implementation of the system.

5.3.1 CONSTRAINT ANALYSIS

Several factors impose constraints on the development of this system:


 Technical Limitations: The accuracy and speed of real-time facial emotion
recognition are subject to the capabilities of the chosen algorithms, the quality
of the webcam feed, and the processing power of the hardware. Lighting
conditions, facial occlusions (e.g., hands, glasses), and individual differences
in facial expressions can affect recognition accuracy.
 Computational Resources: Real-time video processing and machine learning
inference can be computationally intensive. The system needs to be designed to
operate efficiently within the expected hardware limitations.
 Data Availability: The performance of the emotion recognition model heavily
relies on the availability of a sufficiently large and well-labeled training
dataset. Acquiring or creating such a dataset can be time-consuming and
resource-intensive.
 Ethical Considerations: The use of facial emotion recognition raises ethical
concerns regarding privacy and potential biases in the algorithms. The system
design must consider these aspects and aim for responsible implementation.
 User Experience: The system should provide a seamless and non-intrusive
learning experience. Overly sensitive or inaccurate emotion detection leading
to frequent and irrelevant recommendations could be disruptive and negatively
impact user satisfaction.
 Development Time and Resources: The project's timeline and available
resources (e.g., personnel, budget) will constrain the scope and complexity of
the implemented features.

 MATLAB Environment: The decision to use MATLAB as the primary
development environment imposes constraints related to the available
toolboxes, its performance characteristics compared to other languages, and
potential deployment limitations.

5.3.2 CONSTRAINTS IN DESIGN

The identified constraints influence several aspects of the system's design:

 Emotion Model Simplicity: Due to computational limitations and the need for
robust real-time performance, the emotion recognition model might initially
focus on a limited set of core emotions (engagement, confusion, disinterest)
rather than a broader spectrum of human expressions.

 Recommendation Strategy: The initial recommendation strategy will focus on suggesting lower-level videos. More complex recommendation algorithms considering learning styles or past performance might be deferred due to development time constraints.

 GUI Intrusiveness: The display of recommendations needs to be designed to be informative but not overly disruptive to the learning flow.

 Data Privacy: The system will be designed to process facial data locally without
storing it persistently to address privacy concerns.

 Algorithm Selection: The choice of machine learning algorithms for emotion recognition will be influenced by their accuracy, real-time performance, and suitability for implementation within the MATLAB environment.

5.3.3 CONSTRAINTS IN IMPLEMENTATION

The constraints also impact the implementation phase:

 MATLAB Toolbox Dependencies: The implementation will rely on the specific functionalities and limitations of MATLAB's toolboxes for GUI, image processing, and machine learning.

 Real-time Performance Optimization: Code optimization techniques within MATLAB might be necessary to ensure smooth real-time processing of video frames and emotion analysis.

 Webcam Compatibility: The system needs to be implemented to be compatible with a range of standard webcams.

 Error Handling: Robust error handling will be crucial to manage potential issues such as poor lighting conditions, webcam malfunctions, or errors in emotion recognition.

5.4 FUNCTIONAL REQUIREMENTS

Functional requirements define what the system should do.


 FR1: Topic Selection: The system shall allow the student to select an
educational topic from a list of available keywords presented through the GUI.
 FR2: Video Playback: Upon topic selection, the system shall retrieve and play
the corresponding video within the GUI. The video player shall provide
standard controls (play, pause, rewind, volume).
 FR3: Webcam Access: The system shall be able to access and capture video streams from the student's webcam during video playback.
 FR4: Facial Emotion Recognition: The system shall analyze the captured
facial expressions in real-time to identify and classify the student's emotional
state (at least for engagement, confusion, and disinterest).
 FR5: Engagement Assessment: The system shall assess the student's level of
engagement and potential comprehension based on the analyzed emotional
data over time.
 FR6: Recommendation Triggering: The system shall automatically trigger the recommendation process when the assessed engagement level falls below a predefined threshold or when indicators of confusion persist (a sketch of this trigger rule follows the list).
 FR7: Lower-Level Video Recommendation: Upon triggering, the system
shall identify and suggest relevant lower-level videos related to the current
topic from the video database.
 FR8: Recommendation Display: The system shall display the recommended
videos to the student through the GUI.
 FR9: Recommended Video Playback: The system shall allow the student to
select and play a recommended video.
 FR10: Continuous Monitoring: The system shall continuously monitor the
student's facial expressions and assess engagement throughout the video
playback (both original and recommended videos).
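FR5 and FR6 together imply a temporal rule: frame-level emotion labels are aggregated over a recent window before a recommendation is triggered. The sketch below illustrates one possible rule; the window length, the threshold value, and the function name shouldRecommend are assumptions for illustration, not part of the stated requirements.

Matlab
% Conceptual trigger rule for FR5/FR6: recommend when a large share of the
% most recent frames are classified as "confused" or "disinterested".
% Window size and threshold are illustrative assumptions.
function trigger = shouldRecommend(emotionHistory)
    windowSize = 30;    % e.g., the last 30 analysed frames
    threshold  = 0.6;   % fraction of negative frames that triggers FR6

    recent = emotionHistory(max(1, end - windowSize + 1):end);
    negativeShare = mean(recent == "confused" | recent == "disinterested");
    trigger = negativeShare >= threshold;
end

With the 1-2 second per-frame budget of NFR1, a 30-frame window corresponds to roughly half a minute to a minute of sustained difficulty before a recommendation appears, which keeps the system from reacting to momentary expressions.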

5.5 NON-FUNCTIONAL REQUIREMENTS

Non-functional requirements define the qualities of the system.

5.5.1 PERFORMANCE REQUIREMENTS

 NFR1: Real-time Emotion Analysis: The emotion recognition process should operate in near real-time, with minimal delay (e.g., processing within 1-2 seconds per frame) to provide timely feedback (a measurement sketch follows this list).

 NFR2: Video Playback Smoothness: The video playback should be smooth and uninterrupted.

 NFR3: Recommendation Response Time: When a recommendation is triggered, the system should display the recommendations to the student within a reasonable timeframe (e.g., within 3-5 seconds).

 NFR4: System Responsiveness: The GUI should be responsive to user interactions (topic selection, playback controls, recommendation selection) without significant delays.
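During development, NFR1 can be checked directly by timing each pass through the analysis pipeline. The snippet below is a minimal sketch using MATLAB's tic/toc; captureFrame and classifyEmotion are the same hypothetical helpers sketched earlier, and the 100-frame sample size is arbitrary.

Matlab
% Measure per-frame emotion-analysis latency against the NFR1 budget.
budgetSeconds = 2;                    % upper bound taken from NFR1
latencies = zeros(100, 1);            % sample 100 frames

for k = 1:numel(latencies)
    frame = captureFrame();           % hypothetical webcam capture
    t = tic;
    classifyEmotion(frame);           % hypothetical emotion analysis
    latencies(k) = toc(t);
end

fprintf('Mean latency: %.2f s (budget %.1f s)\n', mean(latencies), budgetSeconds);
fprintf('Frames over budget: %d of %d\n', ...
    sum(latencies > budgetSeconds), numel(latencies));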

5.5.2 SAFETY REQUIREMENTS

 NFR5: Data Privacy: The system should not store or transmit the captured
facial video data persistently. The emotion analysis should be performed
locally and the raw video discarded after processing.
 NFR6: Secure Webcam Access: The system should securely access the
webcam stream through the operating system's standard mechanisms and
should clearly indicate when the webcam is in use (e.g., through a visual
indicator).
 NFR7: No Unauthorized Data Collection: The system should not collect any
personal data beyond what is necessary for its core functionality (i.e., facial
expressions during learning sessions for real-time analysis). Any optional
feedback mechanisms should be explicitly consented to by the user.

CHAPTER 6
TESTING

A robust testing strategy is crucial to ensure the quality, reliability, and effectiveness of the intelligent video-based e-learning system. This section outlines the different types of testing that will be employed throughout the development lifecycle.

6.1 TYPES OF TESTING

The testing process will encompass various levels and types to validate different
aspects of the system, from individual components to the integrated whole.

6.1.1 UNIT TESTING

Purpose: Unit testing focuses on verifying the functionality of individual software components or modules in isolation. The goal is to ensure that each unit of code performs its intended task correctly and without errors.
Scope: In this project, unit testing will be applied to individual MATLAB functions
and modules, such as:
 Facial Feature Extraction Functions (if implemented separately): Testing
functions responsible for identifying and extracting key facial landmarks from
an image frame.
 Emotion Classification Functions: Verifying the logic and accuracy of the emotion classification model for various input facial feature sets. This would involve testing with known "ground truth" data to assess the model's precision and recall for each target emotion (engagement, confusion, disinterest).
 Engagement Assessment Logic: Testing the rules and algorithms used to
interpret the sequence of emotion classifications and determine the overall
engagement level. This would involve feeding the module with simulated
emotion data representing different engagement scenarios and verifying that
the output assessment is correct.
 Recommendation Logic: Testing the function that selects lower-level videos
based on the metadata of the current video. This would involve setting up test
video metadata and ensuring that the recommendation function returns the
expected set of lower-level videos based on defined criteria (keywords, topic
level).
 GUI Component Logic: Testing the individual event handlers and callback
functions associated with GUI elements (e.g., button clicks for topic selection,
video playback controls).
Methodology:
 Test Cases: For each unit, specific test cases will be designed to cover various
input scenarios, including normal cases, boundary cases, and error conditions.
 Test Data: Test data will be created to simulate different inputs to the units
under test. For the emotion classification unit, this would involve labeled sets
of facial features. For the recommendation logic, this would involve sample
video metadata.
 Automation (where feasible): MATLAB's testing framework (matlab.unittest) will be utilized to automate the execution of unit tests and generate reports on the test results. This allows for efficient and repeatable testing throughout the development process.
 Assertions: Within the test cases, assertions will be used to verify that the
actual output of the unit matches the expected output.
Example (Conceptual - MATLAB Test Script for Emotion Classifier):
Matlab
% Conceptual parameterized test for the emotion classifier.
% 'classifyEmotion' is the (hypothetical) function under test.
classdef TestEmotionClassifier < matlab.unittest.TestCase

    properties (TestParameter)
        % Each parameter pairs a sample facial-feature vector with its
        % expected emotion label.
        sample = struct( ...
            'engaged',       struct('features', [0.1, 0.2, 0.3], 'label', "engaged"), ...
            'confused',      struct('features', [0.5, 0.4, 0.2], 'label', "confused"), ...
            'disinterested', struct('features', [0.2, 0.6, 0.1], 'label', "disinterested"));
    end

    methods (Test, TestTags = {'EmotionClassifier'})
        function testEmotionClassification(testCase, sample)
            predictedEmotion = classifyEmotion(sample.features);
            testCase.verifyEqual(string(predictedEmotion), sample.label);
        end
    end
end
6.1.2 INTEGRATION TESTING
Purpose: Integration testing focuses on verifying the interaction and communication between different units or modules that have been individually tested. The goal is to ensure that these integrated components work together correctly.
Scope: In this project, integration testing will focus on the interactions between:

 Webcam Interface and Emotion Recognition Module: Ensuring that the video
frames captured by the webcam are correctly passed to the emotion recognition
module and that the module processes them without errors.

 Emotion Recognition Module and Engagement Assessment Logic: Verifying that the emotion classifications from the recognition module are correctly received and processed by the engagement assessment logic to produce meaningful engagement levels.

 Engagement Assessment Logic and Recommendation Model: Ensuring that the recommendation model is triggered correctly based on the output of the engagement assessment logic and that the relevant context (current video metadata) is passed to the recommender.

 Recommendation Model and Video Database: Verifying that the recommendation model can correctly query the video database and retrieve appropriate lower-level videos based on the provided criteria.

 System Controller and All Other Modules: Testing the central orchestration of
the system, ensuring that the System Controller correctly initiates and manages
the interactions between the GUI, video player, webcam interface, emotion
recognition, assessment, and recommendation modules.

 GUI and System Controller: Ensuring that user actions in the GUI (topic
selection, recommendation selection) are correctly transmitted to the System
Controller and that the GUI updates correctly based on the Controller's
instructions (video playback, displaying recommendations).

Methodology:

 Scenario-Based Testing: Integration tests will be designed based on typical user scenarios, such as selecting a topic, watching a video, experiencing low engagement, receiving a recommendation, and selecting a recommended video.
 Stubbing and Mocking: In cases where a module is not yet fully developed or
has external dependencies, stubs (simplified replacements for a module) and
mocks (objects that simulate the behavior of dependencies for verification
purposes) might be used to isolate the interaction being tested.

 Incremental Integration: Modules will be integrated and tested incrementally, starting with closely related units and gradually integrating more components. This makes it easier to identify and isolate integration issues.

 Test Data: Test data relevant to the interaction between modules will be used.
For example, for testing the interaction between the assessment logic and the
recommender, simulated engagement levels and current video metadata will be
used.

Example (Conceptual - Integration Test Scenario):

1. Setup: Start the system and select a topic that has associated lower-level videos
in the database.

2. Action: Simulate a scenario where the emotion recognition module consistently outputs "confusion" for a certain duration (this might involve manually feeding simulated emotion data if the full emotion recognition pipeline isn't ready; a stub sketch follows this scenario).

3. Expected Outcome: Verify that the engagement assessment logic correctly identifies low comprehension and triggers the recommendation process. Verify that the GUI displays a list of relevant lower-level videos.
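For step 2 of this scenario, the emotion recognition module can be replaced by a simple stub that always reports confusion, so the assessment and recommendation path can be exercised before the real classifier is ready. The function below is a sketch of such a stub; its name and signature are assumptions chosen to mirror the hypothetical classifyEmotion interface used elsewhere in this report.

Matlab
% Stub replacement for the emotion recognition module, used only during
% integration testing. It ignores the input frame and always reports
% "confused", which should push the engagement assessor below its
% threshold and trigger the recommendation path.
function emotion = classifyEmotionStub(~)
    emotion = "confused";
end

Injecting the stub in place of the real classifier (for example, via a function handle passed to the System Controller) keeps the integration test independent of lighting conditions and webcam quality.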

6.1.3 FUNCTIONAL TESTING

Purpose: Functional testing focuses on verifying that the system meets the
specified functional requirements. It tests the system from an end-user perspective,
ensuring that all the defined functionalities work as expected.

Scope: Functional testing will cover all the functional requirements outlined in Section 5.4, including:

 FR1: Topic Selection: Verifying that the student can successfully select a topic
from the GUI and that the system correctly identifies the corresponding video.

 FR2: Video Playback: Testing the video player controls (play, pause, rewind)
and ensuring smooth playback of the selected video.

 FR3: Webcam Access: Verifying that the system can access the webcam and that
the webcam is active during video playback (potentially through a visual
indicator).

 FR4: Facial Emotion Recognition: Testing the system's ability to detect and
classify emotions under various simulated conditions (e.g., different lighting,
facial expressions). This might involve manual observation of the emotion
output for controlled scenarios.

 FR5: Engagement Assessment: Observing whether the system correctly assesses engagement levels based on simulated or actual facial expressions during video playback.

 FR6: Recommendation Triggering: Verifying that the recommendation process is triggered when the system detects low engagement or persistent confusion (again, potentially through controlled scenarios).

 FR7: Lower-Level Video Recommendation: Ensuring that the system recommends relevant lower-level videos when triggered. This involves checking the titles and descriptions of the recommended videos.

 FR8: Recommendation Display: Verifying that the recommended videos are displayed correctly within the GUI.

 FR9: Recommended Video Playback: Testing the student's ability to select and
play a recommended video.
 FR10: Continuous Monitoring: Observing that the system continuously monitors
facial expressions throughout the learning session.

Methodology:

 Black-Box Testing: Functional testing will primarily be black-box testing, meaning that the internal workings of the system are not considered. Testers will interact with the system as end-users and verify the outputs against the expected behavior defined in the functional requirements.

 Test Scenarios: Detailed test scenarios will be created for each functional
requirement, outlining the steps to be performed and the expected outcomes.

 Test Data: Realistic test data, including various topic selections and simulated
user interactions, will be used. For emotion recognition testing within functional
testing, controlled scenarios with testers exhibiting different expressions might
be used for qualitative evaluation.

 User Acceptance Testing (UAT) Considerations: While formal UAT might be outside the initial scope, functional testing will aim to cover key user workflows and ensure the system is usable and meets the basic functional needs.

Example (Conceptual - Functional Test Case):

Test Case ID: FT_006

Functional Requirement: FR6: Recommendation Triggering

Test Steps:

1. Start the system and select a topic.

2. During video playback, the tester intentionally exhibits facial expressions associated with confusion (e.g., furrowed brows, squinted eyes) for a sustained period (e.g., 30 seconds).
3. Observe the system's behavior.

Expected Result:

The system should trigger the recommendation process and display a list of lower-
level videos related to the current topic within a reasonable timeframe.

6.1.4 SYSTEM TESTING

Purpose: System testing aims to evaluate the complete integrated system as a whole. It verifies that all the components work together correctly and that the system meets the overall system requirements, including both functional and non-functional aspects.
Scope: System testing will encompass:
 End-to-End Functionality: Testing complete user workflows, from topic
selection to video playback, emotion monitoring, recommendation triggering,
and playing recommended videos.
 Non-Functional Requirements: Evaluating aspects such as performance
(real-time processing speed, responsiveness), usability (ease of use of the
GUI), reliability (stability of the system during continuous use), and security
(data privacy regarding webcam feed).
 Stress Testing (if applicable): Subjecting the system to extreme conditions
(e.g., rapid changes in facial expressions, prolonged use) to identify potential
bottlenecks or failure points.
 Recovery Testing (if applicable): Testing the system's ability to recover from
errors or unexpected interruptions.
 Compatibility Testing (if applicable): Ensuring the system functions correctly on different hardware configurations (e.g., different webcams, processing speeds) and operating systems (if the scope includes multi-platform support).
Methodology:
 Scenario-Based Testing: System tests will be based on comprehensive end-to-
end scenarios that mimic real user interactions.
 Performance Monitoring: Tools might be used to monitor the system's
performance (CPU usage, memory consumption) during video processing and
emotion analysis.
 Usability Evaluation: Heuristic evaluation or user feedback sessions might be
conducted to assess the usability of the GUI and the overall learning
experience.
 Error Simulation: Testers might intentionally introduce errors (e.g.,
disconnecting the webcam) to observe the system's error handling capabilities.
 Real-World Simulation: System testing should be conducted in an
environment that closely resembles the intended deployment environment
(e.g., using typical student hardware).
Example (Conceptual - System Test Scenario):
Test Case ID: ST_001
Test Objective: Verify the complete learning flow with recommendation triggering.
Test Steps:
1. Start the system on a standard laptop with an integrated webcam.
2. Select a topic.
3. Play the video for several minutes while exhibiting periods of attentive
expressions and then sustained expressions of confusion.
4. Observe if the system triggers a recommendation for a lower-level video.
5. Select and play the recommended video.
6. Continue watching the recommended video while exhibiting attentive
expressions.
7. Observe if the system continues to function without errors.
Expected Result:
The system should play the initial video smoothly, detect the sustained confusion,
display relevant lower-level video recommendations, play the selected recommended
video, and continue to monitor facial expressions throughout the session without
crashing or exhibiting performance issues. The webcam should be active only during
video playback.

By employing a comprehensive testing strategy encompassing unit,
integration, functional, and system testing, the project aims to deliver a high-quality,
reliable, and effective intelligent video-based e-learning system that meets the
defined requirements and provides a positive learning experience for students. The
results of these testing phases will provide valuable feedback for iterative
development and refinement of the system.

CHAPTER 7

CONCLUSION AND FUTURE ENHANCEMENTS

7.1 CONCLUSION

This project embarked on the development of an intelligent video-based e-learning system, leveraging machine learning techniques to personalize and enhance the
student learning experience. The core innovation lies in the integration of real-time
facial emotion recognition to dynamically adapt the learning content by
recommending lower-level foundational videos when signs of confusion or
disengagement are detected. By moving beyond the traditional static delivery of
video content, this system aimed to create a more responsive and student-centric
learning environment.
The design phase outlined a modular architecture comprising a user-friendly GUI, a
video player, a webcam interface, a facial emotion recognition module, an
engagement and comprehension assessment logic component, a dynamic
recommendation model, a video database, and a central system controller. UML
diagrams, including Use Case, Sequence, and Activity diagrams, provided a visual
representation of the system's functionality and the interactions between its
components. The design considerations also addressed various constraints, including
technical limitations of emotion recognition, computational resources, data
availability, ethical considerations, and the need for a positive user experience.
The functional requirements specified the core capabilities of the system, such as
topic selection, video playback, real-time emotion analysis, engagement assessment,
recommendation triggering, lower-level video recommendation, and continuous
monitoring. Non-functional requirements focused on performance (real-time
processing, responsiveness), and safety (data privacy, secure webcam access).
The testing strategy emphasized a multi-level approach, encompassing unit testing of
individual modules, integration testing of component interactions, functional testing
of end-user functionalities, and system testing of the complete integrated system. This
comprehensive testing plan aimed to ensure the reliability, effectiveness, and
usability of the developed system.
While the detailed implementation and evaluation are beyond the scope of this design
document, the conceptualization and planning presented here lay a strong foundation
for the development of an innovative e-learning tool. The potential benefits of such a
system are significant. By proactively addressing moments of learning difficulty with
targeted foundational content, the system can potentially improve student
comprehension, boost engagement, and reduce frustration. The integration of emotion
recognition adds a layer of affective intelligence, allowing the system to respond not
just to explicit user actions but also to implicit emotional cues, paving the way for a
more empathetic and supportive learning experience.
In conclusion, this project presents a promising approach to enhancing video-based e-
learning through the intelligent application of machine learning. The proposed system
architecture, design considerations, and testing strategy provide a roadmap for
creating a more personalized, adaptive, and emotionally aware learning environment.
The successful implementation of this concept could contribute significantly to the
evolution of online education, making it more effective and engaging for a diverse
range of learners.

7.2 FUTURE ENHANCEMENTS

The current scope of this project focuses on the core functionality of recommending lower-level videos based on detected confusion or disengagement. However, several avenues exist for future enhancements to build upon this foundation and further enrich the learning experience:
1. Enhanced Emotion Recognition: The current system focuses on a limited set
of emotions (engagement, confusion, disinterest). Future enhancements could
involve:

o Recognizing a wider range of emotions: Incorporating the detection of
frustration, boredom, curiosity, or even subtle indicators of
understanding. This would provide a more nuanced understanding of the
learner's emotional state.
o Integrating intensity of emotions: Not just classifying emotions but
also gauging their intensity could allow for more fine-grained
adjustments to the learning experience. For example, a mild expression
of confusion might trigger a subtle hint, while intense frustration could
lead to a more direct recommendation.
o Personalized Emotion Baselines: Recognizing that emotional
expressions can vary between individuals, future systems could learn
personalized baselines for each user to improve the accuracy and
relevance of emotion detection.
2. More Sophisticated Recommendation Strategies: The current
recommendation model primarily suggests lower-level videos. Future
enhancements could include:
o Recommending alternative explanations at the same level: If a
student shows confusion, the system could offer a video explaining the
same concept but using a different teaching style, examples, or visual
aids.
o Suggesting interactive exercises or quizzes: Instead of just
recommending another video, the system could suggest a short
interactive exercise or quiz to help the student actively engage with the
material and identify specific areas of difficulty.
o Providing summaries or key takeaways: If disengagement is detected,
the system could offer a brief summary of the current video or highlight
the key takeaways to help the student refocus.
o Learning Style Adaptation: Integrating models that infer a student's
preferred learning style (e.g., visual, auditory, kinesthetic) and
recommending videos that align with that style.
o Collaborative Recommendations: If multiple students are learning the
same material, the system could potentially leverage anonymized data on
common points of difficulty to provide more effective recommendations.
3. Integration with Other Learning Resources: The system could be enhanced
to integrate with other learning resources, such as:
o Text-based explanations or articles: Offering alternative formats for
learning the same concepts.
o Discussion forums: Suggesting relevant discussions where students
might find answers to their questions or connect with peers.
o External knowledge bases: Linking to relevant articles or resources on
the web for further exploration.
4. Instructor Feedback and Analytics: The system could provide valuable
feedback to instructors on student engagement and areas of difficulty within
their video content. This could include:
o Aggregated emotion data: Showing instructors where students
commonly exhibit confusion or disinterest in their videos.
o Effectiveness of recommendations: Tracking whether students who
receive recommendations find them helpful.
o Identifying areas for content improvement: Providing insights into
sections of videos that might need clearer explanations or additional
foundational material.
5. Gamification and Engagement Techniques: To further enhance engagement,
the system could incorporate gamification elements, such as:
o Points or badges for attentive learning.
o Interactive challenges related to the video content.
o Progress tracking and visualization.
6. Multi-Modal Emotion Recognition: Relying solely on facial expressions
might not capture the full spectrum of a learner's emotional state. Future
systems could integrate other modalities, such as:
o Analysis of speech patterns: Detecting changes in tone, pauses, or
hesitant speech that might indicate confusion.
o Eye-tracking: Monitoring gaze patterns to understand where the
student's attention is focused.
o Physiological signals: Integrating data from wearable sensors (e.g.,
heart rate, skin conductance) to gain a deeper understanding of the
learner's emotional and cognitive state.
7. Personalized Learning Paths: Over time, the system could learn about a
student's individual learning patterns, strengths, and weaknesses to create
personalized learning paths through a series of videos and other resources.
8. Improved GUI and User Experience: Continuous improvement of the GUI
based on user feedback and usability testing can further enhance the learning
experience. This could include more intuitive navigation, clearer presentation
of recommendations, and customizable learning settings.
9. Scalability and Deployment: Future work could focus on making the system
more scalable and deployable across different platforms (e.g., web-based,
mobile applications) to reach a wider audience.
Implementing these future enhancements would require further research,
development, and testing. However, they represent exciting possibilities for
creating even more intelligent, adaptive, and engaging e-learning experiences that
truly cater to the individual needs of each student. The integration of advanced
emotion recognition, sophisticated recommendation strategies, and a holistic
view of the learning process holds the key to unlocking the full potential of
technology in education.

APPENDICES
A1: OUTPUT AND SCREENSHOT
