
FACE RECOGNITION BASED STUDENT

ATTENDANCE SYSTEM

A PROJECT REPORT

Submitted by

SUBALAKSHMI S (191IG137)
UMAIYAL M R (191IG141)

In partial fulfilment for the award of the degree


of
BACHELOR OF ENGINEERING
in

INFORMATION SCIENCE AND ENGINEERING

BANNARI AMMAN INSTITUTE OF TECHNOLOGY


(An Autonomous Institution Affiliated to Anna University, Chennai)
SATHYAMANGALAM - 638401

ANNA UNIVERSITY: CHENNAI 600025

MARCH 2023

BONAFIDE CERTIFICATE

Certified that this project report “FACE RECOGNITION BASED STUDENT ATTENDANCE SYSTEM” is the bonafide work of SUBALAKSHMI S (191IG137) and UMAIYAL M R (191IG141), who carried out the project work under my supervision.

SIGNATURE
Ms. Nandhini S S
HEAD OF THE DEPARTMENT,
Assistant Professor Level III,
Department of Information Science and Engineering,
Bannari Amman Institute of Technology,
Erode - 638401.

SIGNATURE
Mrs. Kiruthiga R
SUPERVISOR,
Assistant Professor,
Department of Artificial Intelligence and Data Science,
Bannari Amman Institute of Technology,
Erode - 638401.

Submitted for Project Viva Voce examination held on ………………

Internal Examiner External Examiner

DECLARATION

We affirm that the project work titled “FACE RECOGNITION BASED STUDENT ATTENDANCE SYSTEM” being submitted in partial fulfilment for the award of the degree of Bachelor of Engineering in INFORMATION SCIENCE AND ENGINEERING is the record of original work done by us under the guidance of Mrs. Kiruthiga R, Assistant Professor, Department of Artificial Intelligence and Data Science. It has not formed a part of any other project work(s) submitted for the award of any degree or diploma, either in this or any other University.

SUBALAKSHMI S UMAIYAL M R
(191IG137) (191IG141)

I certify that the declaration made above by the candidates is true.

SIGNATURE
Mrs. Kiruthiga R
Supervisor,
Assistant Professor,
Department of Artificial Intelligence and
Data Science,
Bannari Amman Institute of Technology,
Erode - 638401.

ACKNOWLEDGEMENT

We would like to express heartfelt thanks to our esteemed Chairman Thiru. Dr. Balasubramaniam. S. V, Trustee Dr. Vijayakumar. M. P, and the respected Principal Dr. Palanisamy. C for providing excellent facilities and support during the course of study in this institute.

We are grateful to Ms. Nandhini S S, Head of the Department, Department of Information Science and Engineering, for her valuable suggestions to carry out the project work successfully.

We wish to express our sincere thanks to our faculty guide Mrs. Kiruthiga R, Assistant Professor, Department of Artificial Intelligence and Data Science, for her constructive ideas, inspiration, encouragement, excellent guidance and much-needed technical support extended to complete our project work.

We would like to thank our friends, faculty and non-teaching


staff who have directly and indirectly contributed to the success of this project.

SUBALAKSHMI S (191IG137)
UMAIYAL M R (191IG141)

ABSTRACT

The face is a representation of one's identity. Hence, we propose an automated student attendance system based on face recognition, combined with a student presence assurance system. Face recognition is very useful in real-life applications, especially in security control systems. For a better user interface, the system is paired with an Android app developed in Android Studio. Under usual circumstances, a student's face is detected and attendance is marked. In the proposed methodology, however, a presence assurance stage runs first: attendance is not marked unless the student's presence time is above the required percentage. To achieve this, student locations are visualized on a GeoDataFrame map covering the entire classroom, and a notification system raises regular alerts for absence. Only after the presence assurance stage does the face recognition based attendance marking take place. The face ROI is detected and segmented from the video frame using the Viola-Jones algorithm. In the pre-processing stage, images are rescaled where necessary to prevent loss of information, median filtering is applied to remove noise, and colour images are converted to grayscale. Contrast-limited adaptive histogram equalization (CLAHE) is then applied to enhance the contrast of the images. In the face recognition stage, enhanced local binary patterns (LBP) and principal component analysis (PCA) are applied to extract features from the facial images. In our proposed approach, the enhanced LBP outperforms the original LBP by reducing the illumination effect and increasing the recognition rate. The facial images are then classified and recognized based on the best result obtained from the combination of enhanced LBP and PCA. Finally, the attendance of the recognized student is marked. A student who is not registered can also register on the spot, and a notification is given if a student signs in more than once. The average recognition accuracy is 100 % for good quality images, 94.12 % for low quality images, and 95.76 % on the Yale face database when two images per person are used for training.

TABLE OF CONTENTS

CHAPTER NO   TITLE

             ACKNOWLEDGEMENT
             ABSTRACT
             TABLE OF CONTENTS
             LIST OF FIGURES
             LIST OF TABLES
             LIST OF SYMBOLS AND ABBREVIATIONS

1            INTRODUCTION
             1.1 Problem Statement
             1.2 Objectives

2            LITERATURE REVIEW
             2.1 Attendance Management
                 2.1.1 Viola-Jones Algorithm

3            PROPOSED SYSTEM
             3.1 Face Recognition
             3.2 Input Images
                 3.2.1 Limitations of Images
             3.3 Face Detection
             3.4 Pre-Processing
             3.5 Feature Extraction
                 3.5.1 Working Principle of Original LBP
                 3.5.2 Working Principle of Proposed LBP
                 3.5.3 Working Principle of PCA
             3.6 Feature Classification
             3.7 Location Generation
             3.8 GeoDataFrame - CRS
             3.9 Connect Android Studio and Google Colab

4            REQUIREMENT SPECIFICATION
             4.1 Hardware Requirements
             4.2 Software Requirements
             4.3 Software Descriptions
                 4.3.1 Python
                       4.3.1.1 Numpy
                       4.3.1.2 Pandas
                       4.3.1.3 Sklearn
                       4.3.1.4 Matplotlib
                       4.3.1.5 Seaborn
                 4.3.2 Java
                 4.3.3 Google Colab
                 4.3.4 Android Studio
                 4.3.5 Flask API

5            RESULT AND DISCUSSION
             5.1 Results
             5.2 Discussions
             5.3 Comparison with Previous Researches

6            REFERENCES
LIST OF FIGURES

FIGURE NO   DESCRIPTION

2.1         Image integration
3.1         Training part
3.2         Recognition part
3.3         Sample images in Yale Face Database
3.4         Sample of high quality images
3.5         Sample of low quality images
3.6         Median filtering done on three channels
3.7         Median filtering done on single channel
3.8         Conversion of image to grayscale image
3.9         Contrast improvement
3.10        Example of LBP conversion
3.11        LBP with different size
3.12        Proposed LBP operator with radius 2 and its encoding pattern
3.13        Complete flow chart
5.1         Login page
5.2         Access location
5.3         Permission page
5.4         Generation page
5.5         Classroom selection
LIST OF TABLES

TABLE NO    TITLE

2.1         Advantages & disadvantages recorded from the 2017 paper proposed by Arun Katara et al.
2.2         Difficulties faced during face detection
2.3         Advantages & disadvantages of various methodologies
2.4         Different methods of contrast improvement
2.5         Accuracy of different face recognition methodologies
5.1         Summary of comparison with previous researches
LIST OF SYMBOLS / ABBREVIATIONS

χ²      Chi-square statistic
𝑑       distance
𝑥       input feature points
𝑦       trained feature points
𝑚𝑥      mean of x
𝑆𝑥      covariance matrix of x
𝑋𝑐      x coordinate of center pixel
𝑌𝑐      y coordinate of center pixel
𝑋𝑝      x coordinate of neighbour pixel
𝑌𝑝      y coordinate of neighbour pixel
𝑅       radius
𝜃       angle
𝑃       total sampling points
𝑁       total number of images
𝑀       length and height of images
𝛤𝑖      column vector
𝜑       mean face
Φ𝑖      mean face subtracted vector
𝐴       matrix with mean face removed
𝐴ᵀ      transpose of 𝐴
𝐶       covariance matrix
𝑢𝑖      eigenvector of 𝐴𝐴ᵀ
𝑣𝑖      eigenvector of 𝐴ᵀ𝐴
λ       eigenvalue
𝑈       eigenface matrix
𝑈ᵀ      transpose of eigenface matrix
Ω       projected image
Ω𝑖      projected image of training image i
CHAPTER 1

INTRODUCTION

The main objective is to build an app for a face recognition based student attendance system. Several aspects of face recognition are considered to achieve better results: limiting the test and training images to frontal, upright facial images and using the Viola-Jones algorithm for segmentation help improve performance, while median filtering for noise removal and CLAHE for contrast enhancement improve image quality. Along with face recognition, the student location is regenerated every 5 minutes to ensure accurate attendance. Accurate attendance depends on how many minutes the student stays in the class; if the presence time falls below a minimum percentage, the student is considered absent. An automatic message delivery system for students not present in class is provided among the admin features. Students have to register their basic details in the portal/app to be recognized, and registration of student images can be done easily through an efficient user interface.

1.1 Problem Statement

The existing, classical student attendance system involves manpower and is time consuming. Other forms of attendance systems require hardware installation, which adds manpower and maintenance cost. Scaling hardware-based systems to a large number of students requires numerous machine installations along with power provisioning. Maintaining these machines involves frequent upgrades and updates that require technical assistance, and any disputes or damages can only be handled by technicians. Face recognition systems have been around for a while, but there is always room for improvement. One of the main challenges of face recognition is dealing with variations in head position and facial expressions, which can make it difficult to recognize faces accurately.

To address these challenges, researchers have developed various


techniques such as deep learning-based approaches and 3D face recognition.
These approaches have shown promising results in improving the accuracy of
face recognition systems.

In addition to accuracy, it's also important to consider real-time


performance and memory usage. With advancements in hardware and
software technologies, it's now possible to develop face recognition systems
that are both highly accurate and fast.
Combining a face recognition system with a student presence assurance
system can be a powerful tool for tracking attendance in real-time. This can
help reduce the time and effort required for manual attendance tracking and
improve accuracy. Overall, developing a real-time face recognition app with
high accuracy and fast computation time can provide numerous benefits in
various applications.

1.2 Objectives

To build a real-time face recognition app with a student presence assurance system, the objectives are:

● To generate the latitude and longitude of the classroom and the students.
● To create a selection form for choosing the classroom size.
● To visualize the geo-dataframe map.
● To filter the recorded data to remove unwanted noise.
● To classify the extracted features in order to carry out the recognition step.
● To record the attendance and mark it depending on the attendance percentage.

CHAPTER 2

LITERATURE REVIEW

2.1 Attendance management

RFID card systems can be simple to implement and use, but they do have
limitations. For example, they can be lost or stolen, and it's possible for
someone to use another person's card to gain access or mark attendance.
Fingerprint and iris recognition systems can be more secure, but they do
require more time for verification and can be vulnerable to fraudulent
approaches such as spoofing. These methods also involve the integration of
sensitive biometric data, which raises privacy concerns. Voice recognition can
also be less accurate, especially in noisy environments or for individuals with
speech impediments.

Given these limitations, face recognition can be a viable solution for


attendance tracking. It offers a good balance of accuracy, speed, and ease of
use. With advances in technology, face recognition systems have become more
reliable and secure, making them a popular choice for various applications.
Additionally, face recognition does not require physical contact or sensitive
data integration, which can be important factors for privacy and convenience.

Table 2.1 Advantages & disadvantages recorded from the 2017 paper proposed by Arun Katara et al.

Table 2.2 Difficulties faced during face detection

Table 2.3 Advantages & disadvantages of various methodologies
2.1.1 Viola-Jones Algorithm

The Viola-Jones algorithm, introduced by P. Viola and M. J. Jones in 2001, is a


popular algorithm for face detection in images or video frames.

The algorithm consists of four main parts:

● Haar feature selection: A set of rectangular features are computed for


each image window to distinguish between faces and non-faces.

● Integral image creation: The integral image is computed to speed


up the calculation of Haar features.

● Adaboost: A machine learning algorithm is used to select the best


features and create a strong classifier that can classify face and non-
face regions.

● Cascading process: The classifier is applied to image windows in a cascading manner, where image windows classified as non-face regions are discarded early to reduce computation time.

Overall, the Viola-Jones algorithm has been widely adopted due to its high
accuracy and fast computation time, making it suitable for real-time face
detection applications.

Figure 2.1 Image integration

The value of the integral image at a pixel is the sum of all the pixels above it and to its left.
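As a small illustration of this idea (not taken from the project code), the NumPy sketch below builds an integral image and uses it to sum the pixels of an arbitrary rectangle with only four look-ups, which is what makes Haar feature evaluation fast.

    import numpy as np

    def integral_image(img):
        # ii[y, x] = sum of img[0..y, 0..x] (cumulative sum over rows, then columns)
        return img.cumsum(axis=0).cumsum(axis=1)

    def rect_sum(ii, top, left, bottom, right):
        # Sum of img[top..bottom, left..right] from four integral-image look-ups.
        total = ii[bottom, right]
        if top > 0:
            total -= ii[top - 1, right]
        if left > 0:
            total -= ii[bottom, left - 1]
        if top > 0 and left > 0:
            total += ii[top - 1, left - 1]
        return total

    img = np.arange(16, dtype=np.int64).reshape(4, 4)
    ii = integral_image(img)
    assert rect_sum(ii, 1, 1, 2, 2) == img[1:3, 1:3].sum()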
Table 2.4 Different methods of Contrast Improvement

Table 2.5 Accuracy of Different face recognition methodologies.

CHAPTER 3

PROPOSED SYSTEM

3.1 Proposed System – Face Recognition

The system described here is the methodology for performing face recognition-
based student attendance system. The approach involves several stages, including image capture, pre-processing, feature extraction, and subjective selection. The first
stage of the methodology is image capture. This involves using a simple interface
to capture facial images. Next, the captured facial images are pre-processed to
remove any noise. It's worth noting that the use of both LBP and PCA feature
extraction methods is a common approach in many face recognition systems. LBP
is particularly effective at capturing local facial features, while PCA can reduce
the dimensionality of the feature space, making the classification process more
efficient. The combination of these two techniques can lead to improved accuracy
in face recognition systems. Additionally, the use of subjective selection to choose
the most relevant features can further improve accuracy, as it allows the algorithm
to focus on the most distinctive features of each individual's face.
The flow chart for the proposed system is categorized into two parts,
shown in Figure 3.1 and Figure 3.2 respectively.

3.2 Input Images

Using existing face databases like the Yale face database can be useful in
designing and evaluating a real-time face recognition student attendance system.
The Yale face database provides a diverse set of images that can be used to train
and test the performance of the system. By using a pre-existing database, the
system can be developed and tested with a standardized set of images that have
already been curated and labeled. This can save time and resources that would
otherwise be required to create a custom database from scratch. Additionally,
using a pre-existing database can provide a benchmark for performance comparison with other systems that have used the same database. The Yale face database is a popular choice for evaluating systems because it includes images of individuals in a variety of poses, expressions, and lighting conditions. This ensures that the system can recognize faces consistently in different situations and conditions. Overall, while a custom database is ideal for a real-time face recognition student attendance system, existing databases like the Yale face database are still a valuable tool for designing and evaluating the viability of the system.

3.2.1 Limitations of the Images

The proposed approach has certain conditions that must be satisfied in order for
it to perform well. These conditions include:
Frontal, upright and single face: The input image for the proposed approach must be a frontal, upright image of a single face. This is because the system is designed to recognize specific facial features that are only visible in a front-facing image.
Facial images with and without glasses: While the system is designed to
recognize faces with and without glasses, it is recommended that students
provide both facial images with and without glasses to increase the
accuracy of recognition.
Same device for training and testing: The training and testing images
should be captured using the same device to avoid any quality differences
that may affect the accuracy of recognition.
Student registration: In order to be recognized by the system, students must
register and provide their facial images for training. This can be done
through a user-friendly interface that facilitates the enrolment process.
By satisfying these conditions, the proposed approach can be optimized to
achieve high accuracy in recognizing students and recording attendance.

3.3 Face Detection

Viola-Jones object detection framework will be used to detect the face from the
video camera recording frame. The Viola-Jones algorithm works by using a
cascade of simple Haar-like features to classify sub-windows of an image as
either "face" or "non-face". Viola-Jones algorithm has limitations, and one of
them is that the facial image has to be a frontal upright image. This means that
the algorithm is optimized for detecting faces that are oriented towards the
camera in a video frame. If a face is not oriented towards the camera, or is
partially obscured, the algorithm may not be able to detect it accurately. Despite
its limitations, the Viola-Jones algorithm is still widely used for face detection
in real-world applications due to its speed and accuracy in detecting faces that
are oriented towards the camera. By using this framework in a student attendance
system, the system can quickly and accurately detect and recognize the faces of
registered students and record their attendance.
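A minimal sketch of this detection step is given below. It is illustrative only and assumes OpenCV's bundled frontal-face Haar cascade and typical detection parameters rather than the exact cascade and settings used in the project.

    import cv2

    # Load OpenCV's pre-trained frontal-face Haar cascade (a Viola-Jones style detector).
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    face_cascade = cv2.CascadeClassifier(cascade_path)

    frame = cv2.imread("frame.jpg")                      # one frame grabbed from the camera
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # detectMultiScale slides the cascade over the image at multiple scales.
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                          minNeighbors=5, minSize=(60, 60))

    for (x, y, w, h) in faces:
        face_roi = gray[y:y + h, x:x + w]                # segmented face ROI for later stages
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)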
3.4 Pre-Processing

Pre-processing steps are essential for improving the quality of the images before proceeding to feature extraction. The pre-processing steps used in this system are:
● Scaling of image: Scaling of the image involves resizing the image to a

particular size, which is usually smaller or larger than the original size. This
step is important because it reduces the computational load during the feature
extraction process, which makes it faster and more efficient.
● Median filtering: Median filtering is a popular pre-processing technique used
to remove noise from images. This technique involves replacing each pixel
in the image with the median value of its neighboring pixels. This helps to remove salt-and-pepper noise and other types of random noise in the image.
● Conversion of color images to grayscale images: This step involves
converting a color image to a grayscale image. Grayscale images have only
one channel (i.e., black and white), which makes it easier to perform certain
image processing tasks.
● Adaptive histogram equalization: Histogram equalization is a technique used

to improve the contrast of an image. Adaptive histogram equalization is a


variant of this technique that enhances local contrast by dividing the image
into smaller regions and equalizing the histogram of each region separately.
This helps to improve the overall quality of the image by enhancing the
details in the darker and brighter regions.
By applying these pre-processing steps, the quality of the images is improved, making them more suitable for feature extraction.
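A minimal OpenCV sketch of this pre-processing chain is shown below. The target size, filter kernel and CLAHE parameters are assumed values for illustration, not the exact settings used in the project.

    import cv2

    def preprocess(img, size=(200, 200)):
        # Scaling: resize to a fixed working size (the size here is an assumed value).
        img = cv2.resize(img, size, interpolation=cv2.INTER_AREA)

        # Median filtering: a 3x3 median filter removes salt-and-pepper noise.
        img = cv2.medianBlur(img, 3)

        # Conversion of the colour image to a single-channel grayscale image.
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

        # CLAHE: contrast-limited adaptive histogram equalization on small tiles.
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        return clahe.apply(gray)

    enhanced = preprocess(cv2.imread("face.jpg"))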

3.5 Feature Extraction

In order to perform face recognition, features have to be extracted from


facial images and classified appropriately. There are many methods for feature
extraction, including Local Binary Patterns (LBP), Principal Component
Analysis (PCA), and many others. Enhanced LBP and PCA are popular
techniques for feature extraction in face recognition. Enhanced LBP is a local
feature extraction method that computes a histogram of the binary patterns
within a small region of the image. This method is effective at capturing the
texture information within the image and can be used to distinguish between
different facial expressions or facial hair patterns. PCA, on the other hand, is a
global feature extraction method that uses linear algebra to identify the principal components of the image. This
method is effective at capturing the overall shape and structure of the face,
which can be used to distinguish between different individuals. By combining
local and global feature extraction methods like enhanced LBP and PCA, we
can capture both the texture and geometric information present in facial images,
which can improve the accuracy of face recognition. However, it's important to
note that the performance of these methods can depend on the specific
characteristics of the images being analyzed, such as lighting conditions, pose,
and expression. Therefore, it's important to carefully select and optimize the
feature extraction methods based on the specific requirements of the
application.

3.5.1 Working Principle of Original LBP

The original LBP operator thresholds each pixel in a 3 × 3 neighbourhood against the centre pixel and reads the result as an 8-bit binary code:

    LBP(Xc, Yc) = Σ (n = 0..7) s(Pn − Pc) · 2^n,  with s(z) = 1 if z ≥ 0 and s(z) = 0 otherwise,

where Pc indicates the center pixel and Pn (n = 0, …, 7) are 8 of its neighboring pixels respectively.

3.5.2 Working Principle of Proposed LBP

● A larger radius size R is implemented in the LBP operator.
● R indicates the radius from the center pixel.
● 𝜃 indicates the angle of the sampling point with respect to the center pixel.
● P indicates the number of sampling points on the edge of the circle taken to compare with the center pixel.

The coordinates of the sampling points on the circle are given by

    𝑋𝑝 = 𝑋𝑐 + 𝑅 cos(𝜃)
    𝑌𝑝 = 𝑌𝑐 + 𝑅 sin(𝜃)        (3.3)

where 𝜃 = 2πn/𝑃 for the n-th sampling point (n = 0, 1, …, 𝑃 − 1).
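As a sketch of this feature extraction step, the snippet below uses scikit-image's circular LBP implementation (an assumption — the project may implement its own operator) with radius R = 2, as in Figure 3.12, and an assumed P = 8 sampling points, and then forms the histogram that serves as the texture feature vector.

    import cv2
    import numpy as np
    from skimage.feature import local_binary_pattern

    gray = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)   # pre-processed face image

    R, P = 2, 8                                            # radius and sampling points
    lbp = local_binary_pattern(gray, P, R, method="uniform")

    # The normalized histogram of LBP codes is the texture feature vector of the face.
    n_bins = int(lbp.max()) + 1
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)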

3.5.3 Working Principle of PCA

Step 1: Prepare the data. Each M × M training face image is reshaped into a column vector Γi of length M², giving N column vectors in total.

Step 2: Obtain the mean/average face vector:
    φ = (1/N) Σi Γi

Step 3: Subtract the mean/average face vector from every image:
    Φi = Γi − φ,   A = [Φ1 Φ2 … ΦN]
where A is the matrix with the mean face removed.

Step 4: Calculate the covariance matrix:
    C = A·Aᵀ

Step 5: Calculate the eigenvectors and eigenvalues from the covariance matrix. In practice the eigenvectors vi of the much smaller matrix AᵀA are computed first and mapped to eigenvectors of A·Aᵀ through ui = A·vi; the eigenfaces U are the ui with the largest eigenvalues λ.

Step 6: Projection of each facial image onto the eigenfaces:
    Ωi = Uᵀ·Φi
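A compact NumPy sketch of these steps is given below (illustrative only; the image size, number of training images and number of retained eigenfaces are assumed values). It follows the notation above: mean-subtracted training images form the columns of A, eigenvectors of AᵀA are mapped to eigenfaces, and faces are projected to obtain Ω.

    import numpy as np

    # X: N training faces, each flattened to a vector of length M*M (placeholder data).
    N, M = 10, 64
    X = np.random.rand(N, M * M)

    phi = X.mean(axis=0)                          # Step 2: mean face
    A = (X - phi).T                               # Step 3: columns are Phi_i, shape (M*M, N)

    # Steps 4-5: eigenvectors of the small N x N matrix A^T A ...
    eigvals, V = np.linalg.eigh(A.T @ A)
    V = V[:, np.argsort(eigvals)[::-1]]           # sort by decreasing eigenvalue

    U = A @ V                                     # ... mapped to eigenfaces u_i = A v_i
    U /= np.linalg.norm(U, axis=0)                # normalize each eigenface
    U = U[:, :8]                                  # keep the top eigenfaces (8 is assumed)

    # Step 6: project a mean-subtracted face onto the eigenfaces.
    omega = U.T @ (X[0] - phi)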
3.6 Feature Classification

In face recognition systems, after feature extraction, the next step is to classify
the images by comparing them to a set of known images (the training set) to
determine the closest match. This is typically done using a distance metric,
which measures the dissimilarity between two images. In the case of LBP
feature extraction, the chi-square statistic is commonly used as a dissimilarity
measure to determine the shortest distance between the training image and the
testing image. The chi-square statistic is a statistical measure that quantifies the
difference between two probability distributions. By using this measure, we can
determine how similar two LBP histograms are and use this information to
classify the images. Similarly, in the case of PCA feature extraction, the
Euclidean distance is commonly used to compute the shortest distance between
the trained and test images. The Euclidean distance is a geometric measure that
quantifies the distance between two points in a high-dimensional space. By
using this measure, we can determine how similar two feature vectors are and
use this information to classify the images. However, the nearest result might not always be the true match. This is because different images
may have similar feature vectors or LBP histograms, and it can be difficult to
distinguish between them based on these features alone. Therefore, combining
the results of multiple feature extraction methods, such as enhanced LBP and
PCA, can improve the accuracy of the system. One common approach to
combining multiple feature extraction methods is to use a weighted average or
a fusion algorithm to combine the results.
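A short sketch of these two distance measures and the nearest-neighbour decision is given below (illustrative only; variable names and the small epsilon used to avoid division by zero are assumptions).

    import numpy as np

    def chi_square_distance(x, y, eps=1e-10):
        # Chi-square statistic between a test LBP histogram x and a trained histogram y.
        return np.sum((x - y) ** 2 / (x + y + eps))

    def euclidean_distance(x, y):
        # Euclidean distance between a test and a trained PCA projection vector.
        return np.linalg.norm(x - y)

    def nearest_identity(test_hist, train_hists, labels):
        # Return the label whose trained histogram has the smallest chi-square distance.
        distances = [chi_square_distance(test_hist, h) for h in train_hists]
        return labels[int(np.argmin(distances))]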

3.7 Location Generation

The Android Location API is a set of classes and interfaces provided by the
Android SDK that allows developers to access location data from various
sources, such as GPS, Wi-Fi, and cellular networks. This API provides a simple
and consistent way to request location updates, retrieve location information,
and track user movements.
The Android Location API is based on the concept of location providers, which
are sources of location data. The three main types of location providers are:
● GPS Provider: This provider uses GPS to determine the location of the

device. GPS is a very accurate method of determining location but can be


slow and may not work well indoors or in areas with poor GPS reception.
● Network Provider: This provider uses a combination of Wi-Fi and cellular

network data to determine the location of the device. This method is faster
and more reliable than GPS in many cases, especially indoors, but may not
be as accurate as GPS.
● Fused Location Provider: This provider combines data from multiple

sources, including GPS, Wi-Fi, and cellular networks, to provide the most
accurate and reliable location data possible. It automatically selects the best
available source based on the current location and other factors.
To use the Android Location API, developers can create an instance of the
LocationManager class, which is responsible for managing location updates
and
retrieving location information. They can then request location updates by
specifying the desired provider and minimum time and distance intervals
between updates. The Location API also provides a range of features, such as
geocoding (converting addresses to latitude/longitude coordinates), reverse
geocoding (converting latitude/longitude coordinates to addresses), and
location tracking.

These features can be used to create location-based apps, such as mapping and
navigation apps, social networking apps, and more.

3.8 GeoDataFrame - CRS

A GeoDataFrame is essentially a Pandas DataFrame that has an additional


column called 'geometry' which contains the location data in a specific format
such as points, lines, or polygons. The 'geometry' column is usually created
using the shapely library in Python.
Geopandas is a Python library that provides tools to work with geospatial data,
including reading and writing data in various file formats such as shapefiles,
GeoJSON, and others. Geopandas can also read data from databases that
support spatial data, such as PostGIS.
With Geopandas, we can manipulate the data in the GeoDataFrame just like we
would with a regular Pandas DataFrame, but we can also perform spatial
operations on the 'geometry' column, such as calculating distances, intersecting
geometries, and more. Geopandas provides a convenient and efficient way to
work with geospatial data in Python, making it a popular choice for spatial data
analysis and visualization.
Latitude and longitude coordinates are numerical values that represent positions
on the Earth's surface in a spherical coordinate system. In order to locate a
position on the Earth's surface accurately, we need to associate these
coordinates with a specific coordinate system or CRS.
A CRS defines a reference framework for measuring positions on the Earth's
surface. It includes a datum, which defines how the 3D spherical surface of the
Earth is mapped onto a 2D surface, such as a paper map or a computer screen.
Different CRSs can use different units of measurement, reference points, and projections, which can affect the accuracy and precision of location data.
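The short GeoPandas sketch below shows how such a classroom map can be assembled (illustrative only; the column names, coordinates and classroom polygon are made-up values). Student latitude/longitude points are stored in a GeoDataFrame with the WGS 84 CRS (EPSG:4326) and tested against the classroom polygon.

    import pandas as pd
    import geopandas as gpd
    from shapely.geometry import Polygon

    # Student locations reported by the app (hypothetical values).
    df = pd.DataFrame({
        "student": ["191IG137", "191IG141"],
        "latitude": [11.4980, 11.4981],
        "longitude": [77.2760, 77.2790],
    })

    # Build a GeoDataFrame: the 'geometry' column holds shapely Points in EPSG:4326.
    gdf = gpd.GeoDataFrame(
        df,
        geometry=gpd.points_from_xy(df.longitude, df.latitude),
        crs="EPSG:4326",
    )

    # Classroom boundary as a polygon in the same CRS (corner coordinates assumed).
    classroom = Polygon([
        (77.2755, 11.4978), (77.2765, 11.4978),
        (77.2765, 11.4984), (77.2755, 11.4984),
    ])

    gdf["inside"] = gdf.within(classroom)   # True when the student is inside the classroom
    gdf.plot()                              # visualize the student points on the map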

3.9 Connect Android studio and Google colab

To connect Android Studio and Google Colab with a Flask API, you will need to expose the Flask API on a publicly accessible web server. The basic steps to do so are:
● Deploy your Flask API to a cloud service, such as Heroku, Google Cloud

Platform, or AWS Elastic Beanstalk. These services provide web servers


and other infrastructure that can run your Flask API and make it publicly
accessible.
● Obtain the URL of your deployed Flask API. This will typically be in the
format of a domain name or IP address.
● In Android Studio, use the HTTP client library (such as Retrofit) to make
HTTP requests to your Flask API using the URL obtained in step 2.
● In Google Colab, use the Python requests library to make HTTP requests
to your Flask API using the URL obtained in step 2.
● Test your connections by sending HTTP requests from Android Studio
and Google Colab to your Flask API and checking that you receive the
expected responses.
It's important to secure your Flask API with authentication and authorization
mechanisms. You can use Flask extensions like Flask-HTTPAuth or Flask-JWT
to add authentication and authorization to your API, and HTTPS to encrypt data
transmitted over HTTP.
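A minimal sketch of such a Flask endpoint, together with the matching request a Google Colab notebook could send, is shown below. The endpoint name, URL and payload fields are assumptions; the Android client would issue the equivalent HTTP request through a library such as Retrofit.

    # server.py - minimal Flask API exposing an attendance endpoint (illustrative only)
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.route("/attendance", methods=["POST"])
    def mark_attendance():
        data = request.get_json()
        # In the real system this would run the presence check and face recognition.
        return jsonify({"student": data.get("student"), "status": "marked"})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)

    # From Google Colab (or any Python client), once the API is deployed:
    # import requests
    # resp = requests.post("https://<your-deployed-api>/attendance",
    #                      json={"student": "191IG137",
    #                            "latitude": 11.498, "longitude": 77.276})
    # print(resp.json())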

Figure 3.13 Complete Flow chart

CHAPTER 4

REQUIREMENT SPECIFICATION

4.1 Hardware Requirements

Processor        - Intel Core i5
RAM              - 8 GB
Keyboard         - Standard 104 keys
Mouse            - Optical mouse

4.2 Software Requirements

Operating System - Windows 10
Front End        - Python, Java
Tools            - Google Colab, Android Studio

4.3 Software Descriptions


4.3.1 Python

Python is an interpreted, high-level, general-purpose programming language. It runs on Mac, Windows, Linux, and Unix, and there are also unofficial Android builds available. Among the basic data types available are dictionaries, lists, strings, and numbers. Classes and multiple inheritance are available in Python for object-oriented programming. The language's exception raising and catching capabilities make error management simpler.
The following Python packages are used in this project:

1. Numpy

2. Pandas

3. Sklearn

4. Matplotlib

5. Seaborn

4.3.1.1 Numpy

Numpy is one of the most common Python packages; the name is short for Numerical Python. It is primarily used for working with large arrays and matrices, and complex numerical operations can be calculated with ease. Numpy can be used for code optimization because it occupies a very small amount of memory and specifies the data types of the elements it stores.

4.3.1.2 Pandas

Working with "relational" or "labelled" data is made simple and intuitive with
the help of the Python module Pandas, which offers quick, adaptable, and
expressive data structures. It seeks to serve as the essential, high-level building
block for using Python for actual, useful data analysis.

4.3.1.3 Sklearn

Sklearn (scikit-learn) is an efficient and robust Python machine learning library that offers statistical modelling and a variety of effective tools. This includes a variety of machine learning techniques: classification, regression, clustering, and dimensionality reduction.

4.3.1.4 Matplotlib

Data visualization is accomplished with the help of the general-purpose library Matplotlib. It is a numerical extension built on top of Numpy. It provides animated and interactive visualizations and includes a module called pyplot for plotting graphs with various line styles, formatted axes, typefaces, and other options.

4.3.1.5 Seaborn

For Python statistical graphics plotting, Seaborn is a fantastic visualization


library. To make statistics charts more appealing, it offers lovely default styles
and colour schemes. It is built over Matplotlib toolkit and has a close integration
with the Pandas data structures. With Seaborn, data exploration and
comprehension will increasingly revolve around visualization. It offers dataset-
oriented APIs, enabling us to switch between various visual representations of
the same variables for a deeper knowledge of the dataset.

4.3.2 Java

Java is used by developers to create applications for desktop, mobile and other devices, and is among the most popular programming languages. Its syntax is derived from C and C++. Java is built as an object-oriented language, and Java applications run on several client platforms. Being object-oriented and platform independent is a major convenience for developers.

4.3.3 Google Colab

Google created Google Colab to give anyone who needs GPUs and TPUs for building machine learning or deep learning models free access to them. Google Colab is a more advanced version of Jupyter Notebook. Jupyter Notebook is a program that enables editing and running of notebook documents through a web browser or an Integrated Development Environment (IDE). Work is organized in notebooks rather than plain files.

4.3.4 Android Studio

Android Studio is an Integrated Development Environment (IDE) developed by


Google for building applications for Android mobile devices. It is based on the
IntelliJ IDEA platform and is designed specifically for Android development.
Android Studio provides a range of tools and features to help developers create high-quality, efficient, and robust Android applications. Some of the key features of Android Studio include:
● A visual layout editor for designing and previewing user interfaces
● A code editor with syntax highlighting, code completion, and refactoring tools
● A rich set of debugging and testing tools for identifying and fixing bugs
● Integration with the Android SDK, which includes libraries, sample code, and emulators for testing apps on different devices

4.3.5 Flask API

Flask is a micro web framework written in Python that allows developers to build
web applications quickly and easily. Flask is known for its flexibility, and ease of
use, making it a popular choice for developing web applications, including APIs.
Flask APIs can be deployed to a variety of hosting platforms, including cloud providers like AWS or Google Cloud, and can be used to build a wide range of applications, including mobile apps, web applications, and Internet of Things (IoT) devices.

CHAPTER 5

RESULT AND DISCUSSION

5.1 Result

From the proposed system in Chapter 3, the user first logs in. If the user is a teacher, they switch ON their location option and generate the longitude and latitude of the classroom; students log in and their locations are generated in terms of longitude and latitude. The database of classroom shape area and shape length is also collected. With this integrated data, student locations are visualized in a GeoDataFrame. If a student moves outside the GeoDataFrame for about 5 minutes, the professor/admin gets a notification from the app. To ensure this, location generation is performed every 3 seconds and the data is refreshed. Once a student is within the GeoDataFrame, they are asked to grant access to the camera. After access is granted, face recognition is handled through four controls. The "register" button allows for the enrollment of students, which involves capturing their facial images and storing them in the system's database. The "update" button retrains the system with the latest registered images; this is important to ensure that the system is always up to date and able to accurately recognize students. The "browse" button is used to select facial images from the database for testing purposes, which allows the user to verify the accuracy of the system's recognition capabilities. Finally, the "recognize" button initiates the face recognition process on a selected image: the system compares the features of the selected image with those in the database and attempts to identify the student in the image.

Figure 5.1 Login page
Figure 5.2 Access location
Figure 5.3 Permission page
Figure 5.4 Generation page
Figure 5.5 Classroom selection
5.2 Discussion

This pre-processing step involves applying the Contrast Limited


Adaptive Histogram Equalization (CLAHE) algorithm to improve the image
quality and reduce illumination differences. The Local Binary Pattern (LBP)
operator is then applied to extract the texture features of the facial image. After
the features are extracted, Principal Component Analysis (PCA) is applied to
reduce the dimensionality of the feature vectors. The reduced feature vectors
are then used to train a Support Vector Machine (SVM) classifier for face
recognition. To evaluate the performance of the proposed approach,
experiments are conducted on the Yale face database and the custom database
with high quality and low-quality images. Overall, the proposed approach for
face recognition in a student attendance system based on texture-based features
and pre-processing techniques to reduce the effects of illumination and image
quality has the potential to be a reliable and accurate solution for attendance
management in educational institutions.
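As a sketch of this classification stage (illustrative only; the feature matrix is a placeholder and the PCA dimensionality and SVM hyperparameters are assumptions), the reduced feature vectors can be fed to scikit-learn's SVC as follows.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    # X: one row of extracted features per training face, y: student identities.
    X = np.random.rand(40, 4096)                 # placeholder feature matrix
    y = np.repeat(np.arange(20), 2)              # two training images per student

    # PCA reduces the dimensionality of the features before the SVM is trained.
    model = make_pipeline(PCA(n_components=30), SVC(kernel="linear", C=1.0))
    model.fit(X, y)

    predicted_id = model.predict(X[:1])          # identify the student for a test face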

5.3 Comparison with Previous Researches

Table 5.1 Summary of Comparison with Previous Researches

REFERENCES

1. Arun Katara, Mr. Sudesh V. Kolhe, Mr. Amar P. Zilpe, Mr. Nikhil D. Bhele, Mr. Chetan J. Bele. (2017). "Attendance System Using Face Recognition and Class Monitoring System", International Journal on Recent and Innovation Trends in Computing.

2. Ashley DuVal. (2012). Face Recognition Software - History of Forensic Psychology.

3. DeAgonia, M. (2017). Apple's Face ID [The iPhone X's facial recognition tech explained].

4. Jesse Davis West. (2017). History of Face Recognition - Facial recognition software. [online] Available at: https://www.facefirst.com/blog/brief-history-of-face-recognition-software/ [Accessed 25 Mar. 2018].

5. Margaret Rouse. (2012). What is facial recognition? - Definition from WhatIs.com. [online] Available at: http://whatis.techtarget.com/definition/facial-recognition [Accessed 25 Mar. 2018].

6. Naveed Khan Balcoh. (2012). Algorithm for Efficient Attendance Management: Face Recognition based approach. International Journal of Computer Science Issues, V9 (4), No 1.

7. Reichert, C. (2017). Intel demos 5G facial-recognition payment technology. ZDNet. [online] Available at: http://www.zdnet.com/article/intel-demos-5g-facial-recognition-payment-technology/ [Accessed 25 Mar. 2018].

8. Robert Silk. (2017). Biometrics: Facial recognition tech coming to an airport near you. Travel Weekly. [online] Available at: http://www.travelweekly.com/Travel-News/Airline-News/Biometrics-Facial-recognition-tech-coming-airport-near-you [Accessed 25 Mar. 2018].

9. Robinson-Riegler, G., & Robinson-Riegler, B. (2008). Cognitive psychology: applying the science of the mind. Boston, Pearson/Allyn and Bacon.

10. Sidney Fussell. (2018). Facebook's New Face Recognition Features: What We Do (and Don't) Know. [online] Available at: https://gizmodo.com/facebooks-new-face-recognition-features-what-we-do-an-1823359911 [Accessed 25 Mar. 2018].

11. Solon, O. (2017). Facial recognition database used by FBI is out of control, House committee hears. The Guardian. [online] Available at: https://www.theguardian.com/technology/2017/mar/27/us-facial-recognition-database-fbi-drivers-licenses-passports [Accessed 25 Mar. 2018].
CONTRIBUTION OF WORK

INDIVIDUAL CONTRIBUTION OF STUDENT 1

Name: SUBALAKSHMI S Register Number: 191IG137

My contribution to the project is combining the latitude, longitude and classroom size values to generate the GeoDataFrame map; forming user-friendly layouts with Android Studio; an in-depth study of the Yale database to check the accuracy; and working on the notification system in Android Studio.

INDIVIDUAL CONTRIBUTION OF STUDENT 2

Name: UMAIYAL M R Register Number: 191IG141

My contribution to the project is generating latitude and longitude with the Android Location API; forming user-friendly layouts with Android Studio; an in-depth study of algorithm selection to achieve maximum accuracy; and working on the Google Colab and Android Studio connection part by freezing the model.

The Report is Generated by DrillBit Plagiarism Detection Software

Submission Information

Author Name             Umaiyal
Title                   paper
Paper/Submission ID     710862
Submission Date         2023-03-10 12:14:32
Total Pages             31
Document type           Research Paper

Result Information

Similarity              23 %
Source breakdown        Internet 6.97 %, Journal/Publication 15.85 %, Student Paper 0.18 %
Report content          Quotes, references/bibliography, phrases and sources of less than 14 words not excluded; excluded sources 0 %