
DESIGN AND IMPLEMENTATION OF


AUTOMATED ATTENDANCE SYSTEM USING
CONTACTLESS FACIAL RECOGNITION
A PROJECT REPORT
Submitted by

DIVYA DHARSHINI N (203001030)


MANIKANDAN S (203001055)
MONA ABISHEK A (203001057)

in partial fulfilment for the award of the degree of

BACHELOR OF ENGINEERING
IN
ELECTRICAL AND ELECTRONICS
ENGINEERING

Department of Electrical and Electronics Engineering


Sri Sivasubramaniya Nadar College of Engineering
(An Autonomous Institution, Affiliated to Anna University)
Rajiv Gandhi Salai (OMR), Kalavakkam - 603 110
MAY 2024

Sri Sivasubramaniya Nadar College of Engineering


(An Autonomous Institution, Affiliated to Anna University)

BONAFIDE CERTIFICATE

Certified that this report titled “DESIGN AND IMPLEMENTATION OF


AUTOMATED ATTENDANCE SYSTEM USING CONTACTLESS
FACIAL RECOGNITION” is the bonafide work of “DIVYA DHARSHINI
N (203001030), MANIKANDAN S (203001055), MONA ABISHEK A
(203001057)” who carried out the work under my supervision.

Certified further that to the best of my knowledge the work reported herein does
not form part of any other thesis or dissertation on the basis of which a degree
or award was conferred on an earlier occasion on this or any other candidate.

SIGNATURE                                      SIGNATURE

Dr. V. Rajini                                  Dr. G. R. Venkatakrishnan
HEAD OF THE DEPARTMENT                         SUPERVISOR
PROFESSOR                                      ASSOCIATE PROFESSOR
Department of Electrical and                   Department of Electrical and
Electronics Engineering                        Electronics Engineering
SSN College of Engineering                     SSN College of Engineering
Kalavakkam – 603110                            Kalavakkam – 603110

Submitted for the project viva-voce examination held on ……………

INTERNAL EXAMINER EXTERNAL EXAMINER



ABSTRACT

This project introduces an Automated Attendance System designed for


classroom environments, utilizing facial recognition technology for seamless
and contactless attendance tracking. The integration of hardware
components, including the Raspberry Pi and Pi CAM Module, enables real-
time capture of facial images, enhancing the performance, functionality and
efficiency of the attendance process. Our approach to facial recognition
encompasses the utilization of Local Binary Pattern Histogram (LBPH) and
Haar cascades classifier algorithms. These algorithms, integrated within the
Python programming language, empower our system to accurately detect and
recognize faces in varying environmental conditions. The system adopts
SQLite, a relational database management system (RDBMS), which
facilitates storage and retrieval of recognized student names and timestamps
in a simple and efficient way, making it suitable for managing attendance
records in the automated system. A user-friendly interface complements the
system, empowering teachers with efficient attendance management tools
and providing students with access to their attendance history. This
comprehensive solution promotes transparency and active engagement
within the educational setting, fostering a technologically advanced and user-
centric approach to attendance tracking.

ACKNOWLEDGEMENT

We would like to express our gratitude to the below mentioned people who
have been instrumental in the completion of this project.
We express our deep respect to Padma Bhushan Dr. Shiv Nadar,
Chairman, SSN Institutions for providing the best infrastructure and
helping us realize our project.
We would like to thank Dr. Kala Vijaykumar, President of SSN College
of Engineering, Dr. V.E. Annamalai, Principal of SSN College of
Engineering, Dr. S. Radha, Vice Principal of SSN College of Engineering
for providing us the facilities to carry out the project successfully.
We would like to sincerely thank Dr. V. Rajini, Head of the Department of
Electrical and Electronics Engineering, and our project guide
Dr. G. R. Venkatakrishnan, Assistant Professor, who has been kind and patient
and has provided great support and expert guidance throughout the project.
We express immense pleasure in thanking all the faculty members of the
Department of Electrical and Electronics Engineering for their constant
guidance and cooperation.

DIVYA DHARSHINI N

MANIKANDAN S

MONA ABISHEK A

TABLE OF CONTENTS

CHAPTER NO.   TITLE                                                     PAGE NO.

              ABSTRACT                                                  iii
              LIST OF FIGURES                                           viii
              LIST OF TABLES                                            x
              LIST OF SYMBOLS, ABBREVIATIONS                            11

1             INTRODUCTION                                              12
              1.1  OVERVIEW                                             12
              1.2  RASPBERRY PI MODULE                                  13
              1.3  LITERATURE SURVEY                                    14
              1.4  OBJECTIVE OF REPORT                                  16
              1.5  ORGANIZATION OF REPORT                               17
              1.6  CONCLUSION                                           17

2             HARDWARE APPLICATION                                      19
              2.1  INTRODUCTION                                         19
              2.2  RASPBERRY PI                                         20
              2.3  PI CAM MODULE                                        24
              2.4  CIRCUIT CONNECTION                                   27
              2.5  CONCLUSION                                           38

3             FACIAL RECOGNITION IMPLEMENTATION                         40
              3.1  INTRODUCTION                                         40
              3.2  PYTHON                                               41
                   3.2.1  PYTHON IN FACIAL RECOGNITION                  41
                   3.2.2  PYTHON FOR ALGORITHM DEVELOPMENT              42
              3.3  LIBRARIES                                            42
                   3.3.1  OPENCV                                        42
                   3.3.2  NUMPY                                         43
              3.4  ALGORITHM EXPLANATION                                43
                   3.4.1  LOCAL BINARY PATTERN HISTOGRAM (LBPH) ALGORITHM   44
                   3.4.2  HAAR CASCADE CLASSIFIER                       46
              3.5  CODE OVERVIEW                                        47
                   3.5.1  IMPORT LIBRARIES AND VARIABLE DEFINITIONS     48
                   3.5.2  VARIABLE DEFINITIONS AND CONFIGURATION        49
                   3.5.3  DATABASE CONNECTION AND TABLE CREATION        51
                   3.5.4  FACE RECOGNITION LOGIC                        52
              3.6  CONCLUSION                                           54

4             DATABASE INTEGRATION                                      56
              4.1  INTRODUCTION                                         56
              4.2  DATABASE CHOICE: SQL VS. MONGODB                     57
                   4.2.1  WHY SQL?                                      57
                   4.2.2  COMPARISON WITH MONGODB                       58
              4.3  PSYCOPG2 LIBRARY                                     59
              4.4  DATABASE STRUCTURE                                   61
              4.5  CODE OVERVIEW                                        62
                   4.5.1  DATABASE CONNECTION URI DEFINITION            62
                   4.5.2  DATABASE CONNECTION ESTABLISHMENT FUNCTION    62
                   4.5.3  ATTENDANCE TABLE CREATION FUNCTION            63
                   4.5.4  ATTENDANCE RECORD INSERTION OPERATION         64
              4.6  WEB APPLICATION                                      64
              4.7  WEB APPLICATION FRAMEWORK: PYTHON FLASK              66
              4.8  LOGIN PAGE                                           67
              4.9  ATTENDANCE DISPLAY PAGE                              68
              4.10 CONCLUSION                                           70

5             CONCLUSION AND FUTURE SCOPE                               63
              5.1  CONCLUSION                                           63
              5.2  FUTURE SCOPE                                         64

              REFERENCE                                                 65

LIST OF FIGURES

FIGURE NO.   LIST OF FIGURES                                  PAGE NO.

1.1    Objectives of Report                                   15
2.1    Raspberry Pi                                           19
2.2    Schematic Diagram of Raspberry Pi                      22
2.3    Schematic Diagram of Raspberry Pi Cam Module           24
2.4    Pi Cam Connection Port                                 26
2.5    Pushing the black switch to connect                    27
2.6    Connection Diagram                                     27
2.7    Raspbian interface                                     28
2.8    Mounting SD Card                                       29
2.9    Choosing memory interface                              29
2.10   Creating ssh file                                      30
2.11   Configuring RPi                                        30
2.12   Connecting WiFi to RPi                                 31
2.13   Enabling required interfaces                           31
2.14   After successfully connecting to WiFi                  32
2.15   SSH to Connect with RPi                                33
2.16   Terminal interface                                     33
2.17   Choose Interfacing Options                             34
2.18   Enable/Disable Camera Connection to RPi                34
2.19   Enabling camera interface                              35
2.20   Confirm enabling Camera interface                      35
2.21   Terminal command to read image                         36
2.22   Quick Connect of Pi Cam                                36
2.23   Download Captured Image                                37
3.1    Python Code 1                                          47
3.2    Python Code 2                                          48
3.3    Python Code 3                                          50
4.1    MongoDB setup 1                                        52
4.2    MongoDB setup 2                                        53
4.3    MongoDB Schema                                         55
4.4    Data Conversion                                        57
4.5    Inserted Data                                          58
4.6    Database Code                                          60

LIST OF TABLES

TABLE NO.   LIST OF TABLES                         PAGE NO.

2.1    Specification of Raspberry Pi               21
2.2    Specification of Pi CAM Module              26



LIST OF ABBREVIATIONS AND SYMBOLS

C - Celsius
CNN - Convolutional Neural Network
DB - Database
DIP - Dual In-line Package
GND - Ground
GPIO - General Purpose Input/Output
HDMI - High Definition Multimedia Interface
I2C - Inter-Integrated Circuit
IDE - Integrated Development Environment
KB - Kilobyte
MHz - Megahertz
PWM - Pulse Width Modulation
SPI - Serial Peripheral Interface
TTL - Transistor-Transistor Logic
UART - Universal Asynchronous Receiver/Transmitter
USB - Universal Serial Bus
V - Volt

CHAPTER 1

INTRODUCTION

1.1 Overview

Maintaining the attendance record alongside day-to-day activities is a
challenging task. The conventional method of calling out the name of each
student is time consuming, and there is always a chance of proxy attendance.
The proposed system is based on face recognition to maintain the attendance
record of students. The daily attendance of students is recorded subject-wise,
against a timetable stored in advance by the administrator. When the time for
the corresponding subject arrives, the system automatically starts taking
snapshots, applies face detection and recognition to the captured images,
marks the recognized students as present, and updates their attendance with
the corresponding time and subject ID. A Haar cascade classifier is used to
detect faces in the images, and the Local Binary Pattern Histogram (LBPH)
method is used to compute and compare the facial features of students in
order to recognize them. The system is capable of identifying multiple faces
in real time. The main objective of this project is to develop a face
recognition based automated student attendance system. In order to achieve
better performance, the test images and training images of the proposed
approach are limited to frontal, upright facial images that contain a single
face only. The test images and training images have to be captured with the
same device to ensure there is no quality difference. In addition, students
have to be registered in the database in order to be recognized. Enrolment
can be done on the spot through the user-friendly interface.

1.2 Raspberry Pi Module

Raspberry Pi is a series of small, affordable, single-board computers


developed by the Raspberry Pi Foundation. These versatile computers
are designed for educational purposes, but their low cost and compact
size have made them incredibly popular among hobbyists, makers, and
tech enthusiasts worldwide. With their powerful performance, GPIO pins
for hardware interfacing, HDMI output for display, multiple USB ports,
built-in Ethernet and Wi-Fi connectivity, and support for various
operating systems including Raspbian, Ubuntu, and Windows 10 IoT
Core, Raspberry Pi boards offer a wide range of possibilities for projects
ranging from simple DIY electronics to advanced IoT applications.
Moreover, Raspberry Pi boards are highly customizable, with a wide
range of accessories and add-ons available to expand their functionality.
Additionally, Raspberry Pi boards are widely used in educational settings
to teach programming, electronics, and computer science, further
contributing to the growth of its community and ecosystem.

One of the key advantages of Raspberry Pi is its affordability, making it


accessible to a wide range of users. Despite its low cost, Raspberry Pi
offers impressive performance and features, including low power
consumption, a large and active community of users and developers, and
easy expandability through its GPIO pins and USB ports. Whether you're
interested in learning to code, building custom hardware projects, setting
up a media center or retro gaming console, or experimenting with IoT
applications, Raspberry Pi provides an accessible and versatile platform
for all skill levels.

1.3 Literature Survey

• Mandal S and Nath N.S, "A Novel Approach for Automated


Attendance System using Face Recognition Technique" (2023). Their
approach employs face recognition techniques for automating the
attendance system, offering a potentially more efficient and accurate
alternative to traditional methods. It discusses the development and
implementation of this system, highlighting its effectiveness in various
educational or organizational settings. They presented a novel
approach for an automated attendance system utilizing face
recognition techniques which aims to streamline the attendance
process, offering improved efficiency and accuracy compared to
traditional methods. By harnessing face recognition technology, the
proposed system eliminates the need for manual attendance marking,
reducing administrative workload and providing real-time monitoring
of attendance. It discusses the technical details of the system, including
the algorithms and methodologies used, as well as its potential
applications in educational institutions and other organizational
settings.

• Mohammad Gouse Galety; Firas Husham Almukhtar; Rebaz Jamal


Maaroof; Fanar Rofoo; S. Arun, “Marking Attendance using Modern
Face Recognition (FR): Deep Learning using the OpenCV Method”
(2022). Face Recognition and Detection encompasses an ocean of
study and development involving picture analysis and algorithm-based
comprehension, sometimes known as computer vision. Attendance is
a right that no one can reject, and to support this right, many efforts
and studies are being conducted around the world. A Deep
Convolutional Neural Network (CNN) using the OpenCV model has


been suggested for marking Attendance in this work. Convolutional
Neural Network is employed to gain the unique features of the faces
based on the distance. A wide variety of parameters influence the
training of a Convolutional Neural Network (CNN) based classifier.
These aspects include assembling an appropriate dataset, choosing a
suitable Convolutional Neural Network (CNN), processing the dataset,
and choosing training parameters to get the required classification
results. The current publication compiles state-of-the-art research that
used dataset preparation and artificial augmentation before training.
Accuracy rates are achieved using the proposed model.

• Sudhir Bussa, Ananya Mani, Shruti Bharuka, Sakshi Kaushik, “Smart


Attendance System using OPENCV based on Facial Recognition”
(2020). Face recognition being a biometric technique implies
determination if the image of the face of any particular person matches
any of the face images that are stored in a database. This difficulty is
tough to resolve automatically because of the changes that several
factors, like facial expression, aging and even lighting can affect the
image. Facial recognition among the various biometric techniques may
not be the most authentic but it has various advantages over the others.
Face recognition is natural, feasible and does not require assistance.
The expected system engages the face recognition approach for the
automating the attendance procedure of students or employees without
their involvement. A web cam is used for capturing the images of
students or employees. The faces in the captured images are detected
and compared with the images in database and the attendance is
marked.

1.4 Objectives of Report

The objective of this project is to develop a face recognition based
automated student attendance system. The expected achievements required to
fulfil this objective are:
• To detect the face segment from the video frame.
• To extract the useful features from the face detected.
• To classify the features in order to recognize the face detected.
• To record the attendance of the identified student.

Figure 1.1 Objectives of Report



1.5 Organization of Report

Chapter 1: This chapter provides the overview of the project.
Chapter 2: This chapter provides detailed information about the Raspberry Pi
module, the Pi CAM Module and their implementation.
Chapter 3: This chapter summarizes the libraries and algorithms used to
implement facial recognition.
Chapter 4: This chapter describes in detail the SQLite database and the
secure way in which the data is managed.
Chapter 5: This chapter consists of the conclusion of the report and the
future scope.

1.6 Conclusion

The Automated Attendance System has been envisioned for the purpose of
reducing the errors that occur in the traditional (manual) attendance-taking
process. The aim is to automate attendance and build a system that is useful
to an organization such as an institute. It provides an efficient and
accurate method of recording attendance in the office environment that can
replace the old manual methods. This method is secure, reliable and readily
available for use, and no specialized hardware is needed to install the
system. In this system we have implemented attendance recording for a
lecture, section or laboratory, by which a lecturer or teaching assistant can
record students' attendance. It saves time and effort, especially for
lectures with a large number of students. The system demonstrates the use of
image processing techniques in the classroom. It can not only help with
taking attendance but also improve the goodwill of an institution. The use of
the Python programming language and the chosen algorithms makes it an easy
and handy system that can be built by anyone according to their requirements.
The proposed system discussed in this project will be helpful to many, as it
is a user-friendly and cost-efficient system. Hence, by using the Python
language and OpenCV, the face recognition system can be designed for various
purposes.

CHAPTER 2

HARDWARE APPLICATION

2.1 Introduction

The evolution of the 'Design and Implementation of Automated


Attendance System Using Contactless Facial Recognition' project brings
forth a paradigm shift in hardware implementation, now harnessing the
power of Raspberry Pi and Pi-CAM for enhanced functionality and
performance. This upgraded hardware configuration signifies a strategic
leap towards greater versatility and efficiency in attendance management
systems.

By integrating the Raspberry Pi's robust computational capabilities and the


high-resolution imaging prowess of Pi-CAM, our project enters a realm of
heightened precision and reliability. The Raspberry Pi's extensive
community support and diverse connectivity options ensure seamless
integration with existing infrastructures, promising a scalable solution for
diverse deployment scenarios.

The utilization of Pi-CAM's advanced imaging features, coupled with


Raspberry Pi's processing prowess, elevates our contactless facial
recognition system to new heights of accuracy and responsiveness. This
transition not only reinforces our commitment to innovation but also
underscores our adaptability to leverage cutting-edge technologies for
practical applications.

As we delve into the intricacies of this upgraded hardware setup, we


unravel a tapestry of efficiency and effectiveness, poised to redefine the
landscape of attendance tracking with sophistication and user-centric


design. The subsequent sections elucidate the unique contributions of
Raspberry Pi and Pi-CAM, elucidating their pivotal roles in shaping the
success of this progressive attendance system.

2.2 Raspberry Pi

The Raspberry Pi is a versatile, credit card-sized computer that has gained


immense popularity in the realm of embedded systems and IoT
applications. It offers a powerful yet cost-effective solution for a wide
range of projects, including automated attendance systems.

The Raspberry Pi is a very cheap computer that runs Linux, but it also
provides a set of GPIO (general purpose input/output) pins, allowing you
to control electronic components for physical computing and explore the
Internet of Things (IoT). Due to the nature of the IoT technology in the
Raspberry Pi, it can be used to communicate with other devices around a
home, and even automate some processes with some smart applications of
a few simple lines of code.

Figure 2.1 Raspberry Pi



Features:

• Raspberry Pi boards feature ARM-based processors with clock speeds


ranging from 1.2 GHz to 1.8 GHz, offering significant computing power
for various applications.

• They are equipped with varying amounts of RAM, from 1 GB to 8 GB,


providing ample memory for multitasking and resource-intensive tasks.

• Raspberry Pi boards support multiple connectivity options, including


Wi-Fi (802.11b/g/n/ac), Bluetooth (4.2/BLE), Ethernet, and USB ports,
enabling seamless networking and peripheral connectivity.

• Raspberry Pi boards have dedicated camera interfaces compatible with


Raspberry Pi Camera Modules, such as the Raspberry Pi Camera Module
V2, offering high-resolution imaging capabilities for projects such as this one.

• They support microSD card storage for operating system and data
storage, with options for expanding storage using external drives.

• Raspberry Pi boards feature GPIO pins that support various protocols


like UART, SPI, I2C, PWM, ADC, and DAC, allowing interfacing with a
wide range of sensors, displays, and other peripherals.

• They run on Linux-based operating systems such as Raspbian (now


known as Raspberry Pi OS) or other distributions, providing a robust and
customizable software environment.

• Raspberry Pi boards come with development tools and software


libraries, making it easy to program and deploy applications in languages
like Python, C/C++, and more.

Specification and Description:

Table 2.1 Specification of Raspberry Pi

SPECIFICATION              DESCRIPTION
Module model               Raspberry Pi
Package                    DIP (Dual In-line Package)
Size                       Varies by model
SPI Flash                  Varies by model
RAM                        Varies by model
Bluetooth                  Bluetooth 5.0
Wi-Fi                      802.11 b/g/n/ac
Support interface          HDMI, USB 2.0/3.0, Ethernet, GPIO pins
Support TF card            MicroSD card slot
IO port                    Varies by model
Serial port rate           Varies by model
Image output format        JPG, BMP, GRAYSCALE
Spectrum range             2.4 GHz and 5 GHz
Antenna form               External Wi-Fi antennas
Transmit power             9.5 - 28.2 mW (9.8 - 14.5 dBm)
Receiving sensitivity      Compliant with Wi-Fi standards for signal sensitivity
Power consumption          Typically 2.5 W to 7.6 W, depending on load and peripherals
Security                   WPA/WPA2/WPA3
Power supply range         5 V
Operating temperature      -20 °C ~ 85 °C
Storage environment        -20 °C to 85 °C recommended, up to 125 °C for short periods

Raspberry Pi - Pinout:

A powerful feature of the Raspberry Pi is the row of GPIO pins along the
top edge of the board. A 40-pin GPIO header is found on all current
Raspberry Pi boards, although it is unpopulated on Raspberry Pi Zero,
Raspberry Pi Zero W, and Raspberry Pi Zero 2 W. The GPIO headers on
all boards have a 0.1 in (2.54 mm) pin pitch.

Figure 2.2 Schematic Diagram of Raspberry Pi

Two 5V pins and two 3.3V pins are present on the board, as well as a
number of ground pins (GND), which cannot be reconfigured. The
remaining pins are all general-purpose 3.3V pins, meaning outputs are set
to 3.3V and inputs are 3.3V-tolerant. A GPIO pin designated as an output
pin can be set to high (3.3V) or low (0V). This is made easier with the use
of internal pull-up or pull-down resistors. Pins GPIO2 and GPIO3 have
fixed pull-up resistors, but for other pins this can be configured in
software.
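
As an illustration of the pin behaviour described above, the following minimal sketch (not taken from the project code) uses the RPi.GPIO Python library to enable an internal pull-up on one input pin and drive another pin high and low; the pin numbers are arbitrary example choices.

    # Illustrative sketch: software-configured pull-up and a 3.3 V output
    # using the RPi.GPIO library (runs on the Raspberry Pi itself).
    import RPi.GPIO as GPIO

    GPIO.setmode(GPIO.BCM)                               # refer to pins by GPIO (BCM) number
    GPIO.setup(17, GPIO.IN, pull_up_down=GPIO.PUD_UP)    # input with internal pull-up
    GPIO.setup(18, GPIO.OUT)                             # general-purpose output pin

    GPIO.output(18, GPIO.HIGH)                           # drive the output to 3.3 V
    GPIO.output(18, GPIO.LOW)                            # drive the output to 0 V

    if GPIO.input(17):                                   # reads 1 unless pulled to ground
        print("GPIO17 is high")

    GPIO.cleanup()                                       # release the pins when done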

2.3 Pi CAM Module

The Raspberry Pi Camera Board is a custom designed add-on module for


Raspberry Pi hardware. It attaches to Raspberry Pi hardware through a
custom CSI interface. The sensor has 5 megapixel native resolution in
still capture mode. In video mode it supports capture resolutions up to
1080p at 30 frames per second. The camera module is light weight and
small making it an ideal choice for mobile projects.

Features:

• Resolution – 5 MP
• HD Video recording – 1080p @30fps, 720p @60fps, 960p
@45fps and so on.
• It can capture wide, still (motionless) images at a resolution of
2592 x 1944 pixels.
• CSI Interface enabled.
• Compact Static Images Resolution: 2592×1944
• Aperture (F): 1.8
• Visual Angle: 65 degree
• Supported OS: Raspbian (latest version recommended)
The Raspberry Pi Camera Module 2 replaced the original Camera Module
in April 2016. The v2 Camera Module has a Sony IMX219 8-megapixel
sensor (compared to the 5-megapixel OmniVision OV5647 sensor of the
original camera).
The Camera Module 2 can be used to take high-definition video, as well
as stills photographs. It’s easy to use for beginners, but has plenty to offer
advanced users if you’re looking to expand your knowledge. There are lots
of examples online of people using it for time-lapse, slow-motion, and
other video cleverness. You can also use the libraries bundled with the
camera to create effects.
The camera works with all models of Raspberry Pi 1, 2, 3 and 4. It can be
accessed through the MMAL and V4L APIs, and there are numerous third-
party libraries built for it, including the Picamera Python library. See the
Getting Started with Picamera resource to learn how to use it.
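
As a brief illustration of the Picamera library mentioned above, the sketch below captures a single still image; the resolution and output path are placeholder values, not settings taken from this project.

    # Illustrative sketch: capturing one still image with the Picamera library.
    from time import sleep
    from picamera import PiCamera

    camera = PiCamera()
    camera.resolution = (2592, 1944)      # full still resolution of the 5 MP sensor
    camera.start_preview()
    sleep(2)                              # allow the sensor to adjust exposure
    camera.capture('/home/pi/test.jpg')   # placeholder output path
    camera.stop_preview()
    camera.close()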
All models of Raspberry Pi Zero require a Raspberry Pi Zero camera
cable; the standard cable supplied with the camera module is not
compatible with the Raspberry Pi Zero camera connector. Suitable cables
are available at low cost from many Raspberry Pi Approved Resellers, and
are supplied with the Raspberry Pi Zero Case.
The camera module is very popular in home security applications, and in
wildlife camera traps.

Figure 2.3 Schematic Diagram of Raspberry Pi Cam Module



Specification and Description:

Table 2.2 Specification of Pi CAM Module

SPECIFICATION                                 DESCRIPTION
Image Sensor                                  Sony IMX219 PQ CMOS image sensor in a fixed-focus module
Resolution                                    8 megapixel
Still picture resolution                      3280 x 2464
Max image transfer rate (encode and decode)   1080p: 30 fps; 720p: 60 fps
Connection to Raspberry Pi                    15-pin ribbon cable, to the dedicated 15-pin MIPI Camera Serial Interface (CSI-2)
Image control functions                       Automatic exposure control, automatic white balance, automatic band filter, automatic 50/60 Hz luminance detection, automatic black level calibration
Temp range                                    Operating: -20º to 60º; stable image: -20º to 60º
Lens size                                     1/4"
Dimensions                                    23.86 x 25 x 9 mm
Weight                                        3 g

2.4 Circuit Connection

To connect the Pi CAM Module to the Raspberry Pi, the camera's flat flexible
cable (FFC) must be seated in the camera port of the board. To do this, first
pull up the black switch near the slot, then insert the FFC into the slot
with the blue side facing the RPi USB ports (see Figure 2.4).

Figure 2.4 Pi Cam Connection Port

Insert the FFC firmly until it reaches the deepest part of the connector,
then push down the black switch.

Figure 2.5 Pushing the black switch to connect

Figure 2.6 Connection Diagram


Steps to download the RPi system:

Step 1: Download the RPi system.

i) Go to the Raspberry Pi official website
https://www.raspberrypi.org/downloads/ and download the imager suited to
your computer type.

ii) After downloading, run imager.exe to start the installation. The
RPi Imager page appears when the installation is done. Click CHOOSE OS and
then, on the new page, select the first option.

Figure 2.7 Raspbian interface

iii) Insert the SD card into the card reader, then plug the card reader into
the computer. Click CHOOSE SD CARD so that the SD Card page appears, and
select the entry for your SD card. Go back to the main RPi Imager page and
click WRITE to download and flash the RPi system.

Figure 2.8 Mounting SD Card

Figure 2.9 Choosing memory interface


iv) Once flashing is finished, re-insert the card reader and create a file
named ssh, with no file extension, in the /boot directory.

Figure 2.10 Creating ssh file

Step 2: Connect the RPi to a screen, configure the RPi Wi-Fi and password,
choose the country and language, and click the icon to check the RPi IP address.

Figure 2.11 Configuring RPi



Figure 2.12 Connecting Wifi to RPi

Figure 2.13 Enabling required interfaces



Figure 2.14 After successfully connecting to WiFi

Once you have the IP address, you can use the PuTTY tool to control the RPi
remotely via an SSH connection.

PuTTY download link: https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html

When the download has finished, the PuTTY page appears as shown. Enter the
RPi IP address, type 22 in the port box, select SSH and click Open to connect
to the RPi.

Figure 2.15 ssh to connect with RPi

The terminal page appears once you click Open. By default, the login ID is
"pi" and the password is "raspberry"; note that both are case-sensitive.

Figure 2.16 Terminal interface


Step 3: Configure the camera

Type the command "sudo raspi-config" in the PuTTY terminal, and the following
page will appear. Select Interfacing Options and press Enter, select Camera
and press Enter, select "YES" and press Enter, then select "OK" and press Enter.

Figure 2.17 Choose Interfacing Options

Figure 2.18 Enable/Disable Camera Connection to RPi



Figure 2.19 Enabling camera interface

Figure 2.20 Confirm enabling Camera interface

In the PuTTY terminal, type "raspistill -v -o test.jpg" on the command line;
if the camera is configured correctly, the verbose capture information is
printed and the image is saved as test.jpg.

Figure 2.21 Terminal command to read image

Figure 2.22 Quick Connect of Pi Cam



Figure 2.23 Download Captured Image

2.5 Conclusion

In Chapter 2, our exploration delves into the core hardware elements


driving our Automated Attendance System using Contactless Facial
Recognition, now powered by the versatile Raspberry Pi and Pi-CAM
module. The Raspberry Pi, renowned for its computational prowess and
connectivity options, forms the cornerstone of our system, while the Pi-
CAM module brings high-resolution imaging capabilities to the forefront.
The chapter intricately details the setup, highlighting the Raspberry Pi's
robust processing capabilities, extensive connectivity interfaces, and
seamless integration with the Pi-CAM module. The Pi-CAM's image
capture functionalities, coupled with the Raspberry Pi's GPIO pins for
interfacing with sensors and peripherals, lay a solid foundation for our
attendance solution's functionality and efficiency. Circuit connections for
power supply, camera interfacing, and data transmission are elucidated,


ensuring a smooth deployment process. Moreover, the chapter shares
snippets of code for capturing facial images, showcasing the harmonious
interplay between hardware and software components.
By focusing on the Raspberry Pi and Pi-CAM module, this chapter
encapsulates the project's hardware-centric approach, emphasizing
reliability, scalability, and performance. The comprehensive
understanding of specifications, connectivity options, and coding
intricacies presented here sets the stage for a seamless transition towards
a modern and efficient attendance tracking system, bridging the gap
between hardware innovation and practical application.

CHAPTER 3

FACIAL RECOGNITION IMPLEMENTATION

3.1 Introduction

Facial recognition technology has revolutionized numerous industries,


offering a seamless and efficient means of identity verification. In our
project, we leverage facial recognition as a fundamental component to
automate attendance tracking within educational environments. By
incorporating facial recognition, we aim to streamline traditional
attendance processes, providing a modern and sophisticated solution to an
age-old task.

Traditional methods of attendance tracking are often laborious and prone


to errors. Facial recognition technology, however, presents a non-intrusive
and accurate alternative, enabling real-time identification of individuals
without physical interaction. This implementation aligns with our project's
overarching objective: to develop an efficient, contactless, and
technologically advanced attendance system.

Our approach to facial recognition encompasses the utilization of Local


Binary Pattern Histogram (LBPH) and Haar cascades classifier
algorithms. These algorithms, integrated within the Python programming
language, empower our system to accurately detect and recognize faces in
varying environmental conditions.

Throughout this chapter, we delve into the intricacies of the facial


recognition algorithm, discussing the rationale behind the selection of
Python as the primary programming language and the indispensable role


played by key libraries such as OpenCV, NumPy, and face recognition.
We provide a detailed breakdown of the algorithm's steps, accompanied
by code implementation examples and insights into its integration within
our broader attendance system.

3.2 Python

Python plays a pivotal role in the development of the facial recognition


algorithm, providing a versatile and intuitive programming environment.
The choice of Python for this project is driven by its simplicity, readability,
and extensive support for libraries and frameworks relevant to image
processing and machine learning.

3.2.1 Python in Facial Recognition

Python offers a conducive ecosystem for developing facial recognition


systems due to the following attributes:
Readability and Expressiveness: Python's clear and concise syntax
enhances code readability, making it easier to develop and maintain
complex algorithms. This is particularly advantageous in image
processing tasks where precision and clarity are paramount.
Extensive Libraries: Python boasts a rich collection of libraries and
frameworks tailored for image processing, machine learning, and
computer vision. Leveraging these libraries significantly accelerates the
development process and ensures robust functionality.
Community Support: Python's active and vibrant community contributes
to a wealth of resources, tutorials, and open-source projects. This
collaborative environment is invaluable when addressing challenges and


implementing state-of-the-art techniques in facial recognition.

3.2.2 Python for Algorithm Development

In the context of the facial recognition algorithm, Python facilitates the


following key functionalities:
Video Frame Capture: Python provides libraries like OpenCV for
capturing video frames from a webcam, a fundamental step in real-time
facial recognition.
Image Processing: The simplicity of NumPy allows efficient handling of
image arrays, aiding in preprocessing tasks such as resizing and color
space conversion.
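
The following minimal sketch illustrates these two roles together, assuming a standard webcam on device index 0; the quarter-size resize factor is an arbitrary example value.

    # Illustrative sketch: capture a webcam frame with OpenCV and preprocess it
    # as a NumPy array (grayscale conversion and resizing).
    import cv2

    cap = cv2.VideoCapture(0)             # open the default webcam
    ret, frame = cap.read()               # frame is a NumPy array (H x W x 3)
    if ret:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)                      # colour space conversion
        small = cv2.resize(gray, (gray.shape[1] // 4, gray.shape[0] // 4))  # downscale
        print("captured frame of size", small.shape)
    cap.release()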

3.3 Libraries
The choice of libraries for this project stems from their collective ability
to provide a comprehensive and efficient framework for facial recognition.
Each library brings unique advantages, contributing to the seamless
implementation of the algorithm.

3.3.1 OpenCV

OpenCV, or Open-Source Computer Vision Library, is a pivotal


component in this project, offering a comprehensive set of tools for image
and video processing. Renowned for its versatility and reliability, OpenCV
simplifies complex computer vision tasks such as face detection. Its
extensive suite of algorithms facilitates real-time image manipulation,
enabling crucial functions like resizing, color space conversion, and
feature extraction. OpenCV's robustness and widespread use in computer
vision applications make it an indispensable choice for this project,
providing the necessary framework to capture, preprocess, and analyze


video frames for effective facial recognition.

3.3.2 NumPy

NumPy, a powerful numerical computing library, plays a crucial role in


optimizing array operations for efficient image processing. With its array
manipulation capabilities, NumPy significantly enhances computational
speed, making it an essential component for handling
large datasets inherent in image recognition tasks. By providing a versatile
environment for mathematical operations on multi-dimensional arrays,
NumPy facilitates seamless integration with other image processing
libraries, enhancing the overall efficiency and performance of the facial
recognition algorithm. Its streamlined numerical computations contribute
to the project's success by providing a foundation for manipulating and
analysing image data with precision and speed.

3.4 Algorithm Explanation

Facial recognition algorithms play a pivotal role in automating attendance


tracking systems, offering a robust and efficient means of identifying
individuals. In this chapter, we delve into the intricacies of the Local
Binary Pattern Histogram (LBPH) algorithm and the Haar cascade
classifier, exploring their functionality and implementation within our
facial recognition system.

3.4.1 Local Binary Pattern Histogram (LBPH) Algorithm

The Local Binary Pattern Histogram (LBPH) algorithm is a texture


descriptor widely used in computer vision and image processing tasks,
including face recognition. Its popularity stems from its simplicity,
effectiveness, and robustness in handling varying environmental
conditions. Here's a breakdown of the LBPH algorithm's key steps:

Local Binary Patterns (LBP): LBP is a texture descriptor that captures


the local structure of an image.
It operates by comparing the intensity of each pixel with the intensities of
its neighbouring pixels. For each pixel in the image, LBP assigns a binary
value (0 or 1) to every neighbour, depending on whether that neighbour's
intensity is greater than or less than the intensity of the central pixel.
This process creates a binary pattern for each pixel, encoding information
about the local texture around that pixel.

Image Partitioning: In LBPH, the face image is divided into smaller


regions or cells. Each cell typically contains a square or rectangular region
of pixels. The size and shape of the cells can vary based on the specific
implementation and requirements of the algorithm.

Feature Extraction: For each cell in the image, LBPH computes a Local
Binary Pattern (LBP) histogram. To compute the LBP histogram, LBPH
examines each pixel within the cell and calculates its binary pattern based
on the intensity comparisons with neighbouring pixels. The resulting
binary patterns are concatenated to form a histogram, which represents the
distribution of local texture patterns within the cell. The histogram
effectively captures the texture characteristics of the image region


represented by the cell.

Histogram Concatenation: Once the LBP histograms are computed for


all cells in the image, they are concatenated to form a single feature vector
representing the entire image. Each histogram contributes a set of features
that describe the local texture patterns within its corresponding
cell. Concatenating these histograms creates a comprehensive
representation of the facial texture features across the entire image.

Recognition: During recognition, LBPH compares the feature vector of


the input face image with those stored in the database. It computes a
similarity score between the input feature vector and each stored feature
vector using distance metrics such as Euclidean distance or cosine
similarity. The stored feature vector with the closest similarity score to the
input vector is considered the recognized face. The recognition result may
include the identity of the recognized individual or a confidence score
indicating the degree of similarity between the input image and the stored
images.

Training and Database Creation: Before recognition can be performed,


LBPH typically requires a training phase where it learns the facial texture
patterns from a set of labeled face images. During training, LBPH
computes the LBP histograms for each image in the training dataset and
stores them along with the corresponding labels (e.g., person names) in a
database. This database serves as the reference for recognition, allowing
LBPH to compare input images with the learned patterns to identify
individuals.

In summary, LBPH is a robust and efficient facial recognition algorithm


that leverages local texture patterns to characterize facial images. By
partitioning the image, computing LBP histograms for each region, and
concatenating them into a feature vector, LBPH achieves accurate
recognition results while remaining computationally feasible for real-time
applications.
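
The sketch below illustrates the training and recognition steps just described using OpenCV's LBPH recognizer; it assumes the opencv-contrib-python package (which provides the cv2.face module), and the image file names and labels are placeholders rather than the project dataset.

    # Illustrative sketch of LBPH training and prediction with OpenCV
    # (requires opencv-contrib-python for the cv2.face module).
    import cv2
    import numpy as np

    # placeholder training images (grayscale) and their numeric labels
    images = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in ["person1.jpg", "person2.jpg"]]
    labels = np.array([0, 1])

    recognizer = cv2.face.LBPHFaceRecognizer_create()
    recognizer.train(images, labels)                   # builds the LBP histogram database

    test_face = cv2.imread("unknown.jpg", cv2.IMREAD_GRAYSCALE)
    label, distance = recognizer.predict(test_face)    # smaller distance = closer match
    print("predicted label", label, "with distance", distance)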

3.4.2 Haar Cascade Classifier

In addition to LBPH, the Haar cascade classifier is another vital


component of our facial recognition system, specifically used for face
detection. Here's an overview of how the Haar cascade classifier operates:

Feature Extraction: The Haar cascade classifier employs the Haar


wavelet technique to extract features from images. These features are
rectangular patterns that resemble Haar wavelets, which are applied at
different scales and positions across the image.

Training the Classifier: The classifier is trained on a large dataset of


positive and negative samples. Positive samples contain images of faces,
while negative samples contain images without faces. During training, the
classifier learns to distinguish between features present in faces and those
absent in non-face images.

Cascade of Classifiers: The Haar cascade classifier consists of a cascade


of multiple stages, each comprising a set of weak classifiers. These weak
classifiers sequentially filter regions of the image, progressively
eliminating non-face regions. Regions passing through all stages are
deemed as potential face regions.

Face Detection: During face detection, the trained Haar cascade classifier
is applied to the input image. The classifier evaluates each region of the
image using the learned features, efficiently narrowing down the search
space to regions likely to contain faces.

Bounding Box Generation: Once potential face regions are identified,


bounding boxes are drawn around these regions to visually indicate the
presence of faces in the image.

By leveraging the Haar cascade classifier for face detection, our system
can efficiently locate faces within images, facilitating subsequent
recognition tasks using the LBPH algorithm.
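
A minimal sketch of this detection step is shown below; the scale factor and minimum-neighbour values are common defaults rather than tuned project parameters.

    # Illustrative sketch: Haar cascade face detection with bounding boxes.
    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    img = cv2.imread("classroom.jpg")                    # placeholder input image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)   # draw bounding box

    cv2.imwrite("detected.jpg", img)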

In the following sections, we delve into the implementation details of both


the LBPH algorithm and the Haar cascade classifier within our facial
recognition system, discussing code examples and integration strategies.

3.5 Code Overview

The facerecognition.py script showcases a robust implementation of real-


time facial recognition and attendance logging. By leveraging OpenCV,
PostgreSQL, and Python modules, it offers a comprehensive solution for
face detection, recognition, and database interaction. Let's delve into its
intricacies to understand how it achieves seamless integration and
functionality.

3.5.1 Import Libraries and Variable Definitions


The script demonstrates meticulous preparation by importing a suite of
essential libraries, each playing a vital role in facilitating various aspects
of the facial recognition system.

cv2: As the cornerstone of the script, OpenCV (cv2) provides a rich array
of functionalities for computer vision tasks, including face detection and
image manipulation.

NumPy: This numerical computing library, abbreviated as np, empowers


the script with efficient array operations and mathematical functions,
enhancing performance and versatility.
OS: Operating System (OS) interactions are facilitated through this
module, enabling the script to navigate file systems and directories, crucial
for file handling operations.

psycopg2: The psycopg2 library serves as the bridge between Python and
PostgreSQL, enabling seamless communication and interaction with the
PostgreSQL database for storage and retrieval of attendance records.

datetime: The datetime module equips the script with capabilities for
working with dates and times, essential for timestamp generation and
time-based operations during attendance tracking.

time: This module provides functionality for time-related operations,


enabling the script to measure durations, control timing, and handle time-
based events effectively.

random: The random module enriches the script with the ability to
generate random numbers, empowering it to introduce variability and
randomness, such as in generating random check-out times for attendance
records.

These imported libraries collectively lay the groundwork for the script's
functionality, providing a comprehensive toolkit for image processing,
database connectivity, timestamp management, and randomization.

3.5.2 Variable Definitions and Configuration


In addition to importing libraries, the script meticulously defines a set of
variables, each serving a distinct purpose in configuring and customizing
the behavior of the facial recognition system.

size: This variable governs the scale factor used for resizing captured face
images, influencing the processing and recognition accuracy.

haar_file: The script specifies the file path to the Haar cascade classifier
XML file, which is instrumental in detecting faces within images and
video streams.

datasets: Denoting the directory containing training images for face


recognition, this variable guides the script in accessing and processing the
dataset used to train the recognition model.

output_folder: Designated as the directory where recognized face images


will be stored, this variable dictates the location for saving visual evidence
of recognized faces, facilitating record-keeping and verification.

database_uri: Serving as the connection URI for the PostgreSQL


database, this variable encapsulates crucial connection details such as
database name, user credentials, password, and host information. It
enables seamless interaction with the database for storing and retrieving
attendance records.

By defining these variables, the script achieves a high level of


configurability, enabling users to tailor the system to their specific
requirements and environments. This approach enhances the script's
versatility, adaptability, and usability in diverse deployment scenarios.
Code:

Figure 3.1 Python Code 1
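
Since Figure 3.1 is reproduced as an image, the sketch below approximates the imports and configuration described in sections 3.5.1 and 3.5.2; every path, scale factor and credential shown is a placeholder, not the project's actual value.

    # Approximate sketch of the imports and configuration (placeholder values).
    import cv2
    import numpy as np
    import os
    import psycopg2
    from datetime import datetime
    import time
    import random

    size = 4                                              # down-scaling factor for captured faces
    haar_file = "haarcascade_frontalface_default.xml"     # Haar cascade classifier file
    datasets = "datasets"                                 # directory of training images
    output_folder = "recognized_faces"                    # where recognized face images are saved
    database_uri = "postgresql://user:password@localhost:5432/attendance_db"   # placeholder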



3.5.3 Database Connection and Table Creation


The script defines a function create_connection() to establish a connection
to the PostgreSQL database specified by database_uri. This function
returns a connection object that can be used to interact with the database.

Another function create_table_if_not_exists(conn) is defined to ensure the


existence of the "attendance" table in the database. If the table does not
exist, it creates it with the required schema, including columns for ID,
name, image path, in_time, and out_time. This function utilizes the
provided database connection conn to execute SQL queries and commit
the changes to the database.

Code:

Figure 3.2 Python Code 2
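
As Figure 3.2 is reproduced as an image, the following hedged sketch shows one way the two functions described above could look; the column types are assumptions based on the description, not the project's exact schema.

    # Hedged sketch of create_connection() and create_table_if_not_exists().
    import psycopg2

    def create_connection(uri):
        # return a psycopg2 connection to the PostgreSQL database at the given URI
        return psycopg2.connect(uri)

    def create_table_if_not_exists(conn):
        # create the attendance table with the columns described above, if absent
        with conn.cursor() as cur:
            cur.execute("""
                CREATE TABLE IF NOT EXISTS attendance (
                    id SERIAL PRIMARY KEY,
                    name TEXT NOT NULL,
                    image_path TEXT,
                    in_time TIMESTAMP,
                    out_time TIMESTAMP
                )
            """)
        conn.commit()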



3.5.4 Face Recognition Logic


The main functionality of the script is encapsulated within the recognize()
function. This function performs the following steps:

Database Setup: It establishes a connection to the PostgreSQL database


and ensures the existence of the "attendance" table by calling the
create_table_if_not_exists() function.

Output Folder Creation: It checks if the output folder specified by


output_folder exists and creates it if it does not.

Image and Label Preparation: It traverses the dataset directory specified


by datasets, reads images, and prepares corresponding labels for training
the face recognition model.

Model Training: It trains the LBPH (Local Binary Patterns Histograms)


face recognition model using the prepared images and labels.

Webcam Initialization: It initializes the webcam for capturing video


frames.

Face Detection and Recognition: It continuously captures frames from


the webcam, detects faces using the Haar cascade classifier, recognizes
faces using the trained model, and performs actions based on the
recognition results.

Attendance Logging: If a recognized face is detected and a sufficient time


interval has passed since the last entry, the script logs the attendance by
saving the recognized face image to the output folder, inserting attendance
data into the database, and updating the timestamp of the last entry.

User Interaction: The recognition process continues until the user presses
the 'q' key, at which point the webcam is released, and OpenCV windows
are closed.

Code:

Figure 3.3 Python Code 3



Code:

Figure 3.4 Python Code 4
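
Because the code figures are reproduced as images, the condensed sketch below follows the steps listed above; it assumes the configuration variables and database helpers from the earlier sketches, and the face size, recognition threshold and file-naming scheme are illustrative assumptions rather than the project's exact values.

    # Condensed, hedged sketch of the recognize() flow described above.
    # Assumes cv2, np, os, datetime, haar_file, datasets, output_folder,
    # database_uri, create_connection() and create_table_if_not_exists()
    # from the earlier sketches.
    def recognize():
        conn = create_connection(database_uri)
        create_table_if_not_exists(conn)
        os.makedirs(output_folder, exist_ok=True)

        # prepare training images and numeric labels from the dataset directory
        images, labels, names = [], [], {}
        for label, person in enumerate(os.listdir(datasets)):
            names[label] = person
            for file in os.listdir(os.path.join(datasets, person)):
                img = cv2.imread(os.path.join(datasets, person, file), 0)
                images.append(cv2.resize(img, (130, 100)))
                labels.append(label)

        model = cv2.face.LBPHFaceRecognizer_create()
        model.train(images, np.array(labels))

        face_cascade = cv2.CascadeClassifier(haar_file)
        webcam = cv2.VideoCapture(0)

        while True:
            ok, frame = webcam.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
                face = cv2.resize(gray[y:y + h, x:x + w], (130, 100))
                label, distance = model.predict(face)
                if distance < 80:                        # assumed recognition threshold
                    name = names[label]
                    path = os.path.join(
                        output_folder,
                        name + "_" + datetime.now().strftime("%Y%m%d%H%M%S") + ".jpg")
                    cv2.imwrite(path, face)
                    # attendance row insertion into the database is elided here
            cv2.imshow("Attendance", frame)
            if cv2.waitKey(1) & 0xFF == ord('q'):        # stop when the user presses 'q'
                break

        webcam.release()
        cv2.destroyAllWindows()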

In conclusion, the updated facerecognition.py script demonstrates


comprehensive functionality for real-time face recognition and attendance
logging. It efficiently integrates OpenCV for image processing,
PostgreSQL for database operations, and various Python modules for
system interaction and functionality. The script's modular structure, clear
variable definitions, and well-defined functions contribute to its
readability, maintainability, and extensibility. With appropriate
configuration and dataset preparation, this script can serve as a foundation
for developing advanced facial recognition systems for diverse
applications.

3.6 Conclusion

This chapter delves into the integration of facial recognition technology


as a core element of our automated attendance tracking system. Facial
recognition offers a seamless and efficient means of identity verification,
particularly suited for educational settings.

Our goal is to modernize attendance processes, making them more


efficient and contactless. Leveraging facial recognition aligns perfectly
with this objective, streamlining traditional methods with advanced
technology.

We adopt two key algorithms for facial recognition: the Local Binary
Pattern Histogram (LBPH) and the Haar cascade classifier. Implemented
within Python, these algorithms enable accurate face detection and
recognition across diverse environments.

Python's choice as our primary language is justified by its readability,


extensive library support, and collaborative community. Libraries like
OpenCV and NumPy provide essential tools for image processing and
machine learning tasks, further enhancing the robustness of our system.

A detailed code review of facerecognition.py demonstrates its


effectiveness in implementing facial recognition using LBPH and Haar
cascade classifier. The script seamlessly integrates OpenCV for image
processing and PostgreSQL for database operations.

In conclusion, this chapter lays the groundwork for a sophisticated


attendance tracking system empowered by facial recognition. By
automating routine tasks and offering insights into attendance patterns,
we aim to enhance the educational experience through technological
innovation.

CHAPTER 4
DATABASE INTEGRATION

4.1 Introduction

In the modern educational landscape, efficient data management and


accessibility are paramount. As educational institutions transition towards
digital solutions, the need for streamlined attendance tracking systems
becomes increasingly apparent. In this chapter, we delve into the
development of a robust database and web application tailored to address
the specific needs of attendance management.

The integration of facial recognition technology, as discussed in previous


chapters, lays the foundation for automating attendance tracking.
However, to truly leverage the benefits of this technology, an efficient data
storage and retrieval mechanism is essential. This chapter explores the
development of a SQL-based database to store attendance records securely
and efficiently.

Furthermore, we delve into the creation of a web application interface,


providing teachers with convenient access to attendance data. Through
user authentication and intuitive navigation, the web app facilitates
seamless interaction with attendance records, empowering educators with
valuable insights into student attendance patterns.

By combining advanced facial recognition technology with a user-friendly


web interface, our aim is to revolutionize attendance tracking in
educational settings, ultimately enhancing the overall learning experience
for both educators and students.

4.2 Database Choice: SQL vs. MongoDB

In the development of our attendance tracking system, the choice of


database management system (DBMS) plays a crucial role in ensuring
efficient data storage, retrieval, and management. The decision to use SQL
(Structured Query Language) over MongoDB (a NoSQL database) was
made after careful consideration of various factors related to data
structure, scalability, and project requirements.

4.2.1 Why SQL


SQL was chosen for several reasons:
Structured Data Storage: SQL databases offer a structured approach to
data storage, where data is organized into tables with predefined schemas.
This structure aligns well with the nature of attendance records, which
typically have fixed fields such as student names, timestamps, and image
paths. By enforcing a consistent structure, SQL ensures data integrity and
facilitates efficient data retrieval.

Data Integrity and Consistency: SQL databases adhere to the ACID


(Atomicity, Consistency, Isolation, Durability) properties, ensuring data
integrity and consistency. ACID compliance guarantees that transactions
are executed reliably, and data remains in a valid state even in the event
of system failures or interruptions. This reliability is crucial for
maintaining the accuracy and reliability of attendance records.

Transaction Support: SQL databases provide robust transaction support,


allowing multiple operations to be grouped together as a single logical
unit. This ensures that attendance-related operations, such as recording
attendance entries or updating records, are executed atomically. In case of


failures or errors, transactions can be rolled back to maintain data
consistency.

Scalability: SQL databases are highly scalable, capable of handling


increasing volumes of attendance data as educational institutions grow.
Vertical scalability options allow for upgrading hardware resources to
accommodate greater data loads, while horizontal scalability options
enable distributed deployments across multiple servers. This scalability
ensures that our attendance tracking system can grow seamlessly with the
needs of educational institutions.

Overall, the use of SQL databases in our project provides numerous


advantages, including structured data storage, data integrity, transaction
support, query flexibility, scalability, reliability, and security. These
benefits collectively contribute to the development of a robust, efficient,
and secure attendance tracking system tailored to the needs of educational
institutions.
4.2.2 Comparison with MongoDB

While MongoDB offers certain advantages, such as flexibility in data


modelling and horizontal scalability, it was deemed less suitable for our
project due to the following reasons:

Scalability: While MongoDB excels in horizontal scalability, allowing for


distributed deployments and handling of large volumes of data, our
project's requirements did not necessitate such scalability. SQL databases,
with their vertical scalability options, were deemed sufficient to handle the
anticipated workload of attendance tracking in educational environments.

Query Flexibility: While MongoDB offers powerful query capabilities,


including support for nested documents and dynamic schemas, the
structured nature of SQL allows for efficient querying and indexing,
particularly for common attendance-related queries such as retrieving
attendance records for a specific date range or student.

Project Requirements: Given the project's emphasis on data integrity,


consistency, and transaction support, SQL databases emerged as the
preferred choice. The structured nature of SQL aligns well with the
predefined data fields and transactional requirements inherent in
attendance tracking systems.

In conclusion, while MongoDB offers certain advantages in flexibility and


scalability, the structured nature, transactional support, and reliability of
SQL databases make them the preferred choice for our attendance tracking
system. The decision to use SQL reflects a careful consideration of project
requirements and a commitment to ensuring robust and dependable
attendance management capabilities.

4.3 Psycopg2 Library

Psycopg2 is a Python library used for interacting with PostgreSQL


databases. It serves as a bridge between Python programs and PostgreSQL
database servers, facilitating various database operations directly from
Python code.

Here's a concise overview of psycopg2:

PostgreSQL Database Connectivity: Psycopg2 enables Python


applications to establish connections to PostgreSQL database servers,
allowing seamless communication between Python code and PostgreSQL
databases.

Executing SQL Queries: With psycopg2, developers can execute SQL


queries against PostgreSQL databases from within Python scripts. This
includes executing data manipulation language (DML) statements like
SELECT, INSERT, UPDATE, DELETE, and data definition language
(DDL) statements.

Parameterized Queries: Psycopg2 supports parameterized queries,


allowing developers to write SQL queries with placeholders for
parameters. These parameters are later replaced with actual values when
the query is executed. Parameterized queries help prevent SQL injection
attacks and improve performance.

Database Cursor Management: Psycopg2 provides cursor objects for


executing SQL queries and retrieving query results. Cursors allow
developers to iterate over result sets, fetch individual rows or batches of
rows, and perform other cursor-related operations.
Overall, psycopg2 is a versatile library that simplifies database
interactions in Python applications, making it a popular choice for
developers working with PostgreSQL databases.
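
As a brief illustration of these capabilities, the sketch below connects to PostgreSQL, runs a parameterized SELECT for a date range, and iterates over the results through a cursor. The credentials and date values are placeholders; the project's own database code appears in Figs. 4.2 and 4.3.

import psycopg2

# Placeholder credentials for illustration only.
conn = psycopg2.connect(
    dbname="attendance_db", user="admin", password="secret", host="localhost"
)
cur = conn.cursor()

# Parameterized query: psycopg2 binds the %s placeholders safely.
cur.execute(
    "SELECT name, in_time, out_time FROM attendance "
    "WHERE in_time BETWEEN %s AND %s ORDER BY in_time",
    ("2024-03-01", "2024-03-31"),
)

for name, in_time, out_time in cur.fetchall():
    print(name, in_time, out_time)

cur.close()
conn.close()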

4.4 Database Structure


Our database schema for storing attendance data is designed to efficiently
capture and organize key information related to student attendance. The
schema comprises several essential fields, each serving a specific purpose
in the attendance tracking process. Fig. 4.1 shows an example of the database structure.

Figure 4.1 Sample data


Fields:
Name: This field stores the name of the recognized individual, typically
corresponding to a student enrolled in the educational institution. By
associating each attendance record with a specific student, this field
enables accurate identification and tracking of individual attendance
patterns.

Image Path: The image path field stores the file path to the image of the
recognized individual. This image is captured during the facial recognition
process and serves as visual evidence of the student's presence during
attendance tracking sessions. Storing the image path allows for efficient
retrieval and display of attendance records along with corresponding
student images.

Timestamp: The timestamp field records the date and time at which the attendance record was created. This temporal information is crucial for tracking attendance over time, enabling educators to monitor student attendance trends, identify patterns, and address any anomalies or concerns promptly (see the sample data in Fig. 4.1).

4.5 Code Overview


In this section, we delve into the code responsible for interfacing with the PostgreSQL database in our facial recognition system. The database management functions are integral for storing attendance data efficiently and securely. Figs. 4.2 and 4.3 show the code.

4.5.1 Database Connection URI Definition:


The database_uri variable serves as the connection string required for
establishing a connection to the PostgreSQL database. This URI
encapsulates crucial information such as the database name, username,
password, and host address. The format of the URI follows a standard
convention, enabling the psycopg2 library to interpret and utilize it for
establishing a connection. By centralizing connection details within this
variable, the script enhances maintainability and allows for easy
configuration adjustments without modifying the code directly.
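
A representative value of this variable, with placeholder credentials rather than the project's real ones, might look like:

# Placeholder values; each deployment supplies its own credentials and host.
database_uri = "postgresql://attendance_user:secret_password@localhost:5432/attendance_db"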

4.5.2 Database Connection Establishment Function:


The create_connection() function plays a pivotal role in establishing a
connection to the PostgreSQL database. It utilizes the psycopg2 library, a
popular PostgreSQL adapter for Python, to create a connection object.
This connection object acts as a communication channel between the
Python script and the database server, facilitating the execution of SQL
queries and transactions. The function encapsulates the complexity of
connection establishment, abstracting away low-level implementation
details and providing a convenient interface for interacting with the
database.
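
Assuming the function is a thin wrapper around psycopg2.connect() with basic error handling, a minimal sketch could look like the following; the project's own implementation appears in the database code figures.

import psycopg2

def create_connection():
    """Return an open psycopg2 connection, or None if the attempt fails."""
    try:
        # database_uri is the connection string defined in Section 4.5.1.
        return psycopg2.connect(database_uri)
    except psycopg2.OperationalError as exc:
        print(f"Could not connect to the database: {exc}")
        return None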

Figure 4.2 Database Code 1

4.5.3 Attendance Table Creation Function:


The create_table_if_not_exists(conn) function ensures the presence of the
"attendance" table within the PostgreSQL database schema. Upon
receiving a database connection object conn as input, the function utilizes
it to execute an SQL CREATE TABLE statement. This statement defines
the structure of the "attendance" table, specifying attributes such as ID,
name, image path, check-in time (in_time), and check-out time (out_time).
By checking for the existence of the table before creation, the function
prevents duplicate table creations and ensures database schema
consistency across multiple executions of the script.
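
A sketch of this function, using assumed column types (the exact definitions are those in the project's database code), might be:

def create_table_if_not_exists(conn):
    """Create the attendance table on first run; later runs leave it untouched."""
    # Column types below are assumptions for illustration.
    with conn.cursor() as cur:
        cur.execute("""
            CREATE TABLE IF NOT EXISTS attendance (
                id SERIAL PRIMARY KEY,
                name TEXT NOT NULL,
                image_path TEXT,
                in_time TIMESTAMP,
                out_time TIMESTAMP
            )
        """)
    conn.commit()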

Figure 4.3 Database Code 2



4.5.4 Attendance Record Insertion Operation:


The process of inserting attendance records into the database involves
several key steps. First, a cursor object is created using the database
connection (conn). This cursor serves as a virtual pointer within the
database, allowing the execution of SQL queries and retrieval of results.
The cur.execute() method is then employed to execute an SQL INSERT
statement, which adds a new record to the "attendance" table. This
statement specifies the columns to which values will be inserted (name,
image_path, in_time, out_time) and provides corresponding values as a
tuple. These values typically correspond to the recognized individual's
name, the file path of their recognized image, and the timestamps
indicating their check-in and check-out times.
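
Put together, the insertion step might be sketched as follows; the name, path, and timestamp are illustrative values, and the project's actual code is shown in the database code figures.

def insert_attendance(conn, name, image_path, in_time, out_time=None):
    """Add one attendance record; out_time can be filled in later at check-out."""
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO attendance (name, image_path, in_time, out_time) "
            "VALUES (%s, %s, %s, %s)",
            (name, image_path, in_time, out_time),
        )
    conn.commit()

# Example usage with hypothetical values:
# insert_attendance(conn, "Divya", "images/divya.jpg", "2024-03-01 09:05:00")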

By orchestrating these database-related operations, the Python script


seamlessly integrates attendance data storage and management into the
PostgreSQL database. This approach fosters data integrity, scalability, and
reliability, laying a robust foundation for attendance tracking in diverse
environments.

4.6 Web Application

In addition to implementing facial recognition for attendance tracking, we


have developed a user-friendly web application to enhance accessibility
and convenience for teachers and administrators. This web application
serves as a centralized platform for viewing attendance data, providing a
seamless experience for monitoring student attendance trends and
generating insightful reports.

Purpose:
The primary objective of the web application is to streamline the process
of accessing and analyzing attendance data. By offering a user-friendly
interface accessible from any internet-enabled device, educators can
effortlessly track attendance, identify patterns, and make data-driven
decisions to improve student engagement and performance.

Key Features:
Login Page: The web application includes a secure login page where
authorized users, such as teachers and administrators, can authenticate
themselves before accessing attendance data. User credentials are
validated to ensure only authorized individuals can access sensitive
information.
Attendance Display: Upon successful login, users are presented with an
intuitive interface for viewing attendance data. Attendance records are
displayed in a structured format, allowing users to quickly navigate
through different sessions, classes, or students.

Benefits:
Accessibility: The web application can be accessed from any device with an internet connection, offering flexibility and convenience for users to monitor attendance data anytime, anywhere.
Efficiency: By centralizing attendance data and providing intuitive search
and filter functionalities, the web application streamlines the process of
retrieving and analyzing attendance records, saving time and effort for
educators.

Insights and Decision-Making: Visualizations and reporting features


empower users to gain actionable insights from attendance data, enabling
informed decision-making to improve student outcomes and institutional
effectiveness.

The web application complements the facial recognition-based attendance


system by providing a user-friendly interface for accessing and analyzing
attendance data. By leveraging modern web technologies and intuitive
design principles, the application enhances the efficiency, accessibility,
and effectiveness of attendance tracking in educational settings.

4.7 Web Application Framework: Python Flask


Python Flask is a lightweight and versatile web framework that facilitates
the development of web applications in Python. It provides developers
with the necessary tools and utilities to build web-based applications
quickly and efficiently, making it an excellent choice for projects of
various sizes and complexities.

Key Features of Python Flask:


Routing: Flask allows developers to define URL routes and map them to
specific functions, known as view functions. This routing mechanism
enables the handling of incoming HTTP requests and the generation of
appropriate responses.

Templating: Flask supports Jinja2 templating, a powerful and flexible


templating engine that simplifies the generation of dynamic HTML
content. Jinja2 templates enable the embedding of Python code within
HTML templates, facilitating the creation of dynamic web pages.

Session Management: Flask provides utilities for session management,


allowing developers to store user-specific data across multiple requests.
This feature is essential for implementing user authentication,
authorization, and personalized user experiences in web applications.

Middleware Support: Flask supports the use of middleware, which are


components that intercept and process HTTP requests and responses
before they reach the application or after they leave the application.
Middleware can be used for tasks such as request logging, error handling,
and authentication enforcement.

Overall, Python Flask is an excellent choice for web application


development, offering simplicity, flexibility, and scalability. Its rich
feature set, robust ecosystem, and vibrant community make it a preferred
framework for building a wide range of web-based projects, from simple
APIs to complex web applications.
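
A minimal sketch tying the routing and templating features together is shown below; the route, template name, and page title are placeholders rather than the project's actual ones.

from flask import Flask, render_template

app = Flask(__name__)

@app.route("/")
def index():
    # Render a Jinja2 template; "index.html" is a placeholder template name.
    return render_template("index.html", title="Attendance System")

if __name__ == "__main__":
    app.run(debug=True)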

4.8 Login Page

The login page serves as the gateway for authorized users to access the
attendance tracking system. It provides a secure authentication mechanism
to verify the identity of users before granting them access to sensitive
attendance data. Below is a description of the login page functionality:

Functionality:
User Authentication: The login page prompts users to enter their
credentials, typically a username and password, to authenticate their
identity. This process ensures that only authorized users with valid
credentials can access the system.

Credential Validation:
Upon submitting the login form, the entered credentials are validated
against the system's user database. This validation process verifies the
accuracy and authenticity of the provided credentials, ensuring that only
registered users can log in.
Error Handling:
The login page includes error handling mechanisms to handle various
authentication scenarios, such as incorrect username or password, expired
sessions, or account lockouts due to multiple failed login attempts. Clear
and informative error messages are displayed to guide users in resolving
authentication issues.
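
As an illustration of this flow, a simplified Flask login route is sketched below. The hard-coded credential check, session key, and template names are placeholders; the real system validates credentials against its user database as described above.

from flask import Flask, render_template, request, redirect, session, url_for

app = Flask(__name__)
app.secret_key = "change-me"  # placeholder; required for Flask session support

@app.route("/login", methods=["GET", "POST"])
def login():
    if request.method == "POST":
        username = request.form.get("username")
        password = request.form.get("password")
        # Placeholder check standing in for validation against the user database.
        if username == "teacher" and password == "password123":
            session["user"] = username
            return redirect(url_for("attendance"))
        return render_template("login.html", error="Invalid username or password")
    return render_template("login.html")

@app.route("/attendance")
def attendance():
    # Only authenticated users may view attendance data.
    if "user" not in session:
        return redirect(url_for("login"))
    return "Attendance records would be displayed here."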

Benefits:
Security: The login page enhances the security of the attendance tracking
system by requiring users to authenticate their identity before accessing
sensitive data. This helps prevent unauthorized access and protects the
integrity and confidentiality of attendance records.

The login page is a critical component of the attendance tracking system,


providing a secure and user-friendly authentication mechanism for
accessing attendance data. By enforcing authentication and access control
policies, the login page helps maintain the confidentiality, integrity, and
availability of attendance records while ensuring a seamless user
experience for authorized users.

4.9 Attendance Display Page

The Attendance Display Page serves as a pivotal component within our


comprehensive attendance tracking system, offering educators and

administrators a centralized platform to effortlessly monitor and analyze


attendance data. This page provides a comprehensive overview of student
attendance status, enabling users to gain valuable insights into attendance
trends, identify patterns, and make informed decisions to enhance student
engagement and academic performance.
Functionality:
Comprehensive Overview: The attendance display page offers a holistic
view of attendance data, summarizing student attendance status, dates, and
classes. This overview provides a quick understanding of attendance
trends.
Export Functionality: Users have the ability to export attendance data in
various formats, such as PDF or CSV, for further analysis or sharing with
stakeholders. This feature promotes collaboration and data-driven
decision-making.
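
The export feature could, for example, be backed by a Flask route that streams records as CSV. The sketch below is illustrative: fetch_all_records() is a hypothetical helper standing in for a query against the attendance table, and the sample row it returns is made up.

import csv
import io
from flask import Flask, Response

app = Flask(__name__)

def fetch_all_records():
    # Hypothetical helper; the real system would query the attendance table here.
    return [("Divya", "2024-03-01 09:05", "2024-03-01 16:30")]

@app.route("/export/csv")
def export_csv():
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerow(["Name", "Check-in", "Check-out"])
    writer.writerows(fetch_all_records())
    return Response(
        buffer.getvalue(),
        mimetype="text/csv",
        headers={"Content-Disposition": "attachment; filename=attendance.csv"},
    )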

Benefits:
Efficient Monitoring: The attendance display page streamlines attendance
monitoring, enabling users to track student attendance status and trends
with minimal effort.
Informed Decision-Making: Through detailed attendance records and
visualizations, users gain insights into attendance patterns, empowering
them to make informed decisions to improve student engagement and
outcomes.

The attendance display page is a crucial component of the attendance


tracking system, providing users with comprehensive attendance data and
insightful analysis tools. By offering a user-friendly interface, detailed
attendance records, and export functionality, the attendance display page

facilitates efficient monitoring and informed decision-making to support


student success.

4.10 Conclusion

In conclusion, the integration of a robust database and web application is


pivotal in enhancing the effectiveness and accessibility of attendance
tracking systems in educational environments. By leveraging facial
recognition technology, coupled with secure data storage and intuitive
user interfaces, our system offers a comprehensive solution for automating
attendance management.

The choice of SQL for database management provides a structured and


reliable framework for storing attendance records, ensuring data integrity
and consistency. Through careful consideration of project requirements
and a comparison with MongoDB, SQL emerged as the preferred choice,
offering transactional support, scalability, and mature reliability.

The database structure, comprising fields for student names, image paths,
and timestamps, serves as the foundation for efficient data management
and retrieval. Maintaining a structured schema facilitates data integrity,
query efficiency, and standardization, enabling educators to track
attendance accurately and analyze trends effectively.

Furthermore, the development of a user-friendly web application


complements the facial recognition-based attendance system by providing
convenient access to attendance data. The login page ensures secure
authentication, while the attendance display page offers comprehensive
overviews and export functionalities for informed decision-making.

Overall, the integration of facial recognition technology with a SQL-based


database and user-friendly web application represents a significant
advancement in attendance tracking systems. By streamlining attendance
management processes and empowering educators with valuable insights,
our system contributes to enhancing student engagement, improving
outcomes, and facilitating a seamless learning experience.

CHAPTER 5

CONCLUSION AND FUTURE SCOPE

5.1 Conclusion

In conclusion, the implementation of an automated attendance system


using contactless facial recognition represents a significant leap forward
in the realm of modern technology and educational management. This
project has successfully harnessed the power of facial recognition
technology to streamline and enhance the traditional attendance tracking
process. By eliminating the need for manual recording and verification,
the system not only saves valuable time but also mitigates the risk of errors
associated with manual data entry.
The contactless nature of facial recognition ensures a seamless and
hygienic experience, particularly in the context of heightened health
concerns and the need for touchless interactions. As we reflect on the
outcomes of this project, it becomes evident that automated attendance
systems hold immense potential not only in educational settings but also
in various industries where precise and efficient record-keeping is
paramount. The success of this venture encourages further exploration of
contactless technologies and their integration into diverse facets of our
daily lives, promising a future where routine tasks are executed with
unprecedented ease and sophistication.

5.2 Future Scope

The future scope for the automated attendance system using contactless
facial recognition is vast and promising, extending its application across
diverse sectors. In the educational domain, the system could evolve to
encompass more advanced analytics, providing insights into attendance
patterns, engagement levels, and even student sentiments through facial
expressions. Machine learning algorithms could be integrated to enhance
facial recognition accuracy and adapt to varying environmental
conditions, ensuring a robust and reliable system.
The system could be adapted for use in offices, conferences, and events,
simplifying registration processes and improving overall security.
Integration with access control systems could enhance workplace security
by ensuring that only authorized personnel gain entry. In essence, the
future scope for this project extends far beyond its current educational
application, with the potential to revolutionize attendance tracking and
access control in various sectors, contributing to a technologically
advanced and streamlined future.

REFERENCES

1) Parikh, H. D., & Patel, A. M. (2023). “Development of Automated


Attendance System using Face Recognition”. International Journal
of Science and Research (IJSR), 12(6), 1326-1330.
2) Mandal, S., & Nath, N. S. (2023). “A Novel Approach for
Automated Attendance System using Face Recognition
Technique”. 2023 4th International Conference on Emerging
Trends in Electrical, Electronics & Sustainable Energy Systems
(ICETEESES).
3) Zhao, Y., Wang, H., & Zhang, X. (2023). “Facial Recognition-
based Automated Attendance System for Educational Institutions”.
2023 International Conference on Artificial Intelligence and
Computer Engineering (ICAICE).
4) Ghosh, D., & Mishra, S. (2023). “Implementation of Automated
Attendance System using Face Recognition Technique”. 2023
International Conference on Advances in Computing,
Communication, Electrical and Electronics Engineering (CCEEE).
5) Li, W., & Zhang, J. (2023). “An Improved Automated Attendance
System Based on Face Recognition Technology”. 2023
International Conference on Artificial Intelligence and Industrial
Engineering (CAIIE).
6) Kumar, S., & Gupta, A. (2023). “Development of Automated
Attendance System Using Face Recognition”. International Journal
of Engineering and Advanced Technology (IJEAT), 12(6), 989-
994.

7) Singh, G., Mishra, S., & Kori, A. (2022). “Automated Attendance


System using Face Recognition”. International Journal of Recent
Technology and Engineering (IJRTE), 11(6).
8) Muhammad Sabirin Hadis, Junichi Akita, Masashi Toda,
Nurzaenab, "The Impact of Preprocessing on Face Recognition
using Pseudorandom Pixel Placement", 2022 29th International
Conference on Systems, Signals and Image Processing (IWSSIP),
vol.CFP2255E-ART, pp.1-5, 2022.
9) Subedi, A., Shakya, B., & Pokhrel, M. R. (2022). “A Review on
Automated Attendance System using Facial Recognition”. 2022
8th International Conference on Electrical Engineering and
Information & Communication Technology (ICEEICT).
10) Zhang, Y., Wang, Y., Liu, Z., & Wang, L. (2021). “Facial
recognition-based student attendance system using deep learning”.
In 2021 IEEE 4th International Conference on Advanced Robotics
and Mechatronics (ICARM) (pp. 449-454).
11) J. Deng, J. Guo, J. Yang, N. Xue, I. Cotsia and S.P. Zafeiriou,
"ArcFace: Additive Angular Margin Loss for Deep Face
Recognition", IEEE Transactions on Pattern Analysis and Machine
Intelligence, pp. 1-1, 2021.
12) Ashraf Khalil, Soha Glal Ahmed, Asad Masood Khattak and
Nabeel Al-Qirim, "Investigating Bias in Facial Analysis Systems:
A Systematic Review", IEEE Access, vol. 8, 2020.
13) Shashi Yadav, "Deep Learning based Safe Social Distancing and
Face Mask Detection in Public Areas for COVID-19 Safety
Guidelines Adherence", International Journal for Research in
Applied Science and Engineering Technology, vol. 8, no. 7, pp.
1368-1375, 2020.

14) Steve Lawrence, C. Lee Giles, Ah Chung Tsoi, Andrew D. Back,


S.H. Lin, S.Y. Kung, et al., "Face recognition: A convolutional
neural-network approach", IEEE transactions on neural networks,
vol. 8, no. 1, pp. 98-113, 2020.
15) Syam Kakarla, Priyaranjan Gangula, M. Sai Rahul, C. Sai Charan
Singh and T. Hitendra Sarma, “Smart Attendance Management
System Based on Face Recognition Using CNN”, IEEE-
HYDCON, 2020.
